Fossil User Forum

New user question - best practice

(1) By cbl (tpakiwi) on 2024-02-25 17:14:24 [source]

Quick (?) question: has anyone set up fossil (building the image as part of the stack) in a docker-compose file that also defines a few other team applications on a central server? Assuming I get the build past my it's-the-weekend user-error problems, has anyone successfully run fossil this way in a multi-repo environment (i.e., http://repository.$HOSTNAME/$REPO1, http://repository.$HOSTNAME/$REPO2)? The question lurking under the surface is whether I'm trying to force fossil into what has been a legacy git/gitea framework, or whether this is a valid and viable setup, and whether there are any considerations specific to docker-compose. Many thanks in advance.
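
For concreteness, here is the shape of what I have in mind. This is only a sketch: the service names, image, paths, and ports are all placeholders, and it assumes Fossil's --repolist mode, which serves every *.fossil file in a directory under its own name:

    # hypothetical docker-compose.yml; everything here is a placeholder
    services:
      fossil:
        build: ./fossil                 # local Dockerfile that builds fossil
        command: fossil server --repolist --port 8080 /museum
        volumes:
          - ./museum:/museum            # holds repo1.fossil, repo2.fossil, ...
        ports:
          - "8080:8080"
      wiki:                             # stand-in for the other team apps
        image: example/teamwiki:latest

With something like that, each repository would answer at http://repository.$HOSTNAME:8080/repo1 and so on.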

(2.1) By Warren Young (wyoung) on 2024-02-25 23:33:29 edited from 2.0 in reply to 1 [link] [source]

My first question is, "Why?"

Compose exists for multi-container applications that have to come up in a particular order and go down together. What third-party services are you using together with Fossil that have this characteristic?

The only possibilities that come to mind are a front-end web proxy and an email server, but there are several reasons why neither needs to be tightly coupled to Fossil:

  1. You may not need a proxy at all. Back when Fossil had TLS support for client-side operations only, it was common to stand Fossil servers up behind something that could translate TLS into unencrypted HTTP, but we just passed the two-year anniversary of that limitation's demise.

  2. If you need a proxy for another reason, it is clear that Fossil won't be available until the proxy is up, but that doesn't mean the proxy is useless without Fossil or vice versa. Their lifetimes aren't entirely independent, but they aren't tightly-coupled, either. The normal operation mode where the proxy comes up at system boot and the Fossil server comes up as a user-level service is sufficient sequencing for any normal use case.

  3. If you have Fossil configured for email alerts, that's similarly asynchronous and loosely coupled. With containers you need to use the mail DB or maildir options, which means you've got a separate process intermediating between the two containers. Make that process careful enough that it doesn't remove an alert from the mail DB until the message has been enqueued for sending (see the sketch after this list), and the three containers' lifetimes are independent.
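
To make that last point concrete, here is a sketch of such an intermediary for the mail-DB case. The table and column names (email, emailid, msg) are assumptions drawn from the alerts documentation, so verify them against your Fossil version; the point is only that a row is deleted after the message is handed off, never before:

    #!/bin/sh
    # Hypothetical drain loop for Fossil's "db" alert backend.
    # Schema names are assumptions; check the alerts docs for your version.
    DB=/fossil/mail.db
    sqlite3 "$DB" 'SELECT emailid FROM email' | while read -r id; do
        # hand the raw message to the MTA; delete the row only on success
        sqlite3 "$DB" "SELECT msg FROM email WHERE emailid=$id" | sendmail -t \
            && sqlite3 "$DB" "DELETE FROM email WHERE emailid=$id"
    done

Run that from cron or a timer in its own container. If the mailer is down, alerts simply accumulate in the DB until it comes back.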

I presume you're trying to get away from Git tooling to simplify your life. KISS, man!

If you feel you have legitimate cause for all this complexity, you'll need to justify it to budge me off this position.

(3) By cbl (tpakiwi) on 2024-02-26 18:25:39 in reply to 2.1 [link] [source]

I appreciate the response, and I appreciate you challenging what I'm asking for.

It's a convenience thing (in the context of applications for which it is a convenience, if you'll pardon the circularity): execute one docker-compose up to get everything running. I'd be fine having an entirely separate docker container (I have Portainer running on the box, and I'd considered a custom App Template, or whatever they call it) and just remembering to start fossil after bringing everything up. This wouldn't be a common activity. I think I'd prefer to stick with docker at this point, at least until I get the time to look at transitioning everything to nix.

As for the proxy itself, the argument for it is primarily that everything else is behind Nginx Proxy Manager with a wildcard cert, and if I'm running Nextcloud and documentation that way, I might as well do one more. But you're hitting on exactly the sanity-check/best-practices part of the question: are my assumptions outdated? In a small-team, shared server, multi-repo environment, how is fossil commonly deployed?

(4) By Stephan Beal (stephan) on 2024-02-26 19:07:26 in reply to 3 [link] [source]

> In a small-team, shared server, multi-repo environment, how is fossil commonly deployed?

If we look at the projects hosted by Fossil's own project lead as examples which fit those criteria, the setup is that Fossil runs as a CGI script via a public-facing web server (see the 4th link in that list, but any web server will do).
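
For the curious, a Fossil CGI script is just a tiny text file dropped into the web server's cgi-bin; the paths below are placeholders:

    #!/usr/bin/fossil
    repository: /home/fossil/museum/project.fossil

or, for the multi-repo case, pointing at a directory of repositories instead (the directory: and notfound: directives are described in the CGI server docs):

    #!/usr/bin/fossil
    directory: /home/fossil/museum
    notfound: https://example.com/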

In the above cases, a single user sets up the central copy of a repo, announces it to the world, then hands out dev accounts as they see fit. Most of those projects have a small handful of committers, with fossil itself being the outlier with something like 80 developer accounts¹.

In practice, that's pain-free and easy to maintain.


  1. ^ Reminder to self: certainly we have a better way of counting that than running code from the JS console on the /setup_ulist page?

      // count users whose capability column (the 2nd cell) contains
      // 'v', Fossil's Developer capability
      let li = [];
      document.querySelectorAll('tr td:nth-of-type(2)').forEach(e => li.push(e));
      console.debug(li.filter(e => e.innerText.indexOf('v') >= 0).length);

(5) By Warren Young (wyoung) on 2024-02-26 20:53:34 in reply to 3 [link] [source]

> execute one docker-compose up to get everything running

Then you are using Compose as an orchestrator, not to bind interdependent services together. What I want you to think about in that case is, do you have a better option for an orchestrator?

Personally, I use Podman with systemd for this. My choice of reverse proxy (nginx) comes up as an OS-level service,¹ and the Fossil instances backing it come up as user-level services some time later. That's all the sequencing I require.
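
For illustration, the user-level half looks something like this; the unit name, image, and paths are placeholders, and newer Podman setups might use Quadlet files instead:

    # ~/.config/systemd/user/fossil.service (hypothetical)
    [Unit]
    Description=Fossil server in a Podman container

    [Service]
    # %h expands to the user's home directory
    ExecStart=/usr/bin/podman run --rm --name fossil \
        -p 127.0.0.1:8080:8080 \
        -v %h/museum:/museum \
        localhost/fossil:latest \
        fossil server --repolist --port 8080 /museum
    Restart=always

    [Install]
    WantedBy=default.target

Enable it with "systemctl --user enable --now fossil.service", plus "loginctl enable-linger" so it starts at boot rather than at first login.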

> …I might as well do one more

Yes, no argument. My only point is that there's no particular reason to bind nginx and Fossil together in a single composition. While I will concede that it is better if one comes up after the other, it is by no means required, and if it doesn't happen that way, it will soon fix itself as the orchestrators work to get their respective pieces running.

> how is fossil commonly deployed?

The above-linked advice is how I do it on my public web instance.
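
If you haven't read that doc, the shape of it is nginx handing requests to a localhost-only Fossil instance over SCGI. A minimal sketch, with the port and paths as placeholders:

    # nginx fragment: pass requests to Fossil over SCGI
    location / {
        include scgi_params;
        scgi_pass 127.0.0.1:12345;
    }

    # matching Fossil invocation
    fossil server --scgi --localhost --port 12345 /home/fossil/museum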

For the internal instances, I have them set up under systemd as well, but without the container layer, like this. I do use TLS here, but because my needs internally are far simpler than on the public server, I get by with the internal TLS service feature. That instance runs in --repolist mode.
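
The invocation is along these lines; the paths and port are placeholders, and --cert/--pkey are the built-in TLS options in recent Fossil releases (check "fossil help server" for your version):

    # hypothetical internal instance: repo list plus built-in TLS
    fossil server /home/fossil/museum --repolist --port 9000 \
        --cert /etc/fossil/fullchain.pem --pkey /etc/fossil/privkey.pem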


  1. ^ …based on this config

(6) By cbl (tpakiwi) on 2024-02-27 00:03:16 in reply to 5 [link] [source]

I appreciate the thoughtful response. I might use this whole thing as an opportunity to look at migrating from Docker more broadly.

(One more wrinkle is that everyone is on a Tailscale network, so depending on how you look at it, either I have more flexibility in how I deploy what is effectively an internal application, or I can get away with bad choices more easily.)

(7) By Andy Bradford (andybradford) on 2024-02-27 03:46:25 in reply to 3 [link] [source]

> In a small-team, shared server, multi-repo environment, how is fossil
> commonly deployed?

I tend to prefer to keep it simple and use SSH for this. If you already
have a file server or "home" server where folks already SSH for things,
then it's fairly easy to set up. You can limit access with traditional
Unix groups, or you can make it even more strict if you prefer.
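
For example, something like this, where the host, account, and paths
are placeholders:

    # clone over SSH; note the double slash for an absolute path
    fossil clone ssh://dev@home.example.com//srv/fossil/project.fossil project.fossil

    # limit access with ordinary Unix groups on the server
    chgrp devs /srv/fossil/project.fossil
    chmod 664  /srv/fossil/project.fossil
    # the directory needs group write too, for SQLite's journal files
    chmod g+ws /srv/fossil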

Andy