Fossil Forum

Multi-level repository

Multiple master repository synchronization

(1.1) By andygoth on 2020-07-30 20:37:22 edited from 1.0 [link] [source]

I'm planning something like the following:

  • server1
    • node1a
    • node1b
    • node1c
    • node1d
  • server2
    • node2a
    • node2b
    • node2c
    • node2d

In normal operation, every computer has connectivity to every other computer. If a failure occurs, the most likely split is server1/node1* becoming separated from server2/node2*.

I'd like for my repository to be replicated across all ten of these computers, with server1 being the upstream remote for node1* and server2 the remote for node2*. Furthermore, I'd like for changes pushed to server1 to automatically be synced to server2, and vice versa. How would this be accomplished? I know I can approximate it with a cron job, but there's also the "Transfers" administration page. Might that help?
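For the per-node side of this layout, each node would clone from its designated server over SSH; the clone URL is then remembered as that node's default remote. A minimal sketch for node1a (the repository path and URL are assumptions for illustration):

```shell
# Clone from server1 over SSH; the URL becomes the default remote
# used by later "fossil sync" invocations on this node.
fossil clone ssh://server1//srv/fossil/project.fossil project.fossil

# Open a working checkout from the cloned repository.
fossil open project.fossil
```

The remaining question, answered below, is how to keep server1 and server2 themselves in sync.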

Now that there's a multi-remote feature, maybe I can instead have all node* systems use both server1 and server2 as remotes. Do I have that right? Would this make it so every commit is followed by a push to both, and every update is preceded by a pull from both? For two servers, I'm fine with this design, but if I had (say) a dozen, it might be better to make the upstream servers responsible for staying synchronized rather than having it be the client's job to talk to all of them all the time.
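The multi-remote setup described above could look roughly like this on each node, using the `fossil remote add` subcommand (the names and URLs here are assumptions; syncing each named remote is done explicitly, it does not automatically fan out on every commit):

```shell
# Register both servers as named remotes on this node.
fossil remote add server1 ssh://server1//srv/fossil/project.fossil
fossil remote add server2 ssh://server2//srv/fossil/project.fossil

# List the configured remotes to verify.
fossil remote list

# Sync with each named remote in turn.
fossil sync server1
fossil sync server2
```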

(2) By anonymous on 2020-07-31 05:39:24 in reply to 1.1 [link] [source]

there's also the "Transfers" administration page. Might that help?

You could have the post transfer script send an HTTP request to trigger another script to perform the sync to the other server.
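On the receiving end, the triggered script could be as simple as a one-line pull from the peer server. A minimal sketch of such a script on server2 (the paths and URL are assumptions; how the HTTP request actually invokes it, e.g. via CGI or a small listener, is left open):

```shell
#!/bin/sh
# Hypothetical trigger script on server2, run when server1 signals
# that new content has arrived: pull it from server1 over SSH.
fossil pull ssh://server1//srv/fossil/project.fossil \
    -R /srv/fossil/project.fossil
```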

(3) By andygoth on 2020-08-07 00:57:24 in reply to 2 [link] [source]

There's a major wrinkle I neglected to mention.

I'm not using HTTP, I'm using SSH.

(4) By andygoth on 2020-08-11 16:20:26 in reply to 1.1 [source]

For now, the best I've been able to come up with is configuring node2{a,b,c,d} to use server1 as their remote. I configured server1 and server2 to have each other as remotes. Lastly, I put an hourly cron job on server1 to sync with server2.
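The hourly cron job described above might look like this (a sketch; the repository paths and SSH URL are assumptions):

```shell
# Crontab entry on server1: at the top of every hour, sync the
# server1 repository with its peer on server2 over SSH.
0 * * * * fossil sync ssh://server2//srv/fossil/project.fossil -R /srv/fossil/project.fossil
```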

Not ideal, since if server1 goes down, server2 could be up to an hour behind. I could ask all eight node* systems to then sync with server2 to locate any artifacts it may be missing, but it's quite possible for server1 and node1* to all go down at the same time.