Fossil User Forum

Stupid Fossil Trick: saving on SSD/SD wear and tear on high-I/O test suites

(1) By Stephan Beal (stephan) on 2025-05-06 12:11:32 [link] [source]

This Stupid Fossil Trick is for those who frequently run high-I/O test suites in fossil-hosted projects (e.g. sqlite's)...

Working from a RAM drive is risky because if the computer crashes, the RAM contents go with it. However, fossil provides an easy way to edit code in persistent storage and then ship it to a RAM disk for processing:

[me@host:/src/foo] $ ... edit my code...

Then, in another terminal window...

[me@host:/ramdisk] $ fossil patch pull --force /src/foo && m all test

That fossil patch can pull/push between two local checkouts was unknown to me until last week when, on a whim, i tried it out with the goal stated in this post's subject line: sparing my poor storage some wear and tear.

Of course, the same can also be achieved with tools like rsync.
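
For reference, setting up the RAM-disk side is a one-time step along these lines (the mount point and repository path here are hypothetical):

# mount a tmpfs and open a second checkout of the same repository there
sudo mount -t tmpfs -o size=2G tmpfs /ramdisk
cd /ramdisk
fossil open /path/to/foo.fossil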

Not quite related but perhaps useful: the "m" command shown above is an alias for ~/bin/makeup -j4:

#!/bin/bash
# A quick-hack drop-in replacement for "make" which recursively
# searches upwards in the dir hierarchy for a makefile, and runs the
# real "make" from the first such directory.
#
# The problem this solves is: it's common for me to work in emacs in
# source trees which don't have makefiles in every directory.  Tapping
# (ctrl-x m) to run make annoyingly (but rightfully) fails in such
# directories. By mapping this script to emacs's compile-command var,
# that problem disappears. i should have done this two decades ago :|.

# Reminder: we can't find the real make with "which make" because that
# might, depending on how this script is named, resolve to this
# script. Thus we unfortunately hard-code the make path here:
theMake=/usr/bin/make

# According to the GNU make docs, it looks for makefiles with these
# names in this order:
# https://www.gnu.org/software/make/manual/html_node/Makefile-Names.html
mfiles="GNUmakefile makefile Makefile"

# Returns 0 if any file in the list of $mfiles is found in the current
# directory, else returns non-0.
function cwdHasMakefile(){
    for f in $mfiles; do
        [[ -f "$f" ]] && return 0
    done
    return 1
}

prev= # previous directory
while true; do
    if [[ "x/" = "x$prev" ]]; then
        # Interestingly, "cd .." from / does not fail, so we
        # check for that case here.
        break
    fi
    if cwdHasMakefile; then
        exec "$theMake" "$@"
    fi
    prev=${PWD}
    # If cd fails for any reason, simply bail. This can happen
    # if, e.g., /home is not accessible on a shared hoster.
    cd .. >/dev/null || break
done
echo "$0: no makefile found." 1>&2
exit 127

(2) By Warren Young (wyoung) on 2025-05-06 13:30:54 in reply to 1 [link] [source]

> ~/bin/makeup -j4

Move the flag into the script so that it can programmatically determine the best value based on the local CPU capabilities. The right value for the old Pi3 build-bot isn't the right one for your desktop space-heater.

My mmake script ("multi-make") calls my portable core-counting script and multiplies the result by 1.5, a factor empirically determined to be the point where compilation stops getting faster. That tends to give a value too high for CPUs without hyperthreading-type features, but rarely so much that the resulting parallel build pushes the system into swapping, negating the advantage.

It also abstracts the difference between BSD and GNU make, a defensible design choice when your project doesn't use any constructs not supported by both.
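
A rough sketch of that idea (this is not the actual mmake script; it assumes nproc or a BSD-style sysctl is available and simply execs whatever "make" is first in the PATH):

#!/bin/bash
# Pick -j at roughly 1.5x the detected core count, then hand off to make.
cores=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 1)
exec make -j$(( cores * 3 / 2 )) "$@"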

(3) By Doug (doug9forester) on 2025-05-06 22:44:54 in reply to 2 [link] [source]

Does fossil support me doing the following:

Setup:
Make a fossil master repository on my (Windows) desktop computer
Make a fossil slave on my VPS linux machine
Manually check out a branch from the master on the desktop
Manually check out the same branch on the slave machine

then over and over again -

edit on the desktop ....
check in on the desktop
run something to synchronize the slave with master (? what)
run something else to update the checkout on the slave (? what)

rinse and repeat

(4) By Stephan Beal (stephan) on 2025-05-06 22:50:03 in reply to 3 [link] [source]

> check in on the desktop
> run something to synchronize the slave with master (? what)

You don't need to check in between runs: "fossil patch pull/push" is exactly for this type of thing. With that, you can do any amount of back and forth before committing (as it were). That requires an ssh connection between the two systems, though.
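
For example, from inside the desktop checkout, pushing the current uncommitted edits to a checkout on the VPS looks something like this (the host and checkout path are hypothetical):

fossil patch push ssh://vpshost/path/to/checkout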

(5) By Doug (doug9forester) on 2025-05-06 23:07:26 in reply to 4 [link] [source]

How do I set things up to be able to do this? I have an ssh connection.

(6) By Richard Hipp (drh) on 2025-05-06 23:12:39 in reply to 3 [link] [source]

> Make a fossil master repository on my (Windows) desktop computer
> Make a fossil slave on my VPS linux machine

Yes, Fossil will do that. However, it usually works better (for all software, not just Fossil) to make the Linux VPS the server and the Windows desktop the client.

Let's suppose you have a Fossil repo on your Linux VPS. Maybe you have a website on the Linux machine, or maybe not - you can use SSH to communicate if not. Once the repository is on the Linux machine, you go to your Windows desktop and clone it. The URL can start with https: or ssh: according to how you have the Linux side configured. Then on your Windows desktop, you "fossil open" the clone you just made. You make changes. Then you type "fossil commit". The changes are committed on your local desktop then automatically synced up to your server.
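
Concretely, the desktop-side commands look roughly like this (the URL and names are hypothetical):

fossil clone ssh://user@vps//home/user/repos/project.fossil project.fossil
mkdir work && cd work
fossil open ../project.fossil
# ... edit ...
fossil commit -m "describe the change"   # autosync pushes the commit up to the VPS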

In the scenario of the previous paragraph, Fossil won't automatically update a checkout you have on the Linux side. But you could do that using a cron job that runs every 5 minutes. Or you could do it from your desktop by running "fossil patch push ssh://linuxhost/path/to/checkout".
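
The cron approach could be as simple as a crontab entry like this on the Linux side (the checkout path is hypothetical):

*/5 * * * * cd /home/user/checkout && fossil update >/dev/null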

(7) By Doug (doug9forester) on 2025-05-06 23:57:54 in reply to 6 [source]

Still lost: I have more than one repository on my VPS machine. How does the clone command know which one I want to clone? The URL doesn't specify it.

(8) By Richard Hipp (drh) on 2025-05-07 00:06:46 in reply to 7 [link] [source]

On the Linux server that is running this website, in the directory /home/www/Fossils there lives a Fossil repository named "althttpd.fossil" (for the Althttpd project). I clone it to my desktop thusly:

fossil clone ssh://root@a1.sqlite.org//home/www/Fossils/althttpd.fossil althttpd.fossil

Note the double // after the hostname. That is because I wanted to give a full pathname for the Fossil repository. If I had wanted to use a pathname relative to the login directory for user "root", then I would have only used a single /.
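
For example, if the repository hypothetically lived in a "Fossils" subdirectory of root's login directory, the single-slash form would be:

fossil clone ssh://root@a1.sqlite.org/Fossils/althttpd.fossil althttpd.fossil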

Note also that the local name for the new clone is given at the end.

(9) By Andy Bradford (andybradford) on 2025-05-07 03:45:58 in reply to 7 [link] [source]

> How does the clone command know which one I want to clone?

fossil isn't very different from scp (part of SSH). How would scp know which files to copy from remote to local? You specify them in the path:

scp user@remote:/path/to/video.mp4 Downloads/localvideo.mp4

Similarly, you give fossil a path to whichever repository you want to clone; however, it uses a more normalized URL syntax[1] where the colon (:) denotes a port number rather than a delimiter between the host and the path. For example:

fossil clone ssh://user@remote//path/to/project.fossil repositories/project.fossil
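
And if the remote sshd listens on a non-standard port, that is where the colon comes in (the port number here is hypothetical):

fossil clone ssh://user@remote:2222//path/to/project.fossil repositories/project.fossil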

This is documented in the output of "fossil help clone" and also at:

https://fossil-scm.org/home/help/clone

[1] scp does support a URI form using the scp:// scheme; it's just less common.

Andy