Fossil Forum

fsck can Delete the Newest Version of a Fossil Repository File

(1) By Martin Vahi (martin_vahi) on 2022-03-28 06:29:06 [link] [source]

The scheme:

step_1) Fossil server software runs happily, reading and writing to a repository file.

step_2) The file system gets corrupted for whatever reason (power outage, etc.).

step_3) At the next boot-up fsck does its job as it is expected to do and deletes hopelessly broken files to fix the file system. It just so happens that the newest, not-yet-backed-up version of the Fossil repository is among the corrupt files that "need" to be deleted. And it does get deleted, as it should be.

step_4) The owner of the repository file wonders what to do in a situation where all the backups are at least 1 month old, sometimes older.

Yes, that is the scheme by which I, about 4 hours before writing this text, lost all of the Fossil repository files that had a running server instance on my local desktop. Those were my newest versions. The file system was ext4 and there was a root cron-job that executed "sync" every minute.

I suspect that maybe if the Fossil server could somehow use 2 files at once, with one of the files residing on some other machine, then this problem might be substantially mitigated. I do not believe that any RAID could help here, because RAID guards against hardware failures, not software failures. I lost those Fossil repository files by cutting power to my desktop, because if there is a cron-job that calls sync every minute, then "why not" do a reset that way. I have never had any problems with that approach before, and as of today (2022_03_28) I am 40 years old, so I have had plenty of time to reset machines that way :-D

Thank You for reading this post.

(2) By Warren Young (wyoung) on 2022-03-28 07:40:10 in reply to 1 [link] [source]

a root cron-job that executed "sync" every minute.

You were syncing against what? Is that not itself a backup?

I'm not sure I see how ext4 can lose all of your Fossil repos under this scheme. I wonder if it's the directory holding the Fossil repos that was corrupted, not all the repos at once. Assuming you're calling "fossil all sync", only one repo file should be in danger at any one time, since ext4 implements fsync correctly, and the process is single-threaded.

Check the volume's lost+found folder. Your repos might have been moved there, and might be recoverable.
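
A rough sketch of that check (the mount point and the recovered file name are placeholders; ext4's fsck names recovered files after their inode numbers):

# run as root on the volume that fsck repaired
ls -l /lost+found
# Fossil repositories are SQLite databases, so `file` can spot candidates:
file /lost+found/* | grep -i sqlite
# a promising candidate can then be checked with Fossil itself:
fossil test-integrity -R '/lost+found/#12345'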

if the Fossil server could somehow use 2 files at once, with one of the files residing on some other machine,

Of course Fossil can do that; the "D" in DVCS stands for "distributed."

Follow the instructions in the backup doc to set up an off-machine backup scheme.
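
A minimal sketch of such a scheme, assuming a second machine that can reach your server (URL, paths, and the cron schedule are placeholders):

# one-time, on the backup machine: take a full clone
fossil clone https://user@example.com/myrepo /backups/myrepo.fossil
# then keep the clone current from cron, e.g. every 10 minutes:
# */10 * * * * fossil pull -R /backups/myrepo.fossil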

I do not believe that any RAID could help here

Copy-on-write filesystems with snapshotting can prevent the sort of problems you're having here, particularly if you have redundancy. I'm most familiar with ZFS, but I believe btrfs, APFS, and ReFS offer some of the same benefits.
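
A minimal ZFS sketch, assuming the repositories live on a dataset named tank/fossil:

zfs snapshot tank/fossil@2022-03-28      # take a point-in-time snapshot
zfs list -t snapshot -r tank/fossil      # see which snapshots exist
zfs rollback tank/fossil@2022-03-28      # roll back if the live copy is ruined

In practice you would take such snapshots automatically (e.g. from cron) rather than by hand.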

I am 40 years old, so I have had plenty of time to reset machines that way :-D

I suspect this isn't the first time you've learned the value of backups, then.

Follow the 3-2-1 rule: 3 copies minimum, 2 different media, one copy off-site.
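
A sketchy crontab rendering of that rule, reusing the mirror clone from above (paths, hosts, and times are placeholders; the copies are taken from the mirror rather than the live repository to reduce the chance of catching a write in progress):

# copy 1 is the mirror itself, kept current by fossil pull
30 * * * *  fossil pull -R /backups/myrepo.fossil
# copy 2: a second medium, e.g. an external disk
45 * * * *  rsync -a /backups/myrepo.fossil /mnt/external-disk/
# copy 3: off-site, over ssh
15 3 * * *  rsync -a /backups/myrepo.fossil offsite.example.com:backups/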

(4.2) Originally by Martin Vahi (martin_vahi) with edits by Stephan Beal (stephan) on 2022-03-31 22:48:33 from 4.1 in reply to 2 [link] [source]

Follow the instructions in the backup doc to set up an off-machine backup scheme.

Thank You for the answer. I guess I have to think about those matters. As of 2022_03_31 I still do not quite have an answer to the replication part, because working with a repository that has tens of GiB of data can be pretty slow.

As of 2022_03_31 my "solution" is to upload the ~70 GiB repository (it contains presentation videos in WebM format, trees from GitHub in tar.xz files, etc.) to a paid file sharing service about twice a year, with the passwords changed to protect the live instance. The problem is that the normal cloning of the live repository fails due to the limited number of operating system processes that can simultaneously open the file for reading. In my case the live instance is served by wrapping Fossil in PHP: each PHP request starts a new Fossil process that exits after assembling the answer to the request. Search bots do not care whether somebody is cloning the repository while they want to index the live instance, nor should they care, but cloning fails if too many operating system processes try to access a single repository file.
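
(An untested workaround sketch, with placeholder paths: SQLite's online backup can take a consistent copy of the live repository even while other processes are reading it, and cloning clients could be pointed at that copy instead of the live file.)

sqlite3 /srv/fossil/myrepo.fossil ".backup '/srv/fossil/clone-me.fossil'"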

Follow the 3-2-1 rule: 3 copies minimum, 2 different media, one copy off-site.

As my Silktorrent wiki might indicate, I've been thinking a lot about various IPv4-related issues in the context of privacy, DoS attacks, censorship, and various kinds of war-related network connectivity issues. As of 2022_03_31 the best ideas that I have come up with, or at least read somewhere, consist of the following 3 thoughts:

Thought_01: If plain tar-files are renamed according to their secure hash and size, so that those fields can be easily extracted from the file name (i.e. the file name has some easy-to-parse format), then it is really difficult to change the content of the tar-file without introducing a mismatch between the tar-file name and the actual secure hash of the tar-file. As long as anonymity is guaranteed, such tar-files can be distributed over untrusted channels and from untrusted servers, including P2P file sharing services. A list of software dependencies might be described as a list of such tar-files. There is an MIT-licensed dumb script for creating and verifying such tar-files; a minimal illustrative sketch of the naming idea appears later in this post.

Thought_02: It is theoretically impossible to totally block network connections by flooding, if the network is assembled from routers where the outbound data flow of each of the router's N IO-ports is equally divided among the inbound data flows of the remaining N-1 IO-ports of the router. No content filtering is needed. It should be simple and cheap to implement, especially if Thought_03 is used for addressing. That scheme would determine the guaranteed minimum data flow speeds. It's OK to use "free bandwidth" opportunistically to temporarily offer bigger speeds. (It should be good enough to get text messages through despite the exponential reduction of speed in the worst-case scenario, especially if there is a sufficient number of long-distance lines with routers/relay-stations that have only 2 IO-ports.)

Thought_03: If a labeled connected graph that has a finite number of vertices has vertex IDs/labels that are whole numbers in the range 0..255 (read: 1 B of storage space for the ID/label), then it is possible to describe at least one path between any 2 vertices of that graph, provided that the IDs/labels of the vertices meet the following criteria:

  • no vertices that have an edge between them share an ID/label;
  • no immediate neighbours of a vertex share an ID/label.

Routers as described in Thought_02 and end devices are the vertices of that graph. Just like there can be multiple routes from one house to another in a city, there can be multiple paths from one vertex to another in that graph. This addressing scheme will never "run out of IP-addresses" even if all of the IDs/labels are just 1 B or even less and the graph/network consists of billions of vertices (routers and end devices). The task of finding an optimal route and the task of addressing are 2 separate tasks. It's OK to use multiple routes for any pair of vertices.

I suspect that the combination of Thought_02 and Thought_03 might allow creating network hardware that is relatively simple to implement from a firmware/HDL point of view. Anonymization would be implemented in an overlay network like Tor. Classical IPv4/IPv6 might be tunneled over this network the way VPNs are used in IPv4 networks. Sound-based phone calls do not take much network bandwidth, so that kind of network might be the basis of decentralized phoning. Video calls would be available opportunistically. The goal of the specification is to figure out how to guarantee at least some bare minimum of real-time communication. Existing classical long-distance IPv4/IPv6 internet connections, including satellite connections, can be used for creating long-distance tunnels between cities.
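
Returning to Thought_01, here is a minimal illustrative sketch of the naming and verification idea; the underscore-separated name format below is just an example, not the exact format used by the Silktorrent script:

# rename a tar-file to <sha256>_<size-in-bytes>.tar
f=project_tree.tar                               # placeholder file name
hash=$(sha256sum "$f" | cut -d' ' -f1)
size=$(stat -c %s "$f")
mv "$f" "${hash}_${size}.tar"

# verification: recompute the fields and compare them with the file name
name=$(basename "${hash}_${size}.tar" .tar)
recomputed="$(sha256sum "${name}.tar" | cut -d' ' -f1)_$(stat -c %s "${name}.tar")"
if [ "$recomputed" = "$name" ]; then echo "name matches content"; else echo "MISMATCH"; fi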

Thank You for reading my post.

[[Edited by Stephan to remove parts related to a current political hot-topic. Please keep any and all politics out of the forum.]]

(3) By ravbc on 2022-03-28 07:47:17 in reply to 1 [source]

if there is a cron-job that calls sync every minute, then "why not" do a reset that way

You should take more care in choosing filesystems and mount options for them if you want to stress test them like that... Journaled filesystems (such as ext4) are there to provide consistency of the filesystem in the event of a crash, rather than to protect all of a file's data from the crash. And 1 minute between syncs is waaay too long to expect no data loss at all.

(5) By Martin Vahi (martin_vahi) on 2022-03-31 23:44:15 in reply to 3 [link] [source]

Thank You for the answer. Well, I'll gamble on NilFS/NilFS2.

The peculiarity of NilFS2 is that it does not even have an fsck utility, because by design it can get by without one, so the developers of NilFS2 did not bother to create it. Supposedly one of the drawbacks of NilFS2 is that it gets pretty slow the moment there are about 300 (three hundred) files/folders in a single folder. As of 2022_03_31 I do not know if that limit really exists, but to the best of my current (2022_03_31) knowledge the only situation where there are more than ~300 files in a single folder is when the files are somehow autogenerated, or they are other people's software projects, photo collections, or something else that other people have created. There's also the peculiarity that file deletion does not immediately release HDD/SSD space but actually consumes extra HDD/SSD space, because a deletion is recorded by writing to the log that a certain file/folder is deleted. HDD/SSD space is released later by a daemon that analyses the log. Therefore NilFS/NilFS2 is not suitable for workloads with a lot of disk writes.
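
The flip side of that log-structured design is that NilFS2 keeps continuous checkpoints, which can be mounted read-only to fish an older version of a file back out. A hedged sketch with the nilfs-utils tools (device, checkpoint number, and paths are placeholders):

lscp /dev/sdb1                        # list the checkpoints that exist
chcp ss 1234 /dev/sdb1                # turn checkpoint 1234 into a snapshot
mkdir -p /mnt/nilfs-snap
mount -t nilfs2 -r -o cp=1234 /dev/sdb1 /mnt/nilfs-snap
cp /mnt/nilfs-snap/home/user/myrepo.fossil /home/user/recovered.fossil
umount /mnt/nilfs-snap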

As bad as it sounds, I am of the opinion that the laws of nature, including mathematics, do not care how good or nasty anybody is. I'm an atheist, but I like the saying that sometimes even the Devil can speak the truth, if the truth happens to fit the motives of the Devil. Think of all the nastiness of wars and what doctors have learned from the awful carnage. I distinguish between the moral acceptability of an experiment and the scientific results of the experiment. For example, I find the Stanford Prison Experiment and the Milgram Experiment to be totally immoral, but that does not mean that, as long as the scientific results of the experiments are sound, people should suffer more by ignoring those results. That is to say, if Hitler or Putin delivered some sound scientific results, then I would not hesitate to take advantage of them, no matter how awful the means for obtaining those results were. Again, there's a distinction between accepting the experiment and accepting the scientific results of the experiment.

With that kind of approach to morality, I say that I know that Mr. Reiser is a murderer, but ReiserFS is a collective team effort and in my opinion the name of the file system is its main, if not its only, flaw. As of 2022_03_31 I have found ReiserFS to be really useful in a scenario where an old laptop boots from a USB HDD that stores "/" on ReiserFS. That way it is possible to keep the old Windows on the laptop's internal HDD/SSD for possible testing and old games. I suspect that the reason why ReiserFS works so well in that setting has something to do with the minimization of disk writes, although I have not studied it enough to know for sure.

With those observations, my current (2022_03_31) plan is to keep VirtualBox virtual appliance runtime images on ReiserFS, the rest of /home on NilFS2, and "/" preferably on ReiserFS; if ReiserFS is not available, then I'll see if XFS works for me in that narrow role. I also use a ramfs partition of a few MiB in combination with a root cron-job to implement a Linux-distribution-independent version of a Bash script that gets executed only once per boot. The cron-job executes a Bash script that looks for a file on the ramfs partition: if the file exists, it exits; otherwise it executes some code and creates the file. A shutdown deletes the RAM partition with everything in it, allowing the once-per-boot code to be executed again after the next boot.
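
The skeleton of that once-per-boot script looks roughly like this (the flag path is a placeholder):

#!/bin/bash
# called from a root cron-job, e.g. every minute
FLAG=/mnt/ramfs/once_per_boot_done
if [ -e "$FLAG" ]; then
    exit 0              # the once-per-boot work has already been done
fi
# ... the actual once-per-boot commands go here ...
touch "$FLAG"           # lives in RAM, so it disappears at shutdown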

Thank You for reading my post.

(6.1) By Martin Vahi (martin_vahi) on 2022-04-01 00:06:18 edited from 6.0 in reply to 3 [link] [source]

I forgot to mention that I do not use a single computer as my workstation. I use one "terminal" and the rest of the computers are linked to the "terminal" by sshfs and VNC/RDP.

sshfs allows one to mount any computer with an SSH account as if it were a USB memory stick. RDP supports copy-paste between the remote computer's desktop and the computer that runs the RDP client. I have found Remmina to be a very practical RDP client. If the copy-paste functionality stops working, then I just close the Remmina session and reopen it. The remote desktop session does not get closed when the Remmina session closes. On top of that I have found that it helps if the I/O scheduler of all HDDs/SSDs is set to "deadline". Like:

# on Linux as root
echo "deadline" > /sys/block/sda/queue/scheduler # sets it
cat /sys/block/sda/queue/scheduler # displays the current value
                                   # and possible supported values

That's the core of the Bash script that gets executed only once per boot. I hope this post saves somebody some time when setting up a home working environment.
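
For completeness, a typical sshfs mount of the kind mentioned above looks like this (host and paths are placeholders):

mkdir -p ~/mnt/otherbox
sshfs user@otherbox.local:/home/user ~/mnt/otherbox   # mount the remote home folder
fusermount -u ~/mnt/otherbox                          # unmount it again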

Thank You for reading my post.

(7) By Warren Young (wyoung) on 2022-04-01 00:21:11 in reply to 6.1 [link] [source]

linked to the "terminal" by sshfs

That’s inadvisable.