Fossil Forum

Fossil Big File Support?

By anonymous on 2018-09-18 22:41:33

Unless I'm wrong, Fossil currently only supports files smaller than 2GiB (or something close to it). I understand this limitation comes from SQLite's maximum BLOB size.

It'd be great if Fossil supported binary files larger than the current 2GiB limit by effectively storing them in multiple BLOBs where necessary.

By jungleboogie on 2018-09-18 23:32:59

https://sqlite.org/limits.html

Check out #14.
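For the original question, the entry that matters is the maximum length of a string or BLOB: it defaults to 10^9 bytes and can't be raised past 2^31 - 1 (about 2GiB) even at compile time. If your sqlite3 shell is recent enough, its .limit dot-command will show what your build enforces, something like:

    $ sqlite3 :memory: '.limit length'
    length 1000000000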

By jungleboogie on 2018-09-18 23:36:43

While it may be possible to store files larger than 2GB, it might not be the best thing to do. Typically source control is for text files, although it's used and abused with art assets, video files, etc. And that's not just regarding fossil, but source control in general.

Check the mailing list archives for a similar question from others. If you do this, you'll likely want to disable the checksum, as that can take a considerable amount of time to complete.
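For the record, I believe the setting in question is repo-cksum, which you can turn off per repository:

    fossil settings repo-cksum off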

By anonymous on 2018-09-19 21:01:14

Yes, I did disable the checksum option. I agree that Fossil isn't designed to store art assets etc., but IMHO the simplicity of fossil makes it a great archiving tool even for binary files.

By KevinYouren on 2018-09-20 00:54:48

I would suggest you also consider a "proof of concept" archiving tool, which is also written by Dr Hipp:

https://sqlite.org/sqlar/

I use it to store photos and pdfs.
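Basic usage is along these lines, if I remember the flags right (see the page above for the details):

    sqlar photos.sqlar pics/    # create or update an archive
    sqlar -l photos.sqlar       # list the contents
    sqlar -x photos.sqlar       # extract everything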

regs, Kev

By anonymous on 2018-09-20 02:58:11

I did look into it prior to using Fossil as an archival tool. Unless I'm wrong, it has the same BLOB size limit. But even putting the size limit aside, I find using fossil a lot easier and more feature-rich: built-in syncing, multiple branches, wiki, de-dup, no accidental deletion, etc. Best of all, it's a single db file, which makes backup/restore effortless. More or less I use fossil as a wrapper layer over sqlar with some goodies.

By KevinYouren on 2018-09-20 10:25:15

Sounds good.

I only have 11 distinct files greater than 1G, out of 1.55 million files (about 860,000 distinct files). I have 2 Ubuntu instances and an LFS (Linux From Scratch) instance on my laptop, and another laptop that is a clone.

I did find the BLOB limit during testing of SQLAR, when I had a 3G GPG file in a sub-directory.

I used to split files into pieces when I backed up to diskette - I didn't have a tape drive. USBs solved that. I now have multiple removable drives.

regs, Kev

By wyoung on 2018-09-20 10:40:48

It sounds like you're trying to use Fossil as an alternative to rsync: a method to keep two or more machines' filesystems in sync.

Isn't that awfully expensive in terms of disk space? At the very least, it doubles storage space, ignoring compression. Every time you update one of those OS disk images, you're likely to balloon the size of the Fossil repo.

On top of that, every time you check in changes to a file managed by Fossil, you temporarily need up to about 3x that BLOB's size to compute the diff: 1x for the checked-in version, 1x for the new version, and up to 1x for all of the delta data, with the worst case being that the new version can't be delta-compressed at all. Fossil could move to a rolling diff model, reducing the worst case to the BLOB size + 2 * sizeof(rolling_buffer), but it's still a lot of RAM for large BLOBs.

There are rafts of machine syncing and private cloud storage alternatives out there. I don't think Fossil should try to morph into yet another of these. To the extent that Fossil does overlap this area, it's in filling a rather specialized niche.

By stephan on 2018-09-20 13:17:37

Warren wrote:

There are rafts of machine syncing and private cloud storage alternatives out there.

For those who haven't heard of it yet, Syncthing is a cross-platform, open source solution for hosting one's own syncable files. It's kind of like having (and maintaining) your own private dropbox service. I haven't used it but have heard good things about it.

Warren wrote:

I don't think Fossil should try to morph into yet another of these.

Amen to that!

By KevinYouren on 2018-09-20 22:21:45

I tend to agree with both Warren and Stephan. Fossil has core use cases.

I am retired and work alone. I am a fringe user, with specialized requirements. An outlier, no longer mainstream.

I use Fossil like a diary system.
I use SQLAR as an alternative to tar.gz and zip.

I do not use Fossil as a backup mechanism.

I use these because they have SQLite at the core. Is good.

As a user of IBM mainframe change management software from 1981 onwards, I have a different perspective. I always used software that stored source, compared it, compiled it, reported problems, and then distributed from unit test to system test to integration test to user-acceptance test to production. Emergency fixes in Prod were catered for.

regs, Kev

By stephan on 2018-09-19 02:07:00

One important reason not to store huge files is that fossil's memory requirements are largely a function of the blob sizes. E.g. when performing deltas, which it does when committing changes, it needs memory to store/work with the original copy, the changed copy, and the delta all at once. It's very possible to run out of memory when working with large blobs, especially on embedded devices like a Raspberry Pi (one fellow, a couple of years ago, was trying to use a blob something like 4x the size of his RAM and virtual memory, and wondered why fossil kept dying).

By anonymous on 2018-09-20 00:29:46

How practical is it to version such big binary files?

It may be easier, and indeed faster, to version references to such assets. After a checkout you may fetch the actual files using a script. No need to wait for Fossil to unpack the big files from the db.
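A minimal sketch of that idea, assuming a versioned manifest (the assets.txt name and its hash-plus-URL format are just for illustration):

    # fetch-assets.sh: download missing assets, then verify them
    mkdir -p assets
    while read -r hash url; do
      f="assets/$(basename "$url")"
      [ -f "$f" ] || curl -fLo "$f" "$url"
      echo "$hash  $f" | sha256sum -c -
    done < assets.txt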

Otherwise, keeping huge binaries in the repo kinda short-circuits Fossil's utility and makes it slower on status and commit. It also makes cloning such a repo a needlessly long waiting affair.

By anonymous on 2018-09-20 02:59:01

My intention is not necessarily to 'version' them like text files, but to archive binary files and prevent accidental deletion.

By jungleboogie on 2018-09-20 03:15:33

That sounds like a backup program. You may like borg, a Python backup utility.
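If I recall the syntax right, a typical borg run looks something like this (paths are placeholders):

    borg init --encryption=repokey /backups/repo    # one-time setup
    borg create /backups/repo::{now} ~/files        # deduplicated snapshot
    borg list /backups/repo                         # show the archives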

By anonymous on 2018-09-20 04:46:07

Sorta like a backup utility but just a bit different. See this post: https://fossil-scm.org/forum/forumpost/16d7c4c287

In any case, I currently split them manually myself, and fossil has no issues dealing with the pieces. The repo is over 120GB and fossil works flawlessly.
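The splitting itself is nothing fancy; plain coreutils does it, roughly:

    split -b 1900M big.bin big.bin.part.    # pieces stay under the BLOB limit
    cat big.bin.part.* > big.bin            # reassemble later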

By jungleboogie on 2018-09-20 04:48:55

Huh, pretty cool. Have any figures on how long a fossil repo that size takes to open?

By anonymous on 2018-09-20 15:32:28

Looks like this mostly works for you. However, by the same token, an accidental deletion may equally happen to the Fossil repo itself... even more so if you keep all your fossils in the same 'basket', so to speak. If that directory is deleted by accident, all of the repos within are gone too.

Using Fossil as a kind of backup tool may not be optimal in the long run. I'm not sure Fossil preserves file attributes, like permissions and owner.

By stephan on 2018-09-20 15:43:48

The only permission fossil records is the executable bit, and that was added relatively late in fossil's design. Fossil does not record any file owner info. git behaves similarly in this regard, ignoring all permissions except the executable bit.

By drh on 2018-09-20 16:21:03

In a sense, Fossil was originally designed to do backup!

Remember, Fossil was designed to support SQLite development. Part of that support includes providing automatic backups. We accomplish this by having peer server repositories in separate data centers hosted by independent ISPs, and having those peer repositories automatically sync with one another. When we make a change to SQLite, that change goes into our local repo and (via autosync) is immediately also pushed to one of the peer servers. Within a few hours, the change is also replicated to the other servers. In this way, we can lose entire data centers and/or developer sites with no actual loss of content.

I have various private repositories in which I keep things like slides for all talks I've ever presented, personal and corporate financial records, and so forth. These private repositories are also replicated (though on a different set of private servers) to avoid any single point of failure.

When I go to set up a new laptop, I simply install Fossil, then clone a few repositories, and I suddenly have all these historical records available for immediate access. Prior to taking that laptop on a trip, I simply run "fossil all sync" to make sure everything is up-to-date.
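In concrete terms that's just something like (the URL is a placeholder):

    fossil clone https://example.com/notes notes.fossil
    fossil all sync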

All that said, I'm not trying to back up multi-gigabyte files. The biggest artifacts I have are OpenOffice presentation files which can be a dozen megabytes or so. Fossil works great as a backup mechanism in that context. I'm not sure it would work as well as a backup mechanism for gigabyte-sized videos and high-res photo and album collections.

By veedeehjay on 2018-09-20 16:48:41

Apart from the multi-gig issue, I would argue that something like this (e.g. syncing stuff between desktop and laptop) really is better done with a bidirectional file synchronizer capable of detecting conflicts (files changed on both sides/machines, etc.). For example, `unison' should appeal to fossil users: https://github.com/bcpierce00/unison (yes, they use the wrong DVCS ;)).
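A typical invocation, syncing a directory pair over ssh (host and paths are placeholders):

    unison ~/work ssh://laptop//home/me/work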

I understand that one could to some extent use your approach, but I really prefer a full sync of my home directory (including file-system based syncing of existing fossil repos...) between desktop and laptop.