Fossil in a 9 MiB Docker container
(1) By Warren Young (wyoung) on 2022-08-06 04:28:04 [link] [source]
I've just spent several hours carving the prior CDROM-sized Docker container down to under 9 MiB. Although it's a big change, I checked it in on trunk since it's a non-core feature, and we're early in the development cycle. If it bugs someone with a commit bit, feel free to shunt it to a branch.
Personally, I think it's awesome as-is. :)
(2) By Marcelo Huerta (richieadler) on 2022-08-06 20:15:08 in reply to 1 [source]
The command to create the image is now indicated as "docker build -t fossil --no-cache .", and this "fossil" image is mentioned immediately afterward, but then other operations use "fossil_static" as the name of the image (and also the container).
(24) By Marcelo Huerta (richieadler) on 2022-08-18 00:59:35 in reply to 3 [link] [source]
Sadly, since f9384383 I haven't been able to generate the Docker image on Windows. I get this error:
[+] Building 0.2s (2/2) FINISHED
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 2B 0.0s
=> [internal] load .dockerignore 0.2s
=> => transferring context: 2B 0.0s
failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount4273675404/Dockerfile: no such file or directory
(25) By Warren Young (wyoung) on 2022-08-18 01:38:24 in reply to 24 [link] [source]
Manually do what Autosetup does automatically on sensible build platforms: copy Dockerfile.in to Dockerfile, then replace @FOSSIL_CI_PFX@ with any valid checkin name.
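For example, in a POSIX shell (a minimal sketch; "trunk" stands in for any valid checkin name, and "sed -i" assumes GNU sed):

cp Dockerfile.in Dockerfile
sed -i -e 's/@FOSSIL_CI_PFX@/trunk/' Dockerfile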
(26) By Marcelo Huerta (richieadler) on 2022-08-19 01:23:55 in reply to 25 [link] [source]
That worked, thanks.
(29) By Marcelo Huerta (richieadler) on 2022-08-30 02:50:45 in reply to 25 [link] [source]
Did something in bc09e28a change to prevent the copy of fossil outside of the container?
When I generate the image with that build and I try to run the step
docker cp fossil-static-tmp:/jail/bin/fossil .
to extract the file locally, it starts the copy (I see a fossil file with the proper size) and the prompt returns immediately, but after a while it removes the file, as if the copy had failed.
It's not a premature interruption due to the container being removed, because I ran the steps manually and the copy always fails even if the container still exists.
(30.1) By Warren Young (wyoung) on 2022-08-30 07:33:40 edited from 30.0 in reply to 29 [link] [source]
Did something…change to prevent the copy of fossil outside of the container?
Nope:
$ make container-image
$ docker create --name fossil-static-tmp fossil:bc09e28a26de
$ docker cp fossil-static-tmp:/jail/bin/fossil .
$ file fossil
fossil: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, stripped
after a while it removes the file, as if the copy had failed.
Sounds like antimalware weirdness. Remember what I said about "sensible build platforms" above? Yeah… :)
(31) By Marcelo Huerta (richieadler) on 2022-08-30 14:08:51 in reply to 30.1 [link] [source]
Sounds like antimalware weirdness.
That it was. False alarm from Windows Defender. Apparently I missed the messages for some reason.
Thank you for the heads up.
(4.3) By Warren Young (wyoung) on 2022-08-16 12:38:18 edited from 4.2 in reply to 1 [link] [source]
In case anyone wants it, here's a single-stage version of the Dockerfile:
FROM alpine:latest
ARG UID=499
ENV JAIL=/jail
WORKDIR $JAIL
RUN apk update \
&& apk upgrade --no-cache \
&& apk add --no-cache \
gcc make moreutils \
musl-dev openssl-dev zlib-dev \
&& addgroup -g ${UID} fossil \
&& adduser -h ${JAIL} -g Fossil -G fossil -u ${UID} -S fossil \
&& mkdir -m 700 ${JAIL}/dev ${JAIL}/museum \
&& mknod -m 600 ${JAIL}/dev/null c 1 3 \
&& mknod -m 600 ${JAIL}/dev/urandom c 1 9 \
&& mkdir build ; cd build \
&& wget -O - https://fossil-scm.org/home/tarball/src | tar -xzf - \
&& m=src/main.mk ; grep -v '/skins/[a-ce-z]' $m | sponge $m \
&& src/configure CFLAGS='-Os -s' && make -j11 install \
&& ln -s fossil /usr/local/bin/f \
&& s=/usr/local/bin/sqlite3 \
&& echo -e '#!/bin/sh\nfossil sqlite3 --no-repository "$@"' > $s \
&& chmod +x $s \
&& rm -rf ${JAIL}/build \
&& if apk add upx ; then upx -9 /usr/local/bin/fossil ; apk del upx ; fi \
&& apk del gcc make moreutils *-dev \
&& chown fossil:fossil ${JAIL} ${JAIL}/museum
EXPOSE 8080/tcp
CMD [ \
"fossil", "server", \
"--create", \
"--jsmode", "bundled", \
"--user", "admin", \
"repo.fossil"]
EDIT: Updated to track the current shipping version of Dockerfile.in.
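If you want to give it a spin, a minimal build-and-run sketch (the "fossil-ss" tag and container name are illustrative; this variant serves /jail/repo.fossil):

$ docker build -t fossil-ss .
$ docker run --detach -p 8080:8080 --name fossil-ss fossil-ss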
Things of note:
By skipping the second stage, we leave a basic Alpine environment behind outside the jail. This makes it a more useful in-container debugging and development environment.
It doesn't build a static Fossil binary or rebuild BusyBox from source, on purpose. The base Alpine environment ships and uses all of Fossil's default dependencies, so it would increase the container size by about 2 MiB if you did a static build. (Another way to say it is that the overhead of shared libraries is amortized better in this version.)
One of the nice things about a 2-stage build is that you don't need explicit cleanup steps; we can leave all our junk behind in the first stage and create the second one clean by cherry-picking elements from the first. This version's single RUN command is therefore more complex in some ways than the split steps in the checked-in version, else we'd leave behind the build system, development libraries, and build products. (It nets out about equal due to the prior item, though.)
Tantalizing idea: BusyBox includes a sendmail implementation. Could the container be induced to send email alerts?
Which of Fossil's many options for sending email will work inside this container? The pipe-to-sendmail option, while admirably simple, is out unless we're willing to discard the inner chroot jail. This leads us to option #2, the SQLite message spool. The Alpine package system includes Jim Tcl and a SQLite3 package, which should be suitable for running the Tcl email transport daemon at a small extra cost, roughly 350 KiB.
I'm not especially interested in pursuing this further, but it does show how powerful this little container is. It comes to 11 MiB on x86_64, yet it can act as a full-featured DVCS server, including a forum, with email alerts.
(27) By anonymous on 2022-08-28 07:16:36 in reply to 4.3 [link] [source]
If I have the museum volume pointed at a directory full of Fossil repository files, can I replace the repo.fossil argument with the --repolist option so that it will serve all of them from one base URL?
(28.2) By Warren Young (wyoung) on 2022-08-28 17:18:42 edited from 28.1 in reply to 27 [link] [source]
It's still Fossil under those layers, so it works just fine:
$ docker build -t fossil-ss:local-uid --build-arg UID=501 .
$ docker run \
--detach \
-v ~/museum/home:/jail/museum \
-p 9000:8080 \
--name fossil-home-repolist \
fossil-ss:local-uid \
fossil server --jsmode bundled --user admin --chroot /jail --repolist museum
$ docker exec -it fossil-home-repolist sh
$ chown -R fossil:fossil /jail/museum
Details:
- The "fossil-ss" bit documents the fact that we built the image from the single-stage
Dockerfile
, in order to distinguish it from other builds. - The "local-uid" tag indicates that we changed the default 499 UID.
- Since we overrode the image's CMD value in the "run" step, we name the container after what it now does, so we don't get confused thinking it operates like the stock version up-thread.
- The default UID mapping on the bind mount will be
root:root
, so we have to fix it the first time. This is truly a one-time fix: if you recreate the container, the permission changes somehow persist even though this is a bind mount, not a Docker-managed volume.
(5) By Warren Young (wyoung) on 2022-08-06 22:02:24 in reply to 1 [link] [source]
Oooh, this is fun: Alpine includes Fossil in its package tree. It's still shipping 2.18, but it gets us a very fast install, producing a functional container in under two seconds here:
FROM alpine:latest
ENV JAIL=/jail
WORKDIR $JAIL
RUN apk update \
&& apk add --no-cache fossil \
&& mkdir -m 700 ${JAIL}/dev \
&& mknod -m 600 ${JAIL}/dev/null c 1 3 \
&& mknod -m 600 ${JAIL}/dev/urandom c 1 9
EXPOSE 8080/tcp
CMD [ \
"/usr/bin/fossil", "server", \
"--create", \
"--jsmode", "bundled", \
"--user", "admin", \
"repo.fossil"]
(6) By John Rouillard (rouilj) on 2022-08-06 23:40:18 in reply to 5 [link] [source]
Am I missing something, or does the repo go away when the container goes away? Don't you need a volume somewhere, mapped from the underlying OS, to maintain persistence of the data?
(7) By Warren Young (wyoung) on 2022-08-07 01:01:50 in reply to 6 [link] [source]
There's a whole section on this in the Docker docs. This being the Fossil project, I think that ends our discussion of the matter here. :)
(8) By Warren Young (wyoung) on 2022-08-13 23:51:44 in reply to 7 [link] [source]
I have better answers now.
I had to make some changes to Fossil to permit it, but the stock Docker container now serves its repo from /jail/museum/repo.fossil, which allows you to map a Docker volume onto that directory at container creation time. That in turn allows the repo to have a lifetime independent of the container. If you go out of your way to make use of this ability per the updated docs, you can rebuild the container without destroying the precious repo.
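For instance, a sketch of that mapping with a named volume (the volume and container names are illustrative):

$ docker volume create fossil-museum
$ docker run --detach -p 8080:8080 -v fossil-museum:/jail/museum --name fossil fossil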
It's possible to avoid using Docker volumes by relying on Fossil's DVCS nature: sync the repo somewhere else so you have a version you can copy back into the container, then reestablish the sync link after the container is rebuilt and running again.
Better, use one of Fossil's backup methods, so that when you go to copy the repo back into a fresh container, it's a faithful copy of what was inside the container before you destroyed it.
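For example, a sketch using Fossil's SQL-level "backup" command from backup.md via "docker exec" (the container name is illustrative, and the paths may need adjusting for Fossil's automatic chroot behavior when run as root):

$ docker exec fossil fossil backup -R /jail/museum/repo.fossil /jail/museum/backup.fossil
$ docker cp fossil:/jail/museum/backup.fossil .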
If you stop the container before destroying it so the SQLite transactions are all nicely settled, you can "docker cp" the repo file out before you destroy the container, then "cp" it back in after recreating the container. This is expedient, but personally, I'd combine it with one of the above methods. Two is one, one is none.
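That round-trip might look like this (a sketch; image and container names are illustrative):

$ docker stop fossil
$ docker cp fossil:/jail/museum/repo.fossil .
$ docker rm fossil
$ docker create --name fossil -p 8080:8080 fossil
$ docker cp ./repo.fossil fossil:/jail/museum/repo.fossil
$ docker start fossil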
(9) By John Rouillard (rouilj) on 2022-08-14 00:56:52 in reply to 8 [link] [source]
Placing the working files in a Docker volume is exactly what I was driving at in my original question. It's what I did when developing a Docker container for my open-source project. Your response basically stopped me from bothering to follow up.
This change looks good and seems to match Docker best practices. I saw you had to do a code change to permit relative paths to the repo. I think this is a reasonable and expected change (in my container, I require a relative path inside the container).
The main problem with using fossil sync to preserve the repo is that usernames/passwords aren't synced in a usable fashion. IIUC, the secret needed for password verification isn't synced, making the copy unusable as a replacement for the original.
Hence your reference to backup.md. An SQL-level backup, IIUC, allows an exact replica (including the secret and user tables) to be backed up and restored. However, to do this you need to "docker exec" into the container.
With the volume mount, you should be able to perform the backup from outside the container and use a cron job or similar, scheduled outside of Docker, for automated backups.
Thanks for adding this. Since I usually follow the tip of fossil, I was creating new containers every few days. I'll try using your changes to replace my existing fossil setup.
(15) By Warren Young (wyoung) on 2022-08-14 16:43:10 in reply to 9 [link] [source]
a code change to permit relative paths to the repo
That commit was a mere refinement to the essential change. I didn't realize I needed it at first because I was testing using "~/tmp/…" paths, which a POSIX shell expands to absolute paths. It wasn't until I tried building a container with relative paths atop this checked-in feature that I realized the order of operations inside enter_chroot_jail() was backwards. With the primary fix in place, the container did work with absolute paths; it just made the Dockerfile a smidge harder to read, so I went ahead and did the refinement.
(And I had to check all this in first to do that container work, since the container build process pulls Fossil from tip-of-trunk.)
With the volume mount, you should be able to perform the backup from outside the container
That risks database corruption, since SQLite's locking mechanisms can't function across the shared volume boundary. Shutting the container down and doing a "docker cp" is better in this instance.
Also, the method of directly accessing the volume mount point only works on Linux. Under macOS and Windows, there's a hidden virtual machine that runs all the containers, since these platforms lack the Linux-specific container mechanisms that Docker is built atop, and that's where the shared volumes live. You can access this VM's internal storage if you know how, but it isn't a simple copy across the local filesystem as when running containers natively on Linux.
I'll try using your changes to replace my existing fossil setup.
If you have ideas worth pushing into the stock container, I'm certainly willing to consider them. Simplicity is important — I'll reject things like enabling Tcl and the JSON API and what-all — but since this is my first real container project, I wouldn't be surprised if I've missed other important matters like this image vs container distinction.
(10) By Kirill M (Kirill) on 2022-08-14 07:36:07 in reply to 8 [link] [source]
Should it say "mkdir -p …" in "RUN mkdir -m 700 dev museum" to avoid an error if museum already exists?
(11) By Warren Young (wyoung) on 2022-08-14 08:06:11 in reply to 10 [link] [source]
Docker creates everything afresh. That situation cannot happen in this context.
(12) By Kirill M (Kirill) on 2022-08-14 08:18:15 in reply to 11 [link] [source]
Also when making an external directory available inside the container? I'd guess it would be created prior to the "mkdir" being run, which is why I asked. I'm not using Docker, so I don't know.
(13) By Warren Young (wyoung) on 2022-08-14 09:02:14 in reply to 12 [link] [source]
The line you’re talking about is used in creating the Docker “image” file, but the external volume isn’t attached until the container proper is created from the image.
Conversely, if you don’t mkdir museum
here, the container dies on start in the internal repo storage case.
Have a play. Learn a thing. 🥸
(14) By Kirill M (Kirill) on 2022-08-14 10:54:05 in reply to 13 [link] [source]
Yeah, the Docker and K8s world is for now foreign to me, as it has not had any practical use for me yet.
Thanks so much for the explanation!
(16) By Warren Young (wyoung) on 2022-08-15 14:48:05 in reply to 1 [link] [source]
I've gotten the image down to 7.84 MiB here, primarily by building BusyBox from source and removing utilities that have no justification inside a Fossil server container. It only adds about 10 seconds to the build time.
I tried enabling LTO, but it only saved ~50 kiB at a cost of nearly tripling the build time, so I reverted that as unjustifiable.
I also tried a dynamic build so Fossil and BusyBox would share several libraries, but the savings were swamped by huge unused sections of these libraries, which a static link trims away. libcrypto in particular is huge, even on Alpine.
(17) By John Rouillard (rouilj) on 2022-08-15 16:53:36 in reply to 16 [link] [source]
I tried enabling LTO, but it only saved ~50 kiB at a cost of nearly tripling the build time, so I reverted that as unjustifiable.
Do you mean gcc's LTO mode?
(18) By Warren Young (wyoung) on 2022-08-15 19:52:05 in reply to 17 [link] [source]
Of course. What other expansion fits that context?
(19) By Florian Balmer (florian.balmer) on 2022-08-16 04:46:05 in reply to 16 [link] [source]
If "every KiB counts", you may also consider disabling individual stock skins ("bootstrap" takes ~100 KiB, for example) and/or compressing the built-in files to save ~450 KiB.
(Compressing the built-in files comes at the runtime cost of uncompressing them; it's on my list of interesting things to figure out whether it's possible to serve compressed files directly for the web UI.)
(20) By Warren Young (wyoung) on 2022-08-16 07:10:17 in reply to 19 [link] [source]
If "every KiB counts"
The threshold is on the order of megs in my use case.
My motivation for embarking on this project is that my networking equipment provider of choice just added container support to their OS, which means that, in theory, I could run Fossil on my office switch and on my home router. I mean, c'mon, who doesn't want to do that? 😛
The problem is, about half of the manufacturer's offerings ship with only 16 MiB of on-board flash storage. They picked that size because the RouterOS firmware image is about 12 MiB these days, and the storage is there primarily for use during the upgrade process.
Between upgrades, though, the storage is mostly sitting there unused. If you can get your container and its referenced user data sufficiently far under the delta, 4 MiB, you can leave it running through the upgrade.
Since the Fossil binary alone is bigger than that, Plan B is to script the rebuild of the container so you can destroy it, upgrade RouterOS, and redeploy it, squatting on the otherwise fallow space.
I tell you all of this because the lazy way to create this container is the single-stage method up-thread, which produces an image that's nearly 16 MiB. There's a fair chance it won't even upload to the target device, much less unpack, run, and manage a useful Fossil repository. This means my actual target needs to be under 8 MiB minimum, and it'd be great to get it under 4 MiB.
There are RouterOS devices with more on-board flash storage, and there are devices with USB, microSD, and mini-PCIe slots for adding storage, but I still want to get this to work on MikroTik's smallest ARM devices. Innovation is born of constraint, after all.
disabling individual stock skins
Nice idea; thank you. I'm now shipping only d*, which gets us default and darkmode. I considered making the regex more complex to get rid of darkmode, but then I thought it was nice to offer at least one alternative, and this is a fine one.
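If you did want to drop darkmode too, a variant of the grep trick from the single-stage Dockerfile above might look like this (an untested sketch, assuming the skin names appear in src/main.mk paths as they do there):

m=src/main.mk ; grep -v -e '/skins/[a-ce-z]' -e '/skins/darkmode' $m | sponge $m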
compressing the built-in files
Doesn't doing that interfere with Fossil's built-in HTTP gzip compression?
That sparked an idea, though: compress the executables with UPX. The container image is now 3.71 MiB here. Woot!
(21) By Florian Balmer (florian.balmer) on 2022-08-16 10:28:55 in reply to 20 [link] [source]
I'm now shipping only d*, which gets us default and darkmode.
I think Fossil still knows about the skins even if the corresponding built-in files are removed, and may abort with errors if missing skin files are requested.
This may be acceptable for your use case -- otherwise, if Dockerfile.in should modify the Fossil build files ad hoc, i.e. without introducing new global #defines, the corresponding lines from src/skins.c may also be matched by regex (or at least after adding regex-friendly comments to the aBuiltinSkin[] array).
compressing the built-in files
Doesn't doing that interfere with Fossil's built-in HTTP gzip compression?
Since Fossil requires most of the built-in files as "plain text", i.e. to concatenate CSS and JS files to larger units, the approach (of my messy draft code) is:
- have tools/mkbuiltin.c compress the files with zlib
- have src/builtin.c:builtin_file() uncompress the files with zlib
Then the usual CGI output compression by Fossil and/or the web server is applied.
It's only for the --jsmode separate case where the (compressed) built-in file could probably go directly to CGI output.
For your needs, this case is obsoleted by UPX -- but I forgot to update my numbers: the new extsrc/pikchr.wasm adds another 100 KiB and seems highly compressible, so may be a good candidate to be stored compressed, and delivered directly without the need to ever be uncompressed by Fossil.
(22.1) By Warren Young (wyoung) on 2022-08-16 11:15:35 edited from 22.0 in reply to 21 [link] [source]
[Fossil] may abort with errors if missing skins files are requested.
Yes, it panics, which is especially bad if you try removing "default". (Ask me how I know.)
Still, it feels like a "don't do that then" type of pilot error. Rather than edit the skin array, it'd be better if Fossil just recognized when the skins have been stripped and coped. Even better, the skin list should be generated at configure time from the available files, not hard-coded.
concatenate CSS and JS files to larger units,
My container is using "--jsmode bundled", so yes, they do have to be in plain text for my use case. I value faster HTTP round-trips and better HTTP caching over small executables.
It would be nice if there were a way to get the build to produce these skin files pre-gzipped, but I'm not sufficiently motivated to make it do that.
I've just discovered that UPX is broken on ARM, at least under Alpine on Docker under QEMU. (I suspect there's a CPU emulation bug here, not an actual ARM incompatibility.) That nearly made me look into this pre-gzipping idea, but then I thought about the skin editor and started getting hives.
(23) By Florian Balmer (florian.balmer) on 2022-08-16 11:33:25 in reply to 22.1 [link] [source]
Just in case this may be useful, in whatever way, here's my patch. The BUILTIN_CAB paths (Windows-only) require another file, memcab.c, which needs more cleanup (mostly to re-add the extra trailing null byte assumed by Fossil). The #ifdefs have comments with matching brackets, so it's easy to remove the BUILTIN_CAB path, for example.
This is my next personal Fossil thingy to finish, but I can shape it to be useful for the main project, if required. But space savings are below 1 MiB.
=== compress_builtins.patch ====================================================
Combined version of mkbuiltin3.patch and mkbuiltin4.patch.
TODO:
[x] Check if patches completely transferred.
[x] Clean up preprocessor directive handling.
[ ] Use global variable to define BUILTIN_XXX values for src/*, tools/* and win/fossil.rc
Index: src/builtin.c
==================================================================
--- src/builtin.c
+++ src/builtin.c
@@ -19,10 +19,14 @@
** byte arrays.
*/
#include "config.h"
#include "builtin.h"
#include <assert.h>
+#define BUILTIN_CAB
+#if defined(BUILTIN_ZIP) /* { */
+#include <zlib.h>
+#endif /* BUILTIN_ZIP } */
/*
** The resources provided by this file are packaged by the "mkbuiltin.c"
** utility program during the built process and stored in the
** builtin_data.h file. Include that information here:
@@ -53,13 +57,60 @@
}
/*
** Return a pointer to built-in content
*/
+#if defined(BUILTIN_ZIP) /* { */
+void builtin_load(int i){
+ if( aBuiltinFiles[i].pData==0 ){
+ size_t cbAlloc = aBuiltinFiles[i].nByte+1;
+ unsigned long cbSizeUncompressed = aBuiltinFiles[i].nByte;
+ aBuiltinFiles[i].pData = fossil_malloc(cbAlloc);
+ if( uncompress(
+ aBuiltinFiles[i].pData,&cbSizeUncompressed,
+      aBuiltinFiles[i].pDataCompressed,aBuiltinFiles[i].nByteCompressed)!=Z_OK
+ || cbSizeUncompressed!=aBuiltinFiles[i].nByte ){
+ fossil_fatal("Error uncompressing built-in data [%i]\n", i);
+ }
+ memset(aBuiltinFiles[i].pData+aBuiltinFiles[i].nByte,0,1);
+ }
+}
+#elif defined(BUILTIN_CAB) /* }..{ */
+#include "memcab.c"
+void builtin_load(int i){
+ MDIINFO mdii;
+ MEMFILE mfCab;
+ char *zNameInCab;
+ if( i<0 || i>=count(aBuiltinFiles) ) return;
+ if( aBuiltinFiles[i].pData ) return;
+ zNameInCab = fossil_strdup(aBuiltinFiles[i].zName);
+ if( zNameInCab ){
+ char *z = zNameInCab;
+ while( z[0] ){
+ if( z[0]=='/' ) z[0] = '\\';
+ z++;
+ }
+ }
+ ZeroMemory(&mfCab,sizeof(MEMFILE));
+ mfCab.pbData = GetResource(MAKEINTRESOURCE(1),RT_RCDATA,&mfCab.cbData);
+ if( !mfCab.pbData ||
+ !MDICreate(&mdii) ||
+ !MDICopy(&mdii, &mfCab, zNameInCab) ||
+ !MDIDestroy(&mdii) ){
+ fossil_fatal("Error uncompressing built-in file.");
+ }
+ fossil_free(zNameInCab);
+ aBuiltinFiles[i].pData = mdii.mfOut.pbData;
+ aBuiltinFiles[i].nByte = mdii.mfOut.cbData;
+}
+#endif /* BUILTIN_CAB } */
const unsigned char *builtin_file(const char *zFilename, int *piSize){
int i = builtin_file_index(zFilename);
if( i>=0 ){
+#if defined(BUILTIN_ZIP) || defined(BUILTIN_CAB) /* { */
+ builtin_load(i);
+#endif /* BUILTIN_ZIP || BUILTIN_CAB } */
if( piSize ) *piSize = aBuiltinFiles[i].nByte;
return aBuiltinFiles[i].pData;
}else{
if( piSize ) *piSize = 0;
return 0;
@@ -79,10 +130,13 @@
*/
void test_builtin_list(void){
int i, size = 0;;
for(i=0; i<count(aBuiltinFiles); i++){
const int n = aBuiltinFiles[i].nByte;
+#if defined(BUILTIN_CAB) /* { */
+ builtin_load(i);
+#endif /* BUILTIN_CAB } */
fossil_print("%3d. %-45s %6d\n", i+1, aBuiltinFiles[i].zName,n);
size += n;
}
if(find_option("verbose","v",0)!=0){
fossil_print("%d entries totaling %d bytes\n", i, size);
@@ -144,10 +198,13 @@
cgi_set_content_type(zType);
pOut = cgi_output_blob();
while( zList[0] ){
int i = atoi(zList);
if( i>0 && i<=count(aBuiltinFiles) ){
+#if defined(BUILTIN_ZIP) || defined(BUILTIN_CAB) /* { */
+ builtin_load(i-1);
+#endif /* BUILTIN_ZIP || BUILTIN_CAB } */
blob_appendf(pOut, "/* %s */\n", aBuiltinFiles[i-1].zName);
blob_append(pOut, (const char*)aBuiltinFiles[i-1].pData,
aBuiltinFiles[i-1].nByte);
}
while( fossil_isdigit(zList[0]) ) zList++;
@@ -339,10 +396,13 @@
switch( builtin.eDelivery ){
case JS_INLINE: {
CX("<script nonce='%h'>\n",style_nonce());
do{
int i = builtin.aReq[builtin.nSent++];
+#if defined(BUILTIN_ZIP) || defined(BUILTIN_CAB) /* { */
+ builtin_load(i);
+#endif /* BUILTIN_ZIP || BUILTIN_CAB } */
CX("/* %s %.60c*/\n", aBuiltinFiles[i].zName, '*');
cgi_append_content((const char*)aBuiltinFiles[i].pData,
aBuiltinFiles[i].nByte);
}while( builtin.nSent<builtin.nReq );
CX("</script>\n");
@@ -490,13 +550,19 @@
switch( i ){
case 0: /* name */
sqlite3_result_text(ctx, pFile->zName, -1, SQLITE_STATIC);
break;
case 1: /* size */
+#if defined(BUILTIN_ZIP) || defined(BUILTIN_CAB) /* { */
+ builtin_load(builtin_file_index(pFile->zName));
+#endif /* BUILTIN_ZIP || BUILTIN_CAB } */
sqlite3_result_int(ctx, pFile->nByte);
break;
case 2: /* data */
+#if defined(BUILTIN_ZIP) || defined(BUILTIN_CAB) /* { */
+ builtin_load(builtin_file_index(pFile->zName));
+#endif /* BUILTIN_ZIP || BUILTIN_CAB } */
sqlite3_result_blob(ctx, pFile->pData, pFile->nByte, SQLITE_STATIC);
break;
}
return SQLITE_OK;
}
Index: tools/mkbuiltin.c
==================================================================
--- tools/mkbuiltin.c
+++ tools/mkbuiltin.c
@@ -34,10 +34,23 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
+#define BUILTIN_CAB
+#if defined(BUILTIN_ZIP) /* { */
+#ifdef _WIN32
+#include "..\compat\zlib\zlib.h"
+#pragma comment( lib, "..\\compat\\zlib\\zlib" )
+#else
+#include "../compat/zlib/zlib.h"
+#pragma comment( lib, "z" )
+#endif
+#elif defined(BUILTIN_CAB) /* }..{ */
+#include "..\src\memcab.c"
+#endif /* BUILTIN_CAB } */
+
/*
** Read the entire content of the file named zFilename into memory obtained
** from malloc() and return a pointer to that memory. Write the size of the
** file into *pnByte.
*/
@@ -109,10 +122,13 @@
*/
typedef struct Resource Resource;
struct Resource {
char *zName;
int nByte;
+#if defined(BUILTIN_ZIP) /* { */
+ int nByteCompressed;
+#endif /* BUILTIN_ZIP } */
int idx;
};
typedef struct ResourceList ResourceList;
struct ResourceList {
@@ -278,16 +294,25 @@
int j, n;
ResourceList resList;
Resource *aRes;
int nRes;
unsigned char *pData;
+#if defined(BUILTIN_ZIP) /* { */
+ unsigned long cbDataCompressed;
+ unsigned char *pDataCompressed;
+#endif /* BUILTIN_ZIP } */
int nErr = 0;
int nSkip;
int nPrefix = 0;
#ifndef FOSSIL_DEBUG
int nName;
#endif
+#if defined(BUILTIN_CAB) /* { */
+ MCIINFO mcii;
+ HANDLE hFile;
+ char *z;
+#endif /* BUILTIN_CAB } */
if( argc==1 ){
fprintf(stderr, "usage\t:%s "
"[--prefix path] [--reslist file] [resource-file1 ...]\n",
argv[0]
@@ -334,10 +359,17 @@
remove_duplicates(&resList);
nRes = resList.nRes;
aRes = resList.aRes;
qsort(aRes, nRes, sizeof(aRes[0]), (QsortCompareFunc)compareResource);
+
+#if defined(BUILTIN_CAB) /* { */
+ if( !MCICreate(&mcii) ){
+ fprintf(stderr, "Failed to create CAB archive in memory.\n");
+ exit(1);
+ }
+#endif /* BUILTIN_CAB } */
printf("/* Automatically generated code: Do not edit.\n**\n"
"** Rerun the \"mkbuiltin.c\" program or rerun the Fossil\n"
"** makefile to update this source file.\n"
"*/\n");
@@ -368,10 +400,56 @@
}
#endif
aRes[i].nByte = sz - nSkip;
aRes[i].idx = i;
+#if defined(BUILTIN_ZIP) /* { */
+ cbDataCompressed = compressBound(aRes[i].nByte);
+ pDataCompressed = malloc(cbDataCompressed);
+ if( !pDataCompressed ){
+ fprintf(stderr, "Failed to allocate compression buffer.\n");
+ exit(1);
+ }
+ if( compress2(
+ pDataCompressed,&cbDataCompressed,
+ pData+nSkip,aRes[i].nByte,Z_BEST_COMPRESSION)!=Z_OK ){
+ fprintf(stderr, "Failed to compress built-in file.\n");
+ exit(1);
+ }
+ aRes[i].nByteCompressed = cbDataCompressed;
+ printf("/* Content of file %s */\n", aRes[i].zName);
+ printf("static unsigned char bidata%d[%d] = {\n ",
+ i, aRes[i].nByteCompressed);
+ for(j=0, n=0; j<aRes[i].nByteCompressed; j++){
+ printf("%3d", pDataCompressed[j]);
+ if( j==aRes[i].nByteCompressed-1 ){
+ printf(" };\n");
+ }else if( n==14 ){
+ printf(",\n ");
+ n = 0;
+ }else{
+ printf(", ");
+ n++;
+ }
+ }
+ free(pData);
+ free(pDataCompressed);
+#elif defined(BUILTIN_CAB) /* }..{ */
+ z = aRes[i].zName;
+ if( strlen(z)>=nPrefix ) z += nPrefix;
+ while( z[0]=='.' || z[0]=='/' || z[0]=='\\' ){ z++; }
+ aRes[i].zName = z;
+ while( z[0] ){
+ if( z[0]=='/' ) z[0] = '\\';
+ z++;
+ }
+ if( !MCIAddFile(&mcii,pData+nSkip,aRes[i].nByte,aRes[i].zName,FALSE) ){
+ fprintf(stderr, "Failed to compress built-in file.\n");
+ exit(1);
+ }
+ free(pData);
+#else /* BUILTIN_CAB }..{ */
printf("/* Content of file %s */\n", aRes[i].zName);
printf("static const unsigned char bidata%d[%d] = {\n ",
i, sz+1-nSkip);
for(j=nSkip, n=0; j<=sz; j++){
printf("%3d", pData[j]);
@@ -384,11 +462,83 @@
printf(", ");
n++;
}
}
free(pData);
+#endif /* !BUILTIN_ZIP && !BUILTIN_CAB } */
+ }
+#if defined(BUILTIN_CAB) /* { */
+ if( !MCIDestroy(&mcii) ){
+ fprintf(stderr, "Failed to finish CAB archive in memory.\n");
+ exit(1);
+ }
+ hFile = CreateFileW(
+ L"builtin_data.cab",
+ GENERIC_WRITE,
+ FILE_SHARE_READ|FILE_SHARE_WRITE|FILE_SHARE_DELETE,
+ NULL,
+ CREATE_ALWAYS,
+ FILE_ATTRIBUTE_NORMAL|FILE_FLAG_SEQUENTIAL_SCAN,
+ NULL);
+ if( hFile!=INVALID_HANDLE_VALUE ){
+ DWORD dwWritten;
+ WriteFile(hFile,mcii.mfCab.pbData,mcii.mfCab.cbData,&dwWritten,NULL);
+ }
+#endif /* BUILTIN_CAB } */
+#if defined(BUILTIN_ZIP) /* { */
+ printf("typedef struct BuiltinFileTable BuiltinFileTable;\n");
+ printf("struct BuiltinFileTable {\n");
+ printf(" const char *zName;\n");
+ printf(" unsigned char *pData;\n");
+ printf(" const unsigned char *pDataCompressed;\n");
+ printf(" int nByte;\n");
+ printf(" int nByteCompressed;\n");
+ printf("};\n");
+ printf("static BuiltinFileTable aBuiltinFiles[] = {\n");
+ for(i=0; i<nRes; i++){
+ char *z = aRes[i].zName;
+ if( strlen(z)>=nPrefix ) z += nPrefix;
+ while( z[0]=='.' || z[0]=='/' || z[0]=='\\' ){ z++; }
+ aRes[i].zName = z;
+ while( z[0] ){
+ if( z[0]=='\\' ) z[0] = '/';
+ z++;
+ }
+ }
+ qsort(aRes, nRes, sizeof(aRes[0]), (QsortCompareFunc)compareResource);
+ for(i=0; i<nRes; i++){
+ printf(" { \"%s\", 0, bidata%d, %d, %d },\n",
+ aRes[i].zName, aRes[i].idx, aRes[i].nByte, aRes[i].nByteCompressed);
+ }
+#elif defined(BUILTIN_CAB) /* }..{ */
+ printf("typedef struct BuiltinFileTable BuiltinFileTable;\n");
+ printf("struct BuiltinFileTable {\n");
+ printf(" const char *zName;\n");
+ printf(" unsigned char *pData;\n");
+ printf(" int nByte;\n");
+ printf("};\n");
+ printf("static BuiltinFileTable aBuiltinFiles[] = {\n");
+ for(i=0; i<nRes; i++){
+ z = aRes[i].zName;
+ if( strlen(z)>=nPrefix ) z += nPrefix;
+ while( z[0]=='.' || z[0]=='/' || z[0]=='\\' ){ z++; }
+ aRes[i].zName = z;
+ while( z[0] ){
+ if( z[0]=='/' ) z[0] = '\\';
+ z++;
+ }
+ }
+ qsort(aRes, nRes, sizeof(aRes[0]), (QsortCompareFunc)compareResource);
+ for(i=0; i<nRes; i++){
+ z = aRes[i].zName;
+ while( z[0] ){
+ if( z[0]=='\\' ) z[0] = '/';
+ z++;
+ }
+ printf(" { \"%s\", 0, 0 },\n", aRes[i].zName);
}
+#else /* BUILTIN_CAB }..{ */
printf("typedef struct BuiltinFileTable BuiltinFileTable;\n");
printf("struct BuiltinFileTable {\n");
printf(" const char *zName;\n");
printf(" const unsigned char *pData;\n");
printf(" int nByte;\n");
@@ -407,9 +557,10 @@
qsort(aRes, nRes, sizeof(aRes[0]), (QsortCompareFunc)compareResource);
for(i=0; i<nRes; i++){
printf(" { \"%s\", bidata%d, %d },\n",
aRes[i].zName, aRes[i].idx, aRes[i].nByte);
}
+#endif /* !BUILTIN_ZIP && !BUILTIN_CAB } */
printf("};\n");
free_reslist(&resList);
return nErr;
}
Index: win/fossil.rc
==================================================================
--- win/fossil.rc
+++ win/fossil.rc
@@ -190,5 +190,10 @@
#ifndef CREATEPROCESS_MANIFEST_RESOURCE_ID
#define CREATEPROCESS_MANIFEST_RESOURCE_ID 1
#endif /* CREATEPROCESS_MANIFEST_RESOURCE_ID */
CREATEPROCESS_MANIFEST_RESOURCE_ID RT_MANIFEST "fossil.exe.manifest"
+
+#define BUILTIN_CAB
+#ifdef BUILTIN_CAB
+1 RCDATA "..\wbuild\builtin_data.cab"
+#endif
=== compress_builtins.patch ====================================================
(32) By Warren Young (wyoung) on 2022-09-04 12:30:59 in reply to 1 [link] [source]
Because I nerd hard, I have decided to spend my Labor Day weekend playing with containers. 🤓
I've gone through all of the common options for building and running containers without stepping into Kubernetes land, then distilled what I found in a new OCI Containers doc, extracted from the old "build" doc.
(It was overwhelming the purpose of the build doc before I added the new material!)
It now covers Podman, runc, crun, and systemd-nspawn. The capper is where it shows how to build on a local machine under Docker, push that up to Docker Hub, and pull it down to run as root under Podman so we can keep the chroot feature.
That last is now under battle-test: it's how my site runs now, with all of the various Fossil repos running as systemd services that start Podman containers on boot, created from tangentsoft/fossil:latest.
I'm leaving that repo public because I don't see an especially good reason to make it private, but beware: for the next several days, maybe weeks, it's likely to be unstable. I plan to get it settled down, offering stable base images, but this will occur in stages as I move it from the present build method to do things a little more cleverly than Fossil's stock Dockerfile. I want to do things like move my custom BusyBox image up into a new base image so it doesn't have to be rebuilt as often. That, too, will become a public resource, once it's stabilized.
Although all of this is going to be happening outside the Fossil project proper, I will be documenting how I build all of these images. There will be no secret sauce. I'm just going to let anyone who wants to draft off me to do so.
All of you who've had problems lately building these images can get around that now by pulling my image. For now, it's the same as you get from "make container-image", though that will be changing.
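A sketch of that shortcut (flags are illustrative; running as root keeps the chroot feature, per the new doc):

$ sudo podman pull docker.io/tangentsoft/fossil:latest
$ sudo podman run --detach -p 8080:8080 --name fossil docker.io/tangentsoft/fossil:latest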
Enjoy!
(33.1) By Warren Young (wyoung) on 2022-09-06 02:15:12 edited from 33.0 in reply to 32 [link] [source]
I deem this usable.
I've broken Fossil's Dockerfile up into more layers to increase the chance that any given layer will be cached from one run to the next. Ideally, updating to a new version of Fossil and rebuilding the container image rebuilds just Fossil and not all the rest of it.
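The shape of the idea, as a simplified sketch rather than the shipping Dockerfile (the FOSSIL_CI build argument is hypothetical, standing in for the @FOSSIL_CI_PFX@ substitution): the toolchain layer rarely changes, so Docker's cache reuses it, and only the layers after the ARG rebuild when the checkin name changes.

FROM alpine:latest AS builder
RUN apk add --no-cache gcc make musl-dev openssl-dev zlib-dev
ARG FOSSIL_CI=trunk
RUN wget -O - https://fossil-scm.org/home/tarball/${FOSSIL_CI}/src | tar -xzf - \
 && src/configure CFLAGS='-Os -s' && make -j11 install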
It's still slower than a typical "fossil up && make -j11" on the same machine since it's always a complete rebuild, but for my case, it nets out faster since my web site runs on the cheapest thing Digital Ocean rented at the time I picked it. Just getting through a sqlite3.c rebuild on the VPS took longer than a full rebuild on local hardware.
I've released the tools I use to do this, if anyone is interested in following along. It's useful for any site where you have two or more background fossil server instances, and they're reverse-proxied by something else, not directly serving to the public themselves. The fslsrv script arranges for the containers to come up at boot and stay up until you rebuild them.
This incidentally tests two different use cases in the new Dockerfile. The "make container-image" target now builds the Fossil source tarball using the "fossil tarball" command rather than pulling it from the project's public repo server. Making that fall back to the remote URL when you aren't building in a Fossil repo checkout directory was tricky, but because these tools of mine live outside the main Fossil repo, I test that fallback case every time I update the version of Fossil on my web site.
I won't promise to keep my hands off the Docker stuff in the Fossil repo, but it does seem to be stabilizing.
Next up: Try to prove that WAL is unsafe across a container boundary! (Details)
(34) By Warren Young (wyoung) on 2022-11-30 23:20:06 in reply to 1 [link] [source]
I finally got around to working out how to get the systemd-container infrastructure to run the stock Fossil container. It's a considerable hassle, and it required some changes to the Dockerfile to make it work, but I've got it sorted now, so the rest of you can just take the tutorial cookie-cutter.
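For flavor, a hypothetical sketch of one such path (the real steps are in the tutorial; names and paths here are illustrative, and note that the stock image keeps its binary under /jail/bin):

$ docker create --name fossil-export fossil:latest
$ docker export fossil-export > fossil-rootfs.tar
$ sudo machinectl import-tar fossil-rootfs.tar fossil
$ sudo systemd-nspawn -M fossil /jail/bin/fossil server --create --user admin museum/repo.fossil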
Why might you care? Because this optional feature of systemd adds only about 1.4 MiB to the OS footprint when you install it, as compared to the 38 MiB of Podman, the ~200 MiB of nerdctl + containerd, or the ~400 MiB of Docker Engine. Since the container builds a UPX-packed executable on x86 hosts, the space is paid for from "go" as compared to the stock binaries.
(Incidentally, the Podman overhead is less than the repo clone, checkout tree(s), object files, unstripped executables, libraries, build tools and so forth needed to build from source on the host machine. If you use my pre-built images, it's a net space savings.)