Fossil Forum

Warning: /dev/null and /dev/urandom not available

(1.2) Originally by Nezteb with edits by Warren Young (wyoung) on 2023-03-27 05:05:41 from 1.1 [source]

After setting up my Fossil instance using Docker, I am greeted with the following warnings on my admin page:

WARNING: Device "/dev/null" is not available for reading and writing.

WARNING: Device "/dev/urandom" is not available for reading. This means that the pseudo-random number generator used by SQLite will be poorly seeded.

I'm using an Alpine image with no explicit user account, but I would have figured /dev/null and /dev/urandom would be available to the executable.

Is there some config I'm missing to get rid of these issues? 🤔
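Those two warnings correspond to simple device-node checks; a rough shell equivalent (illustrative, not Fossil's actual implementation) is:

```shell
# Check that /dev/null is a character device we can write to, and
# that /dev/urandom is a character device we can read from.
[ -c /dev/null ]    && [ -w /dev/null ]    && echo "/dev/null OK"
[ -c /dev/urandom ] && [ -r /dev/urandom ] && echo "/dev/urandom OK"
```

If either line prints nothing, the container is missing that device node.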

(2) By Warren Young (wyoung) on 2023-03-23 05:45:08 in reply to 1.0 [link] [source]

I found your answer by three separate searches for "urandom":

…the latter being in Fossil's own Dockerfile, which makes me wonder why you're reinventing the wheel in the first place.

(3) By Nezteb on 2023-03-23 06:54:15 in reply to 2 [link] [source]

Whoops, I committed the cardinal sin of not searching for my question first. Thank you for the links!

The only reason I'm using a different Dockerfile is because it's what I found when I searched for "fossil" on Docker Hub. I'll switch that over!

(4) By Warren Young (wyoung) on 2023-03-23 07:08:08 in reply to 3 [link] [source]

Yes, my builds — made from the official Dockerfile with few changes — are still too new to outrank the ones that have been there for years.

I'm not precisely advocating that you switch directly to them, but that if what you find doesn't do what you want out of the box and the built-in flexibility isn't enough to achieve your desired end, you at least learn the lessons that went into building them.

The document referenced from the top of the official Dockerfile distills a whole lot of lessons learned. If you're traveling to a common tourist destination, find the existing maps first. You don't have to take the same route or ride the same rides when you get there, but there's no point traveling through Milwaukee to get from Dallas to Anaheim.

(5) By Nezteb on 2023-03-23 07:24:01 in reply to 4 [link] [source]

Ah, good to know!

Having copy-pasted the official Dockerfile with a single change (the HTTPS flag), I'm getting a build error both locally and from my Fly.io builder:

❯ docker build --progress plain --no-cache --pull -t fossil .
#1 [internal] load build definition from Dockerfile
#1 sha256:9b3de3779bfc288c463270366ec74ef43832ec4e7e765264aa823a8f625652da
#1 transferring dockerfile: 74B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 sha256:242e695264780099292fb61fbc35fbc17506d394cdbb154bb7daf8d4eb22396b
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for docker.io/library/alpine:latest
#3 sha256:13549c58a76bcb5dac9d52bc368a8fb6b5cf7659f94e3fa6294917b85546978d
#3 DONE 0.6s

#4 [stage-1 1/7] WORKDIR /jail
#4 sha256:079e2454c69dd867fbd0e6ae9e0b986a04257c01497473dca98ce7eb2db365fb
#4 CACHED

#5 [builder 1/9] FROM docker.io/library/alpine:latest@sha256:ff6bdca1701f3a8a67e328815ff2346b0e4067d32ec36b7992c1fdc001dc8517
#5 sha256:82c5770c3bf3bb0fdeaaf50b1da199a102df55cddafba30ae6737669ecade0bd
#5 DONE 0.0s

#6 [builder 2/9] WORKDIR /tmp
#6 sha256:7bd7d0e58f0a340ccc63e9d4029e109fad5c740ef85ac2bed1f3bfe93cb302ca
#6 CACHED

#8 [internal] load build context
#8 sha256:0c6cc569b2ad5c176cd65628aeb9d08627b7a334f1a09b2932c1c68d18da179c
#8 transferring context: 2B done
#8 DONE 0.0s

#7 [builder 3/9] RUN set -x && apk update && apk upgrade --no-cache && apk add --no-cache gcc make linux-headers musl-dev openssl-dev openssl-libs-static zlib-dev zlib-static
#7 sha256:b382aab6118cc7d34ce154720dcc67b99637e6c5c6611994570488da56e74ab4
#7 CACHED

#9 [builder 4/9] COPY containers/busybox-config /tmp/bbx/.config
#9 sha256:dd8aa45d327011b1e4a86042a80ec04a210d16b6c1498c5c39629f6583ff5d05
#9 ERROR: "/containers/busybox-config" not found: not found

#12 [builder 6/9] RUN set -x && tar --strip-components=1 -C bbx -xzf bbx/src.tar.gz && ( cd bbx && yes "" | make oldconfig && make -j11 )
#12 sha256:08cb83a0809c8a4880439fb9b78a605de6f134de6c1bb943fec9e66b85e6ed20
#12 CACHED

#11 [builder 5/9] ADD https://github.com/mirror/busybox/tarball/1_35_0 /tmp/bbx/src.tar.gz
#11 sha256:8f6bf77942e344dcf4253c32a37c477db1a46176a002fd04d7c3f62932a9067d
#11 CACHED

#13 [builder 7/9] COPY containers/os-release /etc/os-release
#13 sha256:f114e5fe46f0580b081c260d5dbc308a72457e0762030fa59cf7e85943f47d60
#13 ERROR: "/containers/os-release" not found: not found

#10 https://github.com/mirror/busybox/tarball/1_35_0
#10 sha256:e1a3fa2653059db1f15b512be1244c71fa6823aea53c7442c915e206122b9ad7
#10 CANCELED

#14 https://fossil-scm.org/home/tarball/src?r=trunk
#14 sha256:39d3cc4a8861d74009a960656c9cf8b1339ed8b2ca001b7c058fc359e1d76985
#14 ...

#7 [builder 3/9] RUN set -x && apk update && apk upgrade --no-cache && apk add --no-cache gcc make linux-headers musl-dev openssl-dev openssl-libs-static zlib-dev zlib-static
#7 sha256:b382aab6118cc7d34ce154720dcc67b99637e6c5c6611994570488da56e74ab4
#7 0.188 + apk update
#7 CACHED

#7 [builder 3/9] RUN set -x && apk update && apk upgrade --no-cache && apk add --no-cache gcc make linux-headers musl-dev openssl-dev openssl-libs-static zlib-dev zlib-static
#7 sha256:b382aab6118cc7d34ce154720dcc67b99637e6c5c6611994570488da56e74ab4
#7 0.195 fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/main/aarch64/APKINDEX.tar.gz
#7 CANCELED

#14 https://fossil-scm.org/home/tarball/src?r=trunk
#14 sha256:39d3cc4a8861d74009a960656c9cf8b1339ed8b2ca001b7c058fc359e1d76985
#14 CANCELED
------
 > [builder 4/9] COPY containers/busybox-config /tmp/bbx/.config:
------
------
 > [builder 7/9] COPY containers/os-release /etc/os-release:
------
failed to compute cache key: "/containers/os-release" not found: not found

I'll probably debug that tomorrow.

(6) By Warren Young (wyoung) on 2023-03-23 07:48:58 in reply to 5 [link] [source]

Strike two. A search here in the forum on your key error string brings you to this.

(7) By Nezteb on 2023-03-23 15:07:58 in reply to 6 [link] [source]

To be fair I hit that error and then went straight to bed, so I hadn't searched for other threads yet. 😅

You need to ./configure the local Fossil tree first.

  1. Would that be worth mentioning in a comment in the Dockerfile itself?

  2. So this Dockerfile cannot be successfully built without having the Fossil tree locally first? For the purposes of deployment that won't work for me. 😅

I was looking through the project's Makefile, but I don't see anything about ./configure.

I tried removing the lines of the Dockerfile that do a host COPY, but then get:

#16 [stage-1 4/7] RUN [ "/bin/busybox", "--install", "/bin" ]
#16 sha256:bfdf50c6a38b065f26789f791f00bd2762235ce101c3e945a6276370d85b1d30
#16 0.133 exec /bin/busybox: no such file or directory
#16 ERROR: executor failed running [/bin/busybox --install /bin]: exit code: 1
------
 > [stage-1 4/7] RUN [ "/bin/busybox", "--install", "/bin" ]:
------
executor failed running [/bin/busybox --install /bin]: exit code: 1

For now I might just stick with my earlier Dockerfile and just add the relevant mknod calls for /dev/null and /dev/urandom. BusyBox doesn't seem strictly necessary, especially since most PaaS providers don't let you start a shell in your container anyway. It'd also be one less dependency to worry about. 😄
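A minimal sketch of that idea, using the same major/minor numbers as the official Dockerfile (the /jail path here is illustrative; Linux character device 1,3 is /dev/null and 1,9 is /dev/urandom):

```dockerfile
# Create the device nodes Fossil warns about inside the jail tree.
RUN mkdir -p /jail/dev                                                 \
    && mknod -m 666 /jail/dev/null    c 1 3                            \
    && mknod -m 444 /jail/dev/urandom c 1 9
```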

(8) By Warren Young (wyoung) on 2023-03-23 17:51:43 in reply to 7 [link] [source]

For the purposes of deployment that won't work for me.

I've ripped the problematic feature out.

In exchange for you getting your way on this, would you please explain something to me? In what world is generating containers from source acceptable, but configuring the source tree first unacceptable? Serious question. I want to understand.

I tried removing the lines of the Dockerfile that do a host COPY, but then get…

My new Dockerfile does that, too, and it builds here just fine. While I suppose your problem is in one of the 15 prior steps you cut off in your selective quoting of the build output, switching to the current Dockerfile is a better use of time than having me do remote debugging on your local change.

BusyBox doesn't seem strictly necessary

It is indeed not necessary…

…until you need to debug something internal to the container while it's still running. Given how this thread is going so far, I'm going to guess that eventuality isn't far off.

most PaaS providers don't let you start a shell in your container

Starting from this list of top PaaS providers, I did a quick web search for the method of starting a shell inside the container of each:

I stopped there since my total hit 53% of the market at that point, making me wonder how you're defining "most"?

That doesn't include Google Cloud Run since it seems to assume everything is a web app, so if it isn't speaking HTTP, it doesn't exist. That's a typical blinkered Google worldview, not representative of "most" of anything other than the web market itself, a rather circular definition. If you're willing to accept a hacky workaround, you can add another 11 percentage points to the total.

That list of top services doesn't include Fly.io, your apparent platform of choice, apparently due to it being one of those that makes up the remaining ~30% of this highly fragmented market, with under 1% share each, but you can get a shell inside a container there, too.

If you're tempted to object that some of those links above talk about indirections like kubectl exec … /bin/sh rather than the far more direct docker exec sh, that's a reflection of this demented world where everything has to be a Kubernetes cluster for some reason. Me, I have this oddball idea that a single x86_64 box is a tremendous amount of power, but apparently I'm a weirdo outlier. 🙄

One final thought: if BusyBox and its /bin/sh constitute too much of a dependency, doesn't that rule out Alpine-based containers, since they have /bin/sh, implemented by BusyBox?

(9) By Nezteb on 2023-03-23 20:11:16 in reply to 8 [link] [source]

For the record I'm not trying to critique Fossil or the Docker setup for it. I love the software, and I fully acknowledge that my setup/"requirements" are weird. This is just my own personal project; it's an exercise in "can I do it this way?". Maybe the answer is "not quite".

You don't have to change anything on my behalf, though I appreciate the thought. The only reason I opted to copy the official Dockerfile was because you said "...the latter being in Fossil's own Dockerfile, which makes me wonder why you're reinventing the wheel in the first place", and I agree with you; if I can avoid reinventing the wheel, I'd like to!

My entire goal here is to tinker, learn, and get a simple/working setup that other people who use Fly.io might find useful. I don't want to rock the boat or piss anyone off. 😅

In what world is generating containers from source acceptable, but configuring the source tree first unacceptable?

It's not "unacceptable", it's just something I'd like to avoid if possible. If it's not, that's fine, honestly. I haven't cloned Fossil at all on my machine. I have a single Dockerfile and fly.toml. In this particular case, I'm going to try just doing the ./configure from within the container (which the original Dockerfile I copied did). Now that I know about your image, I might try a simple Dockerfile that just does:

FROM tangentsoft/fossil:latest
# ...

If that doesn't end up working for me for some reason, oh well! I'll try something else. 😄

I have several services deployed to Fly.io, including Gitea, Conduit, Pleroma, and Livebook. For each of those, I was able to get working/minimal services with only a Dockerfile and a fly.toml, mostly because I was able to utilize their official Dockerfiles or hosted images and just apply some configuration on top to get them working for me. I was able to configure/build everything in those two files, no need for source code. Worst case scenario, a service I want to host might need a local JSON/YAML file that I need to COPY over in the Dockerfile. I find that appealing, but I don't expect anyone else to think the same.

Starting from this list of top PaaS providers,

My "most PaaS providers don't let you start a shell in your container" statement was wrong; that's my bad. However, most of those providers require you to install their custom CLI (Fly.io does too, I'm not throwing stones) or to jump through several setup hoops to configure a remote shell. I straight up don't want to deal with that; I fully acknowledge I am lazy in that regard. Fly.io isn't special either, as it requires several commands to fully set up, but I find them very simple.

…until you need to debug something internal to the container while it's still running

If I were doing this for work and not a personal project, I'd definitely like the ability to get a shell. If the thing I'm installing/using out of personal interest needs a manual shell intervention step to fully set up (whether it's hosted on Fly.io or otherwise, whether it's Fossil or something else), I'd probably just not host it myself and use an existing hosted instance. That is not an insult to the software itself; if it wasn't built with that in mind, it really doesn't bother me.

One final thought: if BusyBox and its /bin/sh constitute too much of a dependency, doesn't that rule out Alpine-based containers, since they have /bin/sh, implemented by BusyBox?

It's not even a matter of the dependency; it's just me trying to pare down the Dockerfile to get rid of everything I don't need. 😄 On an unrelated note, I find the distroless images project super neat!

(10.3) By Warren Young (wyoung) on 2023-03-24 10:25:59 edited from 10.2 in reply to 9 [link] [source]

You don't have to change anything on my behalf

Fait accompli. You're the third person I recall stumbling in this spot. Since I added this feature in support of a niche use case¹ that, as far as I can tell, isn't even being used by anyone other than me, and then only for hobbyist fun, it isn't pulling its own weight, and so out it goes.

It'd be different if we had two substantial user populations, one of which benefits from the feature, but we don't. I've already gotten all the value I can squeeze out of that since I don't use the feature in production for good and plentiful reasons.

It's not "unacceptable"…

I'm not seeing a lot of daylight between "that won't work for me" and "I find it unacceptable," but okay, I get it: you want your interaction with the Fossil source code at a strongly-demarcated, automated remove away.

jump through several setup hoops to configure a remote shell.

Containers are not VMs. IMHO, it would be a serious mistake to run sshd or similar inside a container for anything but temporary local debugging.

The cleanest alternative I found in my survey above was Alibaba's method: they have a command that generates a temporary Linux VM on demand that's attached to your rented section of their cloud, which you can then SSH into and use as a jump host. Even that's an indirection.

I'm curious how you think it could be otherwise without creating a worse problem.

If the thing I'm installing/using out of personal interest needs a manual shell intervention step to fully set up

I didn't say the internal shell was necessary to set Fossil up, merely implied that you might need it if you ever had to debug it while it was running. The nature of debugging is that I can't predict what problems you will run into, else it'd be solved already.

use an existing hosted instance

Maybe you want Chisel, then.

trying to pare down the Dockerfile to get rid of everything I don't need

Without BusyBox, the RUN commands in the second stage will fail because they implicitly call /bin/sh inside the container to run that shell script.
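The two RUN forms make the difference concrete (a generic illustration, not taken from Fossil's Dockerfile):

```dockerfile
# Shell form: Docker runs this via /bin/sh -c, so the image must
# already contain a shell at this point in the build.
RUN set -x && echo "this needs /bin/sh"

# Exec form: the binary is invoked directly; no shell required.
RUN [ "/bin/busybox", "--install", "/bin" ]
```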

You can get rid of BusyBox entirely by moving to a three-stage build system:


Index: Dockerfile
==================================================================
--- Dockerfile
+++ Dockerfile
@@ -20,18 +20,8 @@
          linux-headers musl-dev                                        \
          openssl-dev openssl-libs-static                               \
          zlib-dev zlib-static
+RUN apk add --no-cache busybox-static
 
-### Bake the custom BusyBox into another layer.  The intent is that this
-### changes only when we change BBXVER.  That will force an update of
-### the layers below, but this is a rare occurrence.
-ARG BBXVER="1_35_0"
-ENV BBXURL "https://github.com/mirror/busybox/tarball/${BBXVER}"
-COPY containers/busybox-config /tmp/bbx/.config
-RUN set -x                                                             \
-    && wget -O /tmp/bbx/src.tar.gz ${BBXURL}                           \
-    && tar --strip-components=1 -C bbx -xzf bbx/src.tar.gz             \
-    && ( cd bbx && yes "" | make oldconfig && make -j11 )
-
 ### The changeable Fossil layer is the only one in the first stage that
 ### changes often, so add it last, to make it independent of the others.
 ###
@@ -61,11 +51,10 @@
 FROM scratch AS os
 WORKDIR /jail
 ARG UID=499
-ENV PATH "/bin:/jail/bin"
 
 ### Lay BusyBox down as the first base layer. Coupled with the host's
 ### kernel, this is the "OS" used to RUN the subsequent setup script.
-COPY --from=builder /tmp/bbx/busybox /bin/
+COPY --from=builder /bin/busybox.static /bin/busybox
 RUN [ "/bin/busybox", "--install", "/bin" ]
 
 ### Set up that base OS for our specific use without tying it to
@@ -82,22 +71,24 @@
     && mknod -m 666 dev/null    c 1 3                                  \
     && mknod -m 444 dev/urandom c 1 9
 
-### Do Fossil-specific things atop those base layers; this will change
-### as often as the Fossil build-from-source layer above.
-COPY --from=builder /tmp/fossil bin/
-RUN set -x                                                             \
-    && ln -s /jail/bin/fossil /bin/f                                   \
-    && echo -e '#!/bin/sh\nfossil sha1sum "$@"' > /bin/sha1sum         \
-    && echo -e '#!/bin/sh\nfossil sha3sum "$@"' > /bin/sha3sum         \
-    && echo -e '#!/bin/sh\nfossil sqlite3 --no-repository "$@"' >      \
-       /bin/sqlite3                                                    \
-    && chmod +x /bin/sha?sum /bin/sqlite3
 
+## ---------------------------------------------------------------------
+## STAGE 3: Drop BusyBox, too, now that we're done with its /bin/sh &c
+## ---------------------------------------------------------------------
 
+FROM scratch AS run
+ENV PATH "/bin:/jail/bin"
+
+COPY --from=os /etc/.  /etc
+COPY --from=os /jail/. /jail
+COPY --from=builder /tmp/fossil jail/bin/
+
+
 ## ---------------------------------------------------------------------
 ## RUN!
 ## ---------------------------------------------------------------------
 
+WORKDIR /jail
 EXPOSE 8080/tcp
 CMD [ \
     "fossil", "server",     \

Beware: due to these recent changes to the stock Dockerfile, you will need to re-fetch the current version before that patch will apply cleanly.
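Condensed, the stage structure that patch produces looks roughly like this (a sketch of the shape only, not the full Dockerfile):

```dockerfile
FROM alpine AS builder      # stage 1: toolchain; builds the static
# ...                       # fossil binary and busybox-static

FROM scratch AS os          # stage 2: BusyBox-only "OS" for setup
COPY --from=builder /bin/busybox.static /bin/busybox
RUN [ "/bin/busybox", "--install", "/bin" ]
# ... user creation, mknod, etc. via BusyBox's /bin/sh ...

FROM scratch AS run         # stage 3: final image, no shell at all
COPY --from=os /jail/. /jail
COPY --from=builder /tmp/fossil /jail/bin/
CMD [ "fossil", "server" ]  # server flags omitted for brevity
```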

I find the distroless images project super neat!

My scheme above is even more distroless by the time you get to the end. It produces the minimum possible container, with a single statically-linked executable inside. Running Fossil in a jail means that even their static-debian11 image provides nothing we can make use of in the mainstream case.

EDIT: That said, I am now recommending the use of that same base image in support of the nspawn use case, where we can get some value from what it provides. Thank you for reminding me of that option.


  1. ^ Out-of-the-box support for systemd-nspawn.

(11) By Warren Young (wyoung) on 2023-03-27 05:12:20 in reply to 1.2 [link] [source]

This will no longer occur since Fossil's container no longer jails the binary¹ and thus can use the default /dev tree that the runtime injects, which includes /dev/urandom and /dev/null.

This change also includes the move to a 3-stage build system sketched out at the end of post #10, resulting in a single static binary inside the container, no BusyBox at all.² The patch is now thoroughly obsolete.


  1. ^ A proper OCI container runtime is an über-jail already. The only thing we were protecting against is a hypothetical security flaw that would allow an attacker to execute local shell commands. Now there is no shell to wall away from Fossil, thus no more need for the double-jail dance.
  2. ^ It does still use BusyBox in stages 1 and 2.