Fossil Forum

Way more code when loading up a single post, than the post itself

(1) By ddevienne on 2023-05-03 07:19:31 [source]

Hi. Disclaimer: I'm not a Web-dev.

But I sometimes look at the code of a page.
And sometimes bring up the Chrome dev-tools to look at the timeline of the network tab.

When I did, a page whose footer says it took 4ms to generate actually took
around 300ms to fully reach the browser. It is just under 11.6kB when gzip
compressed (38.2kB uncompressed), with around 12,000 lines, only 160 of which
are the actual (largish) post markup; the rest appears (to me) to be static markup.

Thus I'm surprised that the ratio of content to implementation details is so small,
and I wonder how much of those impl-details is fully static, and whether they could
be served from separate paths so they'd be fully cached across requests.

(OTOH, given the way the .ico loads, described below, would that actually make things slower?)

I also noticed favicon.ico adds 300ms in the "waiting for server response" state,
while the "content download" state is <1ms, despite the file being very small and
unchanged. That resource has ETag and Cache-Control headers, yet only on the 3rd
download is it served from the disk cache, so we waited almost 300ms for it twice.

Is resource caching that ineffective in general?
Even with a proper ETag and Cache-Control implementation?
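
For what it's worth, the validator behavior can be checked outside the browser
with something like this (a sketch; the host is a placeholder, and the ETag
value must be copied from the first response):

    # 1. Fetch the headers once and note the ETag and Cache-Control values
    curl -sI https://example.org/favicon.ico

    # 2. Replay with the validator; a server that honors conditional requests
    #    should answer "304 Not Modified" with no body
    curl -sI -H 'If-None-Match: "<etag-from-step-1>"' https://example.org/favicon.ico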

I'm not criticizing; I'm genuinely curious and want insights from the Fossil devs' perspective.

(2.1) By Warren Young (wyoung) on 2023-05-03 08:44:27 edited from 2.0 in reply to 1 [link] [source]

A lot of this is down to how you configure the server, not to Fossil itself. Try some of your tests against one of my forums, such as the MySQL++ one or the PiDP-8/I one.

Without doing detailed comparisons, I suspect you'll find the issues come down to:

  • The single biggest thing you've identified is inline JS, but if you're willing to go out of your way, you can get Fossil to deliver all of its JS in a far more cacheable, compressible format.1 This forum appears to take the default, causing Fossil to send the page JS inline…every…single…time, so it can never be cached.2 With separate, bundled JS, you pay a one-time hit for the first JS pull, but then it never changes, especially if you configure your front-end proxy for long caching times, relying on the fact that the bundle ID will change when the content changes. (A config sketch follows this list.)

  • CGI blows when it comes to fast replies; you can talk about the speed of fork()/exec() call chains on Unix all you like, but it's still a cost compared to keeping the process running and the proxied TCP connection open, as you get with SCGI.

  • My lowest-probability guess: althttpd is undoubtedly lightweight, implicitly getting the benefit of short C code paths, but is it as obsessively optimized as something used by zillions, like nginx? I'm not casting stones ahead of testing, but I do wonder, for one thing, whether this explains the long TLS setup times. Not to slag on drh at all; I'm simply pointing out that when you use something more popular, you get the benefit of people who work on that piece of software exclusively, all day, every working day.
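
Here's a rough sketch of the combination I mean; the port, repository path, and
cache lifetime are illustrative, not a drop-in config:

    # Fossil side: a long-running server speaking SCGI, with bundled JS
    fossil server /path/to/repo.fossil --scgi --localhost --port 12345 \
        --jsmode bundled

    # nginx side (inside the server { } block): hand everything to Fossil
    location / {
        include scgi_params;
        scgi_pass 127.0.0.1:12345;
        scgi_param SCRIPT_NAME "";
    }

    # Illustrative: long client-side caching for Fossil's built-in resources
    # (the JS bundle lives under /builtin/); safe because the bundle's URL
    # changes when its content changes
    location /builtin/ {
        include scgi_params;
        scgi_pass 127.0.0.1:12345;
        scgi_param SCRIPT_NAME "";
        expires 1y;
    }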

You can boil all this down to simplicity, and simplicity has a cost. Go skim the Fossil self-hosting repos and the few documents they link to, then compare that to my Debian+nginx+TLS stack doc. I don't blame you one bit if you look at the latter and gag at the effort required. But it's fast. 🤷‍♂️

Pick your poison and drink up.


  1. ^ fossil server --jsmode bundled
  2. ^ …unless you revisit the same post time and time again to keep the same URL, but why would you do that?

(3) By ddevienne on 2023-05-03 12:02:40 in reply to 2.1 [link] [source]

Thanks for the insights; interesting to learn about --jsmode.

I can indeed see just ~100 lines of JS at the end of your pages, compared to over 1,000 on sqlite.org.
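
(For anyone who wants to reproduce the comparison, here's a rough way to count
inline script content; the URL is a placeholder:)

    # Sketch: count the lines inside inline <script>…</script> blocks of a page
    curl -s https://example.org/forumpost/abcd1234 | \
        awk '/<script/,/<\/script>/' | wc -l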

And I can see your extra resources being served from the memory or disk cache right away, with your nginx server.

On average, your current top thread loads in ~180ms over a ~160ms ping time, while on sqlite.org it was more like ~250-300ms over a ~130ms ping time, from althttpd.

Of course, I assume these sites are US-based, so from Europe I can't get the 26ms ping time I get for google.com, which has edge servers all over the world.

(4) By Warren Young (wyoung) on 2023-05-03 12:09:50 in reply to 3 [link] [source]

There are a few more differences. My host is nearly the cheapest VPS DigitalOcean sells. (It was the cheapest before they changed their pricing last year.) SQLite and Fossil are on a much bigger instance, but they carry a lot more load. This likely explains a lot of what you're seeing: bigger host gives faster ping, but higher load means greater latency.