Fossil User Forum

Current status of Continuous Deployment

(1) By Iván Ávalos (avalos) on 2020-06-29 02:44:52

Hello, Fossil community!

I'd like to know what's currently the best way to automate builds and deployment with Fossil. I need something very simple, similar to builds.sr.ht, which allows me to write a .build.yml, upload secrets and automatically start the build/deployment process. Do I really need a CI/CD service? Or can I simply write a Dockerfile?

I saw a Jenkins adapter on GitHub, but I'm not sure if it still works, since it was written in 2013.

(2) By anonymous on 2020-06-29 04:16:55 in reply to 1

I saw a Jenkins adapter on GitHub, but I'm not sure if it still works

This post, by the author, describes how the plugin supposedly works. This may be enough to create a new solution or update the adapter.

(3) By anonymous on 2020-06-29 17:06:53 in reply to 2

This post, by the author, describes how the plugin supposedly works.

Looks like you can just directly configure Jenkins, using URLTrigger, to monitor Fossil's RSS feed.

In summary, monitor: http://repo.yourdomain/timeline.rss?y=ci&n=1&tag=BRANCHNAME

Specify an XPath to inspect the returned RSS entry. The one in the post, /rss/channel/item[1]/guid/text(), may be out of date.

Then your build config will pull the latest from your repo, check out a fresh working copy of the branch, and build it.
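
For what it's worth, you can exercise the same feed-plus-XPath combination from the command line before wiring it into Jenkins. A rough sketch, assuming xmllint is installed and substituting your real host and branch name:

# fetch the newest check-in entry on the branch and extract its GUID
curl -s 'http://repo.yourdomain/timeline.rss?y=ci&n=1&tag=BRANCHNAME' |
    xmllint --xpath '/rss/channel/item[1]/guid/text()' -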

(4) By anonymous on 2020-06-29 22:04:04 in reply to 1

I'd like to know what's currently the best way to automate builds and deployment with Fossil. I need something very simple ...

Not sure what drives your choice of Fossil. But if CI is what you need, the best option in the long run is to consider other VCS solutions, with more proven CI bindings...

Of course, in principle, it's possible to tweak here and there: do polling, rewrite or revive the Jenkins plugin, tweak the buildbot plugin, have fun with TH1, glue scripts together, dive into the codebase with fresh ideas to set up proper hooks... Well, your project - your choices.

Alternatively, you may bridge your Fossil universe with a CI-friendly one by maintaining a single-checkout Git repo corresponding to the committed Fossil slice. Simple?

(5) By anonymous on 2020-06-30 00:20:12 in reply to 4

But if CI is what you need, the best option in the long run is to consider other VCS solutions, with more proven CI bindings

While that might be easier for linking with a CI system, it ignores advantages that Fossil has over other VCSs. Also, someone at the (open source) Jenkins CI system has made an effort to provide tools like URLTrigger.

I admit I was pleasantly surprised that Jenkins has URLTrigger in its official plugins archive. Several other projects I've contributed to (or attempted to) refuse to accept such generic plugins into their official archives.

Of course, a Fossil plugin would be more convenient, but using URLTrigger seems simple enough.

(6) By anonymous on 2020-06-30 02:53:50 in reply to 5

Jenkins is just one of many CI options, most likely self-hosted (mind that it's Java-based). If going the Jenkins way, one may do without plugins at all, simply by setting up the build job to allow remote triggering. Basically, it works the other way around: the committing client calls Jenkins on demand (e.g., via curl) to start the job by its job URL (plus some token). The official Jenkins docs give in-depth details of the remote API; here's one of many digested overviews.
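
For illustration, a minimal sketch of such a remote trigger; the job name, token, and credentials are placeholders, and the job must be configured to allow remote triggering:

# ask Jenkins to start the my-project job; TOKEN matches the job's
# remote-trigger setting, and USER:APITOKEN authenticates the caller
curl -X POST --user 'USER:APITOKEN' \
    'https://jenkins.example.com/job/my-project/build?token=TOKEN'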

Again, these are make-do solutions and hardly make a project portable. A VCS does not need to be a decisive choice for any project; it's a housekeeping tool, not a house. The more cross-tool portability, the better. Integration flexibility is not Fossil's strength, alas... nor is it a development priority, alas^2.

(7) By anonymous on 2020-06-30 09:04:36 in reply to 6

Integration flexibility is not Fossil's strength, alas... nor is it a development priority

Fossil works well with the tools the core devs use. Also, Fossil is a side project for Fossil's lead dev, DRH, whose main project is SQLite.

So, why would Fossil's core devs work on plugins for tools they don't use?

For that matter, how many of the various plugins that make other tools support Git are maintained by Git core devs? I suspect none.

Besides the RSS feed, Fossil also has hooks for a post-commit script and a post-receive script. The scripts must be written in TH1 (a dialect of TCL), though they can call TCL. By default, however, these hooks are not enabled; you have to build Fossil yourself with the needed options to enable them.
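
As a sketch of what that build step looks like, assuming a Fossil source tree (--with-th1-hooks is the relevant configure flag, th1-hooks the run-time setting):

# rebuild Fossil with TH1 hook support compiled in...
./configure --with-th1-hooks
make
# ...then enable the hooks for a given repository
fossil settings th1-hooks 1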

As for why TH1/TCL: DRH already knew TCL. I don't see this choice as any worse than tools that require scripts to be written in Lua or Python.

(8) By Warren Young (wyoung) on 2020-07-01 23:23:39 in reply to 4

the best option in the long run is to consider other VCS solutions, with more proven CI bindings...

Or publish your Fossil repo to Git and then put the other tools that only speak to Git on the other side of that wall. As far as those other programs are concerned, your team is using Git, but your team is happily not using Git.

The doc has "GitHub" in its title, but only the interop parameters used as examples require GitHub. You can use it with other Git hosts by just changing the target URL and how you log in. It'll even work with a locally-hosted bare-bones Git repo behind SSH.
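
For illustration, a sketch of the one-time setup against such a host; the mirror path and remote URL are placeholders:

# create a Git mirror of this repo, pushing to an SSH-hosted Git remote
# on every export
fossil git export /path/to/git-mirror --autopush \
    ssh://git@git.example.com/myproject.git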

(9) By Iván Ávalos (avalos) on 2020-07-02 03:57:02 in reply to 8

I will definitely consider Git exports! I think that's currently the best and easiest solution.

(10) By Richard Hipp (drh) on 2020-07-02 12:19:57 in reply to 8

Why the "fossil git export" command exists

This was the original purpose of the "fossil git export" command - to export the SQLite repository to downstream consumers who make use of Git-based tooling. I implemented "fossil git" and started the SQLite GitHub mirror in response to a request from an SQLite support customer.

When I started that undertaking, there was already an existing "fossil export --git" command. But it had some quirks and limitations that made it difficult to use. So I started over from scratch with the new "fossil git" command, which I think is more reliable and intuitive.

Suggestions on how to make Fossil easier to connect to tooling?

I've been thinking about how we could make Fossil easier to connect into third-party tooling such as continuous integration and testing engines. My thought is that we should somehow enhance the backoffice feature to invoke admin-specified scripts when new check-ins arrive. In preparation for this, I've been working on improving the backoffice. (See recent check-ins.)

Would it be sufficient to simply invoke a script whenever a new check-in arrives? Should the script be limited to new check-ins on specific branches? What are other desired triggers for integration scripts? Arrival of new tickets or changes to a ticket? New tags on check-ins? Arrival of any new content? Whatever mechanism is chosen, it ought to be both simple and powerful. We do not want to develop an overly-complex system that is too difficult to use.

Security implications for pre-commit hooks

Do we also need pre-commit hooks? Since the commit occurs on the local developer's machine, pre-commit hooks would be advisory only. The local user can easily override them. Furthermore, there are security implications to pre-commit hooks. We don't want an evil site administrator to be able to install malicious hooks that a naive user would run automatically the next time they try to commit to the project. How can commit-hooks be distributed safely?

(11) By Warren Young (wyoung) on 2020-07-02 14:30:24 in reply to 10

make Fossil easier to connect into third-party tooling

Is it not the case that "fossil git autopush" is currently over-promising and under-delivering?


Git mirror:  /usr/local/src/pidp8i/git-mirror
Last export: 2020-06-21 16:23:50 (10.9 days ago)
Autopush:    https://tangentsoft@github.com/tangentsoft/pidp8i.git
Status:      1 check-in awaiting export
Exported:    2399 check-ins and 4331 file blobs

Then if you say "git log" in ../git-mirror, you get something like this:


commit 5fd3114b76f8f0f90fdc379f04542573257e3631 (HEAD -> master)
Author: ...
Date:   Sun Jun 21 16:23:50 2020 +0000

The latest commit on that repo was made about 30 minutes ago, but I don't currently expect it to hit GitHub until cron gets around to calling "fossil git export" for me, which only happens once daily here. (June 21 is the prior commit, which was so-pushed.)
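
The daily cron job amounts to something like this sketch (paths are hypothetical; "fossil git export" with no arguments reuses the previously-configured mirror):

# re-export and autopush the Git mirror once a day at 04:00
0 4 * * *  cd /usr/local/src/pidp8i && fossil git export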

Fixing this might be a good first addition to the backoffice: if the autopush URL is set, push first to the local mirror, and then push up to the remote, all in the background.

Should the script be limited to new check-ins on specific branches?

The script should get parameters that allow the script writer to make such decisions locally.

What are other desired triggers for integration scripts?

Doesn't that flow from the above? "post-commit" can refer to any blockchain modification, and we already have the notion of artifact type filtering. Pass "ci", "f" or whatever to the script, and it can make locally-intelligent decisions based on those parameters.

pre-commit hooks would be advisory only

Yes.

I've been thinking more about this, and such a policy is directly in line with the purposeful lack of "fossil lock" and the Public Library model of check-out, where only one person owns a file at any one time.

Rather than have enforcing pre-commit hooks, it should be possible to re-run the pre-commit hooks at any time to re-generate any complaints. This would allow a developer to say "yes, yes, yes; go away" to a pre-commit hook griping about code formatting or such, but then let a manager come along and say, "Oh, I see Joe's violating the code style guidelines again," after his morning run of the pre-commit hooks over the last day's commits tattles on Joe.

This pushes such matters from technological enforcement to human administration, where they ought to be.

How can commit-hooks be distributed safely?

I suspect that to cover a suitably-large subset of the wish list, you're going to be talked into supporting exec-as-in-Tcl, at which point security goes out the window, from Fossil's perspective. The full burden lands on the host OS.

Taking the code formatting nit-picker example again, that'll certainly have to be done by calling a third-party program.

I think it'd make sense to extend TH1 to support this. It doesn't currently have exec, but you can define procs in a context-dependent fashion, so that you still won't be able to exec from a skin header.

The ability to run hook scripts locally could be a default-off setting, not just for security reasons, but because J. Random Repo Cloner probably shouldn't be running your hook scripts just because they tried "fossil commit" on an anonymous clone, where autosync is effectively disabled. That can be left to local policy: actual developers are expected to run with hooks enabled.

(12) By Warren Young (wyoung) on 2020-07-02 15:53:54 in reply to 11

Is it not the case that "fossil git autopush" is currently over-promising and under-delivering?

On further reflection, I think there's just a missing feature: autoexport, in addition to autopush.

The point is, people antsy about the 1-minute resolution available by cronning a call to /timeline.rss to detect a new commit won't be any happier with running "fossil git export" once a minute.

(13) By Richard Hipp (drh) on 2020-07-02 16:03:01 in reply to 11

The script should get parameters that allow the script writer to make such decisions locally.

A single push might contain fragments of multiple commits on different branches, together with new tags, wiki pages, forum posts, and tickets. What kind of parameters can we send to a script to explain all of that?

Note in particular that you might get commit fragments. That is to say, the commit might have been too large for a single HTTP request (for better or worse, Fossil limits the size of HTTP requests to a few megabytes), so the sender might have broken it up into multiple HTTP requests, and then the operator might have pressed Ctrl-C after the first HTTP request but before the second. In that case the server would be left holding a partial check-in, possibly for a long time.

(14) By Warren Young (wyoung) on 2020-07-02 16:35:33 in reply to 13

Wait to run the scripts until all of the transactions are closed, so you can make /timeline.rss-like queries on the relational lookup tables to answer such questions.
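
For instance, once all transactions are closed, a post-commit script could ask the repository directly for the newest check-in. A sketch using "fossil sql" and the standard event/blob schema tables:

# print the hash of the most recent check-in in the repository
echo "SELECT blob.uuid FROM event JOIN blob ON blob.rid = event.objid
      WHERE event.type = 'ci'
      ORDER BY event.mtime DESC LIMIT 1;" | fossil sql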

There might need to be a new "xfer" message, where the client says, "That's all I've got. Anything you have to tell me now?" That's the server's clue that all transactions should be closed by now, so that reasonable parameters can be passed to the post-commit script, which can then run knowing it has a complete picture of the client's contributions.

(15) By anonymous on 2020-07-02 19:33:50 in reply to 14

Wait to run the scripts until all of the transactions are closed

What if there are multiple commits, the last of which got cut off?

If a manifest is received before all of the content artifacts are received, is there anything that inhibits that commit from appearing in the timeline?

I'm not sure what SQL the post-transfer script is allowed to run. Maybe it could use its own table to hold a reference to the most recent commit already seen, and maybe also a list of commits that were not complete the last time the post-xfer script ran.

(16) By Richard Hipp (drh) on 2020-07-03 21:05:00 in reply to 14

There might need to be a new "xfer" message, where the client says, "That's all I've got. Anything you have to tell me now?"

It's more complicated than that. One side does not typically tell the other side "everything it's got", because that can be a lot of artifact hashes sent over the wire. (The exception is when you use the --verily option.)

Nevertheless, I have made an enhancement now such that the server will know whether or not to expect more from the client. If the server does not send any "gimme" cards back to the client in its reply now (after the recent change) then we can safely assume that the sync is done and it is ok to run the post-receive hooks. At least, I think that is now the case.

I am still unclear on what the interface to the post-receive hooks should look like, though. I could use the Git documentation as a guide. But who is to say that Git got that right? Is there possibly a better interface? Suggestions, anyone?

(17) By anonymous on 2020-07-04 00:05:55 in reply to 16

If the server does not send any "gimme" cards back to the client in its reply now (after the recent change) then we can safely assume that the sync is done and it is ok to run the post-receive hooks.

What if the sync had multiple commits and only a portion of them are incomplete?

That is something that could happen when a client syncs after working offline or otherwise being unable to contact the server.

(18) By Warren Young (wyoung) on 2020-07-04 03:15:52 in reply to 17

Isn't the effect the same as if the post-commit process happens in the middle of a commit sequence?

In other words, given commits A, B, C locally, if A and B sync, post-commit happens on A + B, then later on C when that finally shows up. Whatever you've got going on in the post-commit hook, it needs to cope with any combination possible here: A + sync + B + sync + C + sync, A + B + C + sync, etc.

I suspect you're doomed in a DVCS world if you're in any way assuming you can control what a complete change set involves short of setting up a release or slush branch, so that you can reduce N changes to 1 merge.

(19) By anonymous on 2020-07-04 08:51:49 in reply to 18

if you're in any way assuming you can control what a complete change set involves

No such assumption.

There might only be fragments and no complete commits. Or there might be one or more. The only way to find out is to examine the received manifests.

I see no reason to delay running the post-receive script.

(20) By Warren Young (wyoung) on 2020-07-04 09:22:08 in reply to 19

If the commit is incomplete at post-commit time, there is no commit event: the hook script cannot get a commit ID for a commit that doesn’t exist yet. The hook will only get that commit’s ID after the commit is complete in the receiving repo.

(22) By anonymous on 2020-07-05 06:39:16 in reply to 20

The hook will only get that commit’s ID after the commit is complete in the receiving repo.

I never said that wasn't true.

What I mean is that when a client syncs and has multiple commits pending (like after working offline), a sync could be interrupted after some of the commits are complete but before all of the commits are complete.

Why delay running the hook script until after all of them are complete?

(23) By Warren Young (wyoung) on 2020-07-05 08:04:15 in reply to 22

I don’t even know what “all” means in the context of a DVCS that operates in AP mode, as Fossil does. “All” implies C, but the CAP theorem tells us we cannot get that if we want A and P.

If a remote developer makes 3 commits without syncing and only one later syncs, then I’d expect this feature to run on the one sync’d commit.

(21) By Iván Ávalos (avalos) on 2020-07-05 01:17:38 in reply to 1

Hi, everyone! I finally got my continuous deployment workflow running using Concourse CI, which turned out to be exactly what I needed! It runs builds in Docker containers!

I set up a pipeline that subscribes to the Fossil RSS feed and triggers the deployment job every time there's a new commit. I check out manually inside a container using the fossil CLI, execute the necessary steps, and finally rsync to my EC2 instance using EC2 Instance Connect. This is the pipeline.yml I use to build and deploy my personal website: https://fossil.avalos.me/personal-website/file?name=pipeline.yml&ci=tip
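
In shell terms, the task inside the container boils down to something like this sketch; the build command and deployment target are simplified placeholders (the real steps are in the pipeline.yml linked above):

# fetch the triggering check-in with the fossil CLI...
fossil clone https://fossil.avalos.me/personal-website website.fossil
fossil open website.fossil
# ...build the site (placeholder step)...
make
# ...then rsync the output to the EC2 instance (hypothetical host/path)
rsync -avz public/ deploy@ec2.example.com:/var/www/site/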

With this approach, I don't see the need to implement auto-triggering CI/CD stuff in Fossil, because, as DRH mentioned somewhere in this thread, it's not good for a fully distributed SCM to trigger stuff and run commands automatically. So, anyone looking for a modern, easy and convenient way of using CI/CD with Fossil, you can follow a similar approach.

I hope it is useful for someone. I don't know if it's the best approach, but it works for me. Thanks for your support, everyone! I really got a lot of ideas from your replies!

(24) By Iván Ávalos (avalos) on 2020-07-08 02:01:25 in reply to 21

Hello! I've written a "tutorial" for integrating Concourse CI with Fossil; I hope you find it useful: https://blog.avalos.me/2020/07/06/fossil-scm-and-concourse/ I would appreciate some feedback!

I'm planning on writing a Fossil resource_type for Concourse that automatically pulls a check-in and mounts it as an input for a task. I'll let you know when it's ready!

(25) By zilti on 2023-02-16 17:09:11 in reply to 24

Hi! I wanted to check out your Concourse resource type for Fossil today, but unfortunately, sometime today the repository went down...