Fossil Forum

issue: ssh -T does not close after sync operation (v. 2.15)

(1) By Art Eschenlauer (eschen42) on 2023-05-05 20:03:35

Every time I do a sync operation over SSH, two processes remain running on my local system:

/bin/sh -c ssh -e none -T 'me@myhost' fossil test-http /home/me/fossils/myrepo.fossil
\_ ssh -e none -T me@myhost fossil test-http /home/me/fossils/myrepo.fossil

This is true whether my local system is macOS (Fossil version 2.15) or Ubuntu Linux (Fossil version 2.15).

Eventually these processes get reaped, but they make for messy ps listings.
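A quick way to spot such leftovers (a sketch; the pattern matches the fossil test-http command line shown above):

```shell
# List any leftover ssh clients spawned by a fossil sync; after a clean
# close, nothing should match. pgrep -a (procps) prints the full command.
pgrep -af 'ssh .*test-http' || echo "no leftover ssh processes"
```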

(2) By Andy Bradford (andybradford) on 2023-05-05 20:47:18 in reply to 1

> Every time I do a sync operation over SSH, two processes remain
> running on my local system:

I don't see that happening on my system (OpenBSD) using Fossil [99b09b9476], so it doesn't appear to be a generic problem. What state are the processes in? On Linux, can you strace the SSH process?
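A sketch of the checks being asked for here ($$, this shell's own PID, stands in for the ssh process id):

```shell
# Show a process's state: in the STAT column, 'Z' marks a zombie waiting
# to be reaped, 'S' a process sleeping (e.g. blocked on I/O).
ps -o pid,ppid,stat,args -p $$

# On Linux, strace shows what a lingering ssh client is blocked on
# (substitute the real PID from the ps listing):
# strace -f -p PID -e trace=read,write,close
```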

Thanks,

Andy

(3) By Warren Young (wyoung) on 2023-05-05 20:54:40 in reply to 1

That version is over 2 years old. Maybe your bug has been fixed since then?

It isn't happening here with macOS 13 as the client and CentOS 8 as the server.

(4) By Richard Hipp (drh) on 2023-05-05 21:04:06 in reply to 3

I have no memory of fixing any such bug. Even so, I think Warren's advice to upgrade is sound. If nothing else, it will help us to recreate the problem.

(5) By Art Eschenlauer (eschen42) on 2023-05-13 23:03:40 in reply to 3

I observed this on Windows with version

This is fossil version 2.21 [f9aa474081] 2023-02-25 19:23:39 UTC

(6) By Art Eschenlauer (eschen42) on 2023-05-14 15:46:03 in reply to 5

I observed this on macOS with version

This is fossil version 2.21 [f9aa474081] 2023-02-25 19:23:39 UTC

(7.1) By Warren Young (wyoung) on 2023-05-14 17:40:04 edited from 7.0 in reply to 6

These macOS and Windows systems are client machines? What is "myhost" in this example? OS and version, Fossil version, SSH version, etc.?

Realize that your local Fossil instance still must talk to a remote Fossil instance when syncing over SSH. If you have two different installations in your PATH (e.g. one installed from the OS's package manager in /usr/bin and one built from source in /usr/local/bin), you might not be running the one you expect over SSH, since sshd can set a different PATH than your remote shell profile does.
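The PATH divergence described here can be checked directly; a minimal local simulation (the ssh line is a sketch using the hypothetical me@myhost from this thread):

```shell
# A non-interactive command skips the interactive startup files, so the
# PATH the remote command sees can differ from your login PATH.
# env -i starts sh with only the variables we pass in:
env -i PATH=/usr/bin:/bin sh -c 'echo "restricted PATH: $PATH"'

# Over ssh, compare both sides the same way:
# echo "$PATH"
# ssh me@myhost 'echo $PATH; command -v fossil'
```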

(8.1) By Art Eschenlauer (eschen42) on 2023-05-15 12:37:29 edited from 8.0 in reply to 7.1

Ah! Thank you, Warren. That's really helpful!

On myhost

This is fossil version 2.17 [f48180f2ff] 2021-10-09 14:43:10 UTC

So, I will try with a current version.


Here are my updated results:

(base) Arthurs-MacBook-Pro:src art$ fossil version
This is fossil version 2.21 [f9aa474081] 2023-02-25 19:23:39 UTC

(base) Arthurs-MacBook-Pro:src art$ which -a fossil
/Users/art/bin/fossil

(base) Arthurs-MacBook-Pro:src art$ ssh me@myhost fossil version
This is fossil version 2.21 [f9aa474081] 2023-02-25 19:23:39 UTC

(base) Arthurs-MacBook-Pro:src art$ ssh me@myhost which -a fossil
/home1/eschen42/perl5/bin/fossil

(base) Arthurs-MacBook-Pro:src art$ ssh me@myhost uname -a
Linux shared.mywebhost.com 4.19.150-76.ELK.el7.x86_64 #1 SMP Wed Oct 7 01:33:43 CDT 2020 x86_64 x86_64 x86_64 GNU/Linux

(base) Arthurs-MacBook-Pro:src art$ f sync
Sync with ssh://me@myhost//home/me/fossils/myrepo.fossil
Round-trips: 1   Artifacts sent: 0  received: 0
Sync done, wire bytes sent: 1110  received: 216  remote: myhost

(base) Arthurs-MacBook-Pro:finance art$ ps -ef | sed -n -e '1 p; /ssh/p; d'
  UID   PID  PPID   C STIME   TTY           TIME CMD
  501  7668     1   0  3May23 ??         0:00.36 /usr/bin/ssh-agent -l
  501 77158     1   0  7:29AM ttys004    0:00.07 ssh -e none -T -- me@myhost fossil test-http /home/me/fossils/myrepo.fossil
  501 77163 68324   0  7:30AM ttys004    0:00.00 sed -n -e 1 p; /ssh/p; d

So, local and remote both seem to be running v2.21, and each sync still leaves a residual ssh process behind.

(9) By Warren Young (wyoung) on 2023-05-15 20:17:49 in reply to 8.1

At this point, I'd look into whatever non-default things myhost has set in /etc/ssh/*. Strange things are in evidence there. /home1? Fossil installed under the Perl bin dir? Strange things, I tell you.

(10) By Art Eschenlauer (eschen42) on 2023-05-16 03:26:51 in reply to 9

I will see what I can see, but this is a "virtual private server", so perhaps I cannot see all. I had to put fossil under my own bin directory so that it would be on the path for ssh. I will try ssh -v -v -v to see what I can uncover. But, I accept that this could be a direct result of the fact that I am connecting with a VPS.

(11) By Martin Gagnon (mgagnon) on 2023-05-16 04:47:03 in reply to 10

Out of curiosity: in your original post, the ssh process listed looks like:

/bin/sh -c ssh -e none -T 'me@myhost' fossil test-http /home/me/fossils/myrepo.fossil

In my case (on macOS 13.3.1 and Ubuntu 18.04, with the ssh-command setting unset), it looks like this:

ssh -e none -T -- me@myhost fossil test-http /path/to/repository.fossil

I'm not saying that it's wrong, but I wonder why it's different. Did you set a custom "ssh-command" (using fossil set ssh-command)? Or do you have some kind of shell wrapper involved when you call "ssh"?

(12) By Martin Gagnon (mgagnon) on 2023-05-16 04:59:38 in reply to 11

Found an explanation.

The change was made around version 2.16, in this checkin.

And I just noticed your more recent post, where your test with version 2.21 already shows the newer ssh invocation.

Sorry for the noise.

(13) By Warren Young (wyoung) on 2023-05-16 16:05:07 in reply to 10

The key distinction between a VPS and a shared host is that you get root access in a VPS. If this "/home1" stuff is an indication that other users have logins on the same OS instance, and there are enough of them that they need multiple "home" volumes, then you haven't got a VPS, you've got a shared server.

I make the distinction because if you have a VPS, its internal contents are your responsibility. If you misconfigured it, you also have the power to fix it. If it's a shared server, your host's administrator is responsible for getting things working, in which case you have cause to file a support ticket with them.