Ticket Hash: 40d9dbd3ade39488a6786cd4588c0915974a9961
Title: ssh scheme incompatible with tcsh
Status: Open
Type: Code_Defect
Severity: Minor
Priority:
Subsystem:
Resolution: Open
Last Modified: 2011-03-28 19:29:18
Version Found In: cdc4249268
Description:
It looks like fossil is relying on some sh(1)- or bash(1)-specific semantics when it comes to syncing via ssh. Maybe it could use something like:

<verbatim>/bin/sh -c 'exec /path/to/fossil'</verbatim>

or threads?  Here's what's currently happening with tcsh:

<verbatim>% fs remote-url 'ssh://usr@remote-host.example.com:4321/src/.fossils/proj.fossil?fossil=/usr/local/bin/fossil'
password for usr: 
ssh://usr:*@remote-url.example.com:4321/src/.fossils/proj.fossil?fossil=/usr/local/bin/fossil
% fs pull
Server:    ssh://usr@remote-host.example.com:4321/src/.fossils/proj.fossil?fossil=/usr/local/bin/fossil
ssh -p 22 -t -e none -p 4321 usr@remote-url.example.com
Pseudo-terminal will not be allocated because stdin is not a terminal.
fossil: ssh connection failed: [Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.]
Killed by signal 2.
Exit 1
% fs set ssh-command 'ssh -p 4321 -T -e none'
% fs pull
Server:    ssh://usr@remote-host.example.com:4321/src/.fossils/proj.fossil?fossil=/usr/local/bin/fossil
ssh -p 22 -T -e none -p 4321 usr@remote-url.example.com
fossil: ssh connection failed: [Warning: no access to tty (Bad file descriptor).]
Killed by signal 2.
Exit 1
% fs set ssh-command
ssh-command          (local)  ssh -p 4321 -T -e none
% fs sync
Server:    ssh://remote-host.example.com/src/.fossils/project.fossil
ssh -p 22 -T -e none -p 4321 remote-host.example.com
fossil: ssh connection failed: [Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.]
Killed by signal 2.
Exit 1</verbatim>

Both client and server are running the same version of fossil. When I change the remote login shell to sh, I get the following output:

<verbatim>% fs sync
Server:    ssh://usr@remote-host.example.com:4321/src/.fossils/proj.fossil
ssh -p 22 -T -e none -p 4321 remote-host.example.com
                Bytes      Cards  Artifacts     Deltas
Sent:            4005         85          0          0
fossil: server says: 501 Not Implemented
Total network traffic: 2341 bytes sent, 0 bytes received
Closing SSH tunnel: Killed by signal 2.</verbatim>

When I sync via http over an ssh tunnel, things work just fine:

<verbatim>% fs remote-url http://usr@127.0.0.1:9099/
password for usr:
http://usr@127.0.0.1:9099/
% fs sync
Server:    http://usr@127.0.0.1:9099/
                Bytes      Cards  Artifacts     Deltas
Sent:            4005         85          0          0
Received:        3820         83          0          0
Total network traffic: 2368 bytes sent, 1275 bytes received</verbatim>

<hr /><i>joerg added on 2011-03-25 01:44:09 UTC:</i><br />
This is essentially a bug in tcsh. Consider running
<verbatim>
echo 'echo test' | tcsh -l > test.dmp 2>&1
</verbatim>

That should start a non-interactive shell, since none of stdin, stdout, or stderr is a tty. As a result, all start-up scripts and the shell itself should be silent.
tcsh doesn't do that, and the resulting output confuses fossil. It works with sftp and scp since they use ssh's subsystem mechanism and don't actually get a shell. The message about job control should be suppressed for the !isatty case too.


<hr /><i>anonymous claiming to be seanc added on 2011-03-28 19:29:18 UTC:</i><br />
I disagree that this is an external bug.

fossil could/should explicitly invoke /bin/sh as the command to execute on the remote side, because fossil requires a particular shell, in this case sh.

Or, if that's not possible, have it send 'exec /bin/sh' as its first command upon establishing an ssh connection. Something like (one of the two options should work; dry-coded, unfortunately):

<verbatim>Index: src/http_transport.c
===================================================================
--- src/http_transport.c
+++ src/http_transport.c
@@ -147,17 +147,23 @@
     }else{
       zHost = mprintf("%s", g.urlName);
     }
     blob_append(&zCmd, " ", 1);
     shell_escape(&zCmd, zHost);
+    blob_append(&zCmd, " ", 1);
+    shell_escape(&zCmd, "/bin/sh");
     printf(" %s\n", zHost);  /* Show the conclusion of the SSH command */
     free(zHost);
     popen2(blob_str(&zCmd), &sshIn, &sshOut, &sshPid);
     if( sshPid==0 ){
       fossil_fatal("cannot start ssh tunnel using [%b]", &zCmd);
     }
     blob_reset(&zCmd);
+
+    /* Start sh(1) */
+    fprintf(sshOut, "exec /bin/sh\n");
+    fflush(sshOut);
 
     /* Send an "echo" command to the other side to make sure that the
     ** connection is up and working.
     */
     fprintf(sshOut, "echo test\n");

</verbatim>