Dan Kaminsky dan at
Thu Aug 8 11:09:56 EST 2002

>Well, right.  Linux/Solaris/others don't close the pty when the session
>leader exits (I guess that's the problem).  That's why the option was
>called 'AllowDataLossOnPty' by Markus.  *IT ALLOWS DATA LOSS*.  You
>still have an open fd you can read from, yet you're going to force it
>closed.  If you want to be totally correct, you trust that the OS does
>the correct thing and you exit only when there is no more data to read.
There are two problems:

1) A client goes away, but the server doesn't realize it
2) A server process finishes, but the client doesn't realize it

The former problem is trivial to solve.  Send SSH ignore-style packets, which 
MUST (in the IETF sense) elicit responses.  If n things that MUST happen 
fail to occur, then the other side is dead and all of its resources can be 
dropped in the bucket -- and indeed should be.  SSH keepalives may be 
overloaded to do this.

Conveniently enough, a dead socket and a disconnected client both fail 
to elicit responses.
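The bookkeeping this implies is tiny.  Here's a minimal sketch (names and the miss threshold are my own, not OpenSSH's): count probes that MUST be answered, reset on any reply, and declare the peer dead once too many go unanswered.

```python
# Sketch of keepalive liveness tracking (illustrative, not OpenSSH code).
MAX_MISSES = 3  # probes allowed to go unanswered before the peer is dead

class PeerLiveness:
    def __init__(self):
        self.outstanding = 0  # probes sent but not yet answered

    def send_probe(self):
        """Send a must-answer probe.  Returns False once the peer
        has missed more than MAX_MISSES probes in a row -- at that
        point its resources should be torn down."""
        self.outstanding += 1
        return self.outstanding <= MAX_MISSES

    def got_reply(self):
        # Any answer proves the peer is alive; reset the miss counter.
        self.outstanding = 0

peer = PeerLiveness()
assert all(peer.send_probe() for _ in range(MAX_MISSES))  # still trusted
peer.got_reply()            # a reply resets the count
assert peer.send_probe()    # so the peer stays trusted
```

A dead socket and a disconnected client look identical to this counter, which is exactly the point: both stop answering, both get reaped.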

Once those packets are gone, there *cannot* be any more data from the 
file descriptor, because the provider of the packets is quite dead.  If 
it's dead, you don't trust it.  If you don't trust it, you cannot depend 
on resources authenticated by it.  Since the "open FD" was authenticated 
by a now-untrusted agent (and we know it's untrusted because it's no longer 
responding to our secured segments), security demands it be closed.

Now, the latter problem may be more complicated.  For whatever reason, 
an application executed on the local command line may exit just 
fine, while the same one executed using "ssh user@host command" doesn't. 
If I may ask, what is it that creates the exit condition locally that 
we aren't emulating remotely?

I'd like to not kill machines just by running netcat over a command forward.
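The netcat case reduces to a descriptor-inheritance problem, which can be shown with plain pipes (a hypothetical sketch, not OpenSSH code): the immediate command exits, but a long-lived child it spawned still holds the write end of the output channel, so the reader never sees EOF.

```python
# Demonstration: a reader blocks until the *last* holder of the write
# end closes it -- even after the process it launched has exited.
import os
import time

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # "session" process: spawn a long-lived grandchild, then exit.
    os.close(r)
    if os.fork() == 0:
        # grandchild keeps the inherited write end open (netcat analogue)
        time.sleep(1)
        os._exit(0)
    os._exit(0)  # the immediate command is done

os.close(w)
os.waitpid(pid, 0)     # the session process has already exited...
data = os.read(r, 1)   # ...but this blocks until the grandchild dies,
                       # then returns b'' (EOF)
```

Whether the server should wait for that EOF (correct, but hangs on netcat) or force the channel closed once the session leader is gone (prompt, but 'AllowDataLossOnPty') is precisely the tradeoff under discussion.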

Yours Truly,

    Dan Kaminsky
    DoxPara Research

More information about the openssh-unix-dev mailing list