ServerAliveCountMax (and Client) waits for TCP timeout before process exit

Darryl L. Miles darryl-mailinglists at netbauds.net
Fri Jan 10 03:48:37 EST 2014



I am of the opinion that ClientAliveCountMax should really force a 
disconnection from the testing side as soon as another keepalive probe 
would push the unanswered count past the maximum.

But at present it appears that a TCP timeout must also occur from that 
point before the process/tty closes.



For SSH client options:
  -o ServerAliveInterval=60
  -o ServerAliveCountMax=3
These options should cause the client to force an immediate 
disconnection at 240 seconds, when the 4th retry would have been 
attempted.
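
For illustration (the host name below is just a placeholder), the full 
client invocation I have in mind looks like this:

  ssh -o ServerAliveInterval=60 \
      -o ServerAliveCountMax=3 \
      user@example.com

The same two settings could equally go into ~/.ssh/config as 
ServerAliveInterval / ServerAliveCountMax entries.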


For SSH server options:
  -o ClientAliveInterval=60
  -o ClientAliveCountMax=3
These options should cause the server to force an immediate 
disconnection at 240 seconds, when the 4th retry would have been 
attempted.
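
For illustration, the equivalent sshd_config entries (rather than 
command-line -o options) would be:

  ClientAliveInterval 60
  ClientAliveCountMax 3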



Can anyone confirm whether this was/is the intention of this feature?

That way the client/server administrator has better control over the 
timescales for recovery.



At the moment, in my usage of this feature, once a timeout has occurred 
the SSH server/SSH client appears to wait for a TCP socket timeout to 
occur, which is approximately 15 minutes after TCP backoff and 
retransmissions, etc.  So in the above configuration it takes up to 19 
minutes (4 minutes for SSH to notice and 15 minutes for TCP to time out).
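
For what it is worth, on Linux that ~15 minute figure appears to come 
from the kernel's retransmission limit for established connections.  As 
a crude, system-wide workaround (not SSH-specific, and assuming a Linux 
host) that limit can be inspected and lowered:

  # show the current retransmission limit (the default of 15
  # works out to roughly 15 minutes of exponential backoff)
  sysctl net.ipv4.tcp_retries2

  # abandon a dead connection much sooner (~100 seconds at 8)
  sysctl -w net.ipv4.tcp_retries2=8

Of course that affects every TCP connection on the machine, which is 
exactly why a per-connection disconnect driven by ServerAliveCountMax / 
ClientAliveCountMax would be preferable.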

Waiting for the TCP timeout sets a floor on the minimum time it takes 
to recover a connection, and I can not think of a use case for it.  
Can you?

The scenario is that connectivity between the IPs has been lost, or 
changed, such that no TCP RST packets will arrive during the TCP 
retransmissions.


NOTE: It was some weeks ago that I tested this theory out, and it has 
taken me until now to get around to writing an email to the list; I 
hope I got the details right.


Thanks,

Darryl


