[Bug 1286] SFTP keeps reading input until it runs out of buffer space

bugzilla-daemon at mindrot.org bugzilla-daemon at mindrot.org
Mon Feb 19 03:53:51 EST 2007


http://bugzilla.mindrot.org/show_bug.cgi?id=1286

           Summary: SFTP keeps reading input until it runs out of buffer
                    space
           Product: Portable OpenSSH
           Version: v4.5p1
          Platform: All
        OS/Version: Linux
            Status: NEW
          Keywords: patch
          Severity: normal
          Priority: P2
         Component: sftp
        AssignedTo: bitbucket at mindrot.org
        ReportedBy: thuejk at gmail.com


I had a problem with the sshfs connection dying all the time. I have
tracked the problem down to this check in
buffer.c:buffer_append_space():

        if (newlen > BUFFER_MAX_LEN)
                fatal("buffer_append_space: alloc %u not supported",
                    newlen);

The problem is that when sending file data, sshfs just keeps sending
without waiting for the server to catch up. That need not be a
problem in itself: the sftp server should simply stop reading from
the connection when its buffers are full, letting TCP flow control
slow the sender down automatically.

However, the openssh sftp server loop (openssh:sftp-server.c) just
keeps trying to read from stdin. Each loop iteration reads from stdin
at most once and dispatches at most one sftp packet, but a single
read can pull up to 8 sftp packets into the input buffer, so the
input buffer can grow faster than it drains. When it finally runs out
of space, the server kills itself via the fatal() in
openssh:buffer.c:buffer_append_space(). A simplified sketch of the
loop shape follows.
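To make the failure mode concrete, here is a minimal compilable
sketch of that loop shape. It is not the literal sftp-server.c code:
the real loop uses OpenSSH's Buffer API and a dispatch table, and the
helpers below are simplified stand-ins.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

static char  *iqueue;           /* stand-in for OpenSSH's Buffer iqueue */
static size_t iqueue_len;

static void iqueue_append(const char *data, size_t len)
{
        /* The real buffer_append_space() calls fatal() when the
         * buffer would grow past BUFFER_MAX_LEN; that is the crash
         * observed here. */
        char *p = realloc(iqueue, iqueue_len + len);
        if (p == NULL) {
                fprintf(stderr, "fatal: buffer_append_space\n");
                exit(1);
        }
        iqueue = p;
        memcpy(iqueue + iqueue_len, data, len);
        iqueue_len += len;
}

static void process_one_packet(void)
{
        /* Dispatch at most one sftp packet from iqueue (elided). */
}

int main(void)
{
        char buf[4 * 4096];
        fd_set rset;
        ssize_t len;

        for (;;) {
                FD_ZERO(&rset);
                /* Flaw: stdin is always polled for reading, no
                 * matter how large iqueue has already grown. */
                FD_SET(STDIN_FILENO, &rset);
                if (select(STDIN_FILENO + 1, &rset, NULL, NULL, NULL) < 1)
                        continue;

                /* One read per iteration, which may pull several
                 * whole sftp packets into the queue at once... */
                len = read(STDIN_FILENO, buf, sizeof(buf));
                if (len <= 0)
                        break;
                iqueue_append(buf, (size_t)len);

                /* ...but only one packet is dispatched per
                 * iteration, so iqueue can grow without bound. */
                process_one_packet();
        }
        return 0;
}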

I note that the openssh sftp client has a mechanism to limit the
number of unanswered requests, which probably means that, unlike
sshfs, it is not affected. That mechanism amounts to roughly the
pattern sketched below.
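A sketch of that pattern, with illustrative names and an illustrative
limit value rather than the actual ones from sftp-client.c:

#define MAX_OUTSTANDING 16      /* illustrative limit */

/* Hypothetical helpers standing in for the client's real machinery. */
extern int  have_data_to_send(void);
extern void send_next_request(void);
extern void wait_for_one_reply(void);

void
send_loop(void)
{
        unsigned int outstanding = 0;   /* sent but unacknowledged */

        while (have_data_to_send() || outstanding > 0) {
                /* Issue new requests only while under the limit. */
                while (outstanding < MAX_OUTSTANDING &&
                    have_data_to_send()) {
                        send_next_request();
                        outstanding++;
                }
                /* Block for one acknowledgement; this paces the
                 * sender so the server's queues stay bounded. */
                wait_for_one_reply();
                outstanding--;
        }
}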

I found that a reliable way to trigger this bug was to start a
parallel process which hogs the disk, causing the consuming sftp
server loop to slow down:

    sshfs-mount localhost:/tmp test
    ionice -c1 cp large_file /tmp/deleteme
    dd if=/dev/zero of=test/deleteme2

The error I get is:

    dd: writing to `test/deleteme2': Input/output error
    dd: closing output file `test/deleteme2': Transport endpoint is not connected

It is somewhat a matter of opinion, but I would say that it is
openssh and not sshfs which should be fixed. If I look at
http://www.openssh.org/txt/draft-ietf-secsh-filexfer-02.txt (which,
however, does not seem to be the protocol actually implemented by
openssh sftp, AFAICT), it says:

   There is no limit on the number of outstanding (non-acknowledged)
   requests that the client may send to the server.  In practice this
   is limited by the buffering available on the data stream and the
   queuing performed by the server.  If the server's queues are full,
   it should not read any more data from the stream, and flow control
   will prevent the client from sending more requests.  Note, however,
   that while there is no restriction on the protocol level, the
   client's API may provide a limit in order to prevent infinite
   queuing of outgoing requests at the client.

Two versions of a patch to fix this are attached, one for 4.3p2 and
one for 4.5p1. The patch stops reading new data from stdin when the
input buffer is full. As a bonus, it also handles the case where the
output buffer is overwhelmed. The approach is sketched below.
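In sketch form, applied to the toy loop above, the fix changes the
select() setup so stdin is polled only while there is room to buffer
another full read (the limit macro is illustrative; the attached
patch works against the real Buffer API in sftp-server.c):

#define IQUEUE_LIMIT    (10 * 1024 * 1024)      /* illustrative cap */

        FD_ZERO(&rset);
        /* Only poll stdin while the input queue has room for one
         * more full read; otherwise stop reading, let the kernel
         * socket buffers fill, and let TCP flow control throttle
         * the client. */
        if (iqueue_len + sizeof(buf) < IQUEUE_LIMIT)
                FD_SET(STDIN_FILENO, &rset);

        /* The same idea handles the overwhelmed output buffer: poll
         * stdout for writing only while the output queue is
         * non-empty, and stop dispatching new requests while it is
         * over its limit. */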



