SFTP and outstanding requests
Chris Rapier
rapier at psc.edu
Fri Apr 27 02:28:53 EST 2007
Damien Miller wrote:
> On Wed, 25 Apr 2007, Chris Rapier wrote:
>
>> Quick note of clarification. I know you can change this from the command
>> line with -R. Mostly I'm just wondering if there are situations where
>> increasing it might cause problems.
>
> Not that I am aware of. The current value is fairly arbitrary and was based
> IIRC on a value that produced a good transfer speed between a host in
> Melbourne and one in Singapore.
>
> The default could easily be cranked up or down if you can present some
> evidence to justify it.
>
> -d
Damien,
Well, currently the default shouldn't be a problem in almost any
situation. As far as I can tell it pretty much acts like a flow control
buffer with (by default) a 512K window (16 x 32K). So when you layer that
on top of SSH's flow control and on top of TCP's, as long as it's not set
to the minimum value it shouldn't act as a bottleneck.
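For concreteness, here is the arithmetic behind that window and the throughput ceiling it implies on a long path (the 100 ms RTT is just an illustrative assumption, not a figure from the thread):

```shell
# Default SFTP window: 16 outstanding requests x 32K per request
window=$(( 16 * 32 * 1024 ))
echo "$window"            # 524288 bytes = 512K

# With a full window always in flight, throughput is capped near
# window / RTT. Assuming a 100 ms RTT (i.e. 10 windows per second):
echo $(( window * 10 ))   # 5242880 bytes/s, roughly 5 MB/s
```

That ceiling is why the default window only becomes a bottleneck once the bandwidth-delay product of the path exceeds 512K.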
It does end up being problematic with the HPN patch (boosting it to 256
gave me 32MB/s, where before it was maxing out at 4MB/s), but I can
probably address that with user education. There is another, more
complicated, method that wouldn't tax low-memory systems, but I'm not
even going to think about that until I get back.
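Since -R controls the outstanding-request count, the kind of invocation I mean looks like this (the hostname and path are placeholders, not from a real run):

```shell
# Hypothetical high-BDP transfer with the request count raised to 256:
#   sftp -R 256 user@far.example.org:/data/big.iso .
# 256 outstanding requests x 32K each buys an 8 MB in-flight window:
echo $(( 256 * 32 * 1024 ))   # 8388608 bytes
```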
What I am curious about, and maybe you can help point me to the right
portion of the code, is what happens when transferring multiple files in
SFTP (SCP as well). If you look at outstanding-data graphs (derived by
tcptrace from a tcpdump) it seems that between each file something
happens that causes the network to drain completely, and then there is a
2-RTT pause before the next file gets sent out. I can put a copy of the
data somewhere if you want to look at it. If I can get a better
understanding of what is happening there, I can at least explain to my
users why they should use a tar pipe when they have many small files.
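The tar-pipe workaround I'd point them at looks like this (remotehost and /dest are placeholders; the local pipe at the end just demonstrates the streaming, not a network transfer):

```shell
# One continuous stream instead of a per-file open/write/close exchange,
# so the inter-file drain never happens:
#   tar cf - smallfiles/ | ssh remotehost 'tar xf - -C /dest'

# Local demonstration of the same pipe:
d=$(mktemp -d)
mkdir "$d/smallfiles"
touch "$d/smallfiles/a" "$d/smallfiles/b" "$d/smallfiles/c"
tar cf - -C "$d" smallfiles | tar tf - | wc -l   # 4 entries: dir + 3 files
```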
Chris
More information about the openssh-unix-dev mailing list