Yet another question on window computations
JCA
1.41421 at gmail.com
Sun Nov 4 08:12:40 EST 2007
A couple of weeks ago I made an inquiry about window computation
details that has so far gone unanswered - unsurprisingly so, I now
realize, for it was way too involved. Let me try again with a
simplified version, in the hope that some nice OpenSSH developer could
please provide an answer.
What is the rationale underpinning the sending of a window adjust
packet, as implemented in channels.c? Until recently, the condition
was
packet, as implemented in channels.c? Until recently, the condition
    c->local_window < c->local_window_max/2 &&
        c->local_consumed > 0
where, if I understand it correctly, local_window is the current size
of the window, local_window_max is the maximum window size initially
agreed on by client and server, and local_consumed is the number of
bytes already consumed in the current window at a given time. That is,
local_window + local_consumed = local_window_max; it would seem that
every time a window adjust packet is sent, the size of the window gets
reset to local_window_max.
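For concreteness, here is a minimal sketch of my mental model of this
bookkeeping. The field names mirror those in channels.c, but the
struct and the two functions are simplified stand-ins I made up for
illustration, not the real code:

    struct channel_window {
            unsigned int local_window;      /* bytes we may still receive */
            unsigned int local_window_max;  /* window initially advertised */
            unsigned int local_consumed;    /* bytes eaten since last adjust */
    };

    /* Data arrives on the channel: the window shrinks and the consumed
     * count grows, preserving the invariant
     * local_window + local_consumed == local_window_max. */
    void
    window_consume(struct channel_window *c, unsigned int nbytes)
    {
            c->local_window -= nbytes;
            c->local_consumed += nbytes;
    }

    /* A window adjust for local_consumed bytes goes out: the peer may
     * now send that many more bytes, so the window returns to
     * local_window_max and the consumed count restarts at zero. */
    void
    window_adjust_sent(struct channel_window *c)
    {
            c->local_window += c->local_consumed;
            c->local_consumed = 0;
    }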
Why local_window < local_window_max/2? Why not local_window == 0, or
even local_window <= 0? Also, why local_consumed > 0? Isn't
local_consumed reset to 0 whenever a window adjust packet is sent (I
am looking at this from the point of view of the client), and
incremented afterwards?
In the latest OpenSSH release the condition has been changed to
    ((c->local_window_max - c->local_window >
        c->local_maxpacket*3) ||
        c->local_window < c->local_window_max/2) &&
        c->local_consumed > 0
Why the new conditional? What issues does it address that are not
addressed by the older, simpler version?
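To make the difference between the two tests concrete, here is a small
stand-alone program I put together; the numbers are invented, but they
illustrate a case where the new condition fires while the old one does
not (a large window of which only a few maximum-size packets' worth
has been consumed):

    #include <stdio.h>

    int
    main(void)
    {
            unsigned int local_window_max = 1024 * 1024; /* 1 MB window   */
            unsigned int local_maxpacket  = 16 * 1024;   /* 16 KB packets */
            unsigned int local_consumed   = 64 * 1024;   /* 64 KB eaten   */
            unsigned int local_window     = local_window_max - local_consumed;

            int old_cond = local_window < local_window_max / 2 &&
                local_consumed > 0;
            int new_cond = ((local_window_max - local_window >
                local_maxpacket * 3) ||
                local_window < local_window_max / 2) &&
                local_consumed > 0;

            /* The window is still well above half full, so the old test
             * is false; but more than three max-size packets have been
             * consumed, so the new test is true.  Prints "old: 0  new: 1". */
            printf("old: %d  new: %d\n", old_cond, new_cond);
            return 0;
    }

If I read this right, the new test asks for an adjust as soon as a few
maximum-size packets' worth of window has been consumed, even while
the window is still far from half empty - but I may well be missing
the point, hence the question.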
My apologies if these are trivial questions. I am trying to
understand why OpenSSH does things the way it does, and as far as the
windowing code is concerned, although the C code is simple enough, the
motivation behind it is not at all clear to me - hence my appeal to
the OpenSSH developers for clarification.