[Bug 897] scp doesn't clean up forked children when processing multiple files

bugzilla-daemon at mindrot.org
Thu Jul 22 16:55:09 EST 2004


http://bugzilla.mindrot.org/show_bug.cgi?id=897

------- Additional Comments From s.riehm at hse24.de  2004-07-22 16:55 -------
Hi Ben,
you will only see the behaviour I mentioned if you copy multiple files from a remote directory to a
local one. You should also make sure the files are large enough to keep scp busy for a while; as soon
as scp has finished, the OS cleans up and there's no record that anything was amiss.

Try something like this:

scp user@host:largefile1 user@host:largefile2 user@host:largefile3 \
    user@host:largefile4 user@host:thebiggestfileyouvegot ~/tmp

Then watch your processes with "ps -u" from another terminal. By the time scp is copying
thebiggestfileyouvegot you will see 4 "(scp)" processes. These are terminated child processes (they
finished their act properly and have left the building) which are waiting for the parent (scp) to pick
them up at the backstage door and check their exit codes. Such processes are called zombies, and they
are nothing more than an entry in the process table. The problem for me was that OS X (Panther) only
allows 100 processes per user by default: copy about 80 files in a single scp command and you'll get
the error "fork: Resource temporarily unavailable".

I hit this because I'm syncing an image library of around 150,000 images...

The patch waits for each ssh child to finish before starting the next one, thus preventing the
accumulation of zombies.
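
For anyone who wants to see the pattern in isolation, here's a minimal C sketch of the idea - not
the actual patch (in scp the child execs ssh to perform the transfer; the file names and the printf
here are just stand-ins):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Stand-ins for the remote files scp would fetch one at a time. */
    const char *files[] = { "largefile1", "largefile2", "thebiggestfileyouvegot" };
    size_t nfiles = sizeof(files) / sizeof(files[0]);

    for (size_t i = 0; i < nfiles; i++) {
        pid_t pid = fork();
        if (pid == -1) {
            perror("fork");   /* EAGAIN once the per-user process limit is hit */
            exit(1);
        }
        if (pid == 0) {
            /* Child: the real scp would exec ssh here to do the copy. */
            printf("copying %s\n", files[i]);
            fflush(stdout);
            _exit(0);
        }
        /* Parent: reap this child before forking the next one, so finished
         * children never linger as "(scp)" zombies in the process table. */
        int status;
        while (waitpid(pid, &status, 0) == -1) {
            if (errno != EINTR) {   /* retry only if interrupted by a signal */
                perror("waitpid");
                break;
            }
        }
    }
    return 0;
}

An alternative would be a single wait() loop after the last fork, but reaping each child before
starting the next keeps the per-user process count flat no matter how many files are on the
command line.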

If you still don't see this behaviour, I'd be most interested in your exact environment. This handling
of child processes is perfectly normal Unix behaviour.



------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.
