Re: Howto log multiple sftpd instances with their chroot shared via NFS

Hildegard Meier daku8938 at
Sat Sep 25 00:37:09 AEST 2021

Many thanks for your answers!
More ideas are appreciated, since it would be really important for us to have a solution for this.
>Date: Wed, 22 Sep 2021 13:06:43 +0200
>From: Jochen Bern <Jochen.Bern at>
>To: openssh-unix-dev at
>Subject: Re: Howto log multiple sftpd instances with their chroot
>shared via NFS
On 22.09.21 11:18, David Newall wrote:
> On Tue, 21 Sep 2021, Hildegard Meier wrote:
>> So, if a user logs in on the first server, where syslog-ng was started
>> last, the user's sftp activity is logged on the first server.
>> But if the user logs in on the second server, its sftp activity is
>> not logged, neither on the second nor on the first server.
> Forward the log entries on both machines to a log host.
>Considering that server B is not logging *at all* right now, I doubt
>that it'll have anything to forward to a log host, either.
Yes, the log destination is not the problem; the empty log source is.

>The problem *presumably* is that the syslogd on server A has put some
>sort of file lock on the device that propagates through the NFS server
>and interferes with syslogd on server B using it.
As I understand it, syslog-ng creates /dev/log in the users' chroot directories as a Unix stream socket (see
This also seems to be called an IPC socket (inter-process communication socket) or an AF_UNIX socket.
"It is used in POSIX operating systems for inter-process communication. The correct standard POSIX term is POSIX Local IPC Sockets. Unix domain connections appear as byte streams, much like network connections, but all data _remains within the local computer_."
"It means that if you create a AF_UNIX socket on a NFS disk which is shared between two machines A and B, you cannot have a process on A writing data to the unix socket and a process on B reading data from that socket.
The communication happens at kernel level, and you can only transfer data among processes sitting in the same kernel."
(see source: )
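The quoted behaviour is easy to demonstrate on a single machine. Here is a minimal Python sketch (the socket path is made up, standing in for /var/data/chroot/&lt;user&gt;/dev/log) showing that an AF_UNIX stream socket of the kind syslog-ng creates passes its data through the local kernel, while the filesystem entry itself stays empty:

```python
import os
import socket
import tempfile

# Hypothetical path standing in for /var/data/chroot/<user>/dev/log.
sock_path = os.path.join(tempfile.mkdtemp(), "log")

# "syslog-ng" side: bind and listen on a Unix stream socket.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

# "sftp-server" side: connect to the same path and send a log record.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
client.sendall(b"<38>sftp-server: session opened")

conn, _ = server.accept()
record = conn.recv(1024)
print(record.decode())

# The inode on disk is only a rendezvous point; it holds no data.
print(os.path.getsize(sock_path))  # 0

conn.close()
client.close()
server.close()
```

Since the payload never touches the file, sharing the path over NFS changes nothing: each kernel keeps its own socket table, so a connect() on server B can never reach a listener bound inside server A's kernel.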

>One solution might be to reconfigure the syslogd's to use a method of
>locking that does *not* propagate through NFS. I'm afraid I don't know
>syslog-ng well enough to advise on that.
I think the problem is that the chrooted sftp users only have write access (for logging) under their chroot home dir (there: /dev/log), and since that is on NFS, how could one escape it? A possibility would be a symlink to a local Unix stream socket, but chroot does not allow access outside of it; that is the whole point of being chrooted (jailed).

>Then there's the possibility of reconfiguring *NFS* to stop the
>forwarding, but "breaking" file locking on NFS is, of course, a can of
>worms of possible side effects ...
I do not know what exactly you mean, but I think we cannot fiddle with NFS file locking, since we have several hundred sftp customer accounts on the NFS chroot, which transfer many, many files.

>(Bind) mounting a local .../dev over the NFS-shared chroot dirtree ...
>ought to work, but complicates unmounting/remounting, which was already
>enough of a hair-puller in failure scenarios when I last worked with NFS.
mount(8) says: "mount --bind olddir newdir"
/dev/log is not a directory, but is created by syslog-ng as a Unix stream socket (see above). I guess that even if it were a directory, the sftpd process could not log to it?
# mkdir -p /my/local/dir/
# mount -v --bind /var/data/chroot/test/dev/log /my/local/dir
mount: /my/local/dir: mount(2) system call failed: Not a directory.
(This fails because a single file can only be bind-mounted onto another file, not onto a directory. The suggestion above also goes in the other direction: bind-mount a local dev directory over each chroot's dev in the NFS tree, so that /dev/log inside the chroot is backed by a local socket.)

>What do the chrooted users have for a homedir *within* the chroot? Would
>it be possible to have /var/data/chroot be a local FS and mount only
>/var/data/chroot/home from the NFS server? (If there are files that you
>need to keep identical on both servers, e.g., under
>/var/data/chroot/etc, you can still symlink those to some special subdir
>like /var/data/chroot/home/ETC to put the actual data onto the NFS share.)
Strictly technically, we really _need_ to share every sftp user's chrooted upload directory (/var/data/chroot/<username>/in/) and download directory (/var/data/chroot/<username>/out/). We have a single backend process that reads all files the customers have uploaded into their chroot /in/ directory, whether they logged in to the one sftp server or the other (both sftp servers are TCP load-balanced, and the time order of the uploads needs to be honoured). The same process writes files to the customer's chroot /out/ dir, and the customer can download the files from their chrooted /out/ dir regardless of which sftp server they have logged in to.
But since we have 800 sftp customers, so 800 /var/data/chroot/<username>/in/ and 800 /var/data/chroot/<username>/out/, it is not feasible to have 1600 separate NFS mounts just so that /var/data/chroot/<username>/dev/log can be local. That is the problem.
>Mount a tiny local ramfs or tmpfs over /var/data/chroot/dev?
For the same reason, since we have so many /var/data/chroot/<username>/dev/log sockets, this would not be feasible (apart from the question of whether it would technically be a solution at all).
We also need a separate log source for every sftp user, because we need a separate log file per user to be able to view each user's sftp activity.
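For illustration, the per-user setup we are after would look roughly like this in syslog-ng terms (user name and file paths are placeholders, not our actual configuration); the snag is that the unix-stream path below lives on the NFS share:

```
# One local log source and one log file per chrooted sftp user
# (hypothetical names and paths):
source s_sftp_alice {
    unix-stream("/var/data/chroot/alice/dev/log");
};
destination d_sftp_alice {
    file("/var/log/sftp/alice.log");
};
log { source(s_sftp_alice); destination(d_sftp_alice); };
```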
Best regards

More information about the openssh-unix-dev mailing list