From kaleb at wolfssl.com Tue Sep 1 00:48:10 2015
From: kaleb at wolfssl.com (Kaleb Himes)
Date: Mon, 31 Aug 2015 08:48:10 -0600
Subject: Inter-op and port (wolfSSL + openSSH)
In-Reply-To: <20150830160842.GB6818@hatter.bewilderbeest.net>
References: <20150830160842.GB6818@hatter.bewilderbeest.net>
Message-ID: 

Hi Darren, Tucker, and openSSH,

wolfSSL is dual-licensed software. We have a Commercial option and a GPLv2 option. We provide support to both code sources and are an active part of the open source community. You can freely download our code from our website or visit our development branch on GitHub:

https://github.com/wolfSSL/wolfssl
https://wolfssl.com/wolfSSL/download/downloadForm.php
http://wolfssl.com/wolfSSL/License.html

Regards,

Kaleb Himes
www.wolfssl.com
kaleb at wolfssl.com
Skype: kaleb.himes
+1 406 381 9556

On Sun, Aug 30, 2015 at 10:08 AM, Zev Weiss wrote:
> On Sun, Aug 30, 2015 at 05:06:56PM +1000, Darren Tucker wrote:
>> I lost interest when the download required registration.
>
> I initially had the same reaction, but then discovered that (though it's
> not at all obvious from looking at it) the "information" part of the form
> on that page is actually optional; you can just select the desired .zip,
> check the "I agree to the GPL" box (assuming you do), click "download" and
> it'll go right ahead and download.
>
> Zev

From bernt.jernberg at gmail.com Tue Sep 1 00:52:38 2015
From: bernt.jernberg at gmail.com (Bernt Jernberg)
Date: Mon, 31 Aug 2015 16:52:38 +0200
Subject: configure in 7.1p1 fails on Solaris 10 SPARC
Message-ID: 

Hi,

I am trying to build 7.1p1 on Solaris 10 SPARC. I have built OpenSSL 1.0.1p and installed it in /opt/local/openssl.
Configure options for that:

export PATH=/opt/local/bin:/usr/sfw/bin:/usr/ccs/bin:/usr/bin:/usr/sbin:/sbin
export MAKE=gmake
./Configure shared solaris64-sparcv9-gcc -R/usr/sfw/lib/sparcv9 -R/opt/local/openssl/lib --prefix=/opt/local/openssl
gmake test
gmake install

export PATH=/opt/local/bin:/usr/sfw/bin:/usr/ccs/bin:/usr/bin:/usr/sbin:/sbin
./configure --sysconfdir=/etc/opt/openssh \
    --prefix=/opt/local \
    --with-solaris-contracts \
    --with-tcp-wrappers=/usr/sfw/lib \
    --with-ssl-dir=/opt/local/openssl \
    --with-audit=bsm \
    --without-bsd-auth \
    --with-zlib=/usr/sfw/lib \
    --with-privsep-path=/var/opt/empty \
    --with-pam \
    --with-privsep-user=sshd \
    --with-default-path=/opt/local/bin:/usr/bin:/usr/sbin:/sbin \
    --with-superuser-path=/opt/local/sbin:/opt/local/bin:/sbin:/usr/sbin:/usr/bin \
    --with-kerberos5=/opt/local CPPFLAGS='-I/opt/local/openssl/include' LDFLAGS='-L/opt/local/openssl/lib'

# crle
Configuration file [version 4]: /var/ld/ld.config
  Default Library Path (ELF):  /opt/local/openssl/lib:/lib:/usr/lib:/usr/ccs/lib:/usr/sfw/lib
  Trusted Directories (ELF):   /lib/secure:/usr/lib/secure (system default)
Command line: crle -c /var/ld/ld.config -l /opt/local/openssl/lib:/lib:/usr/lib:/usr/ccs/lib:/usr/sfw/lib

I am using the default compiler:

# /usr/sfw/bin/gcc --version
gcc (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

# uname -a
SunOS myhost 5.10 Generic_142900-11 sun4u sparc SUNW,Sun-Fire-V215

No matter what I do the configure fails with:

checking OpenSSL header version... 1000110f (OpenSSL 1.0.1p 9 Jul 2015)
checking OpenSSL library version...
configure: error: OpenSSL >= 0.9.8f required (have "0090704f (OpenSSL 0.9.7d 17 Mar 2004 (+ security fixes for: CVE-2005-2969 CVE-2006-2937 CVE-2006-2940 CVE-2006-3738 CVE-2006-4339 CVE-2006-4343 CVE-2007-5135 CVE-2007-3108 CVE-2008-5077 CVE-2009-0590))") It always checks the one installed in /usr/sfw/lib Am I missing something obvious? Bernt Jernberg From peter at stuge.se Tue Sep 1 01:35:06 2015 From: peter at stuge.se (Peter Stuge) Date: Mon, 31 Aug 2015 17:35:06 +0200 Subject: Inter-op and port (wolfSSL + openSSH) In-Reply-To: References: <20150830160842.GB6818@hatter.bewilderbeest.net> Message-ID: <20150831153506.31759.qmail@stuge.se> Kaleb Himes wrote: > You can freely download our code from our website or visit our > development branch on github. > > https://github.com/wolfSSL/wolfssl. Good, but, where's your patch? And are you prepared to license your patch under BSD terms? //Peter From mysatyre at gmail.com Tue Sep 1 09:23:50 2015 From: mysatyre at gmail.com (=?UTF-8?Q?Martti_K=C3=BChne?=) Date: Tue, 1 Sep 2015 01:23:50 +0200 Subject: COLUMNS and LINES environment variables Message-ID: Hello openssh developers, Instead of just playing nethack, I've been building a client that would log in to nethack at alt.org and using a pipe to get the login data from pwsafe directly onto the server. All of this works brilliantly after playing with some stty magic (full script in [0]), however, this way the terminal size is burned into 80x24, which is way smaller than my graphical terminal. Anyway, I proceeded grepping some of the openssh source code and wrote this patch [1], which I have locally tested with great success. cheers! 
mar77i

[0] https://gist.github.com/mar77i/15040d227ec9f7311f25
[1] https://gist.github.com/mar77i/673b0338a90bd53bb32e

From djm at mindrot.org Tue Sep 1 10:00:30 2015
From: djm at mindrot.org (Damien Miller)
Date: Tue, 1 Sep 2015 10:00:30 +1000 (AEST)
Subject: COLUMNS and LINES environment variables
In-Reply-To: 
References: 
Message-ID: 

On Tue, 1 Sep 2015, Martti Kühne wrote:

> Hello openssh developers,
>
> Instead of just playing nethack, I've been building a client that
> would log in to nethack at alt.org and using a pipe to get the login data
> from pwsafe directly onto the server.
> All of this works brilliantly after playing with some stty magic (full
> script in [0]), however, this way the terminal size is burned into
> 80x24, which is way smaller than my graphical terminal.
>
> Anyway, I proceeded grepping some of the openssh source code and wrote
> this patch [1], which I have locally tested with great success.

The problem with doing it via environment variables is that they will become invalid if the client ever changes the size of their window. TTYs support sending SIGWINCH for this and OpenSSH handles this already.

Your program should be able to get the correct window size at any time using:

struct winsize ws;
if (ioctl(ttyfd, TIOCGWINSZ, &ws) != 0)
        error(...);

-d

From philipp.marek at linbit.com Tue Sep 1 16:52:58 2015
From: philipp.marek at linbit.com (Philipp Marek)
Date: Tue, 1 Sep 2015 08:52:58 +0200
Subject: COLUMNS and LINES environment variables
In-Reply-To: 
References: 
Message-ID: <20150901065257.GA5342@cacao.linbit>

> > Instead of just playing nethack, I've been building a client that
> > would log in to nethack at alt.org and using a pipe to get the login data
> > from pwsafe directly onto the server.
> > All of this works brilliantly after playing with some stty magic (full
> > script in [0]), however, this way the terminal size is burned into
> > 80x24, which is way smaller than my graphical terminal.
> >
> > Anyway, I proceeded grepping some of the openssh source code and wrote
> > this patch [1], which I have locally tested with great success.
>
> The problem with doing it via environment variables is that they
> will become invalid if the client ever changes the size of their
> window. TTYs support sending SIGWINCH for this and OpenSSH handles
> this already.
>
> Your program should be able to get the correct window size at any
> time using:
>
> struct winsize ws;
> if (ioctl(ttyfd, TIOCGWINSZ, &ws) != 0)
>         error(...);

The "script" version of that is calling "tput cols" and "tput lines", parsing the text output as integers...

From mysatyre at gmail.com Tue Sep 1 17:36:29 2015
From: mysatyre at gmail.com (=?UTF-8?Q?Martti_K=C3=BChne?=)
Date: Tue, 1 Sep 2015 09:36:29 +0200
Subject: Fwd: COLUMNS and LINES environment variables
In-Reply-To: 
References: 
Message-ID: 

On Tue, Sep 1, 2015 at 2:00 AM, Damien Miller wrote:
> The problem with doing it via environment variables is that they
> will become invalid if the client ever changes the size of their
> window. TTYs support sending SIGWINCH for this and OpenSSH handles
> this already.
>
> Your program should be able to get the correct window size at any
> time using:
>
> struct winsize ws;
> if (ioctl(ttyfd, TIOCGWINSZ, &ws) != 0)
>         error(...);
>
> -d

The issue I have is the redirection of stdin/fd0, which is equal to in_fd in the function where the patch applies. It must be a pipe, or no data will be received by the program due to how file descriptors work. An alternative, probably more agreeable solution would be to open /dev/tty exclusively for the ioctl.

Thanks for your consideration.
cheers!
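As an aside on the COLUMNS/LINES-versus-ioctl question in this thread: Python's standard library happens to implement exactly this precedence in shutil.get_terminal_size() — the COLUMNS and LINES environment variables win, then the TIOCGWINSZ query on stdout, then a (80, 24) fallback. A minimal sketch:

```python
import os
import shutil

# COLUMNS/LINES in the environment take precedence; only if they are
# absent or malformed is the TIOCGWINSZ ioctl consulted, with (80, 24)
# as the final fallback when no terminal is attached.
os.environ["COLUMNS"] = "132"
os.environ["LINES"] = "43"

size = shutil.get_terminal_size()
print(size.columns, size.lines)  # -> 132 43
```

This mirrors the trade-off discussed above: the environment variables are convenient but go stale when the window is resized, while the ioctl always reflects the current window size.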
mar77i

From nkadel at gmail.com Tue Sep 1 20:45:10 2015
From: nkadel at gmail.com (Nico Kadel-Garcia)
Date: Tue, 1 Sep 2015 06:45:10 -0400
Subject: Disabling host key checking on LAN
In-Reply-To: 
References: 
Message-ID: 

On Mon, Aug 31, 2015 at 9:02 AM, Bostjan Skufca wrote:
> On 30 August 2015 at 18:53, Nico Kadel-Garcia wrote:
>> On Sun, Aug 30, 2015 at 6:57 AM, Bostjan Skufca wrote:
>> > those were my thoughts, exactly, except that I was thinking about using "dig
>> > +short HOST | ..." which has the cleanest output of all.
>>
>> It can get a bit confusing with
>> round-robin DNS, which can give multiple responses.
>
> Care to illustrate your use case?

For confusion without "round-robin DNS"? Sure.

# dig +short www.google.com
63.117.14.21
63.117.14.23
63.117.14.25
63.117.14.22
63.117.14.24
63.117.14.26
63.117.14.20
63.117.14.27

That's an unordered list of not-necessarily identical targets, it's quite dynamic, and the exposed IP addresses *do not necessarily have the same SSH key*. Moreover, the resolution of the specific IP addresses when establishing a new connection is not really "round-robin", working its way through a particular order. The "round-robin" label is a common, but subtly mistaken, one. It's actually multiple A records, and that is the local libc library's problem to resolve, and the results can be.... unpredictable.

A more powerful specific local example is Samba or Active Directory servers for a particular domain. The DNS domain for the "EXAMPLE" Active Directory might be "example.com". The AD or Samba servers, both active and failover, will publish A records for themselves as "example.com", "_kerberos.tcp.example.com", and other multiple A records, no matter if they have individual host names of "samba1.example.com" and "samba2.example.com". Should those multiple AD servers have identical SSH keys, simply to prevent confusion if I look up "example.com" and need to talk to it as an admin?
Or do I personally have to keep track, somehow, of which server is which and talk to its individual name, even if all I really want to do is set up a shared configuration, or even log into it to do DNS zone transfers on localhost to get configuration information for the entire domain, and I *have* the designated root SSH keys for exactly such access?

> I am having difficulties imagining it:
> 1. If you are managing particular host, you connect to its IP directly
> (possibly via DNS entry).
> 2. If that DNS entry represents a service that has a load-balanced IP
> list, you should not be connecting to arbitrary host in that list, but
> use dedicated IP of particular server in that list, or am I missing
> something here?

You're missing that what people *should* do is often not trivially available to the SSH client. There are even poorly configured load balancer environments where the load balancer exposed entry was the only way to get at the SSH service directly from outside the local network. That drove me *nuts* for a particular svn+ssh environment some years back; I really wanted to hit the admin who did that with a brick because it was an intermittent problem. We won't even *discuss* the problems caused because the svn+ssh setup was already split-brain with write privileges on both servers.

> Additional point:
> If your environment gets complicated enough, it probably justifies
> usage of ProxyCommand directive with reference to dedicated
> script/program that does the necessary plumbing (technical and
> policy-wise) to set up your connection.
>
> b.

Which I'd have to set up manually for every remote host target, and would have to propagate or configure on any workstation I happen to work from. Let's see, hand-tuning .ssh/config everywhere, or just throwing out hostkey-based SSH verification so I don't have to spend time confirming new keys and can get on with my work....
I know which is the better security option, but I also know which is the more common "stop bothering me about this!" option. From kaleb at wolfssl.com Wed Sep 2 02:36:06 2015 From: kaleb at wolfssl.com (Kaleb Himes) Date: Tue, 1 Sep 2015 10:36:06 -0600 Subject: Inter-op and port (wolfSSL + openSSH) In-Reply-To: References: <20150830160842.GB6818@hatter.bewilderbeest.net> Message-ID: Hi openSSH, After having time to review our licensing model and perhaps play around with our product we were checking back to see what your thoughts might be. We also wanted to point out that we only desire to give end-users an alternative option to compiling with openSSL. End users who configure with the "--enable-wolfssl" option would need to consider licensing. That would be a part of their project evaluation phase. Any patch we submit to you would retain your licensing model. Your feedback is appreciated, Kind regards, Kaleb Himes www.wolfssl.com kaleb at wolfssl.com Skype: kaleb.himes +1 406 381 9556 From djm at mindrot.org Fri Sep 4 13:12:36 2015 From: djm at mindrot.org (Damien Miller) Date: Fri, 4 Sep 2015 13:12:36 +1000 (AEST) Subject: Inter-op and port (wolfSSL + openSSH) In-Reply-To: References: <20150830160842.GB6818@hatter.bewilderbeest.net> Message-ID: On Tue, 1 Sep 2015, Kaleb Himes wrote: > Hi openSSH, > > After having time to review our licensing model and perhaps play around > with our product we were checking back to see what your thoughts might be. > > We also wanted to point out that we only desire to give end-users an > alternative option to compiling with openSSL. > End users who configure with the "--enable-wolfssl" option would need to > consider licensing. > That would be a part of their project evaluation phase. Any patch we submit > to you would retain your licensing model. Hi, I'm not opposed to making OpenSSH play nicer with non-OpenSSL crypto libraries, but I am worried that attempts to do so could yield a worse #ifdef maze than we already have. 
Microsoft will need to figure out how to handle crypto in their port of OpenSSH since they'll likely be using CryptoAPI instead of OpenSSL, so perhaps there is an opportunity to find some nice way of abstracting all the BIGNUM, RSA, DSA, EC*, etc. out that suits you both (and cleans up core OpenSSH along the way).

-d

From list at eworm.de Mon Sep 7 23:10:26 2015
From: list at eworm.de (Christian Hesse)
Date: Mon, 7 Sep 2015 15:10:26 +0200
Subject: [PATCH 1/1] do not print warning about missing home directory in chroot
Message-ID: <1441631426-9471-1-git-send-email-list@eworm.de>

From: Christian Hesse

Since options.chroot_directory is set to NULL after a successful chroot, the following error message is back:

Could not chdir to home directory /home/user: No such file or directory

Remember that we are inside a chroot and do not print the error message about a missing home directory.

Signed-off-by: Christian Hesse
---
 session.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/session.c b/session.c
index 5a64715..35790cf 100644
--- a/session.c
+++ b/session.c
@@ -160,6 +160,7 @@ login_cap_t *lc;
 #endif
 
 static int is_child = 0;
+static int in_chroot = 0;
 
 /* Name and directory of socket for authentication agent forwarding. */
 static char *auth_sock_name = NULL;
@@ -1529,6 +1530,14 @@ do_setusercontext(struct passwd *pw)
 		safely_chroot(chroot_path, pw->pw_uid);
 		free(tmp);
 		free(chroot_path);
+
+		/*
+		 * Remember we are inside a chroot. We need this later
+		 * to know whether or not to print a warning about
+		 * missing home directory.
+		 */
+		in_chroot = 1;
+
 		/* Make sure we don't attempt to chroot again */
 		free(options.chroot_directory);
 		options.chroot_directory = NULL;
@@ -1790,8 +1799,7 @@ do_child(Session *s, const char *command)
 #ifdef HAVE_LOGIN_CAP
 		r = login_getcapbool(lc, "requirehome", 0);
 #endif
-		if (r || options.chroot_directory == NULL ||
-		    strcasecmp(options.chroot_directory, "none") == 0)
+		if (r || in_chroot == 0)
 			fprintf(stderr, "Could not chdir to home "
 			    "directory %s: %s\n", pw->pw_dir, strerror(errno));
-- 
2.5.1

From thomas.jarosch at intra2net.com Tue Sep 8 00:16:42 2015
From: thomas.jarosch at intra2net.com (Thomas Jarosch)
Date: Mon, 07 Sep 2015 16:16:42 +0200
Subject: [PATCH] ssh-agent: Add support to load additional certificates
In-Reply-To: 
References: <55B4F422.7030405@intra2net.com> <1647204.J8msWKWJdq@storm>
Message-ID: <7447315.H6vDixcBK1@storm>

On Monday, 17. August 2015 10:33:54 Damien Miller wrote:
> Hi,
>
> This seems like a reasonable idea.
>
> Could you please attach this to a bug at https://bugzilla.mindrot.org/ ?
> This will ensure it won't get lost.

ok, will do. It might take a bit more time since I want to test that everything is still fine with openssh 7.0+

Should I split up the patch into smaller parts or is the patch size digestible?

Cheers,
Thomas

From nolan.hergert at gmail.com Tue Sep 8 05:50:25 2015
From: nolan.hergert at gmail.com (Nolan Hergert)
Date: Mon, 7 Sep 2015 12:50:25 -0700
Subject: UI-related change to PasswordAuthentication in sshd_config file
Message-ID: 

Hello SSH developers,

I spent about 2 hours today trying to track down why disabling passwords wasn't working on my Linux machine.
I would like to propose the following change to sshd_config:60

Before:
# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes

After:
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication yes

I had done the usual "change yes to no", but on my non-colored editor on the server I didn't notice the additional "#" at the beginning of the line. I know that server admins probably have a different opinion on this change, but for new users doing basic stuff, the fewer steps and little details the better.

Thanks for the consideration!

Nolan

From nolan.hergert at gmail.com Tue Sep 8 06:39:34 2015
From: nolan.hergert at gmail.com (Nolan Hergert)
Date: Mon, 7 Sep 2015 13:39:34 -0700
Subject: UI-related change to PasswordAuthentication in sshd_config file
In-Reply-To: <746b0fc1-7ecc-4d9a-96fd-77b4a2bceda4@email.android.com>
References: <746b0fc1-7ecc-4d9a-96fd-77b4a2bceda4@email.android.com>
Message-ID: 

Ok, good to know. Thank you

On Mon, Sep 7, 2015 at 1:35 PM, Benjamin Ziirish Sans wrote:
> Hi Nolan,
>
> The default sshd configuration file is often distribution-dependent. It
> means that even if upstream changed it, the result would not always be
> propagated in the distrib packages.
>
> > Hello SSH developers,
> >
> > I spent about 2 hours today trying to track down why disabling passwords
> > wasn't working on my Linux machine. I would like to propose the
> > following change to sshd_config:60
> >
> > Before:
> > # Change to no to disable tunnelled clear text passwords
> > #PasswordAuthentication yes
> >
> > After:
> > # Change to no to disable tunnelled clear text passwords
> > PasswordAuthentication yes
> >
> > I had done the usual "change yes to no", but on my non-colored editor on
> > the server I didn't notice the additional "#" at the beginning of the
> > line.
> > I know that server admins probably have a different opinion on this
> > change, but for new users doing basic stuff, the fewer steps and little
> > details the better.
> >
> > Thanks for the consideration!
> >
> > Nolan
> > _______________________________________________
> > openssh-unix-dev mailing list
> > openssh-unix-dev at mindrot.org
> > https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev

From ziirish at ziirish.info Tue Sep 8 06:35:14 2015
From: ziirish at ziirish.info (=?UTF-8?B?QmVuamFtaW4gWmlpcmlzaCBTYW5z?=)
Date: Mon, 07 Sep 2015 22:35:14 +0200
Subject: UI-related change to PasswordAuthentication in sshd_config file
Message-ID: <746b0fc1-7ecc-4d9a-96fd-77b4a2bceda4@email.android.com>

Hi Nolan,

The default sshd configuration file is often distribution-dependent. It means that even if upstream changed it, the result would not always be propagated in the distrib packages.

> Hello SSH developers,
>
> I spent about 2 hours today trying to track down why disabling passwords
> wasn't working on my Linux machine. I would like to propose the
> following change to sshd_config:60
>
> Before:
> # Change to no to disable tunnelled clear text passwords
> #PasswordAuthentication yes
>
> After:
> # Change to no to disable tunnelled clear text passwords
> PasswordAuthentication yes
>
> I had done the usual "change yes to no", but on my non-colored editor on
> the server I didn't notice the additional "#" at the beginning of the line.
> I know that server admins probably have a different opinion on this change,
> but for new users doing basic stuff, the fewer steps and little details the
> better.
>
> Thanks for the consideration!
>
> Nolan
> _______________________________________________
> openssh-unix-dev mailing list
> openssh-unix-dev at mindrot.org
> https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev

From felix-openssh at fefe.de Wed Sep 9 22:57:43 2015
From: felix-openssh at fefe.de (Felix von Leitner)
Date: Wed, 9 Sep 2015 14:57:43 +0200
Subject: OpenSSH 7.1p1 dietlibc (and future glibc) patch
Message-ID: <20150909125743.GA18622@fefe.de>

Hi OpenSSH devs,

I noticed that openssh 7.1 does not work when compiled with dietlibc. It does build properly, and sshd runs and accepts connections, but every connection attempt immediately fails.

The root cause is that dietlibc implements some OpenBSD interfaces (getentropy and arc4random) so openssh can use the new getrandom syscall that Linux provides. OpenSSH configure detects those APIs and uses them, but the seccomp filter sandbox code does not yet allow the getrandom syscall.

Here's the trivial patch that makes it work:

diff -ur openssh-7.1p1/sandbox-seccomp-filter.c openssh-7.1p1-fefe/sandbox-seccomp-filter.c
--- openssh-7.1p1/sandbox-seccomp-filter.c	2015-08-21 06:49:03.000000000 +0200
+++ openssh-7.1p1-fefe/sandbox-seccomp-filter.c	2015-09-09 14:51:04.071681323 +0200
@@ -198,6 +198,9 @@
 #ifdef __NR_socketcall
 	SC_ALLOW_ARG(socketcall, 0, SYS_SHUTDOWN),
 #endif
+#ifdef __NR_getrandom
+	SC_ALLOW(getrandom),
+#endif
 
 	/* Default deny */
 	BPF_STMT(BPF_RET+BPF_K, SECCOMP_FILTER_FAIL),

Since this syscall will also be needed when the compat code for glibc is updated, I see no obvious downside in applying this patch now.
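A side note, not part of the patch above: the same getrandom(2) path can be exercised from script level for a quick sanity check — Python 3.6+ exposes the syscall as os.getrandom on Linux, and os.urandom serves as a portable fallback elsewhere. A minimal sketch:

```python
import os

def random_bytes(n):
    # os.getrandom() wraps the Linux getrandom(2) syscall (Python 3.6+).
    # It may return fewer bytes than requested, so loop until satisfied.
    if hasattr(os, "getrandom"):
        buf = b""
        while len(buf) < n:
            buf += os.getrandom(n - len(buf))
        return buf
    # Portable fallback: reads the kernel CSPRNG via /dev/urandom.
    return os.urandom(n)

print(len(random_bytes(16)))  # -> 16
```

Run under a seccomp policy that denies getrandom, the first branch would fail in the same way the unpatched sshd sandbox does.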
Thanks,
Felix

From djm at mindrot.org Thu Sep 10 10:58:32 2015
From: djm at mindrot.org (Damien Miller)
Date: Thu, 10 Sep 2015 10:58:32 +1000 (AEST)
Subject: OpenSSH 7.1p1 dietlibc (and future glibc) patch
In-Reply-To: <20150909125743.GA18622@fefe.de>
References: <20150909125743.GA18622@fefe.de>
Message-ID: 

On Wed, 9 Sep 2015, Felix von Leitner wrote:

> Hi OpenSSH devs,
>
> I noticed that openssh 7.1 does not work when compiled with dietlibc. It
> does build properly, and sshd runs and accepts connections, but every
> connection attempt immediately fails.
>
> The root cause is that dietlibc implements some OpenBSD interfaces
> (getentropy and arc4random) so openssh can use the new getrandom syscall
> that Linux provides. OpenSSH configure detects those APIs and uses them,
> but the seccomp filter sandbox code does not yet allow the getrandom
> syscall.
>
> Here's the trivial patch that makes it work: ...

Applied. This will be in OpenSSH 7.2 - thanks!

-d

From igor at mir2.org Thu Sep 10 20:05:54 2015
From: igor at mir2.org (Igor Bukanov)
Date: Thu, 10 Sep 2015 12:05:54 +0200
Subject: support for non-system users
Message-ID: 

Hello,

In my setup I use ssh running on the host to log in to different containers. As the users in the containers are not known to the host, to support that I use a dummy user per container on the host with a database that maps public keys to container users. Then I use the `command` option for authorized keys to invoke a command that connects to the container, where one of the arguments is the username inferred from the key.

This works (and thanks for the recent changes to support AuthorizedKeysCommand custom arguments, which made things even simpler!), but it has a few drawbacks.

As I want to allow the same public key to be used to log in to different containers, I cannot use `root` for a dummy user.
But since operations with containers are privileged, I have to use a setuid/setgid command to eventually start a process in the container from the process under a dummy user account on the host. Another problem is that those dummy users have to be created on the host. This complicates container setup.

So would it be possible to add an option to sshd_config to run ForceCommand under a different user than the one that tries to log in? This would resolve the first concern. Another option would be to allow non-system user names, with sshd trusting that AuthorizedKeysCommand and ForceCommand would do the proper job of user verification.

From nkadel at gmail.com Thu Sep 10 21:24:35 2015
From: nkadel at gmail.com (Nico Kadel-Garcia)
Date: Thu, 10 Sep 2015 07:24:35 -0400
Subject: support for non-system users
In-Reply-To: 
References: 
Message-ID: 

On Thu, Sep 10, 2015 at 6:05 AM, Igor Bukanov wrote:
> Hello,
>
> In my setup I use ssh running on the host to log in to different
> containers. As the users in the containers are not known to the host, to
> support that I use a dummy user per container on the host with a database
> that maps public keys to container users. Then I use the `command` option
> for authorized keys to invoke a command that connects to the container,
> where one of the arguments is the username inferred from the key.
>
> This works (and thanks for the recent changes to support AuthorizedKeysCommand
> custom arguments, which made things even simpler!), but it has a few drawbacks.
>
> As I want to allow the same public key to be used to log in to different
> containers, I cannot use `root` for a dummy user. But since operations with
> containers are privileged, I have to use a setuid/setgid command to
> eventually start a process in the container from the process under a dummy
> user account on the host. Another problem is that those dummy users have to
> be created on the host. This complicates container setup.
> So would it be possible to add an option to sshd_config to run
> ForceCommand under a different user than the one that tries to log in? This
> would resolve the first concern. Another option would be to allow
> non-system user names, with sshd trusting that AuthorizedKeysCommand and
> ForceCommand would do the proper job of user verification.

I think you're painting yourself into a corner, and trying to fix it by stapling a potentially complex and security-sensitive toolkit into a stable application. If you have a well-defined user list in your containers running well-defined commands with alternative usernames and privileges, why can't you activate a well-defined "sudo" privilege for precisely that action? This avoids the setuid/setgid problem well, and even allows running shell scripts. (Linux OS's, at least, do not support setuid shell scripts.)

From scott_n at xypro.com Fri Sep 11 03:32:13 2015
From: scott_n at xypro.com (Scott Neugroschl)
Date: Thu, 10 Sep 2015 17:32:13 +0000
Subject: CVE-2015-6563 and CVE-2015-6564
Message-ID: 

Got a question. Am I vulnerable to CVE-2015-6563 and CVE-2015-6564 if I have PAM support disabled (--without-pam)?

---
Scott Neugroschl | XYPRO Technology Corporation
4100 Guardian Street | Suite 100 | Simi Valley, CA 93063 | Phone 805 583-2874 | Fax 805 583-0124 |

From dtucker at zip.com.au Fri Sep 11 09:36:32 2015
From: dtucker at zip.com.au (Darren Tucker)
Date: Fri, 11 Sep 2015 09:36:32 +1000
Subject: CVE-2015-6563 and CVE-2015-6564
In-Reply-To: 
References: 
Message-ID: 

On Fri, Sep 11, 2015 at 3:32 AM, Scott Neugroschl wrote:
> Got a question. Am I vulnerable to CVE-2015-6563 and CVE-2015-6564 if I
> have PAM support disabled (--without-pam)
>

No. The code in question is not compiled in when sshd is built --without-pam.

--
Darren Tucker (dtucker at zip.com.au)
GPG key 8FF4FA69 / D9A3 86E9 7EEE AF4B B2D4 37C9 C982 80C7 8FF4 FA69
Good judgement comes with experience.
Unfortunately, the experience usually comes from bad judgement.

From alex at alex.org.uk Sat Sep 12 01:25:02 2015
From: alex at alex.org.uk (Alex Bligh)
Date: Fri, 11 Sep 2015 16:25:02 +0100
Subject: Differentiating between ssh connection failures and ssh command failures
Message-ID: <4A07ADC7-EECB-4D84-8E33-5CB716CA557D@alex.org.uk>

I'm sure this should be an easy question, but from the ssh client manpage:

EXIT STATUS
    ssh exits with the exit status of the remote command or with 255 if an error occurred.

Let's say I'm using

ssh server.example.com /usr/bin/do/something

in (e.g.) a bash script.

How can one differentiate between a failure of ssh to connect to the host and the command in question returning an error? I need to detect both, and differentiate between them.

--
Alex Bligh

From djm at mindrot.org Sat Sep 12 11:14:07 2015
From: djm at mindrot.org (Damien Miller)
Date: Sat, 12 Sep 2015 11:14:07 +1000 (AEST)
Subject: Differentiating between ssh connection failures and ssh command failures
In-Reply-To: <4A07ADC7-EECB-4D84-8E33-5CB716CA557D@alex.org.uk>
References: <4A07ADC7-EECB-4D84-8E33-5CB716CA557D@alex.org.uk>
Message-ID: 

On Fri, 11 Sep 2015, Alex Bligh wrote:

> I'm sure this should be an easy question, but from the ssh client manpage:
>
> EXIT STATUS
>     ssh exits with the exit status of the remote command or with 255 if an error occurred.
>
> Let's say I'm using
> ssh server.example.com /usr/bin/do/something
> in (e.g.) a bash script.
>
> How can one differentiate between a failure of ssh to connect to the host and the
> command in question returning an error? I need to detect both, and differentiate
> between them.

ssh server.example.com /usr/bin/do/something
r=$?
if [ $r -eq 0 ] ; then
	echo success
elif [ $r -eq 255 ] ; then
	echo ssh failed
else
	echo command failed
fi

From lists at spuddy.org Sat Sep 12 12:13:59 2015
From: lists at spuddy.org (Stephen Harris)
Date: Fri, 11 Sep 2015 22:13:59 -0400
Subject: Differentiating between ssh connection failures and ssh command failures
In-Reply-To: 
References: <4A07ADC7-EECB-4D84-8E33-5CB716CA557D@alex.org.uk>
Message-ID: <20150912021359.GA17482@mercury7.spuddy.org>

On Sat, Sep 12, 2015 at 11:14:07AM +1000, Damien Miller wrote:
> ssh server.example.com /usr/bin/do/something
> r=$?
> if [ $r -eq 0 ] ; then
> 	echo success
> elif [ $r -eq 255 ] ; then
> 	echo ssh failed
> else
> 	echo command failed
> fi

ssh remoteserver exit 255

Hmm :-)

exit(-1) aka exit(255) is a pretty standard "generic failure code" for many programs.

The problem, really, is that "exit code" is the wrong thing to test for.

x=`ssh remoteserver "echo CONNECTED && somecommand"`

And then see if CONNECTED appears in the output to show successful connection.

--

rgds
Stephen

From alex at alex.org.uk Sat Sep 12 19:29:43 2015
From: alex at alex.org.uk (Alex Bligh)
Date: Sat, 12 Sep 2015 10:29:43 +0100
Subject: Differentiating between ssh connection failures and ssh command failures
In-Reply-To: <20150912021359.GA17482@mercury7.spuddy.org>
References: <4A07ADC7-EECB-4D84-8E33-5CB716CA557D@alex.org.uk> <20150912021359.GA17482@mercury7.spuddy.org>
Message-ID: 

On 12 Sep 2015, at 03:13, Stephen Harris wrote:
> On Sat, Sep 12, 2015 at 11:14:07AM +1000, Damien Miller wrote:
>> ssh server.example.com /usr/bin/do/something
>> r=$?
>> if [ $r -eq 0 ] ; then
>> 	echo success
>> elif [ $r -eq 255 ] ; then
>> 	echo ssh failed
>> else
>> 	echo command failed
>> fi
>
> ssh remoteserver exit 255
>
> Hmm :-)
>
> exit(-1) aka exit(255) is a pretty standard "generic failure code"
> for many programs.

That's *exactly* the issue I'm concerned about.
Furthermore the server is not UNIX so I have no idea how to wrap it in something that makes it return a different exit code. > The problem, really, is that "exit code" is the wrong thing to test for. Well, I suppose there could be a CLI option to squash any non-zero return codes from the remote into a single specified return code. I don't think there is currently. > x=`ssh remoteserver "echo CONNECTED && somecommand"` > > And then see if CONNECTED appears in the output to show successful > connection. That's about as far as I got too. Technically that would fail to differentiate between the shell being /bin/false and failure to connect, but would be good enough for my use. -- Alex Bligh From dtucker at dtucker.net Sat Sep 12 23:29:10 2015 From: dtucker at dtucker.net (Darren Tucker) Date: Sat, 12 Sep 2015 23:29:10 +1000 Subject: Differentiating between ssh connection failures and ssh command failures In-Reply-To: References: <4A07ADC7-EECB-4D84-8E33-5CB716CA557D@alex.org.uk> <20150912021359.GA17482@mercury7.spuddy.org> Message-ID: On Sep 12, 2015 7:30 PM, "Alex Bligh" wrote: > That's *exactly* the issue I'm concerned about. Furthermore the server is > not UNIX so I have no idea how to wrap it in something that makes it > return a different exit code. Use the control master / mux function in the ssh client? That way the connection establishment and command requests will be separate invocations and you can check their return codes independently. From mfriedl at gmail.com Fri Sep 18 22:57:13 2015 From: mfriedl at gmail.com (Markus Friedl) Date: Fri, 18 Sep 2015 14:57:13 +0200 Subject: Differentiating between ssh connection failures and ssh command failures In-Reply-To: <20150912021359.GA17482@mercury7.spuddy.org> References: <4A07ADC7-EECB-4D84-8E33-5CB716CA557D@alex.org.uk> <20150912021359.GA17482@mercury7.spuddy.org> Message-ID: <91F87434-D071-4182-9BAF-BBF8BB8018A2@gmail.com> true. but never found this to be a problem in practice? 
> Am 12.09.2015 um 04:13 schrieb Stephen Harris : > > On Sat, Sep 12, 2015 at 11:14:07AM +1000, Damien Miller wrote: >> ssh server.example.com /usr/bin/do/something >> r=$? >> if [ $r -eq 0 ] ; then >> echo success >> elif [ $r -eq 255 ] ; then >> echo ssh failed >> else >> echo command failed >> fi > > > ssh remoteserver exit 255 > > Hmm :-) > > exit(-1) aka exit(255) is a pretty standard "generic failure code" > for many programs. > > The problem, really, is that "exit code" is the wrong thing to test for. > > x=`ssh remoteserver "echo CONNECTED && somecommand"` > > And then see if CONNECTED appears in the output to show successful > connection. > > -- > > rgds > Stephen > _______________________________________________ > openssh-unix-dev mailing list > openssh-unix-dev at mindrot.org > https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev From fidencio at redhat.com Fri Sep 18 23:47:43 2015 From: fidencio at redhat.com (=?UTF-8?Q?Fabiano_Fid=C3=AAncio?=) Date: Fri, 18 Sep 2015 15:47:43 +0200 Subject: [RFE] Multiple ssh-agent support Message-ID: Howdy! I've been working on a prototype that allows to do ssh-agent forward between a guest, using SPICE, and a spice client (remote-viewer/virt-viewer/spicy) The whole idea is to have something similar to "ssh -A guest", but integrated with the desktop environment. As a proof of concept I wrote a standalone ssh-agent that _unlink_ the current running agent in the guest machine and creates its socket in the same path used by the old agent. It works as you can see in these small demo videos: https://fidencio.fedorapeople.org/ssh-agent-forward/ Now where the problem starts: doing this would break the desktop integration with its running ssh-agent. A few possible solutions for this would involve a way to support more than one agent, talking to both (the local one and the spice one), merging then their responses and returning it to any application who sent the request. 
Note that would be really nice if we can limit it to do just some operations (like, ssh-add .ssh/id_rsa probably must not go to the spice agent). But how to do that? What could be a good approach for doing that? Expand the agent protocol in order to have a "ssh-add --proxy /path/to/the/new/agent/socket" can be one option. Making SSH_AUTH_SOCK support a list of agents is another option, then the first agent would be the "dispatcher". These are the questions that I have and I am open to suggestions/further discussions. Best Regards, -- Fabiano Fid?ncio From peter at stuge.se Sat Sep 19 03:07:39 2015 From: peter at stuge.se (Peter Stuge) Date: Fri, 18 Sep 2015 19:07:39 +0200 Subject: [RFE] Multiple ssh-agent support In-Reply-To: References: Message-ID: <20150918170739.7126.qmail@stuge.se> Fabiano Fid?ncio wrote: > A few possible solutions for this would involve a way to support more > than one agent, talking to both (the local one and the spice one), > merging then their responses and returning it to any application who > sent the request. Note that would be really nice if we can limit it to > do just some operations (like, ssh-add .ssh/id_rsa probably must not > go to the spice agent). > > But how to do that? What could be a good approach for doing that? One obvious approach is to create a proxy agent which looks like an agent to all clients, but which also integrates with SPICE. //Peter From keisial at gmail.com Sat Sep 19 06:58:44 2015 From: keisial at gmail.com (=?UTF-8?B?w4FuZ2VsIEdvbnrDoWxleg==?=) Date: Fri, 18 Sep 2015 22:58:44 +0200 Subject: [RFE] Multiple ssh-agent support In-Reply-To: References: Message-ID: <55FC7B04.2050902@gmail.com> On 18/09/15 15:47, Fabiano Fid?ncio wrote: > Howdy! > > I've been working on a prototype that allows to do ssh-agent forward > between a guest, using SPICE, and a spice client > (remote-viewer/virt-viewer/spicy) > The whole idea is to have something similar to "ssh -A guest", but > integrated with the desktop environment. 
> > As a proof of concept I wrote a standalone ssh-agent that _unlink_ the > current running agent in the guest machine and creates its socket in > the same path used by the old agent. unlinking the socket seems a bit overkill. You could play with SSH_AUTH_SOCK > A few possible solutions for this would involve a way to support more > than one agent, talking to both (the local one and the spice one), > merging then their responses and returning it to any application who > sent the request. Note that would be really nice if we can limit it to > do just some operations (like, ssh-add .ssh/id_rsa probably must not > go to the spice agent). > I would make a proxy ssh agent that linearly attempts from each child agent. The add operations would always go to the first agent (unless it returned an error?). I also like the idea of SSH_AUTH_SOCK containing a list of sockets. From fidencio at redhat.com Sat Sep 19 10:31:59 2015 From: fidencio at redhat.com (=?UTF-8?Q?Fabiano_Fid=C3=AAncio?=) Date: Sat, 19 Sep 2015 02:31:59 +0200 Subject: [RFE] Multiple ssh-agent support In-Reply-To: <20150918170739.7126.qmail@stuge.se> References: <20150918170739.7126.qmail@stuge.se> Message-ID: On Fri, Sep 18, 2015 at 7:07 PM, Peter Stuge wrote: > Fabiano Fid?ncio wrote: >> A few possible solutions for this would involve a way to support more >> than one agent, talking to both (the local one and the spice one), >> merging then their responses and returning it to any application who >> sent the request. Note that would be really nice if we can limit it to >> do just some operations (like, ssh-add .ssh/id_rsa probably must not >> go to the spice agent). >> >> But how to do that? What could be a good approach for doing that? > > One obvious approach is to create a proxy agent which looks like an > agent to all clients, but which also integrates with SPICE. This is a good solution, probably the best one. The main problem is how to implement it. We have two clear ways for adding a proxy agent. 
One is with the SSH_AUTH_SOCK supporting a list of sockets, but it won't be dynamic. In other words, if I want to replace the spice-agent for another one, it would, most likely, require a session restart and it's not exactly good :-\ The other option would be extend the ssh-agent protocol to support a few new operations (add/remove the proxy agent) and then we could just do a ssh-add --proxy path/to/the/socket ... I would really prefer to go for the second approach, but I really would like to hear, from you (ssh people), if it would be accepted and if I can proceed with the implementation. Best Regards, -- Fabiano Fidêncio From fidencio at redhat.com Sat Sep 19 10:38:33 2015 From: fidencio at redhat.com (=?UTF-8?Q?Fabiano_Fid=C3=AAncio?=) Date: Sat, 19 Sep 2015 02:38:33 +0200 Subject: [RFE] Multiple ssh-agent support In-Reply-To: <55FC7B04.2050902@gmail.com> References: <55FC7B04.2050902@gmail.com> Message-ID: On Fri, Sep 18, 2015 at 10:58 PM, Ángel González wrote: > On 18/09/15 15:47, Fabiano Fidêncio wrote: >> >> Howdy! >> >> I've been working on a prototype that allows to do ssh-agent forward >> between a guest, using SPICE, and a spice client >> (remote-viewer/virt-viewer/spicy) >> The whole idea is to have something similar to "ssh -A guest", but >> integrated with the desktop environment. >> >> As a proof of concept I wrote a standalone ssh-agent that _unlink_ the >> current running agent in the guest machine and creates its socket in >> the same path used by the old agent. > > unlinking the socket seems a bit overkill. You could play with > SSH_AUTH_SOCK Playing with SSH_AUTH_SOCK may be a bit problematic. As far as I understand it would require a session restart in order to set a new value to the env var (at least using GNOME). Btw, I would like to be really clear here that I am focused in a DE-agnostic solution.
:-) > > > >> A few possible solutions for this would involve a way to support more >> than one agent, talking to both (the local one and the spice one), >> merging then their responses and returning it to any application who >> sent the request. Note that would be really nice if we can limit it to >> do just some operations (like, ssh-add .ssh/id_rsa probably must not >> go to the spice agent). >> > I would make a proxy ssh agent that linearly attempts from each > child agent. The add operations would always go to the first agent > (unless it returned an error?). > > I also like the idea of SSH_AUTH_SOCK containing a list of sockets. > The proxy agent would be the spice one or the one already running in the system? This part is very important, because when you are doing a ssh-add .ssh/id_rsa you really want the key to be added to your system agent (it means, gnome-keyring-daemon agent or ssh-agent, depending on the DE you're using). Considering we want to have the system agent as a dispatcher ... how would we add a second agent to it without extending the protocol? Again, adding it to SSH_AUTH_SOCK may be a solution, but then all DEs must add the spice agent socket path independently if it's running or not. That's the reason I still think that having a ssh-add -p path/to/the/socket would be better. It could be dynamically set and would not require a DE session restart. Best Regards, -- Fabiano Fid?ncio From peter at stuge.se Sat Sep 19 10:57:00 2015 From: peter at stuge.se (Peter Stuge) Date: Sat, 19 Sep 2015 02:57:00 +0200 Subject: [RFE] Multiple ssh-agent support In-Reply-To: References: <20150918170739.7126.qmail@stuge.se> Message-ID: <20150919005700.10492.qmail@stuge.se> Fabiano Fid?ncio wrote: > > One obvious approach is to create a proxy agent which looks like an > > agent to all clients, but which also integrates with SPICE. > > This is a good solution, probably the best one. The main problem is > how to implement it. 
> We have two clear ways for adding a proxy agent. The proxy agent is not "added" but would run "in front of" the original local agent. In addition to simply proxying from clients to the original local agent, the proxy agent would be capable of communicating across SPICE. > One is with the SSH_AUTH_SOCK supporting a list of sockets, SSH_AUTH_SOCK could be dynamically changed to point to the proxy agent. > The other option would be extend the ssh-agent protocol to support a few new operations (add/remove the proxy agent) and then we could just do a ssh-add --proxy path/to/the/socket ... This seems unnecessary - just put the proxy agent in front of the original one. //Peter From fidencio at redhat.com Sat Sep 19 11:17:40 2015 From: fidencio at redhat.com (=?UTF-8?Q?Fabiano_Fid=C3=AAncio?=) Date: Sat, 19 Sep 2015 03:17:40 +0200 Subject: [RFE] Multiple ssh-agent support In-Reply-To: <20150919005700.10492.qmail@stuge.se> References: <20150918170739.7126.qmail@stuge.se> <20150919005700.10492.qmail@stuge.se> Message-ID: On Sat, Sep 19, 2015 at 2:57 AM, Peter Stuge wrote: > Fabiano Fidêncio wrote: >> > One obvious approach is to create a proxy agent which looks like an >> > agent to all clients, but which also integrates with SPICE. >> >> This is a good solution, probably the best one. The main problem is >> how to implement it. >> We have two clear ways for adding a proxy agent. > > The proxy agent is not "added" but would run "in front of" the > original local agent. In addition to simply proxying from clients to > the original local agent, the proxy agent would be capable of > communicating across SPICE. > >> One is with the SSH_AUTH_SOCK supporting a list of sockets, > > SSH_AUTH_SOCK could be dynamically changed to point to the proxy agent. How could it be done dynamically for the whole session? I mean, setting an env var for the whole DE session would require a session restart (at least for GNOME).
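For illustration only (none of this exists in openssh), the relay core of such a "proxy in front" might look like the sketch below: a dispatcher that answers on the socket clients already use and forwards each length-prefixed agent request to a list of upstream agent sockets, returning the first complete reply. The socket layout and the fallback failure message are assumptions; the 4-byte big-endian length framing and the SSH_AGENT_FAILURE message number follow the agent protocol.

```python
import socket

SSH_AGENT_FAILURE = 5  # agent protocol message type for "failure"

def relay_once(request: bytes, upstream_paths):
    """Forward one framed agent request to the first upstream that answers.

    request: a complete message, 4-byte big-endian length prefix included.
    upstream_paths: unix socket paths tried in order (e.g. the spice
    agent first, then the system agent -- a hypothetical layout).
    """
    for path in upstream_paths:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                s.sendall(request)
                header = s.recv(4)
                if len(header) < 4:
                    continue  # short read; treat this upstream as broken
                length = int.from_bytes(header, "big")
                body = b""
                while len(body) < length:
                    chunk = s.recv(length - len(body))
                    if not chunk:
                        break
                    body += chunk
                if len(body) == length:
                    return header + body
        except OSError:
            continue  # socket missing or refused; try the next upstream
    # Nobody answered: synthesize a framed SSH_AGENT_FAILURE reply.
    return (1).to_bytes(4, "big") + bytes([SSH_AGENT_FAILURE])
```

A full proxy would additionally bind its own listening socket, loop over client connections, and route add operations only to the system agent - which is where the "ssh-add must not go to the spice agent" policy from this thread would live.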
> > >> The other option would be extend the ssh-agent protocol to support a >> few new operations (add/remove the proxy agent) and then we could just >> do a ssh-add --proxy path/to/the/socket ... > > This seems unneccessary - just put the proxy agent in front of the > original one. And here we have the problem to convince DE developers to set the spice-agent as the first one ... actually, I don't think that would be a problem for GNOME but may be a problem for any other DEs, I will try to talk to them.. Hmm. Maybe it can be the best way to go, but I still have to do some tests using kde/xfce and see the if I can ensure that the spice-agent will run firstly and then that the ssh-agent will set SSH_AUTH_SOCK=$SSH_AUTH_SOCK:/path/to/the/system/ssh/agent. Best Regards, -- Fabiano Fid?ncio From carlo.abelli at gmail.com Sun Sep 20 00:36:52 2015 From: carlo.abelli at gmail.com (Carlo Abelli) Date: Sat, 19 Sep 2015 10:36:52 -0400 Subject: OpenSSH Always Hangs When Connecting to Remote Message-ID: <55FD7304.8010204@gmail.com> I am running Arch Linux. Very updated version. When I try to connect to remote servers using OpenSSH I get a hang as show here: $ ssh -v compsci at 10.1.1.12 OpenSSH_7.1p1, OpenSSL 1.0.2d 9 Jul 2015 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Connecting to 10.1.1.12 [10.1.1.12] port 22. debug1: Connection established. 
debug1: identity file /home/carloabelli/.ssh/id_rsa type 1 debug1: key_load_public: No such file or directory debug1: identity file /home/carloabelli/.ssh/id_rsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/carloabelli/.ssh/id_dsa type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/carloabelli/.ssh/id_dsa-cert type -1 debug1: identity file /home/carloabelli/.ssh/id_ecdsa type 3 debug1: key_load_public: No such file or directory debug1: identity file /home/carloabelli/.ssh/id_ecdsa-cert type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/carloabelli/.ssh/id_ed25519 type -1 debug1: key_load_public: No such file or directory debug1: identity file /home/carloabelli/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_7.1 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2 debug1: match: OpenSSH_6.2 pat OpenSSH* compat 0x04000000 debug1: Authenticating to 10.1.1.12:22 as 'compsci' debug1: SSH2_MSG_KEXINIT sent or here: $ ssh -v thebes.openshells.net OpenSSH_7.1p1, OpenSSL 1.0.2d 9 Jul 2015 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Connecting to thebes.openshells.net [23.239.220.55] port 22. I believe this is an OpenSSH issue as dropbear works fine: $ dbclient compsci at 10.1.1.12 Last login: Sat Sep 19 00:34:39 2015 from 10.11.1.253 compsci-server:~ compsci$ I have been struggling with this one. Any thoughts? Thanks, Carlo From dtucker at zip.com.au Sun Sep 20 17:25:13 2015 From: dtucker at zip.com.au (Darren Tucker) Date: Sun, 20 Sep 2015 17:25:13 +1000 Subject: OpenSSH Always Hangs When Connecting to Remote In-Reply-To: <55FD7304.8010204@gmail.com> References: <55FD7304.8010204@gmail.com> Message-ID: On Sep 20, 2015 2:08 AM, "Carlo Abelli" wrote: ... > debug1: SSH2_MSG_KEXINIT sent I suspect a path mtu problem. 
The key exchange packet is one of the first large ones in an SSH connection so it tends to show up such problems. See http://www.snailbook.com/faq/mtu-mismatch.auto.html > I believe this is an OpenSSH issue as dropbear works fine: Dropbear supports a much smaller number of key exchange methods (exactly one in older versions) so its packets are much smaller than openssh's. From carlo.abelli at gmail.com Mon Sep 21 04:28:05 2015 From: carlo.abelli at gmail.com (Carlo Abelli) Date: Sun, 20 Sep 2015 14:28:05 -0400 Subject: OpenSSH Always Hangs When Connecting to Remote In-Reply-To: References: <55FD7304.8010204@gmail.com> Message-ID: <55FEFAB5.1050001@gmail.com> On 09/20/2015 03:25 AM, Darren Tucker wrote: > I suspect a path mtu problem. The key exchange packet is one of the > first large ones in an SSH connection so it tends to show up such problems. > > Seehttp://www.snailbook.com/faq/mtu-mismatch.auto.html > Has this been changed? SSH used to work fine on my old machine. My local mtu and server mtu are higher than required. Therefore the only thing I could expect would be a lower mtu on the vpn. Has this changed in recent versions. As previously I was using OSX I expect the OpenSSH version was older. Still strange to me that it worked before and not now. I will test tomorrow to see if connecting on the actual network solves the issue. From lists at eitanadler.com Mon Sep 21 05:48:44 2015 From: lists at eitanadler.com (Eitan Adler) Date: Sun, 20 Sep 2015 12:48:44 -0700 Subject: OpenSSH Always Hangs When Connecting to Remote In-Reply-To: <55FEFAB5.1050001@gmail.com> References: <55FD7304.8010204@gmail.com> <55FEFAB5.1050001@gmail.com> Message-ID: On 20 September 2015 at 11:28, Carlo Abelli wrote: > On 09/20/2015 03:25 AM, Darren Tucker wrote: >> I suspect a path mtu problem. The key exchange packet is one of the >> first large ones in an SSH connection so it tends to show up such problems. 
>> >> Seehttp://www.snailbook.com/faq/mtu-mismatch.auto.html >> > > Has this been changed? SSH used to work fine on my old machine. My local > mtu and server mtu are higher than required. Therefore the only thing I > could expect would be a lower mtu on the vpn. Has this changed in recent > versions. As previously I was using OSX I expect the OpenSSH version was > older. Still strange to me that it worked before and not now. I will > test tomorrow to see if connecting on the actual network solves the issue. It is common for VPNs to introduce lower MTU limits and to not correctly send ICMP code 3 type 4 packets. This isn't so much of an SSH problem as it is a misconfiguration on the network path to the destination. See https://www.ietf.org/rfc/rfc2923.txt for further examples. -- Eitan Adler From keisial at gmail.com Mon Sep 21 06:05:24 2015 From: keisial at gmail.com (=?ISO-8859-1?Q?=C1ngel_Gonz=E1lez?=) Date: Sun, 20 Sep 2015 22:05:24 +0200 Subject: OpenSSH Always Hangs When Connecting to Remote In-Reply-To: <55FEFAB5.1050001@gmail.com> References: <55FD7304.8010204@gmail.com> <55FEFAB5.1050001@gmail.com> Message-ID: <55FF1184.3070205@gmail.com> On 20/09/15 20:28, Carlo Abelli wrote: > On 09/20/2015 03:25 AM, Darren Tucker wrote: >> I suspect a path mtu problem. The key exchange packet is one of the >> first large ones in an SSH connection so it tends to show up such problems. >> >> Seehttp://www.snailbook.com/faq/mtu-mismatch.auto.html > Has this been changed? SSH used to work fine on my old machine. My local > mtu and server mtu are higher than required. Therefore the only thing I > could expect would be a lower mtu on the vpn. Has this changed in recent > versions. As previously I was using OSX I expect the OpenSSH version was > older. Still strange to me that it worked before and not now. I will > test tomorrow to see if connecting on the actual network solves the issue. New versions tend to add more key exchanges, so yes. 
You can also use ssh -o KexAlgorithms= to test the hypothesis. From carlo.abelli at gmail.com Mon Sep 21 12:43:31 2015 From: carlo.abelli at gmail.com (Carlo Abelli) Date: Sun, 20 Sep 2015 22:43:31 -0400 Subject: OpenSSH Always Hangs When Connecting to Remote In-Reply-To: <55FF1184.3070205@gmail.com> References: <55FD7304.8010204@gmail.com> <55FEFAB5.1050001@gmail.com> <55FF1184.3070205@gmail.com> Message-ID: Thanks all. I'll check tomorrow if it is indeed the VPN by connecting to the network and testing. On Sunday, September 20, 2015, ?ngel Gonz?lez wrote: > On 20/09/15 20:28, Carlo Abelli wrote: > >> On 09/20/2015 03:25 AM, Darren Tucker wrote: >> >>> I suspect a path mtu problem. The key exchange packet is one of the >>> first large ones in an SSH connection so it tends to show up such >>> problems. >>> >>> Seehttp://www.snailbook.com/faq/mtu-mismatch.auto.html >>> >> Has this been changed? SSH used to work fine on my old machine. My local >> mtu and server mtu are higher than required. Therefore the only thing I >> could expect would be a lower mtu on the vpn. Has this changed in recent >> versions. As previously I was using OSX I expect the OpenSSH version was >> older. Still strange to me that it worked before and not now. I will >> test tomorrow to see if connecting on the actual network solves the issue. >> > New versions tend to add more key exchanges, so yes. You can also use > ssh -o KexAlgorithms= to test the hypothesis. > > > -- Thanks, Carlo From philipp.marek at linbit.com Mon Sep 21 15:40:56 2015 From: philipp.marek at linbit.com (Philipp Marek) Date: Mon, 21 Sep 2015 07:40:56 +0200 Subject: [RFE] Multiple ssh-agent support In-Reply-To: References: <55FC7B04.2050902@gmail.com> Message-ID: <20150921054056.GA4659@cacao.linbit> > > unlinking the socket seems a bit overkill. You could play with > > SSH_AUTH_SOCK > > Playing with SSH_AUTH_SOCK may be a bit problematic. 
As far as I > understand it would require a session restart in order to set a new > value to the env var (at least using GNOME). > Btw, I would like to be really clear here that I am focused in a > DE-agnostic solution. :-) Well, just move the existing socket aside (to another name in the same directory), and have your spice agent provide a socket with the original name. > > I would make a proxy ssh agent that linearly attempts from each > > child agent. The add operations would always go to the first agent > > (unless it returned an error?). That sounds easy enough - after moving the "original" socket aside, fall back to queries on that one if the "new" agent can't answer. That way we'd get a chain of agents, each asking the "older"/previous if needed. > > I also like the idea of SSH_AUTH_SOCK containing a list of sockets. Uh, that sounds like more complexity - and that means more code. As the ssh-agent is handling private keys, it should be as small as possible - forwarding queries to only one (the next one) is enough for this use case. > The proxy agent would be the spice one or the one already running in the system? I guess that the spice-agent wouldn't add keys back to the developer machine; that runs a bit against having the key on the VM (and only the VM), if it routinely gets moved across the network. So I guess the spice agent would need to provide it, by storing it in the VM system agent. From fidencio at redhat.com Mon Sep 21 16:14:27 2015 From: fidencio at redhat.com (=?UTF-8?Q?Fabiano_Fid=C3=AAncio?=) Date: Mon, 21 Sep 2015 08:14:27 +0200 Subject: [RFE] Multiple ssh-agent support In-Reply-To: <20150921054056.GA4659@cacao.linbit> References: <55FC7B04.2050902@gmail.com> <20150921054056.GA4659@cacao.linbit> Message-ID: On Mon, Sep 21, 2015 at 7:40 AM, Philipp Marek wrote: >> > unlinking the socket seems a bit overkill. You could play with >> > SSH_AUTH_SOCK >> >> Playing with SSH_AUTH_SOCK may be a bit problematic. 
As far as I >> understand it would require a session restart in order to set a new >> value to the env var (at least using GNOME). >> Btw, I would like to be really clear here that I am focused in a >> DE-agnostic solution. :-) > Well, just move the existing socket aside (to another name in the same > directory), and have your spice agent provide a socket with the original > name. Here it breaks gnome-keyring-daemon --replace. I mean, if I take over the gnome-keyring agent socket path (always in /run/user/$uid/keyring/ssh), when the user runs gnome-keyring-daemon --replace it replaces my spice-agent :-\ > >> > I also like the idea of SSH_AUTH_SOCK containing a list of sockets. > Uh, that sounds like more complexity - and that means more code. > As the ssh-agent is handling private keys, it should be as small as > possible - forwarding queries to only one (the next one) is enough for > this use case. > > >> The proxy agent would be the spice one or the one already running in the system? > I guess that the spice-agent wouldn't add keys back to the developer > machine; that runs a bit against having the key on the VM (and only the > VM), if it routinely gets moved across the network. > > So I guess the spice agent would need to provide it, by storing it in the > VM system agent. > I agree here. From jul_bsd at yahoo.fr Tue Sep 22 08:13:39 2015 From: jul_bsd at yahoo.fr (jul) Date: Mon, 21 Sep 2015 22:13:39 +0000 (UTC) Subject: More logging for ssh tunnels? Message-ID: <1314198263.1739274.1442873619758.JavaMail.yahoo@mail.yahoo.com> Hello, While auditing some system and setting up some ssh tunnels, I asked myself if there was a way to control ssh tunnel usage outside of restricting them with PermitOpen in sshd config or authorized_keys.
I found this page with an audit patch https://blog.rootshell.be/2009/03/01/keep-an-eye-on-ssh-forwarding/ Another page referenced other monitoring ways https://serverfault.com/questions/181660/how-do-i-log-ssh-port-forwards The patch does not seem to be very complicated. It has been "submitted" on the list in 2013 with no response https://marc.info/?l=openssh-unix-dev&m=136197476517114&w=2 Any chance for it to be reviewed? Thanks a lot for your great work Cheers J From dtucker at zip.com.au Tue Sep 22 10:49:24 2015 From: dtucker at zip.com.au (Darren Tucker) Date: Tue, 22 Sep 2015 10:49:24 +1000 Subject: OpenSSH Always Hangs When Connecting to Remote In-Reply-To: <55FF1184.3070205@gmail.com> References: <55FD7304.8010204@gmail.com> <55FEFAB5.1050001@gmail.com> <55FF1184.3070205@gmail.com> Message-ID: On Mon, Sep 21, 2015 at 6:05 AM, Ángel González wrote: > > New versions tend to add more key exchanges, so yes. You can also use > ssh -o KexAlgorithms= to test the hypothesis. > Note that this is not a definitive test because the server will still offer its full list of key exchange and cipher methods, so depending on exactly what and where the problem is, this could still potentially tickle MTU blackhole problems. You'd need to restrict the OpenSSH server's KexAlgorithms, HostKeyAlgorithms, Ciphers and Compression settings for an accurate test, and note that this configuration would be significantly less safe than OpenSSH's defaults. I'd try the MTU thing in the link I sent you. Alternatively, if you have access to both ends via some other means, find the ssh connection in the output of "netstat" on both sides and check if the SendQ column stays non-zero indicating that the network traffic never gets acknowledged. $ telnet openssh 22 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. SSH-2.0-OpenSSH_7.1 SSH-2.0-me t ???? ?P6???}:?
Ydiffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1'ssh-rsa,ecdsa-sha2-nistp256, ssh-ed25519lchacha20-poly1305 at openssh.com,aes128-ctr,aes192-ctr,aes256-ctr, aes128-gcm at openssh.com,aes256-gcm at openssh.comlchacha20-poly1305@openssh.com ,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm at openssh.com, aes256-gcm at openssh.com?umac-64-etm at openssh.com,umac-128-etm at openssh.com, hmac-sha2-256-etm at openssh.com,hmac-sha2-512-etm at openssh.com, hmac-sha1-etm at openssh.com,umac-64 at openssh.com,umac-128 at openssh.com ,hmac-sha2-256,hmac-sha2-512,hmac-sha1?umac-64-etm at openssh.com, umac-128-etm at openssh.com,hmac-sha2-256-etm at openssh.com, hmac-sha2-512-etm at openssh.com,hmac-sha1-etm at openssh.com,umac-64 at openssh.com, umac-128 at openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 none, zlib at openssh.com none,zlib at openssh.com ^] telnet> quit Connection closed $ telnet dropear 22 Trying 192.168.34.1... Connected to rtr. Escape character is '^]'. SSH-2.0-dropbear_0.46 SSH-2.0-me ?? ?Hlm?O?)d??8? diffie-hellman-group1-sha1ssh-rs3des-cb3des-cbc hmac-sha1,hmac-md5 hmac-sha1,hmac-md5 none none ?h8u??! ^] telnet> quit Connection closed -- Darren Tucker (dtucker at zip.com.au) GPG key 8FF4FA69 / D9A3 86E9 7EEE AF4B B2D4 37C9 C982 80C7 8FF4 FA69 Good judgement comes with experience. Unfortunately, the experience usually comes from bad judgement. From alex at alex.org.uk Tue Sep 22 21:54:37 2015 From: alex at alex.org.uk (Alex Bligh) Date: Tue, 22 Sep 2015 12:54:37 +0100 Subject: Failure without controlling tty irrespective of -tt / -T Message-ID: I'm a bit at my wits end with this one. I'm seeing a problem where an automated script fails if it doesn't have a tty. Stripping it right back, the issue is that (open)ssh to a Windows build box VM succeeds if openssh has a controlling pty, and fails if it doesn't. 
IE: ssh -T host command succeeds ssh -tt host command succeeds nohup ssh -T host command fails nohup ssh -tt host command fails I'm running under 'env -i' to clear the environment off. I have verified the MS end is not actually running the command (as opposed to merely the output not showing) - for one it's 100 times quicker without nohup. I'm running: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014 -- Alex Bligh $ rm -f nohup.out; env -i nohup ssh -vvv -T -oGSSAPIAuthentication=no -oUserKnownHostsFile=unixbuild/known_hosts -oNumberOfPasswordPrompts=0 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i build/protkey -p 10037 Administrator at 127.0.0.1 'powershell /c echo hello' ; cat nohup.out nohup: ignoring input and appending output to 'nohup.out' OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 127.0.0.1 [127.0.0.1] port 10037. debug1: Connection established. 
debug3: Incorrect RSA1 identifier debug3: Could not load "build/protkey" as a RSA1 public key debug1: identity file build/protkey type 1 debug1: identity file build/protkey-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 debug1: Remote protocol version 2.0, remote software version WeOnlyDo 2.4.3 debug1: no match: WeOnlyDo 2.4.3 debug2: fd 3 setting O_NONBLOCK debug3: put_host_port: [127.0.0.1]:10037 debug3: load_hostkeys: loading entries for host "[127.0.0.1]:10037" from file "unixbuild/known_hosts" debug3: load_hostkeys: found key type RSA in file unixbuild/known_hosts:12 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-rsa-cert-v01 at openssh.com,ssh-rsa-cert-v00 at openssh.com,ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: curve25519-sha256 at libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa-cert-v01 at openssh.com,ssh-rsa-cert-v00 at openssh.com,ssh-rsa,ecdsa-sha2-nistp256-cert-v01 at openssh.com,ecdsa-sha2-nistp384-cert-v01 at openssh.com,ecdsa-sha2-nistp521-cert-v01 at openssh.com,ssh-ed25519-cert-v01 at openssh.com,ssh-dss-cert-v01 at openssh.com,ssh-dss-cert-v00 at openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm at openssh.com,aes256-gcm at openssh.com,chacha20-poly1305 at openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc at lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm at openssh.com,aes256-gcm at openssh.com,chacha20-poly1305 at 
openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc at lysator.liu.se debug2: kex_parse_kexinit: hmac-md5-etm at openssh.com,hmac-sha1-etm at openssh.com,umac-64-etm at openssh.com,umac-128-etm at openssh.com,hmac-sha2-256-etm at openssh.com,hmac-sha2-512-etm at openssh.com,hmac-ripemd160-etm at openssh.com,hmac-sha1-96-etm at openssh.com,hmac-md5-96-etm at openssh.com,hmac-md5,hmac-sha1,umac-64 at openssh.com,umac-128 at openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160 at openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5-etm at openssh.com,hmac-sha1-etm at openssh.com,umac-64-etm at openssh.com,umac-128-etm at openssh.com,hmac-sha2-256-etm at openssh.com,hmac-sha2-512-etm at openssh.com,hmac-ripemd160-etm at openssh.com,hmac-sha1-96-etm at openssh.com,hmac-md5-96-etm at openssh.com,hmac-md5,hmac-sha1,umac-64 at openssh.com,umac-128 at openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160 at openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib at openssh.com,zlib debug2: kex_parse_kexinit: none,zlib at openssh.com,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group1-sha1,diffie-hellman-group14-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,aes128-ctr,3des-cbc,blowfish-cbc,aes192-cbc,aes192-ctr,aes256-cbc,aes256-ctr,rijndael128-cbc,rijndael192-cbc,rijndael256-cbc,rijndael-cbc at lysator.liu.se,cast128-cbc debug2: kex_parse_kexinit: aes128-cbc,aes128-ctr,3des-cbc,blowfish-cbc,aes192-cbc,aes192-ctr,aes256-cbc,aes256-ctr,rijndael128-cbc,rijndael192-cbc,rijndael256-cbc,rijndael-cbc at lysator.liu.se,cast128-cbc debug2: kex_parse_kexinit: hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-sha1-96,hmac-md5,none debug2: 
kex_parse_kexinit: hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-sha1-96,hmac-md5,none debug2: kex_parse_kexinit: none,none debug2: kex_parse_kexinit: none,none debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: setup hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: RSA f3:0e:1c:c8:14:c9:11:b1:f4:6a:a3:4b:8d:66:0b:f7 debug3: put_host_port: [127.0.0.1]:10037 debug3: put_host_port: [127.0.0.1]:10037 debug3: load_hostkeys: loading entries for host "[127.0.0.1]:10037" from file "unixbuild/known_hosts" debug3: load_hostkeys: found key type RSA in file unixbuild/known_hosts:12 debug3: load_hostkeys: loaded 1 keys debug1: Host '[127.0.0.1]:10037' is known and matches the RSA host key. debug1: Found key in unixbuild/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: build/protkey (0x7ff216a52da0), explicit debug1: Authentications that can continue: password,gssapi-with-mic,publickey debug3: start over, passed a different list password,gssapi-with-mic,publickey debug3: preferred publickey,keyboard-interactive debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: build/protkey debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts 
key: pkalg ssh-rsa blen 535 debug2: input_userauth_pk_ok: fp 94:64:fc:54:e6:91:82:ee:c4:4e:ae:bd:cd:15:36:fa debug3: sign_and_send_pubkey: RSA 94:64:fc:54:e6:91:82:ee:c4:4e:ae:bd:cd:15:36:fa debug1: key_parse_private2: missing begin marker debug1: read PEM private key done: type RSA debug1: Authentication succeeded (publickey). Authenticated to 127.0.0.1 ([127.0.0.1]:10037). debug2: fd 4 setting O_NONBLOCK debug2: fd 5 setting O_NONBLOCK debug3: fd 6 is O_NONBLOCK debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Entering interactive session. debug2: callback start debug2: fd 3 setting TCP_NODELAY debug3: packet_set_tos: set IP_TOS 0x08 debug2: client_session2_setup: id 0 debug1: Sending environment. debug1: Sending command: powershell /c echo hello debug2: channel 0: request exec confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 131072 rmax 98304 debug2: channel 0: read<=0 rfd 4 len -1 debug2: channel 0: read failed debug2: channel 0: close_read debug2: channel 0: input open -> drain debug2: channel 0: ibuf empty debug2: channel 0: send eof debug2: channel 0: input drain -> closed debug2: channel_input_status_confirm: type 99 id 0 debug2: exec request accepted on channel 0 debug2: channel 0: rcvd adjust 0 debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug2: channel 0: rcvd close debug2: channel 0: output open -> drain debug3: channel 0: will not send data after close debug2: channel 0: obuf empty debug2: channel 0: close_write debug2: channel 0: output drain -> closed debug2: channel 0: almost dead debug2: channel 0: gc: notify user debug2: channel 0: gc: user detached debug2: channel 0: send close debug2: channel 0: is dead debug2: channel 0: garbage collecting debug1: channel 0: free: client-session, nchannels 1 debug3: channel 0: status: The following connections are open: #0 client-session (t4 r0 i3/0 o3/0 fd -1/-1 cc -1) debug1: fd 0 clearing 
O_NONBLOCK debug1: fd 1 clearing O_NONBLOCK debug3: fd 2 is not O_NONBLOCK Transferred: sent 4128, received 1912 bytes, in 0.0 seconds Bytes per second: sent 406787.3, received 188415.0 debug1: Exit status 0 $ env -i ssh -vvv -T -oGSSAPIAuthentication=no -oUserKnownHostsFile=unixbuild/known_hosts -oNumberOfPasswordPrompts=0 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i build/protkey -p 10037 Administrator at 127.0.0.1 'powershell echo hello' OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 127.0.0.1 [127.0.0.1] port 10037. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "build/protkey" as a RSA1 public key debug1: identity file build/protkey type 1 debug1: identity file build/protkey-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 debug1: Remote protocol version 2.0, remote software version WeOnlyDo 2.4.3 debug1: no match: WeOnlyDo 2.4.3 debug2: fd 3 setting O_NONBLOCK debug3: put_host_port: [127.0.0.1]:10037 debug3: load_hostkeys: loading entries for host "[127.0.0.1]:10037" from file "unixbuild/known_hosts" debug3: load_hostkeys: found key type RSA in file unixbuild/known_hosts:12 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-rsa-cert-v01 at openssh.com,ssh-rsa-cert-v00 at openssh.com,ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: curve25519-sha256 at libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa-cert-v01 at openssh.com,ssh-rsa-cert-v00 at openssh.com,ssh-rsa,ecdsa-sha2-nistp256-cert-v01 at 
openssh.com,ecdsa-sha2-nistp384-cert-v01 at openssh.com,ecdsa-sha2-nistp521-cert-v01 at openssh.com,ssh-ed25519-cert-v01 at openssh.com,ssh-dss-cert-v01 at openssh.com,ssh-dss-cert-v00 at openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm at openssh.com,aes256-gcm at openssh.com,chacha20-poly1305 at openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc at lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm at openssh.com,aes256-gcm at openssh.com,chacha20-poly1305 at openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc at lysator.liu.se debug2: kex_parse_kexinit: hmac-md5-etm at openssh.com,hmac-sha1-etm at openssh.com,umac-64-etm at openssh.com,umac-128-etm at openssh.com,hmac-sha2-256-etm at openssh.com,hmac-sha2-512-etm at openssh.com,hmac-ripemd160-etm at openssh.com,hmac-sha1-96-etm at openssh.com,hmac-md5-96-etm at openssh.com,hmac-md5,hmac-sha1,umac-64 at openssh.com,umac-128 at openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160 at openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5-etm at openssh.com,hmac-sha1-etm at openssh.com,umac-64-etm at openssh.com,umac-128-etm at openssh.com,hmac-sha2-256-etm at openssh.com,hmac-sha2-512-etm at openssh.com,hmac-ripemd160-etm at openssh.com,hmac-sha1-96-etm at openssh.com,hmac-md5-96-etm at openssh.com,hmac-md5,hmac-sha1,umac-64 at openssh.com,umac-128 at openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160 at openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib at openssh.com,zlib debug2: kex_parse_kexinit: none,zlib at openssh.com,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: 
reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group1-sha1,diffie-hellman-group14-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,aes128-ctr,3des-cbc,blowfish-cbc,aes192-cbc,aes192-ctr,aes256-cbc,aes256-ctr,rijndael128-cbc,rijndael192-cbc,rijndael256-cbc,rijndael-cbc at lysator.liu.se,cast128-cbc debug2: kex_parse_kexinit: aes128-cbc,aes128-ctr,3des-cbc,blowfish-cbc,aes192-cbc,aes192-ctr,aes256-cbc,aes256-ctr,rijndael128-cbc,rijndael192-cbc,rijndael256-cbc,rijndael-cbc at lysator.liu.se,cast128-cbc debug2: kex_parse_kexinit: hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-sha1-96,hmac-md5,none debug2: kex_parse_kexinit: hmac-sha2-256,hmac-sha2-512,hmac-sha1,hmac-sha1-96,hmac-md5,none debug2: kex_parse_kexinit: none,none debug2: kex_parse_kexinit: none,none debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: setup hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: RSA f3:0e:1c:c8:14:c9:11:b1:f4:6a:a3:4b:8d:66:0b:f7 debug3: put_host_port: [127.0.0.1]:10037 debug3: put_host_port: [127.0.0.1]:10037 debug3: load_hostkeys: loading entries for host "[127.0.0.1]:10037" from file "unixbuild/known_hosts" debug3: load_hostkeys: found key type RSA in file unixbuild/known_hosts:12 debug3: load_hostkeys: loaded 1 keys debug1: Host '[127.0.0.1]:10037' is known and matches the RSA host key. 
debug1: Found key in unixbuild/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: build/protkey (0x7f0ac907ed90), explicit debug1: Authentications that can continue: password,gssapi-with-mic,publickey debug3: start over, passed a different list password,gssapi-with-mic,publickey debug3: preferred publickey,keyboard-interactive debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: build/protkey debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 535 debug2: input_userauth_pk_ok: fp 94:64:fc:54:e6:91:82:ee:c4:4e:ae:bd:cd:15:36:fa debug3: sign_and_send_pubkey: RSA 94:64:fc:54:e6:91:82:ee:c4:4e:ae:bd:cd:15:36:fa debug1: key_parse_private2: missing begin marker debug1: read PEM private key done: type RSA debug1: Authentication succeeded (publickey). Authenticated to 127.0.0.1 ([127.0.0.1]:10037). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Entering interactive session. debug2: callback start debug2: fd 3 setting TCP_NODELAY debug3: packet_set_tos: set IP_TOS 0x08 debug2: client_session2_setup: id 0 debug1: Sending environment. 
debug1: Sending command: powershell echo hello debug2: channel 0: request exec confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 131072 rmax 98304 debug2: channel_input_status_confirm: type 99 id 0 debug2: exec request accepted on channel 0 debug2: channel 0: rcvd adjust 0 hello <-------------------------------------------------- OUTPUT HERE debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug2: channel 0: rcvd close debug2: channel 0: output open -> drain debug2: channel 0: close_read debug2: channel 0: input open -> closed debug3: channel 0: will not send data after close debug2: channel 0: obuf empty debug2: channel 0: close_write debug2: channel 0: output drain -> closed debug2: channel 0: almost dead debug2: channel 0: gc: notify user debug2: channel 0: gc: user detached debug2: channel 0: send close debug2: channel 0: is dead debug2: channel 0: garbage collecting debug1: channel 0: free: client-session, nchannels 1 debug3: channel 0: status: The following connections are open: #0 client-session (t4 r0 i3/0 o3/0 fd -1/-1 cc -1) Transferred: sent 4096, received 1944 bytes, in 0.9 seconds Bytes per second: sent 4658.3, received 2210.9 debug1: Exit status 0 From alex at alex.org.uk Tue Sep 22 23:20:49 2015 From: alex at alex.org.uk (Alex Bligh) Date: Tue, 22 Sep 2015 14:20:49 +0100 Subject: Failure without controlling tty irrespective of -tt / -T In-Reply-To: References: Message-ID: <49679058-3F25-4D78-8DEB-C755BC1FE0D9@alex.org.uk> On 22 Sep 2015, at 12:54, Alex Bligh wrote: > I'm a bit at my wits end with this one. > > I'm seeing a problem where an automated script fails if it doesn't have a tty. Stripping it right back, the issue is that (open)ssh to a Windows build box VM succeeds if openssh has a controlling pty, and fails if it doesn't. 
>
> IE:
>
> ssh -T host command succeeds
> ssh -tt host command succeeds
> nohup ssh -T host command fails
> nohup ssh -tt host command fails
>
> I'm running under 'env -i' to clear the environment off.
>
> I have verified the MS end is not actually running the command (as opposed to merely the output not showing) - for one it's 100 times quicker without nohup.
>
> I'm running:
> OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014

I have a suspicion this might be an openssh client bug. strace shows the connection dies after receiving EBADF. However broken the ssh server is, I don't think openssh should be reading from an FD that returns EBADF.

Here's ssh launched with 'nohup'. Note the EBADF near the end (when it decides the connection is dead).

26857 1442926333.820491 write(2, "debug1: Entering interactive session.\r\n", 39) = 39 26857 1442926333.820580 rt_sigaction(SIGHUP, NULL, {SIG_IGN, [], 0}, 8) = 0 26857 1442926333.820658 rt_sigaction(SIGINT, NULL, {SIG_DFL, [], 0}, 8) = 0 26857 1442926333.820734 rt_sigaction(SIGINT, {SIG_IGN, [], SA_RESTORER, 0x7fac479f6d40}, NULL, 8) = 0 26857 1442926333.820804 rt_sigaction(SIGINT, NULL, {SIG_IGN, [], SA_RESTORER, 0x7fac479f6d40}, 8) = 0 26857 1442926333.820873 rt_sigaction(SIGINT, {0x7fac48e37950, [], SA_RESTORER, 0x7fac479f6d40}, NULL, 8) = 0 26857 1442926333.820942 rt_sigaction(SIGQUIT, NULL, {SIG_DFL, [], 0}, 8) = 0 26857 1442926333.821010 rt_sigaction(SIGQUIT, {SIG_IGN, [], SA_RESTORER, 0x7fac479f6d40}, NULL, 8) = 0 26857 1442926333.821078 rt_sigaction(SIGQUIT, NULL, {SIG_IGN, [], SA_RESTORER, 0x7fac479f6d40}, 8) = 0 26857 1442926333.821148 rt_sigaction(SIGQUIT, {0x7fac48e37950, [], SA_RESTORER, 0x7fac479f6d40}, NULL, 8) = 0 26857 1442926333.821216 rt_sigaction(SIGTERM, NULL, {SIG_DFL, [], 0}, 8) = 0 26857 1442926333.821285 rt_sigaction(SIGTERM, {SIG_IGN, [], SA_RESTORER, 0x7fac479f6d40}, NULL, 8) = 0 26857 1442926333.821353 rt_sigaction(SIGTERM, NULL, {SIG_IGN, [], SA_RESTORER, 0x7fac479f6d40}, 8) = 0 26857
1442926333.821421 rt_sigaction(SIGTERM, {0x7fac48e37950, [], SA_RESTORER, 0x7fac479f6d40}, NULL, 8) = 0 26857 1442926333.821489 rt_sigaction(SIGWINCH, NULL, {SIG_DFL, [], 0}, 8) = 0 26857 1442926333.821557 rt_sigaction(SIGWINCH, {0x7fac48e37bd0, [], SA_RESTORER, 0x7fac479f6d40}, NULL, 8) = 0 26857 1442926333.821642 select(7, [3], [3], NULL, NULL) = 1 (out [3]) 26857 1442926333.821729 write(3, "\rN\0360KugC\333\245\311\30\rNPM\22)\265\223-\313\301U\363dk\241\312\354\274\303\314^(I\36z\360V\205\376\354TJ\364\v\227\217\227\tP\"\31O\233zy\17\372\337\226\3211", 64) = 64 26857 1442926333.821892 select(7, [3], [], NULL, NULL) = 1 (in [3]) 26857 1442926333.822569 read(3, "zU\357\210T\356\241_y\263\v\301\217\351\355/{\206\264\2230\177\270\263r\255n\321\226[\33\25 \23\\\311\260\254\0e\261\327\3227\4\326\230\234", 8192) = 48 26857 1442926333.822683 write(2, "debug2: callback start\r\n", 24) = 24 26857 1442926333.822781 getsockopt(3, SOL_TCP, TCP_NODELAY, [0], [4]) = 0 26857 1442926333.822849 write(2, "debug2: fd 3 setting TCP_NODELAY\r\n", 34) = 34 26857 1442926333.822912 setsockopt(3, SOL_TCP, TCP_NODELAY, [1], 4) = 0 26857 1442926333.822972 getsockname(3, {sa_family=AF_INET, sin_port=htons(33222), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0 26857 1442926333.823039 write(2, "debug3: packet_set_tos: set IP_TOS 0x08\r\n", 41) = 41 26857 1442926333.823101 setsockopt(3, SOL_IP, IP_TOS, [8], 4) = 0 26857 1442926333.823163 write(2, "debug2: client_session2_setup: id 0\r\n", 37) = 37 26857 1442926333.823228 write(2, "debug1: Sending environment.\r\n", 30) = 30 26857 1442926333.823294 write(2, "debug1: Sending command: powershell /c echo hello\r\n", 51) = 51 26857 1442926333.823359 write(2, "debug2: channel 0: request exec confirm 1\r\n", 43) = 43 26857 1442926333.823445 write(2, "debug2: callback done\r\n", 23) = 23 26857 1442926333.823510 write(2, "debug2: channel 0: open confirm rwindow 131072 rmax 98304\r\n", 59) = 59 26857 1442926333.823582 select(7, [3 4], [3], NULL, NULL) = 2 
(in [4], out [3]) 26857 1442926333.823652 read(4, 0x7ffee032e970, 16384) = -1 EBADF (Bad file descriptor) 26857 1442926333.823725 write(2, "debug2: channel 0: read<=0 rfd 4 len -1\r\n", 41) = 41 26857 1442926333.823798 write(2, "debug2: channel 0: read failed\r\n", 32) = 32 26857 1442926333.823884 write(2, "debug2: channel 0: close_read\r\n", 31) = 31 26857 1442926333.823951 close(4) = 0 .... Here's ssh launched without 'nohup': 27010 1442926394.519659 write(2, "debug1: Entering interactive session.\r\n", 39) = 39 27010 1442926394.519886 rt_sigaction(SIGHUP, NULL, {SIG_DFL, [], 0}, 8) = 0 27010 1442926394.519961 rt_sigaction(SIGHUP, {SIG_IGN, [], SA_RESTORER, 0x7fcfdb1ccd40}, NULL, 8) = 0 27010 1442926394.520033 rt_sigaction(SIGHUP, NULL, {SIG_IGN, [], SA_RESTORER, 0x7fcfdb1ccd40}, 8) = 0 27010 1442926394.520114 rt_sigaction(SIGHUP, {0x7fcfdc60d950, [], SA_RESTORER, 0x7fcfdb1ccd40}, NULL, 8) = 0 27010 1442926394.520185 rt_sigaction(SIGINT, NULL, {SIG_DFL, [], 0}, 8) = 0 27010 1442926394.520257 rt_sigaction(SIGINT, {SIG_IGN, [], SA_RESTORER, 0x7fcfdb1ccd40}, NULL, 8) = 0 27010 1442926394.520327 rt_sigaction(SIGINT, NULL, {SIG_IGN, [], SA_RESTORER, 0x7fcfdb1ccd40}, 8) = 0 27010 1442926394.520398 rt_sigaction(SIGINT, {0x7fcfdc60d950, [], SA_RESTORER, 0x7fcfdb1ccd40}, NULL, 8) = 0 27010 1442926394.520469 rt_sigaction(SIGQUIT, NULL, {SIG_DFL, [], 0}, 8) = 0 27010 1442926394.520539 rt_sigaction(SIGQUIT, {SIG_IGN, [], SA_RESTORER, 0x7fcfdb1ccd40}, NULL, 8) = 0 27010 1442926394.520622 rt_sigaction(SIGQUIT, NULL, {SIG_IGN, [], SA_RESTORER, 0x7fcfdb1ccd40}, 8) = 0 27010 1442926394.520703 rt_sigaction(SIGQUIT, {0x7fcfdc60d950, [], SA_RESTORER, 0x7fcfdb1ccd40}, NULL, 8) = 0 27010 1442926394.520773 rt_sigaction(SIGTERM, NULL, {SIG_DFL, [], 0}, 8) = 0 27010 1442926394.520844 rt_sigaction(SIGTERM, {SIG_IGN, [], SA_RESTORER, 0x7fcfdb1ccd40}, NULL, 8) = 0 27010 1442926394.520914 rt_sigaction(SIGTERM, NULL, {SIG_IGN, [], SA_RESTORER, 0x7fcfdb1ccd40}, 8) = 0 27010 1442926394.520985 
rt_sigaction(SIGTERM, {0x7fcfdc60d950, [], SA_RESTORER, 0x7fcfdb1ccd40}, NULL, 8) = 0 27010 1442926394.521056 rt_sigaction(SIGWINCH, NULL, {SIG_DFL, [], 0}, 8) = 0 27010 1442926394.521127 rt_sigaction(SIGWINCH, {0x7fcfdc60dbd0, [], SA_RESTORER, 0x7fcfdb1ccd40}, NULL, 8) = 0 27010 1442926394.521208 select(7, [3], [3], NULL, NULL) = 1 (out [3]) 27010 1442926394.521295 write(3, "BG]\251\32b\217z\315\30d\17)\345\266\6\317\37\1\347j\224\10\350\321\347u{^\316\313\216\225\345\27\344\373\373q#mY\25\t\2553R]\371\372!\3756Uo92/j\325\4Z\200\346", 64) = 64 27010 1442926394.521611 select(7, [3], [], NULL, NULL) = 1 (in [3]) 27010 1442926394.522086 read(3, "H\21\331!\370f\2373\34\306X1\376'\335X\"\326\277\25\321\332,\312\237N\341y\362\3721\236N\303\0169\313\2\30\2\216u\333TZ.\357\333", 8192) = 48 27010 1442926394.522178 write(2, "debug2: callback start\r\n", 24) = 24 27010 1442926394.522282 getsockopt(3, SOL_TCP, TCP_NODELAY, [0], [4]) = 0 27010 1442926394.522366 write(2, "debug2: fd 3 setting TCP_NODELAY\r\n", 34) = 34 27010 1442926394.522458 setsockopt(3, SOL_TCP, TCP_NODELAY, [1], 4) = 0 27010 1442926394.522534 getsockname(3, {sa_family=AF_INET, sin_port=htons(33230), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0 27010 1442926394.522628 write(2, "debug3: packet_set_tos: set IP_TOS 0x08\r\n", 41) = 41 27010 1442926394.522719 setsockopt(3, SOL_IP, IP_TOS, [8], 4) = 0 27010 1442926394.522798 write(2, "debug2: client_session2_setup: id 0\r\n", 37) = 37 27010 1442926394.522892 write(2, "debug1: Sending environment.\r\n", 30) = 30 27010 1442926394.522985 write(2, "debug1: Sending command: powershell /c echo hello\r\n", 51) = 51 27010 1442926394.523085 write(2, "debug2: channel 0: request exec confirm 1\r\n", 43) = 43 27010 1442926394.523264 write(2, "debug2: callback done\r\n", 23) = 23 27010 1442926394.523391 write(2, "debug2: channel 0: open confirm rwindow 131072 rmax 98304\r\n", 59) = 59 27010 1442926394.523523 select(7, [3 4], [3], NULL, NULL) = 1 (out [3]) 27010 
1442926394.523616 write(3, "V0\300\\kSoD\353E\"+\344\7\251|\367s\232\22728\22\273\321\345\2H\321u\303Y\31A\237\201\271\326\21\367\360\350\242\353\266\210m\20r\2156\3447\316\5\24.\356\342p\17~D\371\2\245\245\205&T\20\3533F4_\202\273\36\347", 80) = 80 27010 1442926394.523733 select(7, [3 4], [], NULL, NULL) = 1 (in [3]) 27010 1442926394.524474 read(3, "BV\351\356\351\353\340#{@\\6\341>\352)\342\241\307\270j\305\10J\322i`,\214\346a\275", 8192) = 32 27010 1442926394.524570 write(2, "debug2: channel_input_status_confirm: type 99 id 0\r\n", 52) = 52 27010 1442926394.524689 write(2, "debug2: exec request accepted on channel 0\r\n", 44) = 44 27010 1442926394.524785 select(7, [3 4], [], NULL, NULL) = 1 (in [3]) 27010 1442926394.526720 read(3, "\262>\177ta\276\20\375\314\347\2441\240\254<|[\344\272\320\345X,\311\342\201q\241CV\"\314\305\2\314!|pq\235\341\22Y\"N\24Z\261", 8192) = 48 27010 1442926394.526816 write(2, "debug2: channel 0: rcvd adjust 0\r\n", 34) = 34 27010 1442926394.526906 select(7, [3 4], [], NULL, NULL) = 1 (in [3]) 27010 1442926395.409546 read(3, "(\303\273\333H\217o\306\353\372\344(\340{\177\366<\323\237\300\223\251N\316E\0313PS$a\5\2334\372\226\324\33-QO\237\206C\3342(8", 8192) = 48 27010 1442926395.409610 select(7, [3 4], [5], NULL, NULL) = 1 (out [5]) 27010 1442926395.409698 write(5, "hello\r\n", 7) = 7 27010 1442926395.409891 select(7, [3 4], [], NULL, NULL) = 1 (in [3]) 27010 1442926395.440588 read(3, "\251d\352\315\351\26\276\316\246\321k\270\276\243\376=\343\273\372\205a\37R\34\271\304\203<\326\27\354%\35\322\366f\363\246\307\373\212-\32\327\214\277-\255\2612\304\254\256\337\2\216\35\230D2\35LN\365", 8192) = 64 27010 1442926395.440649 write(2, "debug1: client_input_channel_req: channel 0 rtype exit-status reply 0\r\n", 71) = 71 27010 1442926395.440805 select(7, [3 4], [], NULL, NULL) = 1 (in [3]) 27010 1442926395.440935 read(3, "\324<,J\321\250\325H\376\322\230,\312\331\204\312&\266\230\221\240\3200\304\252\352lE\246\311\336]", 8192) = 32 27010 
1442926395.441016 write(2, "debug2: channel 0: rcvd close\r\n", 31) = 31 27010 1442926395.441082 write(2, "debug2: channel 0: output open -> drain\r\n", 41) = 41 27010 1442926395.441144 write(2, "debug2: channel 0: close_read\r\n", 31) = 31 27010 1442926395.441234 close(4) = 0 27010 1442926395.441286 write(2, "debug2: channel 0: input open -> closed\r\n", 41) = 41 27010 1442926395.441382 write(2, "debug3: channel 0: will not send data after close\r\n", 51) = 51 27010 1442926395.441479 write(2, "debug2: channel 0: obuf empty\r\n", 31) = 31 27010 1442926395.441571 write(2, "debug2: channel 0: close_write\r\n", 32) = 32 27010 1442926395.441647 close(5) = 0 27010 1442926395.441708 write(2, "debug2: channel 0: output drain -> closed\r\n", 43) = 43 27010 1442926395.441796 write(2, "debug2: channel 0: almost dead\r\n", 32) = 32
....

The relevant lines seem to be:

26857 1442926333.818989 dup(0) = 4
26857 1442926333.819057 dup(1) = 5
26857 1442926333.819125 dup(2) = 6
...
26857 1442926333.823582 select(7, [3 4], [3], NULL, NULL) = 2 (in [4], out [3])
26857 1442926333.823652 read(4, 0x7ffee032e970, 16384) = -1 EBADF (Bad file descriptor)
...
27010 1442926395.441234 close(4) = 0

i.e. FD 4 is open (or the close would have errored), but returns EBADF. I believe EBADF is only returned if the FD is closed, or not open for reading.

I think this happens because FD 0 gets set up like this by nohup:

28728 1442927535.524873 open("/dev/null", O_WRONLY) = 3
28728 1442927535.524947 dup2(3, 0) = 0
28728 1442927535.525008 close(3) = 0

Per line 125 of https://fossies.org/dox/coreutils-8.24/nohup_8c_source.html this is deliberate.

115   ignoring_input = isatty (STDIN_FILENO);
116   redirecting_stdout = isatty (STDOUT_FILENO);
117   stdout_is_closed = (!redirecting_stdout && errno == EBADF);
118   redirecting_stderr = isatty (STDERR_FILENO);
119
120   /* If standard input is a tty, replace it with /dev/null if possible.
121      Note that it is deliberately opened for *writing*,
122      to ensure any read evokes an error.  */
123   if (ignoring_input)
124     {
125       if (fd_reopen (STDIN_FILENO, "/dev/null", O_WRONLY, 0) < 0)
126         error (exit_internal_failure, errno,
127                _("failed to render standard input unusable"));
128       if (!redirecting_stdout && !redirecting_stderr)
129         error (0, 0, _("ignoring input"));
130     }
131
132

('cron' and 'at' appear to do similarly)

And I have no idea why ssh should be reading from stdin when executing a command with -T.

--
Alex Bligh

From carlo.abelli at gmail.com Tue Sep 22 23:37:48 2015
From: carlo.abelli at gmail.com (Carlo Abelli)
Date: Tue, 22 Sep 2015 09:37:48 -0400
Subject: OpenSSH Always Hangs When Connecting to Remote
In-Reply-To:
References: <55FD7304.8010204@gmail.com> <55FEFAB5.1050001@gmail.com> <55FF1184.3070205@gmail.com>
Message-ID: <560159AC.2020503@gmail.com>

Can confirm that this is due to the VPN as connecting over the network directly does in fact work. This leads me to believe an issue with the mtu on the VPN. Thanks all for your help. I'll inquire to see if it can be changed.

From lists at eitanadler.com Wed Sep 23 00:08:36 2015
From: lists at eitanadler.com (Eitan Adler)
Date: Tue, 22 Sep 2015 07:08:36 -0700
Subject: OpenSSH Always Hangs When Connecting to Remote
In-Reply-To: <560159AC.2020503@gmail.com>
References: <55FD7304.8010204@gmail.com> <55FEFAB5.1050001@gmail.com> <55FF1184.3070205@gmail.com> <560159AC.2020503@gmail.com>
Message-ID:

On 22 September 2015 at 06:37, Carlo Abelli wrote:
> Can confirm that this is due to the VPN as connecting over the network
> directly does in fact work. This leads me to believe an issue with the
> mtu on the VPN. Thanks all for your help. I'll inquire to see if it can
> be changed.

To be clear: the issue isn't with the MTU per se, but with some node on the network not correctly sending ICMP packets. One workaround is to set the MSS for the physical interface to a known good lower value.
-- Eitan Adler From carlo.abelli at gmail.com Wed Sep 23 03:32:52 2015 From: carlo.abelli at gmail.com (Carlo Abelli) Date: Tue, 22 Sep 2015 13:32:52 -0400 Subject: OpenSSH Always Hangs When Connecting to Remote In-Reply-To: References: <55FD7304.8010204@gmail.com> <55FEFAB5.1050001@gmail.com> <55FF1184.3070205@gmail.com> <560159AC.2020503@gmail.com> Message-ID: <560190C4.1040207@gmail.com> > To be clear: the issue isn't with the MTU per se, but with some node > on the network not correctly sending ICMP packets. One workaround is > to set the MSS for the physical interface to a known good lower value. Sorry networking is not at all my strong suit. This would be the MSS on the local interface or on the server interface? From kaleb at wolfssl.com Wed Sep 23 03:49:27 2015 From: kaleb at wolfssl.com (Kaleb Himes) Date: Tue, 22 Sep 2015 11:49:27 -0600 Subject: Inter-op and port (wolfSSL + openSSH) In-Reply-To: References: <20150830160842.GB6818@hatter.bewilderbeest.net> Message-ID: Hi Damien and openSSH team, We were able to discuss your suggestions today in our team meeting. We are eager to work with openSSH on this. We will be making efforts on this project going forward. Are there any other suggestions from your side before we start? Kind Regards, wolfSSL Team and Kaleb Kaleb Himes www.wolfssl.com kaleb at wolfssl.com Skype: kaleb.himes +1 406 381 9556 On Thu, Sep 3, 2015 at 9:12 PM, Damien Miller wrote: > On Tue, 1 Sep 2015, Kaleb Himes wrote: > > > Hi openSSH, > > > > After having time to review our licensing model and perhaps play around > > with our product we were checking back to see what your thoughts might > be. > > > > We also wanted to point out that we only desire to give end-users an > > alternative option to compiling with openSSL. > > End users who configure with the "--enable-wolfssl" option would need to > > consider licensing. > > That would be a part of their project evaluation phase. 
Any patch we > submit > to you would retain your licensing model. > > Hi, > > I'm not opposed to making OpenSSH play nicer with non-OpenSSL crypto > libraries, but I am worried that attempts to do so could yield a worse > #ifdef maze than we already have. > > Microsoft will need to figure out how to handle crypto in their port > of OpenSSH since they'll likely be using CryptoAPI instead of OpenSSL, > so perhaps there is an opportunity to find some nice way of abstracting > out all the BIGNUM, RSA, DSA, EC*, etc out that suits you both (and > cleans up core OpenSSH along the way). > > -d > > From fidencio at redhat.com Fri Sep 25 02:21:51 2015 From: fidencio at redhat.com (=?UTF-8?Q?Fabiano_Fid=C3=AAncio?=) Date: Thu, 24 Sep 2015 18:21:51 +0200 Subject: [RFE] Multiple ssh-agent support In-Reply-To: References: <55FC7B04.2050902@gmail.com> <20150921054056.GA4659@cacao.linbit> Message-ID: Okay, okay. I have tested a few ideas and what ended up working better for me was: 1) have a spice-ssh-agent.sh installed in my /etc/profile.d 2) get the system's SSH_AUTH_SOCK and prepend the path to the spice-ssh-agent's socket there. I tested this with different DEs, using Fedora and it seems to work as expected. So, can we move further with the SSH_AUTH_SOCK containing a list of sockets? Please, the idea is _NOT_ to talk to all of the sockets, is just to have a second socket working in the case where the first one breaks/is not available. Is this the right place to drop a patch to openssh adding this to the agent? Looking forward to your answers! Best Regards, -- Fabiano Fidêncio On Mon, Sep 21, 2015 at 8:14 AM, Fabiano Fidêncio wrote: > On Mon, Sep 21, 2015 at 7:40 AM, Philipp Marek wrote: >>> > unlinking the socket seems a bit overkill. You could play with >>> > SSH_AUTH_SOCK >>> >>> Playing with SSH_AUTH_SOCK may be a bit problematic. As far as I >>> understand it would require a session restart in order to set a new >>> value to the env var (at least using GNOME).
>>> Btw, I would like to be really clear here that I am focused on a >>> DE-agnostic solution. :-) >> Well, just move the existing socket aside (to another name in the same >> directory), and have your spice agent provide a socket with the original >> name. > > Here it breaks gnome-keyring-daemon --replace. > I mean, if I take over the gnome-keyring agent socket path (always in > /run/user/$uid/keyring/ssh), when the user runs gnome-keyring-daemon > --replace it replaces my spice-agent :-\ > >> >>> > I also like the idea of SSH_AUTH_SOCK containing a list of sockets. >> Uh, that sounds like more complexity - and that means more code. >> As the ssh-agent is handling private keys, it should be as small as >> possible - forwarding queries to only one (the next one) is enough for >> this use case. >> >> >>> The proxy agent would be the spice one or the one already running in the system? >> I guess that the spice-agent wouldn't add keys back to the developer >> machine; that runs a bit against having the key on the VM (and only the >> VM), if it routinely gets moved across the network. >> >> So I guess the spice agent would need to provide it, by storing it in the >> VM system agent. >> > > I agree here. From fidencio at redhat.com Sat Sep 26 07:12:11 2015 From: fidencio at redhat.com (=?UTF-8?q?Fabiano=20Fid=C3=AAncio?=) Date: Fri, 25 Sep 2015 23:12:11 +0200 Subject: [RFC][PATCH] Support a list of sockets on SSH_AUTH_SOCKET Message-ID: <1443215531-9109-1-git-send-email-fidencio@redhat.com> The idea behind this change is to add support for different "ssh-agents" being able to run at the same time. It does not change the current behaviour of the ssh-agent (which will set SSH_AUTH_SOCK just for itself). Neither does it change the behaviour of SSH_AGENT_PID (which still supports only one pid). The new implementation will go through the list of sockets (which are separated by a colon (:)), and will return the very first functional one.
An example of the new supported syntax is: SSH_AUTH_SOCK=/run/user/1000/spice/ssh:/tmp/ssh-hHomdONwQus6/agent.6907 The idea has been discussed a little in this e-mail thread: http://lists.mindrot.org/pipermail/openssh-unix-dev/2015-September/034381.html Signed-off-by: Fabiano Fid?ncio --- authfd.c | 40 ++++++++++++++++++++++++++++------------ 1 file changed, 28 insertions(+), 12 deletions(-) diff --git a/authfd.c b/authfd.c index 12bf125..20fcba2 100644 --- a/authfd.c +++ b/authfd.c @@ -83,21 +83,12 @@ decode_reply(u_char type) return SSH_ERR_INVALID_FORMAT; } -/* Returns the number of the authentication fd, or -1 if there is none. */ -int -ssh_get_authentication_socket(int *fdp) +static int +get_authentication_socket(const char *authsocket, int *fdp) { - const char *authsocket; int sock, oerrno; struct sockaddr_un sunaddr; - if (fdp != NULL) - *fdp = -1; - - authsocket = getenv(SSH_AUTHSOCKET_ENV_NAME); - if (!authsocket) - return SSH_ERR_AGENT_NOT_PRESENT; - memset(&sunaddr, 0, sizeof(sunaddr)); sunaddr.sun_family = AF_UNIX; strlcpy(sunaddr.sun_path, authsocket, sizeof(sunaddr.sun_path)); @@ -117,7 +108,32 @@ ssh_get_authentication_socket(int *fdp) *fdp = sock; else close(sock); - return 0; + return SSH_ERR_SUCCESS; +} + +/* Returns the number of the authentication fd, or -1 if there is none. 
*/ +int +ssh_get_authentication_socket(int *fdp) +{ + const char *authsocketlist; + const char *authsocket; + int rc; + + if (fdp != NULL) + *fdp = -1; + + authsocketlist = getenv(SSH_AUTHSOCKET_ENV_NAME); + if (!authsocketlist) + return SSH_ERR_AGENT_NOT_PRESENT; + + authsocket = strtok((char *)authsocketlist, ":"); + + do { + rc = get_authentication_socket(authsocket, fdp); + authsocket = strtok(NULL, ":"); + } while (rc != SSH_ERR_SUCCESS && authsocket != NULL); + + return rc; } /* Communicate with agent: send request and read reply */ -- 2.4.3 From fidencio at redhat.com Sat Sep 26 11:41:38 2015 From: fidencio at redhat.com (=?UTF-8?q?Fabiano=20Fid=C3=AAncio?=) Date: Sat, 26 Sep 2015 03:41:38 +0200 Subject: [RFC][PATCH v2] Support a list of sockets on SSH_AUTH_SOCK Message-ID: <1443231698-17257-1-git-send-email-fidencio@redhat.com> The idea behind this change is to add support for different "ssh-agents" being able to run at the same time. It does not change the current behaviour of the ssh-agent (which will set SSH_AUTH_SOCK just for itself). Neither does it change the behaviour of SSH_AGENT_PID (which still supports only one pid). The new implementation will go through the list of sockets (which are separated by a colon (:)), and will return the very first functional one. 
An example of the new supported syntax is: SSH_AUTH_SOCK=/run/user/1000/spice/ssh:/tmp/ssh-hHomdONwQus6/agent.6907 The idea has been discussed a little in this e-mail thread: http://lists.mindrot.org/pipermail/openssh-unix-dev/2015-September/034381.html Signed-off-by: Fabiano Fid?ncio --- Changes since v1: - Fix a typo in the commit (SSH_AUTH_SOCKET -> SSH_AUTH_SOCK) --- authfd.c | 40 ++++++++++++++++++++++++++++------------ 1 file changed, 28 insertions(+), 12 deletions(-) diff --git a/authfd.c b/authfd.c index 12bf125..20fcba2 100644 --- a/authfd.c +++ b/authfd.c @@ -83,21 +83,12 @@ decode_reply(u_char type) return SSH_ERR_INVALID_FORMAT; } -/* Returns the number of the authentication fd, or -1 if there is none. */ -int -ssh_get_authentication_socket(int *fdp) +static int +get_authentication_socket(const char *authsocket, int *fdp) { - const char *authsocket; int sock, oerrno; struct sockaddr_un sunaddr; - if (fdp != NULL) - *fdp = -1; - - authsocket = getenv(SSH_AUTHSOCKET_ENV_NAME); - if (!authsocket) - return SSH_ERR_AGENT_NOT_PRESENT; - memset(&sunaddr, 0, sizeof(sunaddr)); sunaddr.sun_family = AF_UNIX; strlcpy(sunaddr.sun_path, authsocket, sizeof(sunaddr.sun_path)); @@ -117,7 +108,32 @@ ssh_get_authentication_socket(int *fdp) *fdp = sock; else close(sock); - return 0; + return SSH_ERR_SUCCESS; +} + +/* Returns the number of the authentication fd, or -1 if there is none. 
*/ +int +ssh_get_authentication_socket(int *fdp) +{ + const char *authsocketlist; + const char *authsocket; + int rc; + + if (fdp != NULL) + *fdp = -1; + + authsocketlist = getenv(SSH_AUTHSOCKET_ENV_NAME); + if (!authsocketlist) + return SSH_ERR_AGENT_NOT_PRESENT; + + authsocket = strtok((char *)authsocketlist, ":"); + + do { + rc = get_authentication_socket(authsocket, fdp); + authsocket = strtok(NULL, ":"); + } while (rc != SSH_ERR_SUCCESS && authsocket != NULL); + + return rc; } /* Communicate with agent: send request and read reply */ -- 2.4.3 From nkadel at gmail.com Sat Sep 26 22:24:05 2015 From: nkadel at gmail.com (Nico Kadel-Garcia) Date: Sat, 26 Sep 2015 08:24:05 -0400 Subject: [RFC][PATCH v2] Support a list of sockets on SSH_AUTH_SOCK In-Reply-To: <1443231698-17257-1-git-send-email-fidencio@redhat.com> References: <1443231698-17257-1-git-send-email-fidencio@redhat.com> Message-ID: On Fri, Sep 25, 2015 at 9:41 PM, Fabiano Fidêncio wrote: > The idea behind this change is to add support for different "ssh-agents" > being able to run at the same time. It does not change the current > behaviour of the ssh-agent (which will set SSH_AUTH_SOCK just for > itself). Neither does it change the behaviour of SSH_AGENT_PID (which > still supports only one pid). Conceptually, it seems reasonable. But I'd recommend being very, very careful with environment parsing between multiple old and new versions of client, agent, and server. As a purely practical and local approach, I personally tend to use multiple perl "keychain" tool commands. # keychain # Leaves sourceable ssh-agent config in $HOME/.keychain/$HOSTNAME.sh # HOSTNAME=github keychain # Leaves sourceable ssh-agent config in $HOME/.keychain/github.sh # HOSTNAME=work keychain # Leaves sourceable ssh-agent config for work keys in $HOME/.keychain/work.sh Then I can source and enable keys for the keychain as desired, and switch among them.
It's not perfect, but it lets me switch from one keychain to the other for work related github keys, personal github keys, root keys, personal keys, etc. and only have the relevant ones in a particular shell session. > The new implementation will go through the list of sockets (which are > separated by a colon (:)), and will return the very first functional > one. An example of the new supported syntax is: > SSH_AUTH_SOCK=/run/user/1000/spice/ssh:/tmp/ssh-hHomdONwQus6/agent.6907 > > The idea has been discussed a little in this e-mail thread: > http://lists.mindrot.org/pipermail/openssh-unix-dev/2015-September/034381.html > > Signed-off-by: Fabiano Fid?ncio > --- > Changes since v1: > - Fix a typo in the commit (SSH_AUTH_SOCKET -> SSH_AUTH_SOCK) > --- > authfd.c | 40 ++++++++++++++++++++++++++++------------ > 1 file changed, 28 insertions(+), 12 deletions(-) > > diff --git a/authfd.c b/authfd.c > index 12bf125..20fcba2 100644 > --- a/authfd.c > +++ b/authfd.c > @@ -83,21 +83,12 @@ decode_reply(u_char type) > return SSH_ERR_INVALID_FORMAT; > } > > -/* Returns the number of the authentication fd, or -1 if there is none. */ > -int > -ssh_get_authentication_socket(int *fdp) > +static int > +get_authentication_socket(const char *authsocket, int *fdp) > { > - const char *authsocket; > int sock, oerrno; > struct sockaddr_un sunaddr; > > - if (fdp != NULL) > - *fdp = -1; > - > - authsocket = getenv(SSH_AUTHSOCKET_ENV_NAME); > - if (!authsocket) > - return SSH_ERR_AGENT_NOT_PRESENT; > - > memset(&sunaddr, 0, sizeof(sunaddr)); > sunaddr.sun_family = AF_UNIX; > strlcpy(sunaddr.sun_path, authsocket, sizeof(sunaddr.sun_path)); > @@ -117,7 +108,32 @@ ssh_get_authentication_socket(int *fdp) > *fdp = sock; > else > close(sock); > - return 0; > + return SSH_ERR_SUCCESS; > +} > + > +/* Returns the number of the authentication fd, or -1 if there is none. 
*/ > +int > +ssh_get_authentication_socket(int *fdp) > +{ > + const char *authsocketlist; > + const char *authsocket; > + int rc; > + > + if (fdp != NULL) > + *fdp = -1; > + > + authsocketlist = getenv(SSH_AUTHSOCKET_ENV_NAME); > + if (!authsocketlist) > + return SSH_ERR_AGENT_NOT_PRESENT; > + > + authsocket = strtok((char *)authsocketlist, ":"); > + > + do { > + rc = get_authentication_socket(authsocket, fdp); > + authsocket = strtok(NULL, ":"); > + } while (rc != SSH_ERR_SUCCESS && authsocket != NULL); > + > + return rc; > } > > /* Communicate with agent: send request and read reply */ > -- > 2.4.3 > > _______________________________________________ > openssh-unix-dev mailing list > openssh-unix-dev at mindrot.org > https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev From arw at cs.fau.de Sun Sep 27 12:45:11 2015 From: arw at cs.fau.de (Alexander Wuerstlein) Date: Sun, 27 Sep 2015 04:45:11 +0200 Subject: [RFC][PATCH v2] Support a list of sockets on SSH_AUTH_SOCK In-Reply-To: <1443231698-17257-1-git-send-email-fidencio@redhat.com> References: <1443231698-17257-1-git-send-email-fidencio@redhat.com> Message-ID: <20150927024511.GA26812@cip.informatik.uni-erlangen.de> On 2015-09-26T03:47, Fabiano Fid?ncio wrote: > The idea behind this change is to add support for different "ssh-agents" > being able to run at the same time. It does not change the current > behaviour of the ssh-agent (which will set SSH_AUTH_SOCK just for > itself). Neither does it change the behaviour of SSH_AGENT_PID (which > still supports only one pid). > The new implementation will go through the list of sockets (which are > separated by a colon (:)), and will return the very first functional > one. An example of the new supported syntax is: > SSH_AUTH_SOCK=/run/user/1000/spice/ssh:/tmp/ssh-hHomdONwQus6/agent.6907 I think changing the semantics of SSH_AUTH_SOCK may be problematic. I'm currently using a few scripts that create a socket per X display, named like '/path/somewhere/:17.agent'. 
The choice of ':' as a separator would of course break those scripts. While my personal problem described above is easily fixable, I think the bigger picture is: no choice[0] of separator character is possible that won't break existing usage. Therefore I'd rather suggest introducing a separate SSH_AUTH_SOCK_FALLBACKS environment variable in addition to SSH_AUTH_SOCK. SSH_AUTH_SOCK_FALLBACKS would then be used as the list of fallbacks if SSH_AUTH_SOCK is not working currently. Another advantage of a separate environment variable is that existing scripts and programs that replace or alter SSH_AUTH_SOCK won't interfere with it and won't need to be changed. Ciao, Alexander Wuerstlein. [0] All whitespace like \n and \t would break some shell script somewhere, simple spaces are sometimes used for directory names (think 'Program Files' or 'Application Data'), and nonprintable ASCII characters would be even more of a pain to work with From fidencio at redhat.com Sun Sep 27 19:23:58 2015 From: fidencio at redhat.com (=?UTF-8?Q?Fabiano_Fid=C3=AAncio?=) Date: Sun, 27 Sep 2015 11:23:58 +0200 Subject: [RFC][PATCH v2] Support a list of sockets on SSH_AUTH_SOCK In-Reply-To: <20150927024511.GA26812@cip.informatik.uni-erlangen.de> References: <1443231698-17257-1-git-send-email-fidencio@redhat.com> <20150927024511.GA26812@cip.informatik.uni-erlangen.de> Message-ID: Alexander, On Sun, Sep 27, 2015 at 4:45 AM, Alexander Wuerstlein wrote: > On 2015-09-26T03:47, Fabiano Fidêncio wrote: >> The idea behind this change is to add support for different "ssh-agents" >> being able to run at the same time. It does not change the current >> behaviour of the ssh-agent (which will set SSH_AUTH_SOCK just for >> itself). Neither does it change the behaviour of SSH_AGENT_PID (which >> still supports only one pid). >> The new implementation will go through the list of sockets (which are >> separated by a colon (:)), and will return the very first functional >> one.
An example of the new supported syntax is: >> SSH_AUTH_SOCK=/run/user/1000/spice/ssh:/tmp/ssh-hHomdONwQus6/agent.6907 > > I think changing the semantics of SSH_AUTH_SOCK may be problematic. I'm > currently using a few scripts that create a socket per X display, named > like '/path/somewhere/:17.agent'. The choice of ':' as a separator would > of course break those scripts. Your point really makes sense. This is the first approach that came to my mind and could be acceptable by the community (according to the discussions I linked in the email). But it seems that now we have a better option ... > > While my personal problem described above is easily fixable, I think the > bigger picture is: No choice[0] of separator character is possible that > won't break existing usage. Therefore I'd rather suggest introducing a > separate SSH_AUTH_SOCK_FALLBACKS environment in addition to > SSH_AUTH_SOCK. SSH_AUTH_SOCK_FALLBACKS would then be used as the list of > fallbacks if SSH_AUTH_SOCK is not working currently. ... because this idea sounds better than the initial approach. OTOH, we still have the problem of the separator, as using a colon would break your fallbacks as well. Do you have some suggestion about this? Or, as it is a new env var, can we just warn the users so they have enough time to change their scripts (like in your case)?
Best Regards, --- Fabiano Fidêncio From arw at cs.fau.de Mon Sep 28 12:20:27 2015 From: arw at cs.fau.de (Alexander Wuerstlein) Date: Mon, 28 Sep 2015 04:20:27 +0200 Subject: [RFC][PATCH v2] Support a list of sockets on SSH_AUTH_SOCK In-Reply-To: References: <1443231698-17257-1-git-send-email-fidencio@redhat.com> <20150927024511.GA26812@cip.informatik.uni-erlangen.de> Message-ID: <20150928022027.GB26812@cip.informatik.uni-erlangen.de> On 2015-09-27T11:23, Fabiano Fidêncio wrote: > Alexander, > > On Sun, Sep 27, 2015 at 4:45 AM, Alexander Wuerstlein wrote: > > On 2015-09-26T03:47, Fabiano Fidêncio wrote: > >> The idea behind this change is to add support for different "ssh-agents" > >> being able to run at the same time. It does not change the current > >> behaviour of the ssh-agent (which will set SSH_AUTH_SOCK just for > >> itself). Neither does it change the behaviour of SSH_AGENT_PID (which > >> still supports only one pid). > >> The new implementation will go through the list of sockets (which are > >> separated by a colon (:)), and will return the very first functional > >> one. An example of the new supported syntax is: > >> SSH_AUTH_SOCK=/run/user/1000/spice/ssh:/tmp/ssh-hHomdONwQus6/agent.6907 > > > > While my personal problem described above is easily fixable, I think the > > bigger picture is: No choice[0] of separator character is possible that > > won't break existing usage. Therefore I'd rather suggest introducing a > > separate SSH_AUTH_SOCK_FALLBACKS environment in addition to > > SSH_AUTH_SOCK. SSH_AUTH_SOCK_FALLBACKS would then be used as the list of > > fallbacks if SSH_AUTH_SOCK is not working currently. > > ... because I this idea sounds better than the initial approach. > OTOH, we still have the problem about the separator as using a colon > would break your fallbacks as well. Do you have some suggestion about > this? Not really. As I've said, I can easily change that colon to something that works.
And, if you take my suggestion, SSH_AUTH_SOCK would work as before. So it would only be necessary to change anything if I were to start using SSH_AUTH_SOCK_FALLBACKS. > Or as it is a new env var we can just warn the users and then they > will have enough time for changing their scripts (like in your case)? I think if it's a new environment variable, nobody will use it in old scripts, so there is no cause for a warning in advance. But I'm not sure what actual openssh developers have to say about this (I'm just reading the mailing list...). Ciao, Alexander Wuerstlein. From philipp.marek at linbit.com Mon Sep 28 16:26:18 2015 From: philipp.marek at linbit.com (Philipp Marek) Date: Mon, 28 Sep 2015 08:26:18 +0200 Subject: [RFC][PATCH v2] Support a list of sockets on SSH_AUTH_SOCK In-Reply-To: References: <1443231698-17257-1-git-send-email-fidencio@redhat.com> Message-ID: <20150928062618.GA8824@cacao.linbit> > > The idea behind this change is to add support for different "ssh-agents" > > being able to run at the same time. It does not change the current > > behaviour of the ssh-agent (which will set SSH_AUTH_SOCK just for > > itself). Neither does it change the behaviour of SSH_AGENT_PID (which > > still supports only one pid). > > Conceptually, it seems reasonable. But I'd recommend being very, very > careful with environment parsing between multiple old and new versions > of client, agent, and server.. IMO having another environment variable with similar meaning is not a good design. In shell scripts it will be left alone, leaving another ssh-agent active by mistake, and similar things. Well, I can offer a few ideas. One is to use the ":" separator, like in $PATH.
Yes, it got discarded for various reasons in the other thread; yes, X11 uses that for display names, but observe: $ echo $DISPLAY :0 $ ls -la /tmp/.X11-unix/X0 srwxrwxrwx 1 root root 0 Sep 26 22:36 /tmp/.X11-unix/X0 Although the display has a ":" in it, the socket in the filesystem doesn't; so I guess that scripts wanting to store an SSH agent per-display (instead of per-user) can get that working, too. Whitespace (with a fixed set, e.g. space and tab - not any 'whitespace' unicode points) would be another idea, but see IFS, quoting, etc. The second idea is to have $SSH_AUTH_SOCK point to a *directory*, and to use the sockets in there in ASCII alphanumeric order - so the default agent would register itself as "/tmp/ssh-/500-agent.8903", and other agents could move themselves earlier or later in the list. The third idea is similar: keep pointing to a file, but look at all glob("$SSH_AUTH_SOCK*") sockets in there, in ASCII alphanumeric order again. Or, the other idea from the original question - have an agent push queries to the "previous" agent as a fallback. I'd prefer the last one - because it transparently works with all programs that know how to access *one* agent socket (like some java implementations, etc.), followed by 3, 2, and 1, I guess - although it doesn't matter as much with these any more.
It does not change the current >> > behaviour of the ssh-agent (which will set SSH_AUTH_SOCK just for >> > itself). Neither does it change the behaviour of SSH_AGENT_PID (which >> > still supports only one pid). >> >> Conceptually, it seems reasonable. But I'd recommend being very, very >> careful with environment parsing between multiple old and new versions >> of client, agent, and server.. > > IMO having another environment variable with similar meaning is not a good > design. In shell scripts it will be left alone, so having another ssh-agent > active by error, and similar things. > > > Well, I can offer a few ideas. > > One is to use the ":" separator, like in $PATH. Yes, it got discarded for > various reasons in the other thread; yes, X11 uses that for display names, > but observe: > > $ echo $DISPLAY > :0 > $ ls -la /tmp/.X11-unix/X0 > srwxrwxrwx 1 root root 0 Sep 26 22:36 /tmp/.X11-unix/X0 > > Although the display has a ":" in it, the socket in the filesystem doesn't; > so I guess that scripts wanting to store a SSH agent per-display (instead > of per-user) can get that working, too. If you (or any openssh developer) buy the idea, the patch is done and we can go for it :-) > > Whitespace (with a fixed set, eg. space and tab - not any 'whitespace' > unicode points) would be another idea, but see IFS, quoting, etc. > Yeah, but if we are using a list with a separator I would prefer to go with ":" (as with $PATH) ... > > The second idea is to have $SSH_AUTH_SOCK point to a *directory*, and to > use the sockets in there in ASCII alphanumeric order - so the default agent > would register itself with as "/tmp/ssh-/500-agent.8903", and other > agents could move themselved earlier or later in the list. > Hmm. Not sure if I like this one. The reason is that when running the spice ssh-agent I would have to place it in this specific folder and then be sure that it is the first one (maybe it would involve renaming files, which would break functionalities and so on ...)
> The third idea is similar: keep pointing to a file, but look at all > glob("$SSH_AUTH_SOCK*") sockets in there, in ASCII alphanumeric order > again. I really did not understand this idea, sorry. > > Or, the other idea from the original question - have an agent push queries > to the "previous" agent as a fallback. I would have my agent pushing queries to the previous agent. The main point is that the "dispatcher" agent must be in SSH_AUTH_SOCK and we have to have a way to know the path to the old agent socket (and keep it unchanged). So, from this came the idea to support a list on SSH_AUTH_SOCK where I would just prepend the "dispatcher" and that's it :-) > > > I'd prefer the last one - because it transparently works with all programs > that know how to access *one* agent socket (like some java implementations, > etc.), followed by 3,2, and 1, I guess - although it doesn't matter as much > with these any more. > The last one, you mean, having an agent pushing queries to the previous agent? If yes, it would need one of the 3 ideas you suggested to be implemented anyway, considering I understand your idea correctly. Best Regards, -- Fabiano Fidêncio From mathias at brossard.org Mon Sep 28 18:17:34 2015 From: mathias at brossard.org (Mathias Brossard) Date: Mon, 28 Sep 2015 01:17:34 -0700 Subject: [PATCH] Enabling ECDSA in PKCS#11 support for ssh-agent Message-ID: Hi, I have made a patch for enabling the use of ECDSA keys in the PKCS#11 support of ssh-agent which will be of interest to other users. I have tested it with P-256 keys. P-384 and P-521 should work out of the box. The code is ready for non-FIPS curves (named or explicit), but OpenSSH currently limits ECDSA to those 3 curves. At a high level it works like the support for RSA, but because of differences in OpenSSL between RSA and EC_KEY, the implementation has a few differences.
The RSA and RSA_METHOD structures are exposed and the existing ssh-pkcs11 code uses that to create an RSA_METHOD object for each key. Because of the APIs (in addition to ECDSA support) needed by the patch, this currently works with: - LibreSSL >= 2.2.2: until LibreSSL 2.1.2 (which is what I am testing for), the ECDSA_METHOD structure was defined in a private header. But the LIBRESSL_VERSION_NUMBER constant was not updated until 2.2.2. - OpenSSL >= 1.0.2: creating your own ECDSA_METHOD is not possible before, because the ECDSA_METHOD structure is opaque. In OpenSSL 1.0.2, they added the option to create a new ECDSA_METHOD object; this is detectable with the ECDSA_F_ECDSA_METHOD_NEW define. A few notes to understand the patch: - A few places assumed RSA keys; I added a key type field and use it to handle the differences. I also renamed some functions to reflect their link to RSA. - I moved some code out of pkcs11_rsa_private_encrypt into a separate function pkcs11_login to share it with pkcs11_ecdsa_sign - For EC_KEY, the pointer to the struct pkcs11_key object is not in the method but in the EC_KEY itself, using ECDSA_set_ex_data and ECDSA_get_ex_data. This allows having a single ECDSA_METHOD for all keys. - Unlike the RSA_METHOD, ECDSA_METHOD does not include a "finish" method to clean up the associated data. This was only a problem for ssh-pkcs11-helper.c that called key_free on struct sshkey objects created by ssh-pkcs11.c. To work around that I added a function pkcs11_del_key(struct sshkey *) to the list of functions exported by ssh-pkcs11.c that allows us to properly clean up ECDSA keys. I tried to: - be as consistent as possible with the RSA part, - minimize the size of the patch and the number of locations, - document some of the additional quirks specific to ECDSA.
I added this patch and text as https://bugzilla.mindrot.org/show_bug.cgi?id=2474 Sincerely, -- Mathias Brossard From stephane.chazelas at gmail.com Wed Sep 30 02:50:02 2015 From: stephane.chazelas at gmail.com (Stephane Chazelas) Date: Tue, 29 Sep 2015 17:50:02 +0100 Subject: stderr pipe not closed when cancelling session on shared connection Message-ID: <20150929165002.GA13961@chaz.gmail.com> Hello, [this is a repost of an email I tried to send on Friday via gmane that never came through] When investigating an issue with gitolite (http://thread.gmane.org/gmane.comp.version-control.gitolite/4006) I realised that when you do # Create a master connection ssh -MnNfS ~/.ssh/ctl localhost # Reuse that shared connection to run a command on the host ssh -S ~/.ssh/ctl localhost 'cat; yes >&2' and then press Ctrl-C, sshd does close the stdin and stdout pipes to the remote shell (so cat returns after seeing EOF on stdin), but not the stderr pipe. That next "yes" command does fill up the stderr pipe, and looking at strace output, at the other end of the pipe, sshd is not reading from it. If I kill that "yes" command, "bash" hangs again trying to write a "job killed" message on stderr. If I kill "bash", I can see sshd returning from a waitpid(), doing *one* read (of 16384 bytes) from that stderr pipe (but doesn't do anything with the data read) and closing that last pipe. It seems to me that if sshd is not going to do anything with that pipe, it should close it as soon as it becomes impossible to send the data to the client, like it does for the stdout pipe. In the case of gitolite, gitolite was writing a character to stderr upon EOF to force a SIGPIPE delivery when the ssh connection is aborted. While that works for a normal ssh connection, that doesn't work for one using a shared connection. Reproduced with: OpenSSH_6.9p1 Debian-2, OpenSSL 1.0.2d 9 Jul 2015 and OpenSSH_7.1p1, OpenSSL 1.0.1f 6 Jan 2014 Thanks, Stephane