Human readable .ssh/known_hosts?

Bob Proulx bob at proulx.com
Thu Oct 1 10:38:17 AEST 2020


Nico Kadel-Garcia wrote:
> Damien Miller wrote:
> > Again, you should read the documentation for CheckHostIP. Turning it off
> > makes known_hosts solely bind to hostnames and, as long as you use names
> > to refer to hosts, avoids any problems caused by IP address reuse.
>
> Have you used AWS? Unless you spend the time and effort, the hostname
> registered in AWS DNS is based on the IP address. Many people do *not*
> use consistent subnets for distinct classes of server or specific OS
> images, so different servers wind up on the same IP address with
> distinct hostkeys based on factors like autoscaling, for which IP
> addresses are not predictable. You can work around it, by locking down
> and sharing hostkeys for your OS images, or by segregating subnets
> based on application and corresponding OS image. These present other
> burdens.

I use AWS and therefore will say a few words here.  The general
default for an AWS EC2 node is that the hostname will look like this.

    root@ip-172-31-29-33:~# hostname
    ip-172-31-29-33

And note that this is in the RFC1918 unroutable private IP space.
That is not and cannot be the public IP address.  The public IP
address is routed to it through an edge router.  It's mapped.  (And
always IPv4, since Amazon has been slow to support IPv6 and AFAIK
Elastic IP addresses are still IPv4-only to this day.)

And if one doesn't do anything then by default Amazon will provide a
DNS name for it in their domain space.  In the case of the above it
will be something like this.  [ Which I have obscured in an obvious
way.  Do you immediately spot why this cannot be valid? :-) ]

    ec2-35-168-278-321.compute-1.amazonaws.com
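
(As an aside, a node can look up this mapping for itself through the
instance metadata service.  A minimal sketch, assuming the classic
IMDSv1 paths and reusing the obscured values from above.)

    $ curl -s http://169.254.169.254/latest/meta-data/local-ipv4
    172.31.29.33
    $ curl -s http://169.254.169.254/latest/meta-data/public-ipv4
    35.168.278.321
    $ curl -s http://169.254.169.254/latest/meta-data/public-hostname
    ec2-35-168-278-321.compute-1.amazonaws.com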

So in an auto-scale-out elastic configuration one might create a node
ip-172-31-29-33 at noon, might destroy that node by evening, and then
tomorrow recreate the node again as ip-172-31-10-42 but with the same
public IP address and associated amazonaws.com DNS name.
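
When that happens the next ssh to that name trips the well known
OpenSSH mismatch warning, which looks roughly like this (abbreviated):

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!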

This host key collision with the AWS provided DNS name is only a
problem if 1) one uses the AWS provided DNS name and 2) one uses a
randomly generated ssh host key.  Avoiding either of those two things
avoids the problem.  I avoid both.
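
(There is also a purely client-side dodge worth mentioning: ssh can
file the host key under a stable alias instead of under the churning
name or IP.  A minimal ~/.ssh/config sketch, where "bastion" and the
ec2- name are hypothetical, using the real HostKeyAlias and
CheckHostIP options.)

    Host bastion
        HostName ec2-35-168-278-321.compute-1.amazonaws.com
        HostKeyAlias bastion
        CheckHostIP no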

I don't use the AWS supplied DNS name.  I use my own DNS name in my
own domain space.  However I know the stated problem was lack of
control of the DNS.  Okay.  But note that anyone can register a
domain and then have control of that namespace.  I have used my own
domain name for my personal use when working with client systems.
Nothing prevents this.
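
(If the zone happens to live in Route 53 then pointing one's own name
at the elastic IP is a single CLI call.  A sketch; the zone id and
name are hypothetical and the address reuses the obscured example
from above.)

    aws route53 change-resource-record-sets \
        --hosted-zone-id Z0123456789EXAMPLE \
        --change-batch '{"Changes": [{"Action": "UPSERT",
          "ResourceRecordSet": {"Name": "node1.example.com", "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "35.168.278.321"}]}}]}'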

Also, for an elastic node that is just a template-produced machine, I
believe one should override the random ssh host key with a static
host key appropriate for the role it is performing.  This can be done
automatically at instantiation time using cloud-init, ansible, or
another system configuration tool.  Since the node then has a
repeatable host key for its role, there is no mismatch when
connecting to it.  When re-created fresh it will have the same host
key that it had before.  I do this.
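
(A minimal sketch of that idea, assuming a hypothetical "webrole" key
generated once on the administrative side, stored where the
provisioner can reach it, and installed at boot by a cloud-init
runcmd or an ansible task.)

    # Once, on the administrative side.  Keep the private key secret.
    ssh-keygen -t ed25519 -N '' -f webrole_ssh_host_ed25519_key

    # At instantiation time, on the new node.
    install -o root -g root -m 600 webrole_ssh_host_ed25519_key \
        /etc/ssh/ssh_host_ed25519_key
    install -o root -g root -m 644 webrole_ssh_host_ed25519_key.pub \
        /etc/ssh/ssh_host_ed25519_key.pub
    systemctl restart sshd    # or "ssh" depending upon the distribution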

In a previous life I managed a bare metal compute farm, and when the
machines are simply srv001 through srv600, all exactly equivalent,
and smashed and recreated as needed, there is no need for them to
have unique host keys.  That's counterproductive.  I set them all the
same as appropriate for their role.
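
(That is also what makes known_hosts humanly readable again.  With
one role key a single wildcard entry covers the entire farm.  A
sketch with a hypothetical key and domain; known_hosts hostname
patterns do accept '*' wildcards.)

    srv*.example.com,srv* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... compute-farm-role-key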

Bob

