357

Here's my situation: I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via ssh. The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client.

The problem I'm having is that the first ssh command run against a new virtual instance always comes up with an interactive prompt:

The authenticity of host '[hostname] ([IP address])' can't be established.
RSA key fingerprint is [key fingerprint].
Are you sure you want to continue connecting (yes/no)?

Is there a way that I can bypass this and get the new host to be already known to the client machine, maybe by using a public key that's already baked into the virtual machine image? I'd really like to avoid having to use Expect or whatever to answer the interactive prompt if I can.

chicks
  • 3,915
  • 10
  • 29
  • 37

28 Answers

298

IMO, the best way to do this is the following:

ssh-keygen -R [hostname]
ssh-keygen -R [ip_address]
ssh-keygen -R [hostname],[ip_address]
ssh-keyscan -H [hostname],[ip_address] >> ~/.ssh/known_hosts
ssh-keyscan -H [ip_address] >> ~/.ssh/known_hosts
ssh-keyscan -H [hostname] >> ~/.ssh/known_hosts

That will make sure there are no duplicate entries, that you are covered for both the hostname and IP address, and that the output is hashed, as an extra security measure.
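If the harness does this for many VMs, the sequence above can be wrapped in a small function. This is a sketch; the optional known_hosts-path parameter and the `-T` timeout are assumptions, adjust to taste:

```shell
#!/bin/sh
# refresh_known_host: drop any stale entries for a host/IP pair, then re-scan.
# Usage: refresh_known_host <hostname> <ip> [known_hosts_file]
refresh_known_host() {
    host="$1"; ip="$2"; kh="${3:-$HOME/.ssh/known_hosts}"
    # remove old entries under every name the host may be stored as
    ssh-keygen -R "$host"     -f "$kh" >/dev/null 2>&1
    ssh-keygen -R "$ip"       -f "$kh" >/dev/null 2>&1
    ssh-keygen -R "$host,$ip" -f "$kh" >/dev/null 2>&1
    # re-scan and append hashed entries for all three forms
    ssh-keyscan -T 5 -H "$host,$ip" >> "$kh" 2>/dev/null
    ssh-keyscan -T 5 -H "$ip"       >> "$kh" 2>/dev/null
    ssh-keyscan -T 5 -H "$host"     >> "$kh" 2>/dev/null
}
```

Call it once per freshly launched VM, e.g. `refresh_known_host vm1 10.0.42.11`.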

yar
  • 3,175
205

Set the StrictHostKeyChecking option to no, either in the config file or via -o:

ssh -o StrictHostKeyChecking=no username@hostname.com
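The config-file variant can be scoped so it only relaxes checking for the throwaway test hosts; the host patterns below are illustrative assumptions:

```
# ~/.ssh/config -- only relax checking for the test VM subnet
Host 10.0.42.* *.test.internal
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```

Pointing UserKnownHostsFile at /dev/null additionally keeps the short-lived VM keys out of your real known_hosts file.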

Jason
  • 135
142

For the lazy ones:

ssh-keyscan -H <host> >> ~/.ssh/known_hosts

-H hashes the hostname / IP address

fivef
  • 1,555
46

As mentioned, using ssh-keyscan would be the right and unobtrusive way to do it.

ssh-keyscan -t rsa,dsa HOST 2>&1 | sort -u - ~/.ssh/known_hosts > ~/.ssh/tmp_hosts
mv ~/.ssh/tmp_hosts ~/.ssh/known_hosts

The above will do the trick to add a host, ONLY if it has not yet been added. It is also not concurrency-safe: you must not execute the snippet on the same origin machine more than once at the same time, as the tmp_hosts file can get clobbered, ultimately leading to a bloated known_hosts file...
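If several harness processes might run at once, the same merge can be serialized with flock. This is a sketch under the assumption that util-linux flock is available; the function name and the optional known_hosts parameter are illustrative:

```shell
#!/bin/sh
# Concurrency-safe variant: hold an exclusive lock so parallel runs
# cannot clobber each other's temp file.
add_host_key_locked() {
    host="$1"
    kh="${2:-$HOME/.ssh/known_hosts}"
    touch "$kh"
    (
        flock -x 9
        # merge the fresh scan with existing entries, dropping duplicates
        ssh-keyscan -T 5 "$host" 2>/dev/null | sort -u - "$kh" > "$kh.tmp"
        mv "$kh.tmp" "$kh"
    ) 9>> "$kh.lock"
}
```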

kasperd
  • 31,086
ysawej
  • 585
21

You could use ssh-keyscan command to grab the public key and append that to your known_hosts file.

kenorb
  • 7,125
Alex
  • 6,723
17

Check the fingerprint of each new server/host. This is the only way to authenticate the server. Without it, your SSH connection can be subject to a man-in-the-middle attack.

Do not use the old value StrictHostKeyChecking=no, which never checks the authenticity of the server at all (though the meaning of StrictHostKeyChecking=no is planned to be flipped later).

The second, but less secure, option is to use StrictHostKeyChecking=accept-new, which was introduced in OpenSSH 7.6 (2017-10-03):

The first "accept-new" will automatically accept hitherto-unseen keys but will refuse connections for changed or invalid hostkeys.
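In practice that looks like the following (the hostname is hypothetical; the `|| true` is only there because this example host does not exist):

```shell
# accept-new: trust on first use, but refuse changed or invalid keys afterwards
ssh -o StrictHostKeyChecking=accept-new -o ConnectTimeout=5 user@newvm.example.com uptime || true
```

The first connection stores the key; later connections verify it as usual, and a changed key is still refused.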

Dominik
  • 357
11

This is how you can incorporate ssh-keyscan into your play:

---
# ansible playbook that adds ssh fingerprints to known_hosts
- hosts: all
  connection: local
  gather_facts: no
  tasks:
  - command: /usr/bin/ssh-keyscan -T 10 {{ ansible_host }}
    register: keyscan
  - lineinfile: name=~/.ssh/known_hosts create=yes line='{{ item }}'
    with_items: "{{ keyscan.results | map(attribute='stdout_lines') | list }}"
Zart
  • 350
  • 3
  • 8
11

To do this properly, what you really want to do is collect the host public keys of the VMs as you create them and drop them into a file in known_hosts format. You can then use -o GlobalKnownHostsFile=..., pointing to that file, to ensure that you're connecting to the host you believe you should be connecting to. How you do this depends on how you're setting up the virtual machines, but reading the keys off the virtual filesystem, if possible, or even getting the host to print the contents of /etc/ssh/ssh_host_rsa_key.pub during configuration may do the trick.

That said, this may not be worthwhile, depending on what sort of environment you're working in and who your anticipated adversaries are. Doing a simple "store on first connect" (via a scan or simply during the first "real" connection) as described in several other answers above may be considerably easier and still provide some modicum of security. However, if you do this I strongly suggest you change the user known hosts file (-o UserKnownHostsFile=...) to a file specific for this particular test installation; this will avoid polluting your personal known hosts file with test information and make it easy to clean up the now useless public keys when you delete your VMs.
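A sketch of the "collect keys at build time" approach. The directory layout is an assumption: one directory per VM, containing its copied /etc/ssh/ssh_host_*.pub files:

```shell
#!/bin/sh
# Build a harness-specific known_hosts from host keys collected at VM build time.
# Usage: build_known_hosts <keydir> <output_file>
build_known_hosts() {
    keydir="$1"   # expects <keydir>/<hostname>/ssh_host_*.pub
    out="$2"
    : > "$out"
    for dir in "$keydir"/*/; do
        [ -d "$dir" ] || continue
        host=$(basename "$dir")
        # a .pub line is "<type> <base64> [comment]";
        # known_hosts wants "<host> <type> <base64>"
        for pub in "$dir"ssh_host_*.pub; do
            [ -f "$pub" ] || continue
            awk -v h="$host" '{print h, $1, $2}' "$pub" >> "$out"
        done
    done
}
# then point ssh at the result:
#   ssh -o UserKnownHostsFile="$PWD/harness_known_hosts" user@vm1
```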

cjs
  • 1,424
7

This is a complete solution, accepting the host key the first time only:

#!/usr/bin/env ansible-playbook
---
- name: accept ssh fingerprint automatically for the first time
  hosts: all
  connection: local
  gather_facts: False

  tasks:
    - name: check if known_hosts contains server's fingerprint
      command: ssh-keygen -F {{ inventory_hostname }}
      register: keygen
      failed_when: keygen.stderr != ''
      changed_when: False

    - name: fetch remote ssh key
      command: ssh-keyscan -T5 {{ inventory_hostname }}
      register: keyscan
      failed_when: keyscan.rc != 0 or keyscan.stdout == ''
      changed_when: False
      when: keygen.rc == 1

    - name: add ssh-key to local known_hosts
      lineinfile:
        name: ~/.ssh/known_hosts
        create: yes
        line: "{{ item }}"
      when: keygen.rc == 1
      with_items: "{{ keyscan.stdout_lines | default([]) }}"

Chris
  • 87
7

Here's a one-liner: a bit long, but useful for doing this task for hosts with multiple IPs, using dig and bash.

(host=github.com; ssh-keyscan -H $host; for ip in $(dig @8.8.8.8 github.com +short); do ssh-keyscan -H $host,$ip; ssh-keyscan -H $ip; done) 2> /dev/null >> .ssh/known_hosts
6

How are you building these machines? Can you run a DNS update script? Can you join an IPA domain?

FreeIPA does this automatically, but essentially all you need is SSHFP DNS records and DNSSEC on your zone (FreeIPA provides these as configurable options; DNSSEC is disabled by default).

You can get the existing SSHFP records for your host by running:

ssh-keygen -r jersey.jacobdevans.com

jersey.jacobdevans.com IN SSHFP 1 1 4d8589de6b1a48e148d8fc9fbb967f1b29f53ebc
jersey.jacobdevans.com IN SSHFP 1 2 6503272a11ba6d7fec2518c02dfed88f3d455ac7786ee5dbd72df63307209d55
jersey.jacobdevans.com IN SSHFP 3 1 5a7a1e8ab8f25b86b63c377b303659289b895736
jersey.jacobdevans.com IN SSHFP 3 2 1f50f790117dfedd329dbcf622a7d47551e12ff5913902c66a7da28e47de4f4b

Then, once published, you'd add VerifyHostKeyDNS yes to your ssh_config or ~/.ssh/config.

If/when Google decides to flip on DNSSEC, you could ssh in without a host key prompt:

ssh jersey.jacobdevans.com

BUT my domain is not signed yet, so for now you'd see:

debug1: Server host key: ecdsa-sha2-nistp256 SHA256:H1D3kBF9/t0ynbz2IqfUdVHhL/WROQLGan2ijkfeT0s

debug1: found 4 insecure fingerprints in DNS

debug1: matching host key fingerprint found in DNS

The authenticity of host 'jersey.jacobdevans.com (2605:6400:10:434::10)' can't be established.
ECDSA key fingerprint is SHA256:H1D3kBF9/t0ynbz2IqfUdVHhL/WROQLGan2ijkfeT0s.
Matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? no

Jacob Evans
  • 8,431
5

I had a similar issue and found that some of the provided answers only got me part way to an automated solution. The following is what I ended up using, hope it helps:

ssh -o "StrictHostKeyChecking no" -o PasswordAuthentication=no 10.x.x.x

It adds the key to known_hosts and doesn't prompt for the password.

5

The following avoids duplicate entries in ~/.ssh/known_hosts:

if ! grep "$(ssh-keyscan github.com 2>/dev/null)" ~/.ssh/known_hosts > /dev/null; then
    ssh-keyscan github.com >> ~/.ssh/known_hosts
fi
Amadu Bah
  • 207
4

If you wish to check the key before adding it blindly, you can use this code:

# verify github and gitlab key
# GitHub
github=SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8
ssh-keyscan github.com >> githubKey
read bit githubkey host <<< $(ssh-keygen -lf githubKey)
if [ "$githubkey" != "$github" ]
then
  echo "The GitHub fingerprint is incorrect"
  exit 1
fi
echo "github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" | sudo tee -a /etc/ssh/ssh_known_hosts

# GitLab
gitlab=SHA256:ROQFvPThGrW4RuWLoL9tq9I9zJ42fK4XywyRtbOz/EQ
ssh-keyscan gitlab.com >> gitlabKey
read bit gitlabkey host <<< $(ssh-keygen -lf gitlabKey)
if [ "$gitlabkey" != "$gitlab" ]
then
  echo "The GitLab fingerprint is incorrect"
  exit 1
fi
echo "gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9" | sudo tee -a /etc/ssh/ssh_known_hosts

The GitHub and GitLab keys may change if they get compromised. In this case, check the most recent ones here and there

Remark: You may need to ensure that the key is not added twice. For this, refer to other answers.

Sharcoux
  • 179
4

This whole

  • ssh-key-scan
  • ssh-copy-id
  • ECSDA key warning

business kept annoying me so I opted for

One script to rule them all

This is a variant of the script at https://askubuntu.com/a/949731/129227 with Amadu Bah's answer https://serverfault.com/a/858957/162693 in a loop.

example call

./sshcheck somedomain site1 site2 site3

The script will loop over the site names and modify the .ssh/config and .ssh/known_hosts files, and do ssh-copy-id on request; for the last feature, just let the ssh test calls fail, e.g. by hitting enter 3 times at the password prompt.

sshcheck script

#!/bin/bash
# WF 2017-08-25
# check ssh access to bitplan servers

#ansi colors
#http://www.csc.uvic.ca/~sae/seng265/fall04/tips/s265s047-tips/bash-using-colors.html
blue='\033[0;34m'  
red='\033[0;31m'  
green='\033[0;32m' # '\e[1;32m' is too bright for white bg.
endColor='\033[0m'

#
# a colored message 
#   params:
#     1: l_color - the color of the message
#     2: l_msg - the message to display
#
color_msg() {
  local l_color="$1"
  local l_msg="$2"
  echo -e "${l_color}$l_msg${endColor}"
}

#
# error
#
#   show an error message and exit
#
#   params:
#     1: l_msg - the message to display
error() {
  local l_msg="$1"
  # use ansi red for error
  color_msg $red "Error: $l_msg" 1>&2
  exit 1
}

#
# show the usage
#
usage() {
  echo "usage: $0 domain sites"
  exit 1 
}

#
# check known_hosts entry for server
#
checkknown() {
  local l_server="$1"
  #echo $l_server
  local l_sid="$(ssh-keyscan $l_server 2>/dev/null)" 
  #echo $l_sid
  if (! grep "$l_sid" $sknown) > /dev/null 
  then
    color_msg $blue "adding $l_server to $sknown"
    ssh-keyscan $l_server >> $sknown 2>&1
  fi
}

#
# check the given server
#
checkserver() {
  local l_server="$1"
  grep $l_server $sconfig > /dev/null
  if [ $? -eq 1 ]
  then
    color_msg $blue "adding $l_server to $sconfig"
    today=$(date "+%Y-%m-%d")
    echo "# added $today by $0"  >> $sconfig
    echo "Host $l_server" >> $sconfig
    echo "   StrictHostKeyChecking no" >> $sconfig
    echo "   userKnownHostsFile=/dev/null" >> $sconfig
    echo "" >> $sconfig
    checkknown $l_server
  else
    color_msg $green "$l_server found in $sconfig"
  fi
  ssh -q $l_server id > /dev/null
  if [ $? -eq 0 ]
  then
    color_msg $green "$l_server accessible via ssh"
  else
    color_msg $red "ssh to $l_server failed" 
    color_msg $blue "shall I ssh-copy-id credentials to $l_server?"
    read answer
    case $answer in
      y|yes) ssh-copy-id $l_server
    esac
  fi
}

#
# check all servers
#
checkservers() {
me=$(hostname -f)
for server in $(echo $* | sort)
do
  os=`uname`
  case $os in
   # Mac OS X
   Darwin*)
     pingoption=" -t1";;
    *) ;;
  esac

  pingresult=$(ping $pingoption -i0.2 -c1 $server)
  echo $pingresult | grep 100 > /dev/null
  if [ $? -eq 1 ]
  then 
    checkserver $server
    checkserver $server.$domain
  else
    color_msg $red "ping to $server failed"
  fi
done
}

#
# check configuration
#
checkconfig() {
#https://askubuntu.com/questions/87449/how-to-disable-strict-host-key-checking-in-ssh
  if [ -f $sconfig ]
  then
    color_msg $green "$sconfig exists"
    ls -l $sconfig
  fi
}

sconfig=~/.ssh/config
sknown=~/.ssh/known_hosts

case  $# in
  0) usage ;;
  1) usage ;;
  *) 
    domain=$1 
    shift 
    color_msg $blue "checking ssh configuration for domain $domain sites $*"
    checkconfig
    checkservers $* 
    #for server in $(echo $* | sort)
    ##do
    #  checkknown $server 
    #done
    ;;
esac
3

So, I was searching for a mundane way to bypass the unknown-host manual interaction when cloning a git repo, as shown below:

brad@computer:~$ git clone git@bitbucket.org:viperks/viperks-api.git
Cloning into 'viperks-api'...
The authenticity of host 'bitbucket.org (104.192.143.3)' can't be established.
RSA key fingerprint is 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40.
Are you sure you want to continue connecting (yes/no)?

Note the RSA key fingerprint...

So, this is an SSH thing; it will work for git over SSH and SSH-related things in general...

brad@computer:~$ nmap bitbucket.org --script ssh-hostkey

Starting Nmap 7.01 ( https://nmap.org ) at 2016-10-05 10:21 EDT
Nmap scan report for bitbucket.org (104.192.143.3)
Host is up (0.032s latency).
Other addresses for bitbucket.org (not scanned): 104.192.143.2 104.192.143.1 2401:1d80:1010::150
Not shown: 997 filtered ports
PORT    STATE SERVICE
22/tcp  open  ssh
| ssh-hostkey:
|   1024 35:ee:d7:b8:ef:d7:79:e2:c6:43:9e:ab:40:6f:50:74 (DSA)
|_  2048 97:8c:1b:f2:6f:14:6b:5c:3b:ec:aa:46:46:74:7c:40 (RSA)
80/tcp  open  http
443/tcp open  https

Nmap done: 1 IP address (1 host up) scanned in 42.42 seconds

First, install nmap on your daily driver. nmap is highly helpful for certain things, like detecting open ports and this-- manually verifying SSH fingerprints. But, back to what we are doing.

Good. Either I'm compromised on the multiple places and machines I've checked from, or the more plausible explanation is that everything is hunky-dory.

That "fingerprint" is just a string shortened with a one-way algorithm for our human convenience, at the risk of more than one string resolving to the same fingerprint. It happens; these are called collisions.

Regardless, back to the original string which we can see in context below.

brad@computer:~$ ssh-keyscan bitbucket.org
# bitbucket.org SSH-2.0-conker_1.0.257-ce87fba app-128
no hostkey alg
# bitbucket.org SSH-2.0-conker_1.0.257-ce87fba app-129
bitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==
# bitbucket.org SSH-2.0-conker_1.0.257-ce87fba app-123
no hostkey alg

So, ahead of time, we have a way of asking for a form of identification from the original host.

At this point, verifying manually we are as vulnerable as we would be automatically: the strings match, we have the base data that creates the fingerprint, and we could ask for that base data (preventing collisions) in the future.

Now to use that string in a way that prevents asking about a host's authenticity...

The known_hosts file in this case does not use plaintext entries. You'll know hashed entries when you see them: they look like hashes with random characters instead of xyz.com or 123.45.67.89.

brad@computer:~$ ssh-keyscan -t rsa -H bitbucket.org
# bitbucket.org SSH-2.0-conker_1.0.257-ce87fba app-128
|1|yr6p7i8doyLhDtrrnWDk7m9QVXk=|LuKNg9gypeDhfRo/AvLTAlxnyQw= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==

The comment line infuriatingly shows up, but it goes to stderr, so a simple redirect of stdout via the ">" or ">>" convention keeps it out of your file.

As I've done my best to obtain untainted data to identify a "host" and to trust it, I will add this identification to the known_hosts file in my ~/.ssh directory. Since it will now be a known host, I will not get the prompt mentioned above.

Thanks for sticking with me; here you go. I'm adding the bitbucket RSA key so that I can interact with my git repositories there non-interactively as part of a CI workflow, but do whatever works for you.

#!/bin/bash
cp ~/.ssh/known_hosts ~/.ssh/known_hosts.old && echo "|1|yr6p7i8doyLhDtrrnWDk7m9QVXk=|LuKNg9gypeDhfRo/AvLTAlxnyQw= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==" >> ~/.ssh/known_hosts

And that's it for today. You can do the same with GitHub by following similar directions on your own time.

I've seen so many Stack Overflow posts telling you to programmatically add the key blindly, without any kind of checking.

WRONG ssh -oStrictHostKeyChecking=no hostname [command]

WRONG ssh-keyscan -t rsa -H hostname >> ~/.ssh/known_hosts

Don't do either of the above things, please. You're given the opportunity to increase your chances of avoiding someone eavesdropping on your data transfers via a man-in-the-middle attack: take that opportunity. The difference is literally verifying that the RSA key you have is the one from the bona fide server, and now you know how to get that information to compare, so you can trust the connection. Just remember: more comparisons from different computers and networks will usually increase your ability to trust the connection.

1

If you already have a .pub then anyone telling you about ssh-keyscan is asking you to risk a MitM attack.

I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via ssh.

This StackOverflow answer is better and more correct: you must obtain the .pub file from a trusted source; your system bootstrapper should call home and provide the file to a management system.

#!/usr/bin/env bash

: "${host:="$1"}"
: "${pubkey:="$2"}"

TMP_KNOWN_HOSTS=$(mktemp)
echo "${host}" "$(cat "${pubkey}")" > "${TMP_KNOWN_HOSTS}"

# hash the entry, then print the matching hashed line and append it
ssh-keygen -H -f "${TMP_KNOWN_HOSTS}"
ssh-keygen -F "${host}" -f "${TMP_KNOWN_HOSTS}" | tee -a ~/.ssh/known_hosts

shred "${TMP_KNOWN_HOSTS}.old"
rm -f "${TMP_KNOWN_HOSTS}" "${TMP_KNOWN_HOSTS}.old"

You can try this with your local host keys:

$ for pubkey in /etc/ssh/ssh_host_*.pub; do ./add_pubkey localhost "${pubkey}"; done

inetknght
  • 391
1

To automatically accept and memorize the fingerprint on never-before-seen remote git server, e.g. when executing git clone git@example.com, you can leverage git's GIT_SSH_COMMAND together with ssh's -o StrictHostKeyChecking=accept-new:

GIT_SSH_COMMAND="ssh -o StrictHostKeyChecking=accept-new" git clone git@example.com
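To persist this beyond a single command, git also honors core.sshCommand. Setting it globally is an assumption; use --local inside a repo if you prefer a narrower scope:

```shell
# make every git invocation use accept-new, not just one command
git config --global core.sshCommand "ssh -o StrictHostKeyChecking=accept-new"
```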
Abdull
  • 227
1

Modern versions of SSH support certificate authorities, much like the ones used for SSL/TLS which secure your everyday web browsing. This allows you to bypass the Trust On First Use mechanism, as trust is already established by the CA. This works for both users (authorized keys) and hosts (known hosts).

Note this will complicate the process by which you bring up the VMs, as you will need all systems configured to trust the CA, and will need to have your CA sign the host keys for the VMs before you can use it to skip TOFU.

Here are some links to documentation about it:

https://www.lorier.net/docs/ssh-ca.html

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/sec-creating_ssh_ca_certificate_signing-keys

https://jameshfisher.com/2018/03/16/how-to-create-an-ssh-certificate-authority/

https://smallstep.com/blog/use-ssh-certificates/
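A minimal sketch of host-key signing with an SSH CA, using the documentation above. File names, the principal (vm1.example.com), and the validity period are illustrative assumptions:

```shell
#!/bin/sh
# work in a scratch directory so nothing real is touched
cd "$(mktemp -d)"

# 1. create the CA keypair (keep host_ca offline and secure)
ssh-keygen -q -t ed25519 -N '' -f host_ca -C 'host-ca'

# 2. a stand-in host key for this sketch; on a real VM it already
#    exists as /etc/ssh/ssh_host_ed25519_key
ssh-keygen -q -t ed25519 -N '' -f ssh_host_ed25519_key

# 3. sign the host public key (-h marks it as a *host* certificate)
ssh-keygen -q -s host_ca -I vm1 -h -n vm1.example.com -V +52w ssh_host_ed25519_key.pub

# 4. on the VM, publish the cert via sshd_config:
#      HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
# 5. on clients, trust every host signed by the CA with one line
#    (written to a local file here; normally ~/.ssh/known_hosts):
echo "@cert-authority *.example.com $(cat host_ca.pub)" >> known_hosts.ca
```

With the @cert-authority line in place, clients skip TOFU for any host whose certificate the CA signed.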

flibwib
  • 111
1

Here is how to do it for a collection of hosts.

Define a collection of hosts:

ssh_hosts:
  - server1.domain.com
  - server2.domain.com
  - server3.domain.com
  - server4.domain.com
  - server5.domain.com
  - server6.domain.com
  - server7.domain.com
  - server8.domain.com
  - server9.domain.com

Then define two tasks to add the keys to known hosts:

- command: "ssh-keyscan {{ item }}"
  register: known_host_keys
  with_items: "{{ ssh_hosts }}"
  tags:
    - "ssh"

- name: Add ssh keys to known hosts
  known_hosts:
    name: "{{ item.item }}"
    key: "{{ item.stdout }}"
    path: ~/.ssh/known_hosts
  with_items: "{{ known_host_keys.results }}"
0

Since those are VMs, you should be able to get into them by other means (e.g. mount the filesystem) and grab the keys.

Once you have the public keys, you can build up your known_hosts file (concatenate the public keys).

Or the other way around: push a known private key onto each slave by accessing the VM's filesystem.

This solution is overkill for simple cases where you control your environment, but it is not susceptible to man-in-the-middle attacks...

jehon
  • 271
  • 2
  • 4
0
if ! ssh-keygen -F HOST; then
    ssh-keyscan HOST >> ~/.ssh/known_hosts
fi

or in case of a custom port:

if ! ssh-keygen -F [HOST]:PORT; then
    ssh-keyscan -p PORT HOST >> ~/.ssh/known_hosts
fi

But this is vulnerable to MITM.

x-yuri
  • 2,526
0

This must be a rare case but I needed to connect 2 docker containers using SSH recently. First I made it work using ssh-keyscan:

/usr/sbin/sshd
wait4ports tcp://"$1":22
if ! ssh-keygen -F "$1"; then
    ssh-keyscan "$1" > ~/.ssh/known_hosts
fi

Then, w/o ssh-keyscan (by pregenerating the keys and injecting them from the host):

awk -v "host=$1" '{print host, $1, $2}' \
    /etc/ssh/ssh_host_ecdsa_key.pub \
    /etc/ssh/ssh_host_ed25519_key.pub \
    /etc/ssh/ssh_host_rsa_key.pub \
    > ~/.ssh/known_hosts
/usr/sbin/sshd

The host keys can be generated this way (well, there might be an easier way):

docker run --rm alpine:3.15 sh -euc '
    (apk add openssh
    ssh-keygen -A
    cd /etc/ssh
    tar czf keys.tar.gz ssh_host*) >/dev/null
    cat /etc/ssh/keys.tar.gz
' > keys.tar.gz
tar xf keys.tar.gz
rm keys.tar.gz
x-yuri
  • 2,526
0

For everyone complaining about MITM issues:

First off, if you are afraid of MITM attacks, rethink your infrastructure deployment and network design.

But if you have no choice, this might be a viable solution:

On the host running SSHD, do:

ssh-keygen -l -v -E sha256 -f /etc/ssh/ssh_host_ecdsa_key.pub

On your SSH client host, do:

ssh -o visualhostkey=yes -o FingerprintHash=sha256 myuser@my.ssh.server

Now compare the pictures: are they similar? Congratulations, type "yes" on your client host to automatically add the key to known_hosts. Are they different? That might be a MITM attack, or you might just be sloppy. This is an OK solution if you only have password access. Let's hope nobody has your password.

But let's dig deeper and analyze the real problem here:

The problem is really physical rather than technical. Ask yourself: what is the first network layer? Think about it.

Some viable solutions to that are:

If you are in the same physical location: grab a new router and a switch and create a local network that is not connected to the internet or any other unknown network. There you can use StrictHostKeyChecking=no as much as you please, with no MITM worries. The same goes for virtual machines that you run directly on your workstation in an isolated network. You don't need to worry about MITM attacks if you control the physical network layer.

Consider buying a KVM-switch, use HPE iLO, IPMI or even Pi-KVM if your server supports it. But even then, these services should be on an isolated network as well. You can also use Intel-ME on laptops or workstations, but that is a whole other can of worms.

Copy your server's key fingerprint to a USB stick, go back to your client, and add the fingerprint manually to known_hosts from the USB stick. It's elaborate, but you will be able to sleep at night. You can even automate this somewhat with a script. If you want to be über-clever and impress the arrogant minion above you, automate this with a "Rubber Ducky" or the like. But don't be clever. Someone's "cleverness" brought us here in the first place.

Consider adding keys during the installation of your clients. For example on Debian you can use Preseed and a custom ISO. There are many many solutions here.

If you don't have access to the server's location, call a trusted friend over the phone and ask them to verify the fingerprint before you add it to known_hosts. Often, if you don't have physical access, you are working for a company that already has secure solutions for this kind of situation. If not, consider changing jobs, or be smart and offer your boss a better solution in exchange for a raise (remember to mention the word "Ransomware" at least 5 times during that conversation).

If all of the above is not an option for you. And you desperately want to connect to an unknown machine somewhere on the internet while using an unencrypted proxy chain. You should reconsider your life choices. Or you might just be the man in the middle.

16-bit
  • 11
0

I haven't checked all the answers, but after reading some of them, I suggest copying the entry from another host that already knows the target server and adding it to .ssh/known_hosts:

echo '[KNOWN_HOST_ENTRY]' >> .ssh/known_hosts

This way, the risk of a man-in-the-middle attack is eliminated, because you hopefully already checked the key on the other host.

Franz
  • 101
-1

I faced a similar issue where, despite using the verified solution mentioned above, my ssh was not working. It was because the known_hosts file was missing from the ~/.ssh/ directory and the file system was read-only, so even at run time I was unable to create ~/.ssh/known_hosts.

If you face a similar issue, see if you can write the known_hosts file to /tmp, which is usually write-enabled even on an otherwise read-only file system.

Then, in the ssh command, you can tell ssh to read the known_hosts file from /tmp:

ssh -o UserKnownHostsFile=/tmp/known_hosts -o StrictHostKeyChecking=no user_name@destination_server_ip

-1

Use this command to add the host to ~/.ssh/known_hosts (without adding duplicates), as guided here.

e.g. adding gitlab.com

ssh-keygen -F gitlab.com || ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
Nam G VU
  • 309
  • 2
  • 5
  • 15
-2

Here's my edge case:

I'm creating a Fabric 2.5 script to deploy a website on a new site. At one point, it creates an ssh key and adds the public key to GitLab using its API. Then it clones a repo (containing the website source code). The clone command failed in the script, and when I went onto the server and manually launched the command, I got the The authenticity of host 'host.com ()' can't be established. Are you sure you want to continue connecting (yes/no)? prompt.

At first my plan was to search for a way to auto-accept it, but out of security concerns I instead added the pty=True arg to the c.run("command") call, which gave me access to the prompt during the execution of my script.

sodimel
  • 97