Documentation improvements and clarifications.
Change-Id: Iba08b6385e4b1dfec595fc6edae244b01b66a861
Signed-off-by: Pino de Candia <giuseppe.decandia@gmail.com>
@@ -22,7 +22,7 @@ Note that there are 2 daemons: API daemon and Notifications daemon.

Get the code
------------

On your controller node, in a development directory::

    git clone https://github.com/openstack/tatu
    cd tatu
@@ -35,6 +35,7 @@ Modify Tatu’s cloud-init script

tatu/files/user-cloud-config is a cloud-init script that needs to run once on
every VM.

* It extracts Tatu’s **dynamic** vendor data from ConfigDrive;
* It finds the one-time-token and uses it in the call to the Tatu
  /noauth/hostcerts API;
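The extraction step can be sketched in Python. Note this is an illustration, not Tatu's actual script: the ConfigDrive mount point and the "tatu"/"token" JSON key names are assumptions.

```python
import json

# Where Nova's DynamicJSON vendor data lands on a mounted ConfigDrive.
# The mount point and key names below are illustrative guesses.
VENDOR_DATA_PATH = "/mnt/config/openstack/latest/vendor_data2.json"

def extract_token(vendor_data):
    """Pull the one-time token out of parsed dynamic vendor data."""
    return vendor_data.get("tatu", {}).get("token")

def read_token(path=VENDOR_DATA_PATH):
    """Read and parse the vendor data file, then extract the token."""
    with open(path) as f:
        return extract_token(json.load(f))
```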
@@ -46,11 +47,11 @@ If you’re using my branch of Dragonflow

(https://github.com/pinodeca/dragonflow/tree/tatu) then a VM can reach the Tatu
API at http://169.254.169.254/noauth via the Metadata Proxy. However, if you’re
using any other Neutron driver, you’ll need to modify the cloud-init script.
Replace::

    url=http://169.254.169.254/….

in tatu/files/user-cloud-config **in 2 places**, with::

    url=http://<Tatu API’s VM-accessible address>/….
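The substitution itself is a plain string rewrite; a minimal sketch, where "10.0.0.5" is a placeholder you would swap for your deployment's actual Tatu API address:

```python
# Rewrite the metadata-proxy base URL to a directly reachable Tatu API
# address. "10.0.0.5" is a placeholder, not a real deployment value.
def retarget(cloud_config_text, tatu_addr):
    """Replace every metadata-proxy URL with the given Tatu API address."""
    return cloud_config_text.replace("http://169.254.169.254",
                                     "http://" + tatu_addr)

example = "url=http://169.254.169.254/noauth/hostcerts"
print(retarget(example, "10.0.0.5"))  # url=http://10.0.0.5/noauth/hostcerts
```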
@@ -69,7 +70,7 @@ vendor-data by running the following command from the tatu directory:

    scripts/cloud-config-to-vendor-data files/user-cloud-config > /etc/nova/tatu_static_vd.json

And now modify /etc/nova/nova-cpu.conf as follows::

    [api]
    vendordata_providers = StaticJSON,DynamicJSON
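For orientation, the StaticJSON provider also needs to be told where the static file lives, and DynamicJSON needs a target URL. A sketch with placeholder values; the option names are standard Nova vendordata settings, but check your Nova release's documentation for the exact names and the path Tatu's API expects:

```ini
[api]
vendordata_providers = StaticJSON,DynamicJSON
# Path written by the cloud-config-to-vendor-data step above.
vendordata_jsonfile_path = /etc/nova/tatu_static_vd.json
# "tatu" is an arbitrary label; the URL is a placeholder for the Tatu API.
vendordata_dynamic_targets = tatu@http://<Tatu API address>:<port>/
```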
@@ -89,22 +90,27 @@ Configure dynamic vendor data

In order to configure SSH, Tatu’s cloud-init script needs some data unique
to each VM:

* A one-time-token generated by Tatu for the specific VM
* The list of user accounts to configure (based on Keystone roles in the VM’s
  project)
* The list of user accounts that need sudo access.

As well as some data that’s common to VMs in the project:

* The project’s public key for validating User SSH certificates.
* A non-standard SSH port (if configured).

All this information is passed to the VM as follows:

* At launch time, Nova Compute calls Tatu’s dynamic vendordata API using
  Keystone authentication with tokens.
* Nova writes the vendordata to ConfigDrive.

  * Note: to protect the one-time-token and the user account names, it’s best
    not to expose this information via the metadata API.

To enable ConfigDrive, add this to /etc/nova/nova-cpu.conf::

    [DEFAULT]
    force_config_drive=True
@@ -113,7 +119,7 @@ To enable ConfigDrive, add this to /etc/nova/nova-cpu.conf:

**TODO: disable Tatu vendor data availability via MetaData API. May require
Nova changes.**

To get Nova Compute talking to Tatu, add this to /etc/nova/nova-cpu.conf::

    [api]
    vendordata_providers = StaticJSON, DynamicJSON
@@ -135,11 +141,13 @@ appropriate.

Prepare /etc/tatu/tatu.conf
---------------------------

Do the following::

    cd tatu
    mkdir /etc/tatu
    cp files/tatu.conf /etc/tatu/

Edit /etc/tatu/tatu.conf::

    use_pat_bastions = False
    sqlalchemy_engine = <URI for your database, e.g. mysql+pymysql://root:pinot@127.0.0.1/tatu>
@@ -152,19 +160,25 @@ Launch Tatu’s notification daemon

Tatu’s notification daemon only needs tatu.conf, so we can launch it now.

Tatu listens on topic “tatu_notifications” for:

* Project creation and deletion events from Keystone.

  * To create new CA key pairs or clean up unused ones.

* Role assignment deletion events from Keystone.

  * To revoke user SSH certificates that are too permissive.

* VM deletion events from Nova.

  * To clean up per-VM bastion and DNS state.

Edit both /etc/keystone/keystone.conf and /etc/nova/nova.conf as follows::

    [oslo_messaging_notifications]
    topics = notifications,tatu_notifications

Now launch Tatu’s notification listener daemon::

    python tatu/notifications.py
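Conceptually, the daemon maps notification event types to the actions listed above. This is an illustrative sketch rather than Tatu's actual code, and the event-type strings are assumptions based on common Keystone and Nova notification names:

```python
# Map notification event types to the actions described above.
# Event-type strings are illustrative; check the exact types emitted by
# your Keystone and Nova versions.
HANDLERS = {
    "identity.project.created": lambda payload: "create CA key pairs",
    "identity.project.deleted": lambda payload: "clean up CA key pairs",
    "identity.role_assignment.deleted":
        lambda payload: "revoke overly permissive user certificates",
    "compute.instance.delete.end":
        lambda payload: "clean up per-VM bastion and DNS state",
}

def dispatch(event_type, payload):
    """Return the action taken for an event, or None if it's ignored."""
    handler = HANDLERS.get(event_type)
    return handler(payload) if handler else None
```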
@@ -174,11 +188,14 @@ being created for all existing projects.

Prepare /etc/tatu/paste.ini
---------------------------

Do the following::

    cd tatu
    mkdir /etc/tatu
    cp files/paste.ini /etc/tatu/

paste.ini should only need these modifications:

* Host (address the daemon will listen on)
* Port (port the daemon will listen on)
@@ -188,7 +205,7 @@ Launch Tatu’s API daemon

Tatu’s API daemon needs both tatu.conf and paste.ini. We can launch it now.

I have done all my testing with Pylons (no good reason, I’m new to wsgi
frameworks)::

    pip install pylons
    pserve files/paste.ini
@@ -200,6 +217,8 @@ certificates and the list of revoked keys).

Register Tatu API in Keystone
-----------------------------

Run the following::

    openstack endpoint create --region RegionOne ssh public http://147.75.72.229:18322/
    openstack service create --name tatu --description "OpenStack SSH Management" ssh
@@ -209,11 +228,12 @@ to find Tatu.

Installing tatu-dashboard
=========================

Do the following wherever horizon is installed::

    git clone https://github.com/openstack/tatu-dashboard
    python setup.py develop

Copy (or soft link) files from tatu-dashboard/tatudashboard/enabled
to horizon/openstack_dashboard/local/enabled/, then from the horizon
directory run::

    python manage.py compress
    service apache2 restart
@@ -221,7 +241,7 @@ Do the following wherever horizon is installed:

Installing python-tatuclient
============================

On any host where you want to run "openstack ssh" (Tatu) commands::

    git clone https://github.com/pinodeca/python-tatuclient
    python setup.py develop
README.rst
@@ -25,19 +25,22 @@ Tatu provides APIs that allow:

their public key, and to learn the public key of the CA for users.

During VM provisioning:

* Tatu's cloud-init script is passed to the VM via Nova **static** vendor data.
* VM-specific configuration is placed in the VM's ConfigDrive thanks to Nova's
  **dynamic** vendor data call to Tatu API.
* The cloud-init script consumes the dynamic vendor data:

  * A one-time-token is used to authenticate the VM's request to Tatu API to
    sign the VM's public key (and return an SSH host certificate).
  * A list of the VM's project's Keystone roles is used to create user accounts
    on the VM.
  * A list of sudoers is used to decide which users get password-less sudo
    privileges. The current policy is that any Keystone role containing "admin"
    should correspond to a user account with sudo privileges.
  * The public key of the CA for User SSH certificates is retrieved, and along
    with the requested SSH Host Certificate, is used to (re)configure SSH.

* A cron job is configured for the VM to periodically poll Tatu for the revoked
  keys list.
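The role-to-account mapping described above can be sketched as follows. This illustrates the stated policy only; it is not Tatu's actual implementation, and the case-insensitive "admin" match is an assumption:

```python
def accounts_and_sudoers(keystone_roles):
    """One user account per Keystone role; roles containing "admin" get
    password-less sudo (per the policy stated in the text)."""
    accounts = list(keystone_roles)
    sudoers = [role for role in keystone_roles if "admin" in role.lower()]
    return accounts, sudoers
```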
@@ -60,11 +63,11 @@ During negotiation of the SSH connection:

Use of host certificates prevents MITM (man-in-the-middle) attacks. Without
host certificates, users of SSH client software are presented with a message
like this one when they first connect to an SSH server::

    The authenticity of host '111.111.11.111 (111.111.11.111)' can't be established.
    ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
    Are you sure you want to continue connecting (yes/no)?

There's no way to verify the fingerprint unless there's some other way of
logging into the VM (e.g. novnc with password, which is not recommended).
@@ -85,6 +88,7 @@ APIs, Horizon Panels, and OpenStack CLIs

----------------------------------------

Tatu provides REST APIs, Horizon Panels and OpenStack CLIs to:

* Retrieve the public keys of the user and host CAs for each OpenStack project.
  See ssh ca --help
* Create (and revoke) SSH user certificates with principals corresponding to
@@ -172,6 +176,7 @@ Bastion Management

Tatu aims to manage SSH bastions for OpenStack environments. This feature
would provide the following benefits:

* reduce operational burden for users that already manage bastions themselves.
* avoid assigning Floating IP addresses to VMs for the sole purpose of SSH access.
* provide a single point of security policy enforcement, and especially one
@@ -189,6 +194,7 @@ per VM, but it does not provide a single point of policy enforcement because

PAT always translates and forwards without checking certificates as a full SSH
proxy would. **PAT bastions are only supported by an experimental version
of the Dragonflow Neutron plugin.** It works as follows:

* At setup time, Tatu reserves a configurable number of ports in the Public
  network. Their IP addresses are used for PAT. Dragonflow randomly assigns
  each PAT address to a different compute node. That compute node then acts
TRY_IT.rst
@@ -1,24 +1,100 @@

Notes on using Tatu for the first time
======================================

**In this example, I'm the "demo" user and I need to connect to VMs in projects
named "demo" and "invisible_to_admin".**

**In the following examples, openstack commands will output a warning like this**::

    Failed to contact the endpoint at http://147.75.65.211:18322/ for discovery. Fallback to using that endpoint as the base url.

**You can safely ignore this warning.**

Since you'll need separate SSH user certificates for each of your projects,
generate separate ssh keys for each of your projects::

    ssh-keygen -f ~/.ssh/demo_key
    ssh-keygen -f ~/.ssh/inv_key

Now generate the certificate for each of your projects (this can also be done in
Horizon). First set your environment variables to select your user and project.
Note that the ssh client expects the certificate's name to be the private key name
followed by "-cert.pub"::

    source openrc demo demo
    openstack ssh usercert create -f value -c Certificate "`cat ~/.ssh/demo_key.pub`" > ~/.ssh/demo_key-cert.pub
    openstack ssh usercert create --os-project-name invisible_to_admin -f value -c Certificate "`cat ~/.ssh/inv_key.pub`" > ~/.ssh/inv_key-cert.pub

You can examine a certificate as follows::

    ssh-keygen -Lf ~/.ssh/inv_key-cert.pub

And the output will look like this::

    /root/.ssh/inv_key-cert.pub:
            Type: ssh-rsa-cert-v01@openssh.com user certificate
            Public key: RSA-CERT SHA256:4h+zwW8L+E1OLyOz4uHh4ffcqJFS/p5rETlf15Q04x8
            Signing CA: RSA SHA256:s8FpsDHkhly3ePtKDihO/x7UVj3sw3fSILLPLQJz2n0
            Key ID: "demo_5"
            Serial: 5
            Valid: from 2018-03-09T13:05:23 to 2019-03-10T13:05:23
            Principals:
                    Member
            Critical Options: (none)
            Extensions:
                    permit-X11-forwarding
                    permit-agent-forwarding
                    permit-port-forwarding
                    permit-pty
                    permit-user-rc

Note that the Signing CA is different for each certificate. You'll have to use
the corresponding key/certificate to ssh to a project's VM.

Now configure your ssh client to trust SSH host certificates signed by the Host
CAs of your projects. Given how Tatu currently generates Host certificates,
you must trust each CA for hostnames in any domain (hence the "*" in the command)::

    demo_id=`openstack project show demo -f value -c id`
    echo '@cert-authority * '`openstack ssh ca show $demo_id -f value -c 'Host Public Key'` >> ~/.ssh/known_hosts
    inv_id=`openstack project show invisible_to_admin --os-project-name invisible_to_admin -f value -c id`
    echo '@cert-authority * '`openstack ssh ca show $inv_id -f value -c 'Host Public Key'` >> ~/.ssh/known_hosts

Above, note that the --os-project-name option is necessary because we sourced
openrc with the "demo" project.
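The echo commands above just build a one-line known_hosts entry; a small sketch of the format (the key string below is a stand-in, not a real CA key):

```python
def cert_authority_line(ca_public_key, host_pattern="*"):
    """Build a known_hosts entry trusting a CA for the given host pattern."""
    return "@cert-authority {} {}".format(host_pattern, ca_public_key)

# Stand-in key material for illustration only.
print(cert_authority_line("ssh-rsa AAAAB3Nza... tatu-host-ca"))
```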
Now launch a VM without a Key Pair. Unless you're using Dragonflow and Tatu's
|
Now launch a VM without a Key Pair. Unless you're using Dragonflow and Tatu's
|
||||||
experimental PAT bastion feature, assign a floating IP to the VM, for example
|
experimental PAT bastion feature, assign a floating IP to the VM. In this example
|
||||||
172.24.4.10.
|
we'll assume the VM's Floating IP is 172.24.4.8
|
||||||
|
|
||||||
Use the following to
|
If you launched your VM in the demo project, use the following ssh command. Note
|
||||||
|
that the Linux user account must correspond to one of the principals in your
|
||||||
|
certificate, which in turn corresponds to one of your roles in the project::
|
||||||
|
|
||||||
|
ssh -i ~/.ssh/demo_key Member@172.24.4.8
|
||||||
|
|
||||||
|
** You should not get a warning like the following**::
|
||||||
|
|
||||||
|
The authenticity of host '172.24.4.8 (172.24.4.8)' can't be established.
|
||||||
|
RSA key fingerprint is SHA256:FS2QGF4Ant/MHoUPxgO6N99uQss57lKkPclXDgFOLAU.
|
||||||
|
Are you sure you want to continue connecting (yes/no)?
|
||||||
|
|
||||||
|
Re-run the command with verbose output::
|
||||||
|
|
||||||
|
ssh -v -i ~/.ssh/demo_key Member@172.24.4.8
|
||||||
|
|
||||||
|
You should see the SSH host presenting its host certificate::
|
||||||
|
|
||||||
|
debug1: Server host certificate: ssh-rsa-cert-v01@openssh.com SHA256:FS2QGF4Ant/MHoUPxgO6N99uQss57lKkPclXDgFOLAU, serial 0 ID "otto_0" CA ssh-rsa SHA256:b0BD63oM4ks4BT2Cxlzz9WaV0HE+AqwEG7mnk3vJtz4 valid from 2018-03-09T04:32:35 to 2019-03-10T04:32:35
|
||||||
|
debug1: Host '172.24.4.8' is known and matches the RSA-CERT host certificate.
|
||||||
|
debug1: Found CA key in /root/.ssh/known_hosts:1
|
||||||
|
|
||||||
|
You should also see your SSH client presenting your user certificate. Note that your
|
||||||
|
client first offers the public key, which is rejected, and then offers the certificate,
|
||||||
|
which is accepted::
|
||||||
|
|
||||||
|
debug1: Next authentication method: publickey
|
||||||
|
debug1: Offering RSA public key: /root/.ssh/inv_key
|
||||||
|
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
|
||||||
|
debug1: Offering RSA-CERT public key: /root/.ssh/inv_key-cert
|
||||||
|
debug1: Server accepts key: pkalg ssh-rsa-cert-v01@openssh.com blen 1088
|
||||||
|