[OpenStack] Fixing "Read from socket failed: Connection reset by peer" when SSHing into an instance
Feel free to repost this blog, but please keep the original author information!
Sina Weibo: @孔令賢HW
The content here is my own study, research, and notes; any resemblance to other work would be a genuine honor!
1. Symptoms
Version: Grizzly master branch code as of 2013.06.17. Deployment: three nodes (Controller/Compute + Network + Compute).
Image used: precise-server-cloudimg-i386-disk1.img
Command used to boot the instance: nova boot ubuntu-keypair-test --image 1f7f5763-33a1-4282-92b3-53366bf7c695 --flavor 2 --nic net-id=3d42a0d4-a980-4613-ae76-a2cddecff054 --availability-zone nova:compute233 --key_name mykey
Once the instance goes ACTIVE, both its fixed IP (10.1.1.6) and its floating IP (192.150.73.5) respond to ping. VNC access to the instance works and shows the login prompt. Because the Ubuntu cloud image does not allow password login, the only way in is SSH, which is why key_name was specified when creating the instance.
SSH login to the instance from the NetworkNode fails:
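For completeness, the keypair referenced by --key_name would have been created and saved along these lines (a sketch; the post does not show the exact commands, and the name mykey comes from the boot command above):
nova keypair-add mykey > mykey.pem
chmod 600 mykey.pem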
- [email protected]:~# ssh -i mykey.pem [email protected] -v
- OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012
- debug1: Reading configuration data /etc/ssh/ssh_config
- debug1: /etc/ssh/ssh_config line 19: Applying options for *
- debug1: Connecting to 192.150.73.5 [192.150.73.5] port 22.
- debug1: Connection established.
- debug1: permanently_set_uid: 0/0
- debug1: identity file mykey.pem type -1
- debug1: identity file mykey.pem-cert type -1
- debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
- debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
- debug1: Enabling compatibility mode for protocol 2.0
- debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1
- debug1: SSH2_MSG_KEXINIT sent
- Read from socket failed: Connection reset by peer
[email protected]:~# ssh -i mykey.pem [email protected] -v
OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 192.150.73.5 [192.150.73.5] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file mykey.pem type -1
debug1: identity file mykey.pem-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1
debug1: SSH2_MSG_KEXINIT sent
Read from socket failed: Connection reset by peer
Instance boot (console) log:
Begin: Running /scripts/init-bottom ... done.
[ 1.874928] EXT4-fs (vda1): re-mounted. Opts: (null)
cloud-init start-local running: Mon, 17 Jun 2013 03:39:11 +0000. up 4.59 seconds
no instance data found in start-local
ci-info: lo : 1 127.0.0.1 255.0.0.0 .
ci-info: eth0 : 1 10.1.1.6 255.255.255.0 fa:16:3e:31:f4:52
ci-info: route-0: 0.0.0.0 10.1.1.1 0.0.0.0 eth0 UG
ci-info: route-1: 10.1.1.0 0.0.0.0 255.255.255.0 eth0 U
cloud-init start running: Mon, 17 Jun 2013 03:39:14 +0000. up 8.23 seconds
2013-06-17 03:39:15,590 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: http error [404]
2013-06-17 03:39:17,083 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: http error [404]
2013-06-17 03:39:18,643 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]: http error [404]
2013-06-17 03:39:20,153 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [5/120s]: http error [404]
2013-06-17 03:39:21,638 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [6/120s]: http error [404]
2013-06-17 03:39:23,071 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [8/120s]: http error [404]
2013-06-17 03:41:15,356 - DataSourceEc2.py[CRITICAL]: giving up on md after 120 seconds
no instance data found in start
Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
* Starting AppArmor profiles [ OK ]
landscape-client is not configured, please run landscape-config.
* Stopping System V initialisation compatibility [ OK ]
* Stopping Handle applying cloud-config [ OK ]
* Starting System V runlevel compatibility [ OK ]
* Starting ACPI daemon [ OK ]
* Starting save kernel messages [ OK ]
* Starting automatic crash report generation [ OK ]
* Starting regular background program processing daemon [ OK ]
* Starting deferred execution scheduler [ OK ]
* Starting CPU interrupts balancing daemon [ OK ]
* Stopping save kernel messages [ OK ]
* Starting crash report submission daemon [ OK ]
* Stopping System V runlevel compatibility [ OK ]
* Starting execute cloud user/final scripts [ OK ]
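(The boot log above can be pulled without logging in, for instance with the nova CLI — the exact command used is an assumption, and the console can equally be viewed from the dashboard:)
nova console-log ubuntu-keypair-test
Note the repeated HTTP 404 errors while cloud-init queries http://169.254.169.254; this turns out to be the key clue.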
In the nova-compute log, the key injection completes without any errors:
2013-06-17 09:46:47 DEBUG [nova.virt.disk.api 436] [24770] Inject key fs=<nova.virt.disk.vfs.localfs.VFSLocalFS object at 0x3fa2210> key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDdG2ek7tGR4NLPHDHntNdPBu0hnEA4mts9FL+fuqMQar5k+anndsqTwtD4WTfoRCoXBoiDAiEhiy1LOgr6GDgJorMYkfuKgdrdViz2meT2F5wiZnxm/gdnGLko2jYmwsla/wIvRtjzMRYR/ut1OMcqRXwyGtFXkO3VlE8YJRZj0TqjKmKaAwsa0mkVU1G2w1RjT8FDVt2qW+UVGggaqM3KZLs9rwn/K56X+eSraNx+BSBqDa+OX1h6Z1e8nRNVxYviOHL3FybcvlgZXLVWRUSBemS6P4xgQq0dapRB+D3/0N0hzY67FUQNfhFk4EsZCxKMxIi6EH7ueCssPTz5ESmp Generated by Nova
_inject_key_into_fs /usr/lib/python2.7/dist-packages/nova/virt/disk/api.py:436
2013-06-17 09:46:47 DEBUG [nova.virt.disk.vfs.localfs 102] [24770] Make directory path=root/.ssh make_path /usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py:102
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf readlink -nm /tmp/openstack-vfs-localfsqvWMch/root/.ssh execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf mkdir -p /tmp/openstack-vfs-localfsqvWMch/root/.ssh execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.virt.disk.vfs.localfs 145] [24770] Set permissions path=root/.ssh user=root group=root set_ownership /usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py:145
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf readlink -nm /tmp/openstack-vfs-localfsqvWMch/root/.ssh execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf chown root:root /tmp/openstack-vfs-localfsqvWMch/root/.ssh execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.virt.disk.vfs.localfs 139] [24770] Set permissions path=root/.ssh mode=700 set_permissions /usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py:139
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf readlink -nm /tmp/openstack-vfs-localfsqvWMch/root/.ssh execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf chmod 700 /tmp/openstack-vfs-localfsqvWMch/root/.ssh execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.virt.disk.api 386] [24770] Inject file fs=<nova.virt.disk.vfs.localfs.VFSLocalFS object at 0x3fa2210> path=root/.ssh/authorized_keys append=True _inject_file_into_fs /usr/lib/python2.7/dist-packages/nova/virt/disk/api.py:386
2013-06-17 09:46:47 DEBUG [nova.virt.disk.vfs.localfs 107] [24770] Append file path=root/.ssh/authorized_keys append_file /usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py:107
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf readlink -nm /tmp/openstack-vfs-localfsqvWMch/root/.ssh/authorized_keys execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.openstack.common.rpc.amqp 583] [24770] Making synchronous call on conductor ... multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:583
2013-06-17 09:46:47 DEBUG [nova.openstack.common.rpc.amqp 586] [24770] MSG_ID is 56a11872137f46998a7dac3acb225b83 multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:586
2013-06-17 09:46:47 DEBUG [nova.openstack.common.rpc.amqp 337] [24770] UNIQUE_ID is d355a1b88fcc45709f184272ec22e903. _add_unique_id /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:337
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf tee -a /tmp/openstack-vfs-localfsqvWMch/root/.ssh/authorized_keys execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.virt.disk.vfs.localfs 139] [24770] Set permissions path=root/.ssh/authorized_keys mode=600 set_permissions /usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py:139
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf readlink -nm /tmp/openstack-vfs-localfsqvWMch/root/.ssh/authorized_keys execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf chmod 600 /tmp/openstack-vfs-localfsqvWMch/root/.ssh/authorized_keys execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.virt.disk.vfs.localfs 131] [24770] Has file path=etc/selinux has_file /usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py:131
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf readlink -nm /tmp/openstack-vfs-localfsqvWMch/etc/selinux execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:47 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:47 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf readlink -e /tmp/openstack-vfs-localfsqvWMch/etc/selinux execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:48 DEBUG [nova.utils 232] [24770] Result was 1 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:48 DEBUG [nova.virt.disk.mount.api 203] [24770] Umount /dev/nbd6p1 unmnt_dev /usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:203
2013-06-17 09:46:48 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf umount /dev/nbd6p1 execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:49 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
2013-06-17 09:46:49 DEBUG [nova.virt.disk.mount.api 179] [24770] Unmap dev /dev/nbd6 unmap_dev /usr/lib/python2.7/dist-packages/nova/virt/disk/mount/api.py:179
2013-06-17 09:46:49 DEBUG [nova.virt.disk.mount.nbd 126] [24770] Release nbd device /dev/nbd6 unget_dev /usr/lib/python2.7/dist-packages/nova/virt/disk/mount/nbd.py:126
2013-06-17 09:46:49 DEBUG [nova.utils 208] [24770] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf qemu-nbd -d /dev/nbd6 execute /usr/lib/python2.7/dist-packages/nova/utils.py:208
2013-06-17 09:46:49 DEBUG [nova.utils 232] [24770] Result was 0 execute /usr/lib/python2.7/dist-packages/nova/utils.py:232
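The log above shows the public key being appended to root/.ssh/authorized_keys on the guest disk through qemu-nbd. If you want to double-check the injection result by hand, the same mechanism can be replayed on the compute node while the instance is stopped (a sketch only; the instance disk path and nbd device are assumptions):
# load the nbd module and attach the instance's qcow2 disk (path assumed)
modprobe nbd max_part=8
qemu-nbd -c /dev/nbd0 /var/lib/nova/instances/<instance-uuid>/disk
mount /dev/nbd0p1 /mnt
cat /mnt/root/.ssh/authorized_keys   # should contain the key "Generated by Nova"
umount /mnt
qemu-nbd -d /dev/nbd0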
2. Analysis
When you hit a problem like this, Google is your friend.
The explanation from the community (https://lists.launchpad.net/openstack/msg12202.html): Ubuntu cloud images do not have any SSH host keys generated inside them (/etc/ssh/ssh_host_{ecdsa,dsa,rsa}_key). The keys are generated by cloud-init after it finds a metadata service. Without a metadata service, they do not get generated, and ssh will drop your connections immediately without host keys.
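Since this image does not allow console password login, the easiest way to confirm that explanation is the console log itself: after a successful boot cloud-init prints the generated host key fingerprints (as in section 3 below), whereas the log above has no such block, only the metadata 404s. A quick check (hedged; same nova CLI assumption as above):
nova console-log ubuntu-keypair-test | grep -iE "HOST KEY|169.254.169.254"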
So the problem is that the instance cannot successfully fetch its metadata from 169.254.169.254 (every request comes back as HTTP 404). The next step is to look at the iptables rules on the NetworkNode.
The nat table rules in the router namespace on the NetworkNode:
- [email protected]:~# ip netns exec qrouter-b147a74b-39bb-4c7a-aed5-19cac4c2df13 iptables-save -t nat
- # Generated by iptables-save v1.4.12 on Mon Jun 17 10:14:57 2013
- *nat
- :PREROUTING ACCEPT [28:8644]
- :INPUT ACCEPT [90:12364]
- :OUTPUT ACCEPT [0:0]
- :POSTROUTING ACCEPT [7:444]
- :quantum-l3-agent-OUTPUT - [0:0]
- :quantum-l3-agent-POSTROUTING - [0:0]
- :quantum-l3-agent-PREROUTING - [0:0]
- :quantum-l3-agent-float-snat - [0:0]
- :quantum-l3-agent-snat - [0:0]
- :quantum-postrouting-bottom - [0:0]
- -A PREROUTING -j quantum-l3-agent-PREROUTING
- -A OUTPUT -j quantum-l3-agent-OUTPUT
- -A POSTROUTING -j quantum-l3-agent-POSTROUTING
- -A POSTROUTING -j quantum-postrouting-bottom
- -A quantum-l3-agent-OUTPUT -d 192.150.73.3/32 -j DNAT --to-destination 10.1.1.4
- -A quantum-l3-agent-OUTPUT -d 192.150.73.4/32 -j DNAT --to-destination 10.1.1.2
- -A quantum-l3-agent-OUTPUT -d 192.150.73.5/32 -j DNAT --to-destination 10.1.1.6
- -A quantum-l3-agent-POSTROUTING ! -i qg-08db2f8b-88 ! -o qg-08db2f8b-88 -m conntrack ! --ctstate DNAT -j ACCEPT
- -A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
- -A quantum-l3-agent-PREROUTING -d 192.150.73.3/32 -j DNAT --to-destination 10.1.1.4
- -A quantum-l3-agent-PREROUTING -d 192.150.73.4/32 -j DNAT --to-destination 10.1.1.2
- -A quantum-l3-agent-PREROUTING -d 192.150.73.5/32 -j DNAT --to-destination 10.1.1.6
- -A quantum-l3-agent-float-snat -s 10.1.1.4/32 -j SNAT --to-source 192.150.73.3
- -A quantum-l3-agent-float-snat -s 10.1.1.2/32 -j SNAT --to-source 192.150.73.4
- -A quantum-l3-agent-float-snat -s 10.1.1.6/32 -j SNAT --to-source 192.150.73.5
- -A quantum-l3-agent-snat -j quantum-l3-agent-float-snat
- -A quantum-l3-agent-snat -s 10.1.1.0/24 -j SNAT --to-source 192.150.73.2
- -A quantum-postrouting-bottom -j quantum-l3-agent-snat
- COMMIT
- # Completed on Mon Jun 17 10:14:57 2013
[email protected]:~# ip netns exec qrouter-b147a74b-39bb-4c7a-aed5-19cac4c2df13 iptables-save -t nat
# Generated by iptables-save v1.4.12 on Mon Jun 17 10:14:57 2013
*nat
:PREROUTING ACCEPT [28:8644]
:INPUT ACCEPT [90:12364]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [7:444]
:quantum-l3-agent-OUTPUT - [0:0]
:quantum-l3-agent-POSTROUTING - [0:0]
:quantum-l3-agent-PREROUTING - [0:0]
:quantum-l3-agent-float-snat - [0:0]
:quantum-l3-agent-snat - [0:0]
:quantum-postrouting-bottom - [0:0]
-A PREROUTING -j quantum-l3-agent-PREROUTING
-A OUTPUT -j quantum-l3-agent-OUTPUT
-A POSTROUTING -j quantum-l3-agent-POSTROUTING
-A POSTROUTING -j quantum-postrouting-bottom
-A quantum-l3-agent-OUTPUT -d 192.150.73.3/32 -j DNAT --to-destination 10.1.1.4
-A quantum-l3-agent-OUTPUT -d 192.150.73.4/32 -j DNAT --to-destination 10.1.1.2
-A quantum-l3-agent-OUTPUT -d 192.150.73.5/32 -j DNAT --to-destination 10.1.1.6
-A quantum-l3-agent-POSTROUTING ! -i qg-08db2f8b-88 ! -o qg-08db2f8b-88 -m conntrack ! --ctstate DNAT -j ACCEPT
-A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A quantum-l3-agent-PREROUTING -d 192.150.73.3/32 -j DNAT --to-destination 10.1.1.4
-A quantum-l3-agent-PREROUTING -d 192.150.73.4/32 -j DNAT --to-destination 10.1.1.2
-A quantum-l3-agent-PREROUTING -d 192.150.73.5/32 -j DNAT --to-destination 10.1.1.6
-A quantum-l3-agent-float-snat -s 10.1.1.4/32 -j SNAT --to-source 192.150.73.3
-A quantum-l3-agent-float-snat -s 10.1.1.2/32 -j SNAT --to-source 192.150.73.4
-A quantum-l3-agent-float-snat -s 10.1.1.6/32 -j SNAT --to-source 192.150.73.5
-A quantum-l3-agent-snat -j quantum-l3-agent-float-snat
-A quantum-l3-agent-snat -s 10.1.1.0/24 -j SNAT --to-source 192.150.73.2
-A quantum-postrouting-bottom -j quantum-l3-agent-snat
COMMIT
# Completed on Mon Jun 17 10:14:57 2013
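The important rule here is the PREROUTING REDIRECT: traffic from the instance to 169.254.169.254:80 is redirected to local port 9697, where the Quantum namespace metadata proxy for this router should be listening. That can be verified from within the same namespace (assuming netstat is available on the node):
ip netns exec qrouter-b147a74b-39bb-4c7a-aed5-19cac4c2df13 netstat -lnpt | grep 9697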
The filter table rules on the NetworkNode:
- [email protected]:~# ip netns exec qrouter-b147a74b-39bb-4c7a-aed5-19cac4c2df13 iptables-save -t filter
- # Generated by iptables-save v1.4.12 on Mon Jun 17 13:10:10 2013
- *filter
- :INPUT ACCEPT [1516:215380]
- :FORWARD ACCEPT [81:12744]
- :OUTPUT ACCEPT [912:85634]
- :quantum-filter-top - [0:0]
- :quantum-l3-agent-FORWARD - [0:0]
- :quantum-l3-agent-INPUT - [0:0]
- :quantum-l3-agent-OUTPUT - [0:0]
- :quantum-l3-agent-local - [0:0]
- -A INPUT -j quantum-l3-agent-INPUT
- -A FORWARD -j quantum-filter-top
- -A FORWARD -j quantum-l3-agent-FORWARD
- -A OUTPUT -j quantum-filter-top
- -A OUTPUT -j quantum-l3-agent-OUTPUT
- -A quantum-filter-top -j quantum-l3-agent-local
- -A quantum-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697 -j ACCEPT
- COMMIT
- # Completed on Mon Jun 17 13:10:10 2013
[email protected]:~# ip netns exec qrouter-b147a74b-39bb-4c7a-aed5-19cac4c2df13 iptables-save -t filter
# Generated by iptables-save v1.4.12 on Mon Jun 17 13:10:10 2013
*filter
:INPUT ACCEPT [1516:215380]
:FORWARD ACCEPT [81:12744]
:OUTPUT ACCEPT [912:85634]
:quantum-filter-top - [0:0]
:quantum-l3-agent-FORWARD - [0:0]
:quantum-l3-agent-INPUT - [0:0]
:quantum-l3-agent-OUTPUT - [0:0]
:quantum-l3-agent-local - [0:0]
-A INPUT -j quantum-l3-agent-INPUT
-A FORWARD -j quantum-filter-top
-A FORWARD -j quantum-l3-agent-FORWARD
-A OUTPUT -j quantum-filter-top
-A OUTPUT -j quantum-l3-agent-OUTPUT
-A quantum-filter-top -j quantum-l3-agent-local
-A quantum-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697 -j ACCEPT
COMMIT
# Completed on Mon Jun 17 13:10:10 2013
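Before moving on, it helps to know where the relevant log files live; the next three errors come from these files. On a typical Ubuntu-packaged Grizzly install they are roughly as follows (paths are assumptions and vary by distribution):
# on the NetworkNode
ls /var/log/quantum/                 # ns-metadata-proxy and metadata-agent logs
# on the ControllerNode
tail -f /var/log/nova/nova-api.log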
The iptables rules look fine. So the next thing to check is the metadata proxy log, and sure enough it contains the following message: Remote metadata server experienced an internal server error.
The metadata agent log, in turn, shows this error:
content=: 404 Not Found. The resource could not be found.
Digging further into the nova-api log reveals the root cause:
ERROR [nova.api.metadata.handler 141] [4541] Failed to get metadata for ip: 192.168.82.232
192.168.82.232 is the IP address of my NetworkNode, not of any instance, yet nova-api is trying to look up metadata for it; the metadata should be resolved for the instance itself on the ControllerNode. Searching the code leads to the following place (nova/api/metadata/handler.py, per the log line above):
if CONF.service_quantum_metadata_proxy:
    meta_data = self._handle_instance_id_request(req)
else:
    if req.headers.get('X-Instance-ID'):
        LOG.warn(
            _("X-Instance-ID present in request headers. The "
              "'service_quantum_metadata_proxy' option must be enabled"
              " to process this header."))
    meta_data = self._handle_remote_ip_request(req)
The request is taking the else branch, but when Quantum's metadata proxy is in use it should take the if branch. Searching for the service_quantum_metadata_proxy option shows that it defaults to False and is not overridden in /etc/nova/nova.conf, which is exactly what causes the problem. That completes the analysis.
3. Solution
Set service_quantum_metadata_proxy=True in /etc/nova/nova.conf and restart the process.
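A minimal sketch of the change (the service name to restart is an assumption; restart whichever process serves the metadata API in your deployment, typically nova-api):
# /etc/nova/nova.conf
[DEFAULT]
service_quantum_metadata_proxy = True

# then restart the metadata API (service name assumed)
service nova-api restart
After the restart, reboot the instance and watch its console log output — cloud-init now finds the EC2 data source and generates the SSH host keys: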
cloud-init start-local running: Mon, 17 Jun 2013 05:45:40 +0000. up 3.44 seconds
no instance data found in start-local
ci-info: lo : 1 127.0.0.1 255.0.0.0 .
ci-info: eth0 : 1 10.1.1.6 255.255.255.0 fa:16:3e:31:f4:52
ci-info: route-0: 0.0.0.0 10.1.1.1 0.0.0.0 eth0 UG
ci-info: route-1: 10.1.1.0 0.0.0.0 255.255.255.0 eth0 U
cloud-init start running: Mon, 17 Jun 2013 05:45:44 +0000. up 7.66 seconds
found data source: DataSourceEc2
2013-06-17 05:45:55,999 - __init__.py[WARNING]: Unhandled non-multipart userdata ''
Generating public/private rsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
The key fingerprint is:
de:04:ec:82:0c:09:d8:b3:12:ac:4a:40:94:81:e7:48 [email protected]
The key's randomart image is:
+--[ RSA 2048]----+
|B=o |
|=E+. . |
|+=oo o |
|+.oo . . . |
|o. o . S . |
|. o o |
| . . |
| |
| |
+-----------------+
Generating public/private dsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
The key fingerprint is:
6e:aa:2a:da:bb:ef:43:8b:14:5a:99:36:64:74:10:c7 [email protected]
The key's randomart image is:
+--[ DSA 1024]----+
| .++o |
| ooE |
| o o |
| B |
| + o S |
|. . . . |
| . o . o |
|... o o |
|o.=*+o. |
+-----------------+
Generating public/private ecdsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key.
Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub.
The key fingerprint is:
66:c0:a3:48:cb:d7:0b:bf:6e:e2:6d:e5:24:3b:66:f7 [email protected]
The key's randomart image is:
+--[ECDSA 256]---+
| |
| . |
| . + |
| o o o o |
| + + . S |
| . o.+o |
| o* |
| ..B.o |
| ..B+o .E |
+-----------------+
* Starting system logging daemon [ OK ]
* Starting Handle applying cloud-config [ OK ]
Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
* Starting AppArmor profiles [ OK ]
landscape-client is not configured, please run landscape-config.
* Stopping System V initialisation compatibility [ OK ]
* Starting System V runlevel compatibility [ OK ]
* Starting automatic crash report generation [ OK ]
* Starting save kernel messages [ OK ]
* Starting ACPI daemon [ OK ]
* Starting regular background program processing daemon [ OK ]
* Starting deferred execution scheduler [ OK ]
* Starting CPU interrupts balancing daemon [ OK ]
* Stopping save kernel messages [ OK ]
* Starting crash report submission daemon [ OK ]
* Stopping System V runlevel compatibility [ OK ]
Generating locales...
en_US.UTF-8... done
Generation complete.
ec2:
ec2: #############################################################
ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
ec2: 1024 6e:aa:2a:da:bb:ef:43:8b:14:5a:99:36:64:74:10:c7 [email protected] (DSA)
ec2: 256 66:c0:a3:48:cb:d7:0b:bf:6e:e2:6d:e5:24:3b:66:f7 [email protected] (ECDSA)
ec2: 2048 de:04:ec:82:0c:09:d8:b3:12:ac:4a:40:94:81:e7:48 [email protected] (RSA)
ec2: -----END SSH HOST KEY FINGERPRINTS-----
ec2: #############################################################
-----BEGIN SSH HOST KEY KEYS-----
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCwe6gbVpdgs1dOskAl8M42wwaTZJdfGV3JslsDy9g04f4/JCGJskDSm4Tgv9d4p+a6G85/NofsZSbmj8/6nWZ8= [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDUrTgq3oTDuw1Bvh62LaYOOxjsEkfOk9IIVOdqASG5c2ExucIAKdRZY8XqlmoN3d64VI65ArsBWQ+PeuofUFfE5z8DvFr13ieNlLw8VgD46TGZ9XYLzZgs1CpN1evoU6Np3NN8q3CihprzcBCh7uKlAsgmwULh22+vDJPMnJamtn0Nk3NVtLJKqyujoN/pEIsWYouyBOJIKWjPLUPnGRpVeqQ1NkRED5w2SHbK9I49e6fItPnA9jVdTG06K2/xThXVUjVE3iwXr/uHMfNpJoejZzSqCmdhD68pIMleOI/Hd6+RPMJurw5CVYvdLOv4lWQMOEOpBzzXSp44JMlN3AKP [email protected]
-----END SSH HOST KEY KEYS-----
cloud-init boot finished at Mon, 17 Jun 2013 05:46:31 +0000. Up 54.86 seconds
Now SSH into the instance from the NetworkNode once more — this time the login goes through.