If you use AWS, you may have accidentally locked yourself out of an instance and can no longer SSH in. Previously, to regain access you would have had to detach the EBS volume, attach it to a recovery instance, and then start digging for the cause of the SSH failure.

There is an easier way, and that is user data. To inject user data into your instance, do the following in the AWS Management Console:

1- Stop your instance.
2- In the console, select your instance and go to Actions -> Instance Settings -> View/Change User Data.
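
If you prefer the command line, the same two steps can be scripted with the AWS CLI. This is only a minimal sketch: the instance ID and the recover.yml file holding your cloud-config are placeholders, and the base64 step reflects the userData attribute expecting base64-encoded content.

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# The userData attribute takes base64, so encode the cloud-config file first.
base64 recover.yml > recover.b64
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --attribute userData --value file://recover.b64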

Once inside the user data editor, add the commands you want to run to regain SSH access. These vary depending on the flavor of Linux you are using, as shown below. One detail worth knowing: bootcmd entries written in list form are executed without a shell, so any command that relies on a wildcard has to be wrapped in sh -c.

Ubuntu:

#cloud-config
# Reset permissions and ownership so sshd will accept the default user's key again.
bootcmd:
  - [ chmod, 700, /home/ubuntu ]
  - [ chmod, 700, /home/ubuntu/.ssh ]
  - [ sh, -c, "chmod 600 /home/ubuntu/.ssh/*" ]
  - [ sh, -c, "chmod 600 /etc/ssh/ssh_host_*_key" ]
  - [ chmod, 600, /home/ubuntu/.ssh/authorized_keys ]
  - [ sh, -c, "chown -R ubuntu:ubuntu /home/ubuntu" ]
  - [ chmod, 711, /var/empty/sshd ]

Amazon Linux:

#cloud-config
# Reset permissions and ownership so sshd will accept the default user's key again.
bootcmd:
  - [ chmod, 700, /home/ec2-user ]
  - [ chmod, 700, /home/ec2-user/.ssh ]
  - [ sh, -c, "chmod 600 /home/ec2-user/.ssh/*" ]
  - [ sh, -c, "chmod 600 /etc/ssh/ssh_host_*_key" ]
  - [ chmod, 711, /var/empty/sshd ]
  - [ chmod, 600, /home/ec2-user/.ssh/authorized_keys ]
  - [ sh, -c, "chown -R ec2-user:ec2-user /home/ec2-user" ]

CentOS:

#cloud-config
# Reset permissions and ownership so sshd will accept the default user's key again.
bootcmd:
  - [ chmod, 700, /home/centos ]
  - [ chmod, 700, /home/centos/.ssh ]
  - [ sh, -c, "chmod 600 /home/centos/.ssh/*" ]
  - [ sh, -c, "chmod 600 /etc/ssh/ssh_host_*_key" ]
  - [ chmod, 600, /home/centos/.ssh/authorized_keys ]
  - [ sh, -c, "chown -R centos:centos /home/centos" ]
  - [ chmod, 711, /var/empty/sshd ]


Once you have added the correct user data, all you need to do is start the instance again. In my experience this fixes roughly 90% of SSH access problems, since most of them come down to bad permissions or ownership on the user's home directory and SSH files.
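
To finish from the CLI and sanity-check the result once you are back in, something along these lines works (the instance ID, key path, user name, and address are placeholders; the log path is cloud-init's default):

aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0

# Once SSH works again, confirm that cloud-init actually ran the bootcmd entries.
ssh -i ~/.ssh/my-key.pem ubuntu@203.0.113.10 \
    "sudo grep -i bootcmd /var/log/cloud-init.log | tail -n 20"

Because bootcmd entries run on every boot, it is worth removing the user data again once access is restored.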


I hope this helps.
