SSH is one of the most-used commands in a sysadmin’s toolbox but it’s not commonly seen alongside Docker. Here’s how you can SSH into a running container and why you should think twice before you do.
Should You Use SSH With Docker Containers?
SSH-ing into a Docker container is generally a bad practice which you should avoid. It’s nearly always better to use the docker exec command to get a shell inside a container.
Docker newcomers can be tempted to use SSH to update files inside a container. Containers are meant to be disposable, though, and should be treated as immutable after creation, except for persistent data stored in volumes. When you edit source code, create a new image and restart your container instead.
Aside from the multi-step configuration process, installing SSH in a Docker image adds several dependency packages and exposes another potential attack vector. On a system with several active containers, you’ll be running multiple independent SSH processes and will have to remember the correct port for each container.
Instead of adding SSH to individual containers, install it once on the physical host that’s running Docker. Use SSH to connect to your host, then run docker exec -it my-container bash to access individual containers.
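A minimal sketch of that two-step workflow, assuming a host reachable at example.com and a container named my-container:

```bash
# Connect to the Docker host over SSH, then open a shell inside the container
ssh user@example.com
docker exec -it my-container bash
```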
While docker exec is the preferred approach, there are still scenarios where SSH might be useful. You could introduce it as a stopgap measure to integrate with legacy deployment systems. It may also be used by some IDEs and build tools to provide live reload capabilities during development.
Installing the SSH Server in a Docker Container
Most popular Docker base images are kept intentionally streamlined. You’ll need to add the OpenSSH server yourself, even on images derived from popular operating system distributions.
Here’s an example Dockerfile for a Debian-based image:
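This is a minimal sketch; the base image tag and the exact sshd_config edit are assumptions you should adapt to your own build:

```dockerfile
FROM debian:11-slim

# Install the OpenSSH server and create the runtime directory it expects
RUN apt-get update && \
    apt-get install -y openssh-server && \
    mkdir -p /var/run/sshd

# Permit root logins over SSH (see the note on dedicated users below)
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# Start the SSH service, then hand off to Bash as the foreground process
ENTRYPOINT service ssh start && bash
```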
The SSH configuration is modified so you can log in as root, the default user in a Docker container. For greater security, set up a dedicated user account instead:
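One way to do that, matching the description below (the sshuser name is just an example):

```dockerfile
# Create a non-root user with a home directory and Bash as its login shell
RUN useradd -m -s /bin/bash sshuser
```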
This creates a new user called sshuser with a home directory (-m). The -s switch sets the user’s default login shell to Bash.
The use of ENTRYPOINT ensures the SSH service always starts when the container does. Execution is then handed off to Bash as the container’s foreground process. You could replace this with your application’s binary.
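For example, if your image ships an application binary (the path here is hypothetical), the hand-off could look like this:

```dockerfile
# Start SSH, then run the application as the container's foreground process
ENTRYPOINT service ssh start && /usr/local/bin/my-app
```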
Configuring Authentication
Next you need to set up an authentication system. You could assign a password to your sshuser account and log in with that:
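A simple way to do that at build time, assuming a placeholder password you’d replace (baking a password into an image isn’t recommended for production):

```dockerfile
# Set a login password for sshuser (replace the placeholder value)
RUN echo "sshuser:my-password" | chpasswd
```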
A more secure way is to set up SSH key authentication. You’ll need to create a key pair on your client machine, then copy the public part into the container. This way the SSH daemon can verify your machine’s identity when you connect.
Alter your Dockerfile to set up the .ssh configuration folder for your user. Copy in a public key from your working directory, either with a docker cp command or a COPY instruction in the Dockerfile. In the latter case, the key would be baked into the image, visible to anyone with access.
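A sketch of those Dockerfile additions, assuming the id_rsa.pub key sits in your build context:

```dockerfile
# Create the .ssh directory and install the public key as authorized_keys
RUN mkdir -p /home/sshuser/.ssh
COPY id_rsa.pub /home/sshuser/.ssh/authorized_keys

# SSH requires strict ownership and permissions on these files
RUN chown -R sshuser:sshuser /home/sshuser/.ssh && \
    chmod 700 /home/sshuser/.ssh && \
    chmod 600 /home/sshuser/.ssh/authorized_keys
```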
This sequence of commands creates SSH’s authorized_keys file with the id_rsa.pub public key in your working directory. The filesystem permissions are adjusted to match SSH’s requirements.
Connecting to the Container
Now you’re ready to connect to your container. Run the container with port 22 bound to the host:
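Assuming the image built above is tagged my-ssh-image:

```bash
docker run -d -p 22:22 --name my-container my-ssh-image
```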
Running ssh sshuser@example.com will give you a shell inside your container.
You can skip binding the port if you’ll be connecting from the machine that’s hosting the Docker container. Use docker inspect to get your container’s IP address, then pass it to the SSH connection command.
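For example, on the default bridge network:

```bash
docker inspect --format '{{ .NetworkSettings.IPAddress }}' my-container
```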
Use the SSH client on your machine to connect to the container:
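Substitute the IP address reported by docker inspect; 172.17.0.2 here is just an example:

```bash
ssh sshuser@172.17.0.2
```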
You’ll need to use an alternative port if you’re running a separate SSH server on the host or you’ve got multiple containers that need port 22. Here’s how to initiate a connection when SSH is bound to port 2220:
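For example, binding the container’s port 22 to host port 2220 and then connecting to it:

```bash
docker run -d -p 2220:22 --name my-container my-ssh-image
ssh -p 2220 sshuser@example.com
```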
Setting Up Container Shortcuts With SSH Config
You can manipulate your SSH config file to simplify connections to individual containers. Edit ~/.ssh/config to define shorthand hosts with preconfigured ports:
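A sample entry, reusing the example host and port from above:

```
Host my-container
    HostName example.com
    Port 2220
    User sshuser
```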
Now you can run ssh my-container to drop straight into your container. This makes it easier to juggle multiple connections without remembering container IPs and ports.
Use Dockssh to Simplify Container Management Instead
The Dockssh project takes this a step further by providing another daemon that lets you run ssh my-container@example.com, without any manual SSH configuration. You don’t need to install an SSH server in your containers; Dockssh automatically proxies SSH connections and runs the correct docker exec command instead.
You must first install Redis to store Dockssh’s configuration data:
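On a Debian or Ubuntu host, for example:

```bash
sudo apt install redis-server
```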
Next, define the containers you want to expose by adding a Redis record with the container’s name and a password for SSH connections:
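A sketch of such a record using redis-cli; the exact key format Dockssh expects is defined in its documentation, so treat the key name here as an assumption:

```bash
redis-cli set my-container container-password-here
```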
Then download Dockssh:
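The URL below is a placeholder, so grab the real download link from the Dockssh release page before running this:

```bash
# Placeholder URL: replace with the actual Dockssh release asset
sudo curl -L -o /usr/local/bin/dockssh https://example.com/dockssh/dockssh-linux-amd64
sudo chmod +x /usr/local/bin/dockssh
```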
Now you can connect to your container:
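Using the default Dockssh port mentioned below:

```bash
ssh my-container@example.com -p 22022
```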
Dockssh listens on port 22022 by default. Make sure your firewall allows incoming connections on this port.
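If you’re using ufw, for example:

```bash
sudo ufw allow 22022/tcp
```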
You’ll be prompted for the container’s password when you connect. This was set as container-password-here in our Redis record above.
Using Dockssh makes it easy to SSH into a large number of Docker containers. This approach is ideal when you regularly connect to your containers from a remote host as it streamlines the two-step “SSH then docker exec” sequence into a single memorable command.
Register Dockssh as a system service for long-term use:
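A minimal unit file sketch, assuming the binary was installed to /usr/local/bin/dockssh; save it as /etc/systemd/system/dockssh.service and adjust the paths and service names to match your setup:

```ini
[Unit]
Description=Dockssh SSH-to-docker-exec proxy
After=network.target redis-server.service

[Service]
ExecStart=/usr/local/bin/dockssh
Restart=always

[Install]
WantedBy=multi-user.target
```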
Enable the service using systemctl:
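```bash
sudo systemctl enable --now dockssh.service
```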
Dockssh will now start automatically when your system boots.
Summary
Combining SSH with Docker containers is broadly considered to be an anti-pattern yet it still has its uses in development, testing, and legacy environments. When there’s no alternative you can add the SSH server to your container, copy in a public key, and connect via the container’s IP or a host port binding.
System admins who want to remotely manage large numbers of Docker containers can try out Dockssh. It lets you run familiar ssh commands via a seamless behind-the-scenes mapping to docker exec, giving you the best of both worlds using unmodified images.