Jan 12, 2020

nsenter allows you to enter a specific process namespace; in this scenario, it allows you to enter the network namespace of the container and still keep access to the host tooling. You can read more about it here.
Used: Docker version 18.06.1-ce, build e68fc7a, on Red Hat Enterprise Linux 7.5 (Maipo)
Docker's strength lies in isolating applications in containers. Each container has its own namespaces and its own networking subsystem, so checking connections for an application running in a container works differently than on the host.
In this scenario, I set up a Linux server with an application running both directly on the host and in a Docker container.
The Java EE application is running on port 8443. To check existing connections to the JBoss server on the host, I use netstat.
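A minimal check on the host could look like this (assuming the netstat from net-tools; the grep pattern matches the 8443 port mentioned above):

```
# on the host: list established TCP connections involving port 8443
netstat -tn | grep :8443
```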
The most common mistake is to assume the same works for the application running in a Docker container. The netstat check on the host shows nothing for the containerized application.
The default driver of Docker’s networking subsystem is bridge. The network of the container is isolated from the host, so the connection check for the application has to happen inside the namespace of the container.
There are several solutions I would like to illustrate, in my favoured order:
1. nsenter on Linux
2. netstat inside the container
3. host network mode
The command nsenter runs a program in the namespaces of other processes. It is part of the util-linux package and thus should be available for most Linux flavours.
To use nsenter, we need to determine the process ID of the Docker container. The following docker inspect command illustrates a natural way; the Docker container is named value-mapper.
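A sketch of that lookup:

```
# PID of the container's main process, as seen from the host
PID=$(docker inspect --format '{{.State.Pid}}' value-mapper)
echo $PID
```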
Now we use the obtained process ID to enter the network namespace of the Docker container process and run netstat in it.
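For example (run as root; only the network namespace is entered, so the netstat binary used is the one on the host):

```
# list the container's TCP connections using the host's netstat
sudo nsenter -t $PID -n netstat -tn | grep :8443
```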
If nsenter is not available, for example on macOS, you can still enter the Docker container and execute netstat there. This requires installing netstat in the running Docker container or adding it to your Docker base image.
My need was operational, so I did it live. The Docker base image was RHEL 7.5, so I needed the net-tools RPM, which contains netstat. These were my steps (a command sketch follows the list):
1. Download the RPM from a CentOS repository.
2. Copy it into the Docker container.
3. Log in to the Docker container as root.
4. Install it with yum and exit the container.
5. Log in as the regular user and use netstat inside the container.
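A sketch of those steps; the container name value-mapper is from above, while the mirror URL and package version are placeholders you would have to adjust:

```
# 1. download the net-tools rpm from a CentOS 7 mirror (URL/version are placeholders)
curl -LO http://mirror.centos.org/centos/7/os/x86_64/Packages/net-tools-<version>.el7.x86_64.rpm

# 2. copy it into the running container
docker cp net-tools-<version>.el7.x86_64.rpm value-mapper:/tmp/

# 3. install it as root inside the container
docker exec -u root -it value-mapper yum install -y /tmp/net-tools-<version>.el7.x86_64.rpm

# 4. use netstat as the regular user
docker exec -it value-mapper netstat -tn
```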
The third option: for standalone containers, you can remove the network isolation between the container and the Docker host and use the host's networking directly, so netstat works as before. All you have to do is start the Docker container in host network mode.
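A hedged sketch of starting a container with the host's network stack (the image name is illustrative):

```
# host networking: the container shares the host's network namespace
docker run -d --network host my-jboss-image

# now the host's netstat sees the container's sockets directly
netstat -tn | grep :8443
```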
Depending on your case, there are several solutions to check connections for a Docker container. Independent of the environment you work in, you can always use netstat within the container.
When they start using Docker, people often ask: “How do I get inside my containers?” and other people will tell them “Run an SSH server in your containers!” but that’s a very bad practice. We will see why it’s wrong, and what you should do instead.
Note: if you want to comment or share this article, use the canonical version hosted on the Docker Blog. Thank you!
…Unless your container is an SSH server, of course.
It’s tempting to run the SSH server, because it gives an easy way to “get inside” of the container. Virtually everybody in our craft used SSH at least once in their life. Most of us use it on a daily basis, and are familiar with public and private keys, password-less logins, key agents, and even sometimes port forwarding and other niceties.
With that in mind, it’s not surprising that people would advise you to run SSH within your container. But you should think twice.
Let’s say that you are building a Docker image for a Redis server or a Java web service. I would like to ask you a few questions.
What do you need SSH for?
Most likely, you want to do backups, check logs, maybe restart the process, tweak the configuration, possibly debug the server with gdb, strace, or similar tools. We will see how to do those things without SSH.
How will you manage keys and passwords?
Most likely, you will either bake those into your image, or put them in a volume. Think about what you should do when you want to update keys or passwords. If you bake them into the image, you will need to rebuild your images, redeploy them, and restart your containers. Not the end of the world, but not very elegant either. A much better solution is to put the credentials in a volume, and manage that volume. It works, but has significant drawbacks. You should make sure that the container does not have write access to the volume; otherwise, it could corrupt the credentials (preventing you from logging into the container!), which could be even worse if those credentials are shared across multiple containers. If only SSH could be elsewhere, that would be one less thing to worry about, right?
How will you manage security upgrades?
The SSH server is pretty safe, but still, when a security issue arises, you will have to upgrade all the containers using SSH. That means rebuilding and restarting all of them. That also means that even if you need a pretty innocuous memcached service, you have to stay up-to-date with security advisories, because the attack surface of your container is suddenly much bigger. Again, if SSH could be elsewhere, that would be a nice separation of concerns, wouldn’t it?
Do you need to “just add the SSH server” to make it work?
No. You also need to add a process manager; for instance Monit or Supervisor. This is because Docker will watch one single process. If you need multiple processes, you need to add one at the top level to take care of the others. In other words, you’re turning a lean and simple container into something much more complicated. If your application stops (whether it exits cleanly or crashes), instead of getting that information through Docker, you will have to get it from your process manager.
You are in charge of putting the app inside a container, but are you also in charge of access policies and security compliance?
In smaller organizations, that doesn’t matter too much. But in larger groups, if you are the person putting the app in a container, there is probably a different person responsible for defining remote access policies. Your company might have strict policies defining who can get access, how, and what kind of audit trail is required. In that case, you definitely don’t want to put an SSH server in your container.
Your data should be in a volume. Then, you can run another container and, with the --volumes-from option, share that volume with the first one. The new container will be dedicated to the backup job, and will have access to the required data.
Added benefit: if you need to install new tools to make your backups or to ship them to long-term storage (like s3cmd or the like), you can do that in the special-purpose backup container instead of the main service container. It’s cleaner.
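For illustration only (the image names, paths, and archive command are assumptions, not part of the original text):

```
# the service keeps its data in a volume, e.g. /data
CID=$(docker run -d -v /data my-service-image)

# a dedicated backup container shares that volume and archives it to the host
docker run --rm --volumes-from $CID -v "$PWD/backups:/backups" ubuntu \
  tar czf /backups/data-$(date +%F).tar.gz /data
```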
Use a volume! Yes, again. If you write all your logs under a specific directory, and that directory is a volume, then you can start another “log inspection” container (with --volumes-from, remember?) and do everything you need here.
Again, if you need special tools (or just a fancy ack-grep), you can install them in the other container, keeping your main container in pristine condition.
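A minimal sketch, assuming the service writes its logs into a volume at /var/log/myapp (all names here are illustrative):

```
# inspect logs from a throwaway container that shares the log volume
docker run --rm -it --volumes-from my-service busybox tail -n 50 /var/log/myapp/app.log
```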
Virtually all services can be restarted with signals. When you issue /etc/init.d/foo restart or service foo restart, it will almost always result in sending a specific signal to a process. You can send that signal with docker kill -s <signal>.
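For example (the container name and signal are illustrative; use whatever signal your service expects):

```
# send SIGHUP to the container's main process, e.g. to trigger a graceful reload
docker kill -s SIGHUP my-container
```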
Some services won’t listen to signals, but will accept commands on a special socket. If it is a TCP socket, just connect over the network. If it is a UNIX socket, you will use… a volume, one more time. Set up the container and the service so that the control socket is in a specific directory, and that directory is a volume. Then you can start a new container with access to that volume; it will be able to use the socket.
“But, this is complicated!” - not really. Let’s say that your service foo creates a socket in /var/run/foo.sock, and requires you to run fooctl restart to be restarted cleanly. Just start the service with -v /var/run (or add VOLUME /var/run in the Dockerfile). When you want to restart, execute the exact same image, but with the --volumes-from option and overriding the command. This will look like this:
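A sketch along these lines (foo and fooctl come from the text; the fooservice image name is an assumption):

```
# start the service with /var/run as a volume
CID=$(docker run -d -v /var/run fooservice)

# later: restart it from a throwaway container that shares the volume
docker run --rm --volumes-from $CID fooservice fooctl restart
```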
It’s that simple!
If you are performing a durable change to the configuration, it should be done in the image - because if you start a new container, the old configuration will be there again, and your changes will be lost. So, no SSH access for you!
“But I need to change my configuration over the lifetime of my service; for instance to add new virtual hosts!”
In that case, you should use… wait for it… a volume! The configuration should be in a volume, and that volume should be shared with a special-purpose “config editor” container. You can use anything you like in this container: SSH + your favorite editor, or a web service accepting API calls, or a crontab fetching the information from an outside source; whatever.
Again, you’re separating concerns: one container runs the service, another deals with configuration updates.
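A hedged sketch of that separation (the image names and config path are assumptions):

```
# the service keeps its configuration in a volume
CID=$(docker run -d -v /etc/myapp my-service-image)

# a separate "config editor" container mounts the same volume to change it
docker run --rm -it --volumes-from $CID busybox vi /etc/myapp/app.conf
```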
“But I’m doing temporary changes, because I’m testing different values!”
In that case, check the next section!
That’s the only scenario where you really need to get a shell into the container. Because you’re going to run gdb, strace, tweak the configuration, etc.
In that case, you need nsenter.
nsenter is a small tool allowing you to enter into namespaces. Technically, it can enter existing namespaces, or spawn a process into a new set of namespaces. “What are those namespaces you’re blabbering about?” They are one of the essential constituents of containers.
The short version is: with nsenter, you can get a shell into an existing container, even if that container doesn’t run SSH or any kind of special-purpose daemon.
To install nsenter, check jpetazzo/nsenter on GitHub. The short version is that if you run:
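The installer command looks like this (check the jpetazzo/nsenter README for its current form):

```
# drops a static nsenter binary into /usr/local/bin on the host
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
```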
… this will install nsenter in /usr/local/bin and you will be able to use it immediately.
nsenter might also be available in your distro (in the util-linux package).
First, figure out the PID of the container you want to enter:
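One way to do that, using docker inspect (the container name is a placeholder):

```
# PID of the container's main process, as seen from the host
PID=$(docker inspect --format '{{.State.Pid}}' <container_name_or_id>)
```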
Then enter the container:
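A sketch using that PID (needs root; pick the namespaces you actually want to join):

```
# join the container's mount, UTS, IPC, network, and PID namespaces and start a shell
sudo nsenter --target $PID --mount --uts --ipc --net --pid
```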
You will get a shell inside the container. That’s it.
If you want to run a specific script or program in an automated manner, add it as an argument to nsenter. It works a bit like chroot, except that it works with containers instead of plain directories.
If you need to enter a container from a remote host, you have (at least) two ways to do it:
1. SSH into the Docker host and use nsenter;
2. SSH into the Docker host with a special key that forces a specific command (namely, nsenter).
The first solution is pretty easy, but it requires root access to the Docker host (which is not great from a security point of view).
The second solution uses the command= pattern in SSH’s authorized_keys file. You are probably familiar with “classic” authorized_keys files, which look like this:
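For example (the key material is truncated, the comment is arbitrary):

```
ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...truncated...== alice@workstation
```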
(Of course, a real key is much longer; it is a single line, even though it usually wraps across several lines on screen.)
You can also force a specific command. If you want to be able to check the available memory on your system from a remote host, using SSH keys, but you don’t want to give full shell access, you can put this in the authorized_keys file:
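Along these lines (same truncated key; see authorized_keys(5) for the exact option syntax):

```
command="free" ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...truncated...== alice@workstation
```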
Now, when that specific key connects, instead of getting a shell, it will execute the free command. It won’t be able to do anything else.
(Technically, you probably want to add no-port-forwarding; check the manpage authorized_keys(5) for more information.)
The crux of this mechanism is to split responsibilities. Alice puts services within containers; she doesn’t deal with remote access, logging, and so on. Betty will add the SSH layer, to be used only in exceptional circumstances (to debug weird issues). Charlotte will take care of logging. And so on.
Is it really Wrong (uppercase double you) to run the SSH server in a container? Let’s be honest, it’s not that bad. It’s even super convenient when you don’t have access to the Docker host, but still need to get a shell within the container.
But we saw here that there are many ways to not run an SSH server in a container, and still get all the features we want, with a much cleaner architecture.
Docker allows you to use whatever workflow is best for you. But before jumping on the “my container is really a small VPS” bandwagon, be aware that there are other solutions, so you can make an informed decision!