
How to Share Data Between Docker Containers

There will be times when you need to have a centralized volume of data that more than one container can share, and you need a way to share that data between containers.
Feb 11th, 2022

Let’s talk Docker. After all, without Docker, your entry into the world of containers might be a bit of a challenge. Imagine your first steps with containerized deployments centering entirely on Kubernetes. That could quickly overwhelm a newcomer.

What I want to specifically talk about is sharing data between containers within the realm of Docker.

If you’re new to Docker, you might be thinking, “Okay, that sounds cool.” If you already have a solid understanding of containers, you might be asking, “But aren’t containers supposed to be self-sufficient entities?” Yes, they are. But that doesn’t mean they can’t (or shouldn’t) share data.

Consider this: You deploy containers that need to share data. You might have a website, a web application, and a mobile application that all depend on the same data. Imagine the challenge it would be to keep each of those containerized applications in sync with one another.

What happens if the service in charge of the sync fails and you wind up with three different containers holding three different sets of data? That could wind up being your biggest nightmare of the week. It’s not necessarily an impossible problem to fix, but it’s certainly not one you’d want to deal with.

Now, before we get into this, know that your containers will most likely be self-sufficient. However, there will be times when you need to have a centralized volume of data that more than one container can share. Say, for example, you deploy multiple instances of an application or service that needs to use the same persistent data or cache. These could be simple websites, or even more complex database-driven applications. No matter the use case, you need to be able to share that data between containers.

Let me show you how this is done. We’re going to start from the very beginning, so get ready to head back to the basics.

How to Install Docker

I’m going to be demonstrating on my go-to server, Ubuntu. If you use a different distribution of Linux (or even Windows or macOS), make sure to alter the installation steps accordingly.

The first thing we’re going to do is install the necessary Docker dependencies. For this, log into your Ubuntu server instance and issue the command:

sudo apt-get install ca-certificates curl gnupg lsb-release -y

We’ll now add the official Docker GPG key with the command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Next, add the required Docker repository:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

We can now update apt and install Docker with the commands:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io -y

When the installation completes, add your user to the docker group with:

sudo usermod -aG docker $USER

Finally, log out and log back in, so the changes take effect.
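
To confirm that Docker is installed and that your user can run it without sudo, you can run the official hello-world test image:

docker run hello-world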

Create a Volume

The first thing to do is create a volume to house our data. Let’s create a volume named persistent-data with the command:

docker volume create --name persistent-data

The volume is created in the /var/lib/docker/volumes directory. You can add data to that folder, but you can only do so as the root user.
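
If you want to verify the volume and see exactly where Docker mounted it on the host, you can inspect it (the Mountpoint in the output should match the directory mentioned above):

docker volume inspect persistent-data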

Now that our volume has been created, we can deploy our first container, which will use the persistent volume. The command to deploy the container looks like this:

docker run -ti --name=container1 -v persistent-data:/data ubuntu:latest

The above command will create a container named container1 (based on the latest version of Ubuntu) and mount the persistent-data volume at the /data directory within the new container. It will also drop you into the running container’s bash prompt.
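
As a side note, if you prefer the newer, more verbose --mount syntax, an equivalent way to deploy the same container would look something like this:

docker run -ti --name=container1 --mount source=persistent-data,target=/data ubuntu:latest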

Once the container is deployed, and you have access to the bash prompt, create a new file in the /data directory with the command:

echo "Hello, New Stack" >> /data/test.txt

Exit out of the running container with the command:

exit

We’ll now deploy a second container with the command:

docker run -ti --name=container2 -v persistent-data:/data ubuntu:latest

At the container’s bash prompt, type the command:

cat /data/test.txt

You should see the following printed out:

Hello, New Stack

Let’s install the nano editor with the following commands:

apt-get update
apt-get install nano -y

Edit the test.txt file with the command:

nano /data/test.txt

Add another line of text at the bottom of the file.

Save and close the file.
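
As a side note, if you would rather not install an editor inside the container, you could also append a line straight from the shell; a quick sketch with placeholder text:

echo "More shared data" >> /data/test.txt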

Exit the container with:

exit

To once again gain access to the containers, we have to restart them (they stopped when we exited their bash prompts). First, locate the container IDs with the command:

docker ps -a

Start both containers with:

docker start ID

Where ID is the ID of each container.
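
Because we gave the containers names when we created them, you can also start both by name instead of ID:

docker start container1 container2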

Access the container with:

docker exec -it ID /bin/bash

Where ID is the container ID for container1.
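
Again, you could substitute the container name for the ID here if you prefer:

docker exec -it container1 /bin/bash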

Issue the command:

cat /data/test.txt

You should see the original “Hello, New Stack” line followed by the line you added. Repeat the docker exec and cat commands for container2 and you’ll find the same contents in test.txt, because both containers share the same volume.

Exit the running container with the exit command. This time around, both containers will remain running. You can stop and remove them with the commands:

docker stop ID
docker rm ID

Where ID is the container ID for each container.
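
Note that removing the containers does not delete the shared data; the persistent-data volume sticks around until you remove it yourself. If you want to clean it up as well, you could do so with:

docker volume rm persistent-data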

And that’s all there is to sharing data between Docker containers with the help of volumes.

TNS owner Insight Partners is an investor in: Docker.