Containers are a fascinating piece of modern infrastructure. Teams run their production workloads on containers using orchestration platforms such as Kubernetes, OpenShift, Docker Swarm, and AWS ECS. Docker itself has revolutionized the way we deploy and manage applications by providing a lightweight, portable, and efficient containerization solution.
But how do these containers communicate with each other in various situations? In this article, we’ll look at simple communication between Docker containers running on the same host.
Ways of communication
Two containers can talk to each other in one of two ways:
- Communicating through networking: Containers are designed to be isolated. But they can send and receive requests to other applications, using networking.
For example: a web server container might expose port 80 so that it can receive HTTP requests. Or an application container might open a connection to a database container.
- Sharing files on disk: Some applications communicate by reading and writing files. These kinds of applications can communicate by writing their files into a volume, which can also be shared with other containers.
For example: a data processing application might write a file containing customer data to a shared volume, which is then read by another application. Or, two identical containers might even share the same files.
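The web server example above can be tried directly with the official nginx image (a sketch; the container name and host port are arbitrary choices):

```shell
# Run an nginx web server container, publishing container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx

# The web server is now reachable from the host
curl -s http://localhost:8080 | head -n 4

# Clean up
docker rm -f web
```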
Communicating through networking:
Essentially, Docker networking is the communication passage between containers, and between containers and the outside world via the host machine. In other words, if we run different workloads on Docker, there must be a mechanism through which those isolated applications and workloads can interact with each other.
Containers are ideal for applications or processes which expose some sort of network service. The most well-known examples of these kinds of applications are:
- Web servers – e.g. Nginx, Apache
- Backend applications and APIs – e.g. ASP.NET Core, Node.js, Python, Spring Boot, Next.js API routes
- Databases and data stores – e.g. MongoDB, Microsoft SQL Server
There are more examples, but these are probably the most common ones!
So docker provides different types of docker networks and we can leverage them as per our needs.
Networking Types:
Docker comes with three built-in networks on a fresh installation. If you run docker network ls, you will see three networks:
- Bridge
- None
- Host
Let’s take a brief overview of these networks:
- A bridge network is the default network that comes with every Docker installation. It is also known as the docker0 network. Unless specified otherwise, all containers are created on the docker0 network.
- The none network attaches a container to no network at all: the container gets only a loopback interface. It is used when a container must be fully isolated from the outside world and from other containers.
- The host network adds a container to the host’s network stack. As far as the network is concerned, there is no isolation between the host machine and the container.
For instance, if you run a container that runs a web server on port 80 using host networking, the web server is available on port 80 of the host machine.
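Beyond the three built-in networks, you can create user-defined bridge networks, on which containers can reach each other by name. A minimal sketch (the network and container names here are arbitrary):

```shell
# Create a user-defined bridge network; containers attached to it
# can resolve each other by container name
docker network create demo-net

# Start a long-running container on the network
docker run -d --name receiver --network demo-net alpine sleep 300

# A second container on the same network can reach it by name
docker run --rm --network demo-net alpine ping -c 1 receiver

# Clean up
docker rm -f receiver
docker network rm demo-net
```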
What Are Docker Volumes?
Volumes are a mechanism for storing data outside containers. All volumes are managed by Docker and stored in a dedicated directory on your host, usually /var/lib/docker/volumes for Linux systems.
Volumes are mounted to filesystem paths in your containers. When containers write to a path beneath a volume mount point, the changes are applied to the volume instead of the container’s writable layer.
The written data will still be available if the container stops – as the volume is stored separately on your host, it can be remounted to another container or accessed directly using manual tools.
Volumes work with both Linux and Windows containers. Several different drivers are available to store volume data in different services.
Local storage on your Docker host is the default, but NFS volumes, CIFS/Samba shares, and device-level block storage adapters are available as alternatives. Third-party plugins can add extra storage options too.
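For example, a volume using the default local driver and an NFS-backed volume can be created like this (a sketch; the NFS server address and export path are placeholders):

```shell
# Create a volume with the default local driver
docker volume create app-data
docker volume inspect app-data

# Create an NFS-backed volume using the local driver's mount options
# (192.168.1.50 and /exports/app-data are placeholder values)
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw \
  --opt device=:/exports/app-data \
  nfs-data
```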
When to Use Docker Volumes?
Volumes are designed to support the deployment of stateful Docker containers. You’ll need to use a volume when a container requires persistent storage to permanently save new and modified files.
Typical volume use cases include the following:
- Database storage – You should mount a volume to the storage directories used by databases such as MySQL, Postgres, and Mongo. This will ensure your data persists after the container stops.
- Application data – Data generated by your application, such as file uploads, documents, and profile photos, should be stored in a volume.
- Essential caches – Consider using a volume to persist the contents of any caches, which would take significant time to rebuild.
- Convenient data backups – Docker’s centralized volume storage makes it easy to backup container data by mirroring /var/lib/docker/volumes to another location. Community tools and Docker Desktop extensions can automate the process, providing a much simpler experience than manually copying individual bind-mounted directories.
- Share data between containers – Docker volumes can be mounted to multiple containers simultaneously. Containers have real-time access to the changes made by their neighbors.
- Write to remote filesystems – You’ll need to use a volume when you want containers to write to remote filesystems and network shares. This can facilitate simpler workflows for applications that interact with your LAN resources.
You don’t need to mount a volume to containers that don’t have writable filesystem paths or that only store disposable content. As a general rule, create a volume when your container’s writing data which will cause disruption if it’s lost.
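Sharing data between containers through a volume can be sketched in a few commands (the names are arbitrary; alpine is used only for brevity):

```shell
# Create a named volume
docker volume create shared-data

# One container writes a file into the volume...
docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/msg.txt'

# ...and another container reads it back
docker run --rm -v shared-data:/data alpine cat /data/msg.txt

# Clean up
docker volume rm shared-data
```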
Now we’ll try to do some practical things we discussed above. You can clone this repo, all the needed files are there. https://github.com/basharovi/Product.API
Here is a docker-compose file named ‘docker-compose.yml’. This Docker Compose file creates a development environment where an ASP.NET Core API (product_api) communicates with a postgres instance (docker_postgres) using Docker containers. It ensures that the postgres container starts before the API container to satisfy dependencies.
As our Postgres instance will run inside a Docker container, make sure you have updated the connection string in appsettings.json: the server name will be your Postgres container name, and the database, user, and password should match the environment variables you provide in postgres.Dockerfile.
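The connection string might look like the following (a sketch; the key name, database, user, and password are placeholders and must match whatever your postgres.Dockerfile sets):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=docker_postgres;Port=5432;Database=ProductDb;User Id=postgres;Password=postgres"
  }
}
```

Note that the server name is the Postgres container name, and the port is the container-internal 5432 – containers on the same Docker network talk to each other directly, not through the host-mapped port.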
version: '3.4'
name: product_services

services:
  product_api:
    container_name: product_api
    build:
      context: .
      dockerfile: Product.API/Dockerfile
    ports:
      - "5001:8080"
    depends_on:
      - docker_postgres

  docker_postgres:
    container_name: docker_postgres
    build:
      context: .
      dockerfile: Product.API/postgres.Dockerfile
    ports:
      - "5432:5432"
Then run docker-compose up --build, which builds the images and starts the containers defined in the compose file.
After fetching all the necessary images, the Product services are running on the bridge network. To check which networks exist on our Docker machine, run docker network ls.
If we want to dig deeper into a particular network, for example the one with ID 89a84c849c34, we can run docker inspect 89a84c849c34 (the network ID).
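docker inspect prints a large JSON document; the --format flag accepts a Go template to extract just the fields of interest. For example, inspecting the default bridge network:

```shell
# Full JSON description of a network
docker network inspect bridge

# --format with a Go template extracts specific fields
docker network inspect bridge --format '{{.Name}}: {{.Driver}}'

# List the containers currently attached to the network
docker network inspect bridge --format '{{json .Containers}}'
```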