Greetings everyone! In this blog post, we will be diving into the world of Docker. As a leading platform for containerization, Docker allows developers to package and distribute their applications easily and consistently across different environments.
From understanding the core concepts such as images and containers, to more advanced topics like networking and data storage, this guide will provide a comprehensive introduction to Docker and its capabilities. So, let’s get started and discover the power of Docker together!
What is Docker?
To put it simply, Docker is a platform that helps developers build and run software. Docker works by wrapping an application and all of its dependencies into a single package, called a Docker image. A running instance of such an image is called a container.
Think of a container like a box, and the application and its dependencies are the items inside the box. This box can be moved around and opened on different computers, and it will always run the same way, regardless of the environment.
This matters because, before Docker, it was difficult for developers to run their programs on different computers. They had to worry about differing software versions, dependencies, frameworks, and even the operating system on each machine. With Docker, developers can be confident that their program will run the same way on any computer Docker is installed on, largely eliminating these compatibility issues!
Since Docker's release in 2013, many major companies have made their software available as Docker images. This makes it easy to quickly run popular software, such as Jenkins, WordPress, or MS SQL Server, in a containerized environment. Distributing software this way has become the norm!
On a Debian-based system with Docker's apt repository already configured, installing Docker is as easy as running a single command:
sudo apt-get install docker-ce docker-ce-cli containerd.io
Let me explain:
- docker-ce is the Community Edition of Docker, supported by Docker Inc.; it contains the Docker Engine.
- docker-ce-cli is the command-line interface for Docker CE, which lets you interact with the Docker daemon.
- containerd is a daemon responsible for the low-level details of the container lifecycle; I won't go into detail about it here.
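Once the packages are installed, a quick sanity check confirms that the client, the daemon, and the container runtime all work end to end (this assumes the daemon was started by the package install, as is usual on Debian-based systems):

```shell
# Show the installed client version
docker --version

# Ask the daemon for details about the installation (requires the daemon to be running)
sudo docker info

# Run a small throwaway test container; --rm removes it when it exits
sudo docker run --rm hello-world
```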
The Docker Engine and the Docker daemon are both components of the Docker platform. The Docker Engine is the overall client-server application that runs containers, while the Docker daemon (dockerd) is the background process at its heart that actually creates, starts, stops, and manages them. Think of the Engine as the "brain" of the platform and the daemon as the "worker" that does the heavy lifting. Clients such as the Docker CLI talk to the daemon over its API, and together these components make it possible to run applications in a portable and efficient way.
Docker CLI allows users to interact with the Docker daemon using commands in a terminal. With the Docker CLI, users can create, run, and manage Docker containers, images, and networks.
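In practice, each CLI command is just a request sent to the daemon. A few everyday examples:

```shell
docker ps            # list running containers
docker ps -a         # list all containers, including stopped ones
docker images        # list images stored locally
docker network ls    # list networks known to the daemon
```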
Running your first container
Once you have Docker running on your computer, pulling a Docker image and running your first container is a piece of cake. For example:
docker run --name my-wordpress -p 8080:80 -d wordpress
This command automatically pulls the official WordPress image from Docker Hub and runs it in a container named “my-wordpress”. It also maps port 8080 on the host machine to port 80 in the container. The -d flag runs the container in detached mode, so it keeps running in the background.
And Voilà, your WordPress is accessible on port 8080.
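To check that the container is actually up, you can ask the daemon for its status and logs, using the container name from the command above:

```shell
docker ps --filter name=my-wordpress   # the container should show as "Up"
docker logs my-wordpress               # WordPress/Apache startup output
curl -I http://localhost:8080          # the mapped port should answer

# Clean up when you are done experimenting
docker stop my-wordpress && docker rm my-wordpress
```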
You can expose and secure your instance using a reverse proxy; see my post about Nginx Proxy Manager.
Creating your own Docker Image
Creating your own Docker image is not difficult. All you need to do is create a file called a Dockerfile and run the docker build command. The Dockerfile contains instructions on how to build the image, such as which base image to use (the foundation for your new image) and which dependencies to install. And don't worry: many modern development environments have built-in tools that generate a Dockerfile for you with a single click.
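As a sketch, here is what a Dockerfile might look like for a hypothetical Node.js application (the base image, file names, and port are illustrative, not from a real project):

```dockerfile
# Start from an official base image on Docker Hub
FROM node:18-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm install --omit=dev

# Copy the application source code
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 3000
CMD ["node", "server.js"]
```

You would then build and run it with docker build -t my-app . followed by docker run -p 3000:3000 my-app.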
Networking and data storage
When it comes to networking and data storage within Docker, it's important to understand that each container runs as its own isolated process on the host machine, with its own filesystem and network stack. By default, a container cannot access files on the host, and any data it writes disappears when the container is removed. However, Docker provides ways to configure networking and storage so that containers can communicate and share data with each other and with the host machine.
One important component to understand is the docker0 bridge. This is a virtual network bridge that is created by Docker when it is installed on a machine. It allows containers to communicate with each other, as well as with the host machine and other networks. Each container that is running on the host machine is connected to the docker0 bridge, and it is assigned its own IP address within the bridge’s subnet. This allows the containers to communicate with each other as if they were on the same network.
Additionally, you can use the docker network command to create user-defined virtual networks and connect containers to them.
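A minimal sketch (the network and container names here are arbitrary): containers attached to the same user-defined network can reach each other by container name, thanks to Docker's built-in DNS.

```shell
# Create a user-defined bridge network
docker network create my-net

# Start a container attached to that network
docker run -d --name web --network my-net nginx

# From a second container on the same network, "web" resolves by name
docker run --rm --network my-net alpine ping -c 1 web
```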
For data storage, you can use volumes to persist data outside the container’s filesystem. This is useful for storing data that needs to be retained even if the container is deleted or recreated. Volumes can also be used to share data between multiple containers.
It's also possible to configure data storage using bind mounts, which let you mount a directory on the host machine as a directory inside the container. This makes it easy to access host data from inside the container, and vice versa.
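Both approaches can be sketched as follows (volume names, paths, and images are illustrative):

```shell
# Named volume: managed by Docker, survives container removal
docker volume create db-data
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v db-data:/var/lib/mysql \
  mysql:8

# Bind mount: a host directory mapped directly into the container
docker run -d --name web \
  -v "$(pwd)/site":/usr/share/nginx/html \
  nginx
```

A rule of thumb: prefer volumes for data Docker should manage for you (databases, caches), and bind mounts when you want to edit files on the host and see the changes inside the container immediately.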
This concludes our post on Docker.
With this information, you should have a solid foundation to start working with Docker and utilizing its many benefits in your development and operations workflow. Thanks for reading and I hope you found this information helpful.