Docker has been in the developers’ world for years, but for many people it is still something remote and enigmatic. In this series of posts, I would like to cover both theory and practice: you will find out how Docker can help you in your daily work, how to prepare a Docker-based environment, and how to use it.

Why you should use Docker

In short: to provide a uniform, automated runtime environment for our applications, one that anyone who joins the project can spin up. Nothing more and nothing less 😁.

Let’s go back a few years, to when I worked in a team of several developers. The team had multiple internal applications in its portfolio, and their stable, trouble-free operation was not the highest priority: if something did not work, users told us about it and then we fixed it. Each developer worked on their own computer and installed all the tools necessary for the application by themselves. This led to a situation where one developer had PHP 7.0 and another 7.1, one had MySQL 5.6 and another MySQL 5.7 🤷‍♂️. Someone working on a feature would install a PHP extension that the other developers did not have. As if that wasn’t enough, sometimes not everyone was even aware of which versions were running on the production instance and whether that runtime had all the required extensions and libraries. So you could write code that worked in development but didn’t work in production. “It works on my workstation…”

In retrospect, I am shocked that it worked at all, because differences in language and tool versions can easily lead to serious problems. And sometimes they did, generating a lot of work in both analysis and repair.

Docker eliminates all these problems. Thanks to Docker, we can easily compose the stack so that it contains everything needed to run the application, and so that everyone working on the project has exactly the same environment. Is there a need to add a low-level system library or a PHP extension? No problem: one person adds the step to the build definition, everyone else synchronises the repository and gets the same runtime.

The unquestionable advantage of working with Docker is that on the first day of work, when you get a new company computer, you can install just two things and start working on the project right away. You just install Docker and Git, clone the repository, and run the Docker stack (as sketched below): done! Of course, we are talking about a simplified scenario here, because more complex applications will require a bit more to run 😉. However, this does not change the fact that Docker allows us to unify runtime environments; moreover, it allows us to prepare a standardised runtime tailored for development, as well as a production runtime optimised for security and performance.
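
To make that first-day scenario concrete, a minimal sketch might look like the following; the repository URL and project name are only placeholders:

    # assuming Docker and Git are already installed
    git clone git@example.com:acme/shop.git
    cd shop
    docker compose up -d    # build and start the whole stack in the background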

Glossary of terms

Before we move on to discussing the actual topics, we need to familiarise ourselves with a few concepts that we will be using. This will make it easier for us to understand the individual elements of the process of building and running containers.

Docker

Docker is a runtime engine for containerised applications. It is installed as a system package that provides the CLI tools and a GUI (Docker Desktop), and it also installs and configures daemon processes that run in the background of our operating system. Think of Docker as a system framework for running applications.
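
A quick way to check that the engine is installed and working is to print the version and start a throwaway container from the official hello-world test image:

    docker --version
    docker run --rm hello-world   # pulls the image if needed, runs it, removes the container afterwards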

Docker Compose

Docker Compose is an extension of the basic tool and is used to define and run complete sets of services that make up our system (the stack): for example, our application, a database, a queuing system and others. Essentially, it lets us simulate a production environment and create a fully operational system for development. I’m not saying that Compose can’t be used outside of your local machine, but that’s another story 😉.
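
Day-to-day work with a Compose stack usually boils down to a handful of commands; the service name app below is just an example:

    docker compose up -d        # build (if needed) and start all services in the background
    docker compose ps           # list the services of the stack and their status
    docker compose logs -f app  # follow the logs of a single service
    docker compose down         # stop and remove the stack's containers and network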

Dockerfiles

The Dockerfile is the runtime definition of our application. It contains step-by-step instructions on what needs to be done to produce a functional runtime, and it is the basis on which images are built.
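
As a small preview of the next post, a minimal Dockerfile for a PHP application could look roughly like this; the base image, the extension and the paths are only example choices:

    # start from an official PHP image
    FROM php:8.2-fpm-alpine
    # add a PHP extension required by the application
    RUN docker-php-ext-install pdo_mysql
    # copy the application code into the image
    WORKDIR /var/www/html
    COPY . .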

compose.yml

A file in YAML format that defines the application stack, i.e. all services, volumes, networks and the dependencies between them. It describes what the services are named, which images they are launched from or how they are built, how they communicate with each other, and how they are visible from the outside (from the perspective of the local computer and operating system).
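
For illustration, a stripped-down compose.yml with an application and a database might look like this; service names, image tags and credentials are purely exemplary:

    services:
      app:
        build: .              # build the image from the Dockerfile in the current directory
        ports:
          - "8080:80"         # make the app reachable on localhost:8080
        depends_on:
          - db
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: secret
        volumes:
          - db-data:/var/lib/mysql   # keep database files between restarts

    volumes:
      db-data: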

Image

An image is the product of executing a Dockerfile: a build. Such an image is created as a result of invoking the docker build command or of running the Compose stack (docker compose up -d) whose services define a build context.

The image usually consists of the operating system, the required libraries, the runtime for the required programming language, extensions for it, and of course the code of the application to be launched.
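
Building and inspecting images from the command line can be as simple as the following; the image name and tag are arbitrary examples:

    docker build -t my-app:1.0 .   # build an image from the Dockerfile in the current directory
    docker image ls                # list the images available locally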

Registry

The Docker image registry is a place where built images are uploaded and from which they can be downloaded by anyone who needs them. There is, of course, the official and freely available Docker Hub, but in fact images can be stored in many other places: GitLab has a built-in registry for each project, and there are also dedicated applications such as Harbor.
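
Publishing an image to a registry usually follows this pattern; the registry address and image name are placeholders:

    docker login registry.example.com
    docker tag my-app:1.0 registry.example.com/acme/my-app:1.0
    docker push registry.example.com/acme/my-app:1.0
    # and on another machine:
    docker pull registry.example.com/acme/my-app:1.0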

Container

A container is an instance of an image. From a programmer’s perspective, we can think of it as a class (image) → object (container) relationship. Each container has a root process (PID 1), which is the basis of its operation.
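
Sticking with the class → object analogy, many independent containers can be started from the same image; the names below are made up:

    docker run -d --name shop-1 my-app:1.0   # first "object" created from the my-app image
    docker run -d --name shop-2 my-app:1.0   # a second, completely independent container
    docker ps                                # list the running containers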

Volume

Volumes are used to map files from the local file system into a running container. Thanks to this, we can overwrite a fragment of the file tree inside the container (e.g. files that were created during the build and are part of the image the container was launched from).

This way, for local development we can map our local project onto the path from which the application is run inside the container and verify the changes we make in real time.

Volumes are also used to persist data created and/or modified inside a running container, e.g. MySQL’s schema and data. With volumes, we retain this kind of data even when we stop a running stack: when we start it again, it will be in exactly the same state.
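
Both use cases typically show up in the volumes section of a service in compose.yml; the paths below are just examples (a bind mount for the project code and a named volume for the database data):

    services:
      app:
        build: .
        volumes:
          - ./:/var/www/html         # map the local project into the container (live code changes)
      db:
        image: mysql:8.0
        volumes:
          - db-data:/var/lib/mysql   # named volume: the data survives stack restarts

    volumes:
      db-data: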

Network

Networking in Docker is very important because it allows services to be separated. We can run different services side by side that know nothing about each other and have no network access to each other.

By default, Compose provides a network that all the services within the stack join. However, nothing stands in the way of opening up communication between services in separate stacks.
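
One way to do that is to attach a service to an additional, externally created network; the sketch below is only an illustration, and the network name is made up:

    services:
      app:
        build: .
        networks:
          - default          # the stack's own network, created automatically by Compose
          - shared-backend   # an extra network shared with another stack

    networks:
      shared-backend:
        external: true       # created beforehand, e.g. with: docker network create shared-backend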

Summary

You have just learned the basics of Docker 😎! If theory is not your thing, I invite you to the next entries in the series: in the next one we will go through the Dockerfile using real examples.