Docker from Zero

Who has never heard the phrase: “It works on my machine…”? This is (or was) a common problem for developers: each machine has a different configuration, and when running the application, any of those variables could cause it to fail. Docker emerged as a way to standardize the development environment so that, with just a few commands, we can run the entire application along with its dependencies and connections.

It may seem complicated at first, but once we understand a few concepts and commands, we can quickly get the best out of Docker and make our lives much easier.

Docker containers run on Linux (on Windows and macOS, Docker runs a lightweight Linux virtual machine under the hood), so it’s recommended to know a little about Linux and terminal commands.

But before learning how to create or execute commands, let’s understand some initial concepts:

Building a house

To understand how Docker works on our machine and its use, I like to use the analogy of when we’re building a house. In this case, let’s think that we have a piece of land that is a condominium of houses, and we are going to build several houses there. The houses will have a standard for the main structure, and then we can customize each one with furniture, etc.

Firstly we need to think about the size of our house, the walls, doors and windows. Where the hydraulic and electrical parts will be located.
After this is defined, we pass it on to the architect or engineer to build the house blueprint.
With this, we can build one or several houses based on the house blueprint.
And then we can enter each house and put whatever we want inside. Even modify a door or window, work on each house individually. We can also demolish a house and build another in the same location, based on the initial blueprint.
Another issue is the bills (electricity, water). Let’s assume that we always want to have a total of how much we are spending on the condominium. If we demolish a house, we cannot lose that value. So we need to have a place to store the bills outside each house, in addition to having a copy in the house itself.

Now that we have an idea of the hypothetical steps to build our condominium of houses, let’s relate them to running an application with Docker.

First, we define the steps to build an image, that is, the house blueprint. These steps are saved in a Dockerfile, a file that defines everything we need installed on the machine(s) where the application will run.
From this file, we build the image, the blueprint of our house. An image is basically a template for creating containers.
Containers are our houses. They are isolated environments running on our machine, with the settings specified in our image. We can start a container and access it through the terminal. There, we can modify it as we want: installing new applications or libraries, running applications, creating a database, etc.
Finally, we can remove a container. This erases any files or installations we have made inside it. When we start a new container, we get back the initial specifications defined in our image.
What if we want to keep data from one container to the next? We can use volumes for this. Like the house bills, we need to save the information outside the container to persist it. This way, we create a “bridge” between our machine and the container: changes made inside the mapped folder are reflected on the host. By removing the containers, we keep the data on our machine and can use it again.

Workflow

Once you understand these concepts, it becomes easier to understand a workflow using Docker. The commands below are better described at the end of this post.

Firstly, we need an image to work with. We can write a Dockerfile and build one (docker build), or download one from Docker Hub (docker pull).

After we have the image (docker images), we can run it (docker run) to create a container: an isolated environment with the settings defined in the image.

A container can run once and stop, or keep running on our machine (docker ps). We can access the container (docker exec) and run commands inside it (remembering that any changes will be lost when we remove the container).

After using the container, we can stop it (docker stop) and remove it (docker rm). If we wish, we can also remove the image (docker rmi), if we are no longer going to use it to create new containers.
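The workflow above can be sketched as a terminal session (the nginx image and the container name here are just illustrative examples):

```shell
# Download an image from Docker Hub (or build one from a Dockerfile)
docker pull nginx:latest

# List the images available on our machine
docker images

# Create and start a container, detached, mapping host port 8080 to container port 80
docker run -d --name my-nginx -p 8080:80 nginx:latest

# Check that the container is running
docker ps

# Run a command inside the running container
docker exec my-nginx nginx -v

# Stop and remove the container, then remove the image
docker stop my-nginx
docker rm my-nginx
docker rmi nginx:latest
```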

Creating a Dockerfile
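As a sketch, a minimal Dockerfile for a hypothetical Node.js application could look like this (the base image, folder, and commands are illustrative):

```dockerfile
# Base image to build on
FROM node:18

# Folder inside the container where the following commands run
WORKDIR /app

# Copy our files from the local machine into the image
COPY . .

# Command executed while building the image
RUN npm install

# Command executed when the container starts
ENTRYPOINT ["node", "index.js"]
```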

In a Dockerfile, we define the steps to configure the image with its initial settings. We always start from a base image with the FROM instruction; RUN, for example, executes commands during the build.

Once we have a Dockerfile, we can build it into an image by running docker build -t <IMAGE-NAME>:<VERSION> <DOCKERFILE-DIRECTORY>. This creates an image with the commands and installations specified in the Dockerfile already executed.

Dockerfile main instructions

FROM: The image we will use as a base.

WORKDIR: Defines the folder where commands will be executed inside the container.

RUN: Runs the specified commands during the build.

COPY: Copies files from our machine into the image.

ENTRYPOINT: The command executed when the container starts.

Other instructions can be found in the official Dockerfile reference.

Main Docker commands I use

Below, I have separated some of the main features that I use on a daily basis.

Using Images

docker pull <IMAGE-NAME>:<VERSION>: Pulls the image to our local machine. If no version is passed, latest is used.

docker build <DOCKERFILE-PATH>: Builds an image from the Dockerfile.

-t <IMAGE-NAME>:<VERSION>: Names (tags) the image. It is a good idea to use our Docker Hub username followed by / and an image name: user/project-abc:v1

docker images: Lists the images downloaded to the machine.

docker rmi <IMAGE-ID>: Removes the image.

Listing and removing containers

docker ps: Lists the containers that are running.

-a: Shows all containers, including those that have already stopped.

-q: Lists only the container IDs.

docker stop <CONTAINER-ID>: Stops the container that is running.

docker rm <CONTAINER-ID>: Removes the stopped container.

-f: Forces removal, even if the container is running.

To remove all active and non-active containers:

docker rm -f $(docker ps -aq)

Manipulating containers

docker run <IMAGE> <COMMAND>: Creates and runs a container from the image, optionally with the given command.

-i: Interactive mode, keeping STDIN open so the process keeps running.

-t: Allocates a pseudo-terminal, allowing us to type in the container’s terminal.

-p: Maps a host port to a container port.

-d: Detaches the terminal from the process.

-v: Mounts a volume for data persistence. We pass the local path and the path in the container, separated by ‘:’. Changes to the container folder will persist on the local machine.

--rm: Removes the container after it finishes running.

--name: Defines a name for the container.
Example: docker run -it ubuntu bash runs an Ubuntu image and starts bash, allowing us to execute commands in the terminal inside the container.
Example: docker run -d -p 8080:80 nginx runs nginx and maps port 8080 on our machine to port 80 on the container.

docker exec <CONTAINER-NAME> <COMMAND>: Executes a command in an already running container. (We can use the same flags as docker run, such as -it.)
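For example, to open a shell inside a running container (the container name here is hypothetical):

```shell
# Open an interactive bash session inside the container named "my-nginx"
docker exec -it my-nginx bash
```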

Working with volumes

docker volume ls: Lists volumes.

docker volume create <VOLUME-NAME>: Creates a volume.

docker volume inspect <VOLUME-NAME>: Shows volume details.

docker volume prune: Removes unused volumes and frees up space.

With this, we can mount the volume by passing the --mount flag with the name of the created volume. Example:

docker run -d -p 8080:80 --mount type=volume,source=<VOLUME-NAME>,target=/app nginx

Or with the -v flag:

docker run -d -p 8080:80 -v <VOLUME-NAME>:/app nginx
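To see persistence in action, here is a sketch using a hypothetical volume named my-data: a file written from one container survives its removal and is visible from the next.

```shell
# Create a named volume
docker volume create my-data

# Write a file into the volume from a temporary container (removed on exit)
docker run --rm -v my-data:/data ubuntu bash -c "echo hello > /data/test.txt"

# The first container is gone, but the data persists in the volume
docker run --rm -v my-data:/data ubuntu cat /data/test.txt
```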

Considerations

I tried to summarize the concepts and main commands I use in Docker. It’s worth delving deeper and practicing by publishing your own images for personal use. Another point is to study Docker Compose, which orchestrates multiple Docker containers to work together.
If you have any questions or comments, write them below and I will be happy to answer them.
