The Ultimate Docker Tutorial Guide

Introduction to Docker

It’s been over a decade since Docker appeared on the software development scene, and today, almost no one can imagine a software project without using Docker at some stage. The emergence of Docker has revolutionized how we develop, deploy, and run our applications.

In this post, we'll explain what Docker is, its advantages, and how to get started with it. We'll also provide practical examples to help you make the most of this powerful technology.

What is Docker?

Docker is an open-source technology that simplifies deploying applications across different environments using containers. The main features of these containers are that they are lightweight, portable, and self-sufficient. This allows developers to focus on their code rather than the multitude of configurations needed to replicate their applications in different environments.

Benefits of Using Docker in Development and Production

Using Docker in any environment offers numerous benefits, including:

  • Consistency: The same image behaves the same way in every environment, from a developer's laptop to production.
  • Portability: Allows containers to run on any machine with Docker installed. 
  • Isolation: Promotes separation of applications, ensuring each container has its own resources isolated from other containers.
  • Ease of Management: Simplifies deploying complex applications with many dependencies using tools like Docker Compose.

Getting Started with Docker

Installing Docker

The installation process can vary depending on the operating system used. Below, we describe the installation for the most common systems.

How to Install Docker on Different Operating Systems

  • Windows / macOS: For both systems, the easiest method is to install Docker Desktop, which can be downloaded from the official website.
  • Linux: On Linux, you can install only Docker Engine (CLI only) or use Docker Desktop. If it's your first time using Docker, we recommend Docker Desktop, as its graphical interface makes managing images and containers easier. As on Windows and macOS, it can be downloaded from the official website.

System Requirements and Initial Setup

  • Windows / macOS: A 64-bit version of the OS is required, and virtualization must be enabled in the BIOS/UEFI.
  • Linux: It's recommended to use a Debian- or Red Hat-based distribution with an up-to-date kernel. Make sure your user belongs to the docker group so you can avoid running every command with sudo, which can be risky; Docker's post-installation guide for Linux explains how to set this up.
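On most distributions, the group setup mentioned above can be sketched as follows (this assumes a standard Docker Engine installation and requires administrator privileges):

```shell
# Create the docker group if the installer did not (it usually already exists)
sudo groupadd -f docker

# Add the current user to the docker group
sudo usermod -aG docker "$USER"

# Apply the new group membership in this shell (or log out and back in)
newgrp docker

# Verify that docker now works without sudo
docker run hello-world
```

Note that membership in the docker group grants root-equivalent privileges on the host, which is why the setup is worth doing deliberately rather than falling back to sudo for every command.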

First Steps Once Docker is Installed

You can verify the installation by running:

docker --version

To test Docker, you can run a test image:

docker run hello-world

Docker Basics

Containers vs. Virtual Machines

Although both containers and virtual machines (VMs) run applications in isolated environments, there is a key difference between them: containers share the host operating system's kernel, which makes them lighter and more efficient. VMs, in contrast, each include a complete operating system of their own, which makes them heavier and less efficient.

Docker Images and Containers

  • Image: A template for creating containers; it defines how a container should be built and executed. It's static and unchanging, similar to a blueprint that defines how a building should be constructed but isn't the building itself. It's defined in a text file called a Dockerfile.
  • Container: When an image is run, it creates a container, which is a running instance of that image. It's dynamic, meaning it can start, stop, restart, and be removed as needed.

To summarize: the image defines what's needed and how it should run, while the container is the practical, executable manifestation of that definition.
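This relationship is easy to see on the command line. Assuming a directory containing a Dockerfile, a typical build-and-run cycle looks like this (the image and container names are illustrative):

```shell
# Build an image from the Dockerfile in the current directory and tag it
docker build -t myapp:1.0 .

# Create a container -- a running instance of the image -- in the background
docker run -d --name myapp-instance myapp:1.0

# The container is dynamic: it can be stopped, restarted, and removed
docker stop myapp-instance
docker start myapp-instance
docker rm -f myapp-instance

# The image is static and still available to create new containers
docker images myapp
```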

Basic Docker commands

  • docker run: Runs a container.
  • docker pull: Downloads an image from Docker Hub.
  • docker build: Builds an image from a Dockerfile.
  • docker ps: Lists running containers.

A complete list of available commands can be found in Docker's official CLI reference.

Working with Dockerfiles

What is a Dockerfile?

A Dockerfile is a text file containing the instructions needed to create a Docker image. It defines how to build the image, including the base, dependencies, files, and configuration.

Basic Structure of a Dockerfile

Here's a basic example of the usual structure of a Dockerfile:

FROM ubuntu:latest

RUN apt-get update && apt-get install -y python3

COPY . /app

WORKDIR /app

CMD ["python3", "app.py"]

How to create a Dockerfile

Creating a Dockerfile is straightforward and follows a series of instructions. Here are the most common ones:

Step-by-Step Guide to Creating a Dockerfile

  1. Define the Base Image: Use the FROM command.
  2. Install Dependencies: Use RUN to execute commands like package installations.
  3. Copy Files: Use COPY or ADD to copy files from the host to the container.
  4. Set the Working Directory: Use WORKDIR to set the working directory.
  5. Startup Command: Use CMD to define the command that will run when the container starts.

Explore all instructions in Docker's official documentation.

Layers in a Dockerfile

Each instruction in a Dockerfile creates a new layer that is cached to speed up rebuilds. If an instruction is modified, all previous layers remain cached, but the modified instruction and those that follow must be rebuilt, increasing build time.

Best Practices and Tips

  • Use lightweight base images. For example, using alpine is better than ubuntu.
  • To reduce the number of layers, group commands together. For example, group installation commands.
  • If you need to exclude some files from the container, use the .dockerignore file.
  • When creating and testing a Dockerfile, put the most change-prone instructions last. This way, when rebuilding, only the last layers are updated, saving time.
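The tips above can be combined in a single Dockerfile. Here is a minimal sketch for a Python application (the app.py and requirements.txt files are hypothetical):

```dockerfile
# Lightweight base image instead of a full distribution
FROM python:3.9-alpine

WORKDIR /app

# Dependencies change rarely: copy and install them first so this layer stays cached
COPY requirements.txt .
# pip's --no-cache-dir avoids storing the download cache in the image
RUN pip install --no-cache-dir -r requirements.txt

# Application code changes often: copy it last so only these layers are rebuilt
COPY . .

CMD ["python", "app.py"]
```

A .dockerignore file next to the Dockerfile (listing entries such as .git and __pycache__) keeps excluded files out of the build context entirely.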

Dockerfile Examples

Using Volumes

Using the VOLUME instruction:

FROM python:3.9-slim

WORKDIR /app

COPY . /app

VOLUME /app/data

CMD ["python", "app.py"]

Environment Variables

Define environment variables in the Dockerfile

Once defined, you can use these variables in other instructions in the Dockerfile:

FROM ubuntu:20.04

ENV APP_ENV=production
ENV APP_PORT=8080

RUN echo "Running in ${APP_ENV} mode on port ${APP_PORT}"

Pass environment variables during build or runtime

First, declare the argument with ARG in the Dockerfile:

FROM ubuntu:20.04

ARG APP_ENV
ARG APP_PORT

ENV APP_ENV=${APP_ENV}
ENV APP_PORT=${APP_PORT}

RUN echo "Running in ${APP_ENV} mode on port ${APP_PORT}"

During build: Using --build-arg:

docker build --build-arg APP_ENV=production --build-arg APP_PORT=8080 -t myapp .

At runtime: Using the -e option with docker run:

docker run -e APP_ENV=production -e APP_PORT=8080 myapp

Introduction to Docker Compose

What is Docker Compose?

Docker Compose is a tool designed to define and manage multi-container applications. Through a YAML file, you can configure all the services that make up your application.

When to Use It?

It's especially useful for applications that depend on multiple services, such as databases and caching systems. This tool simplifies the management and orchestration of these services, allowing them to work together in a coordinated way in containers.

Advantages of Using Docker Compose

  • Simplification: Your entire application can be defined in a single docker-compose.yml file.
  • Reproducibility: Makes it easier to replicate the development environment in production.
  • Scalability: Allows you to easily scale the services of your application.

How to Install Docker Compose

Docker Desktop installs Docker Compose by default, so no additional installation is required.

You can verify the installation with:

docker-compose --version

Basic Structure of a docker-compose.yml file

version: '3'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./app:/var/www/html
  db:
    image: mariadb:10
    ports:
      - "6033:3306"
    environment:
      MYSQL_ROOT_PASSWORD: example

Explanation of Each Section:

  • version: Specifies the version of the Docker Compose file format. Version '3' is one of the most common and supports many modern Docker Compose features.
  • services: Defines the different containers that will be part of the application.
  • web / db: Service names; it's recommended to use meaningful names for easy identification.
  • image: Indicates the Docker image that will be used to create the container.
  • ports: Maps host ports to container ports:
    • “{host_port}:{container_port}”
  • volumes: Maps host directories to container directories:
    • “{host_directory}:{container_directory}”
  • environment: Defines environment variables for the container, such as MYSQL_ROOT_PASSWORD in this case.

Deploying and Managing the Environment with Docker Compose

Once the development environment is defined in Docker Compose, several commands can be used to build, deploy, and manage the containers.

Building and Deploying Containers

To build and deploy the containers specified in the docker-compose.yml file, use the command:

docker-compose up

This command creates and runs the containers according to the configurations, displaying the output in the terminal to monitor their status and detect possible errors.
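In day-to-day use, the environment is usually started in the background. A few commonly used variants (these flags come from the standard Docker Compose CLI):

```shell
# Start in detached mode so the terminal stays free
docker-compose up -d

# Rebuild images before starting, e.g. after editing a Dockerfile
docker-compose up -d --build

# Stop and remove the containers and networks created by up
docker-compose down
```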

Basic Commands for Management and Monitoring

Docker Compose offers several useful commands for managing and monitoring the development environment:

  • docker-compose start: Starts existing containers.
  • docker-compose stop: Stops existing containers.
  • docker-compose restart: Restarts existing containers.
  • docker-compose ps: Shows the status of the containers.
  • docker-compose logs: Displays container logs.

These commands make controlling and monitoring the development environment efficient.

Conclusion About Docker

Docker has transformed software development by allowing applications and their dependencies to be packaged into portable, lightweight containers, improving efficiency and consistency across various environments.

Docker Compose simplifies the management and orchestration of multi-container applications, offering significant benefits like portability, isolation, and scalability.

Adopting Docker and following recommended best practices optimizes the development and deployment of complex applications.
