What is Docker, What is it Used For, and How Does it Work?
Vishal Pallerla

Hey there, techies! Chances are you've heard of Docker. This containerization technology has taken the software development world by storm, and for a good reason. It's being used by many companies to ship their software, and is a vital piece of the software development puzzle. But if you're new to the world of Docker, it can be overwhelming to understand what it is and how it works. Fear not! In this post, I'm going to help you comprehend what Docker is, what it's used for, and how it works so that you too can start using Docker today.
What is Docker? #
When you develop software, you usually have to worry about many dependencies, like libraries, databases, and other software tools that your application needs to run. These dependencies can cause a lot of headaches when you try to move your application from one computer to another, or from development to production environments.
That's where Docker comes in. It allows you to package your application and all its dependencies into a standardized, portable unit that can easily be moved, shared, deployed, and run on any operating system. This way, Docker removes the friction between development, QA, and production environments by providing a consistent way to package and deploy code.
Docker is a platform that simplifies software development, testing, deployment, and maintenance. It was first released in 2013 and has become a widely used tool in software development.

What is the deal with all these ships and container images popping up when I search for Docker? #
Alright, let me spill the beans on why ship and container images pop up when you search for Docker. It's not because the internet is trying to send you on a sea-faring adventure, but rather because these images represent the wondrous world of containerization!
Containerization means taking an application and all its dependencies, bundling them into a single package, and shipping that package anywhere it needs to run. The result is an isolated environment that looks and feels like a virtual machine, but takes far less time to start up and run.
As such, Docker is like a virtual container that holds everything you need to run your application. This container is a lightweight, standalone executable package that contains everything your application needs to run, such as the code, runtime, system tools, libraries, and settings.
Just as a shipping container can be easily moved between trucks, ships, and trains, Docker containers can be shipped anywhere without worrying about compatibility issues or whether there are enough resources available locally. In this way, containers make it easier to move applications between different environments, such as from development to production, ensuring your application will run consistently no matter where it's deployed.
Furthermore, Docker's containerization technology enables you to run multiple applications on the same machine without interfering with each other. Each container has its own isolated environment, so you can avoid conflicts and collisions between different software applications, and make better use of your hardware resources.
So, next time you see those massive vessels and cargo crates, just remember that you're not a sailor – you're a coding ninja, ready to conquer the tech seas!
What is Docker used for? #
Docker is used for a variety of purposes, including:
1. Simplifying Development Environments #
One of the main benefits of Docker is that it simplifies the process of setting up development environments. Developers can create a Docker container that includes all the necessary dependencies for their application, and then share that container with other developers. This makes it easier to ensure everyone is working in the same environment, which can help reduce bugs and other issues.
Developers can also create a development environment that mirrors their production environment. They can use the same language stack, tools, frameworks and libraries as they do in production — no matter their operating system.
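As an illustration, a hypothetical docker-compose.yml for such a development environment might pair an application container with the same database used in production. The service names, images, and ports below are assumptions for the sketch, not taken from this article:

```yaml
# Hypothetical development environment (docker-compose.yml)
services:
  web:
    build: .              # build the app image from the project's Dockerfile
    ports:
      - "4000:80"         # serve the app on localhost:4000
    volumes:
      - .:/app            # mount local source code for live editing
  db:
    image: postgres:15    # pin the same database version as production
    environment:
      POSTGRES_PASSWORD: example
```

Running docker compose up then brings up both services together, so every developer gets an identical stack regardless of their host operating system.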
2. Streamlining Deployment #
Docker also makes it easier to deploy applications to production environments. Developers can create a Docker container that includes their application and all its dependencies, and then deploy that container to any environment that supports Docker. This helps reduce the time and effort required to deploy applications, and makes it easier to scale applications as needed.
Deploying code from source control directly into production is also a great way to enable continuous delivery/deployment. It avoids manual steps like compiling code into binaries or packaging software into virtual machines that don't always work as expected.
3. Creating Isolated Environments #
Docker containers are isolated from each other and the host operating system. This means that developers can run multiple containers on the same server without worrying about conflicts. It also means that if one container crashes or is compromised, it won't affect the other containers or the host operating system.
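A minimal sketch of this isolation, assuming a running Docker daemon (the container names here are illustrative):

```shell
# Start two independent Redis containers from the same image,
# differing only in the host port they are mapped to
docker run -d --name cache-a -p 6379:6379 redis:7
docker run -d --name cache-b -p 6380:6379 redis:7

# Each container has its own filesystem and process space;
# stopping one leaves the other untouched
docker stop cache-a
docker ps --filter name=cache-b
```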
4. Microservices Architecture #
Microservices architectures allow you to break down monolithic applications into smaller pieces that can be independently deployed on-premises or in the cloud. A microservices architecture is also easier to scale, because each service can be scaled independently based on demand (e.g., by running more instances of that service's container).
Docker is well-suited to microservices architectures, enabling developers to create and deploy each service as a separate container that can be easily scaled up or down as needed.
5. Hybrid Cloud Environments #
Docker can be used to create portable applications that can run across different cloud platforms, such as AWS, Azure, and Google Cloud. This can be particularly useful in scenarios where organizations want to take advantage of the scalability and cost-effectiveness of the cloud, but need to ensure their applications are compatible across different platforms.
How Does Docker Work? #
The best way to understand how Docker works is by looking at its components.
Image #
The first is the image, which contains everything needed to run your application — files, system libraries and tools, configuration options, runtime, etc.
Container #
The second is the container, an isolated process created from an image, with its own filesystem (it can even have its own IP address). Container images become containers at runtime; in Docker's case, images become containers when they run on Docker Engine.
Networking #
The third component is networking, which allows containers to communicate with each other and the outside world. Docker provides built-in network drivers, such as bridge and overlay networks, and orchestration tools like Docker Swarm and Kubernetes build on top of them.
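As a sketch, assuming a running Docker daemon, a user-defined bridge network lets containers find each other by name (the image and container names are illustrative):

```shell
# Create a network and attach two containers to it
docker network create app-net
docker run -d --name db --network app-net -e POSTGRES_PASSWORD=example postgres:15
docker run -d --name web --network app-net -p 4000:80 my-flask-app

# Containers on app-net resolve each other by container name,
# so the web app can reach the database at the hostname "db"
```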
Storage #
Finally, there's storage. Volumes let containers persist data beyond a single container's lifecycle, and registries let you store images on remote hosts and share them between machines.
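For example, assuming a running Docker daemon, a named volume keeps data around even after its container is removed, and pushing to a registry shares an image with other hosts (the registry address below is a placeholder):

```shell
# Persist database files in a named volume
docker volume create pgdata
docker run -d --name db -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=example postgres:15

# Share an image with other machines via a registry
docker tag my-flask-app registry.example.com/my-flask-app:1.0
docker push registry.example.com/my-flask-app:1.0
```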
Now that you have a good idea of what Docker is and an overview of its components, it's time to get into the details of how Docker works.
Docker works by creating containers that isolate an application and its dependencies from the host system. The Docker Engine uses a layered file system to build and manage container images, allowing layers to be shared and reused across different environments. Docker uses a client-server architecture to manage containers: the Docker client communicates with the Docker daemon, which is responsible for creating and managing containers.
When developers create a Docker container, they start by writing a Dockerfile, a text file containing a set of instructions for building a Docker image. An image is a lightweight, read-only template that contains the application code and all its dependencies. A typical Dockerfile includes instructions for installing dependencies, copying files into the image, and configuring settings such as network ports and environment variables.
Let's say we want to create a Docker container for a simple Python web application that uses the Flask framework. Here's what the Dockerfile might look like:
# Use an official Python runtime as a parent image
FROM python:3.9-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
Let's break down what each line does:
- FROM specifies the base image we want to use, in this case python:3.9-slim
- WORKDIR sets the working directory for the following instructions to /app
- COPY copies the current directory contents into the container at /app
- RUN runs the pip install command inside the container to install all the dependencies
- EXPOSE makes port 80 available to the host machine
- ENV sets the environment variable NAME to World
- CMD specifies the command to run when the container starts, in this case python app.py
To summarize, we start with a base image that includes Python 3.9, which is the runtime we need for our application. We set the working directory to /app and copy the current directory into the container. We then install the required packages using pip, and expose port 80 to the outside world. Finally, we set an environment variable and specify that app.py should be run when the container launches.
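The Dockerfile above assumes an app.py exists in the project directory. The article doesn't show it, but a minimal version might look like this (the route and greeting are illustrative):

```python
# app.py — a minimal Flask app matching the Dockerfile's CMD, EXPOSE, and ENV
import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Use the NAME environment variable set by "ENV NAME World"
    name = os.environ.get("NAME", "World")
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the server is reachable from outside the container,
    # and to port 80 to match the EXPOSE instruction
    app.run(host="0.0.0.0", port=80)
```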
Once the Dockerfile is complete, the developer can use the Docker command-line interface to build the container. This process creates an image, a read-only template that includes the container's configuration and dependencies.
To build the Docker image, we run the following command:
docker build -t my-flask-app .
This command tells Docker to build an image using the current directory (.), and tag it with the name my-flask-app.
The developer can then use the Docker CLI to run the container, which creates a new container based on the image. The container runs in an isolated environment, with its own file system, network settings, and process space.
We can run the container using the following command:
docker run -p 4000:80 my-flask-app
This command starts the container and maps port 80 inside the container to port 4000 on the host machine. Now, if we visit http://localhost:4000 in a web browser, we should see our Flask application running inside the Docker container.
Docker is an open-source platform with a vast and active community of developers. This means plenty of resources are available to learn more about Docker and its capabilities. One such resource is the Docker Hub, a repository of pre-built images that can be used as the basis for your containers. The Docker Hub includes images for various applications and services, including databases, web servers, and programming languages.
The way it works is that when you want to run your containerized app, you simply download the container image from a registry that stores them (such as Docker Hub) and then run it with the docker run command.
Conclusion #
Docker can make your life as an engineer much easier by providing you with a fast and reliable way to deploy complex applications quickly and consistently. It also allows you to automate repetitive tasks, like setting up development environments or deploying code changes across multiple servers.

Vishal Pallerla
Developer Advocate, DevZero