In my previous article, I discussed how Docker and Kubernetes are making their way into automated deployment cycles. In this article, we will take this a step further and dive deeper into the terminology and concepts associated with Docker. The article will walk you through the concept of Docker, the creation of a basic Docker container, and running that container on your machine. It will cover most of the Docker commands you would need for everyday use.
Docker – Concept
Docker, at a high level, is a way of abstracting your installation and execution processes into a single artifact called a Docker image. A Docker image is portable across operating systems and runs with the help of a Docker client installed on the system. A Docker image can be considered analogous to a Java application that runs on any system, as long as the Java Virtual Machine is installed. A Docker image packages applications into a portable bundle. An image normally uses a base image, on top of which the other applications run.
Functionally, a Docker container acts as an independent virtual server that can share folders and networking with the host machine. In this article, we will walk through the basic process of building a Docker image and understand a few of its commands in the sections below. Let us begin with a Hello World Docker image.
Building a basic container
The creation of a Docker container involves four major steps:
- Install the Docker environment on your system
- Prepare the Dockerfile
- Build the Docker image
- Push the Docker image to a repository
Let us go through this process step by step.
Installing Docker Environment
The Docker installation process varies from environment to environment. The official Docker documentation describes the process in sufficient detail and will guide you to the precise installation steps for your operating system.
Docker is important in applications where new deployments constantly take place. Hence, apart from these operating systems, the Docker community also provides separate installation processes for cloud deployments. Execution guidelines are provided for popular deployment platforms such as Google Cloud and AWS.
Preparing the Dockerfile using Docker Commands
The Dockerfile is the starting point of Docker container creation. You can create a blank Dockerfile from the command line, or using any text editor. You need to make sure, however, that it is not given any file extension. Let us now create our first Docker image.
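On Linux or macOS, for instance, an empty Dockerfile can be created from the shell. The file name Dockerfile, with no extension, is the default that the docker build command looks for:

```shell
# create an empty file named Dockerfile in the current directory
touch Dockerfile

# confirm it exists and has no file extension
ls Dockerfile
```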
Docker containers run on any OS, given that the Docker environment is available. However, each container itself runs on a specific base OS image. This base image definition is the starting point if you wish to get kickstarted with Docker. You can find some of the popular base images on Docker Hub. We will choose the base Ubuntu image from there to get started.
Some of the common instructions that a Dockerfile contains are RUN, COPY, CMD, ENTRYPOINT, EXPOSE, FROM, etc. Each of these instructions performs a specific task on the project you want to dockerize. Let us understand some of these instructions.
Remember to create the Dockerfile in the same folder where your project folder is, but outside the project folder itself. Also, do not give it any file extension. A Dockerfile can contain some of the following instructions:
- FROM – the first instruction in a Dockerfile: FROM <image>. For example, we can take a base image for building a new image with FROM node, which pulls the node Docker image from the registry.
- ENTRYPOINT – configures a container that will run as an executable: ENTRYPOINT ["executable", "param1", "param2"].
- COPY – copies files from the Docker client's current directory: COPY <src> [<src> ...] <dest>. If the destination doesn't exist, it is created along with all missing directories in its path. For example: COPY newproject /home/node/newproject
- RUN – executes a command during the build process of the Docker image: RUN <command>. For example, RUN npm install -g @angular/cli. The command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows.
- EXPOSE – informs Docker that the container listens on the specified network ports at runtime: EXPOSE <port>. For example, EXPOSE 4200.
- CMD – sets the default command and parameters, which can be overwritten from the command line when the container runs: CMD ["<executable>","<param1>","<param2>"]. There can only be one CMD instruction in a Dockerfile. For example, CMD ["/bin/bash","-c","cd /home/node/newproject && ng serve --host 0.0.0.0"] will serve/execute the project.
This is a simple example of a Dockerfile which I have created:
# base image
FROM node
# copy the project into the image
COPY devlogger /home/node/devlogger
RUN npm install -g @angular/cli@latest
RUN cd /home/node/devlogger && npm install
# start app
CMD ["/bin/bash","-c","cd /home/node/devlogger && ng serve --host 0.0.0.0"]
After we save this Dockerfile, we need to build it.
Building the Docker image
In the previous section, we understood the basic structure of a Docker file and the meaning of tokens used in the docker file. In this section, we discuss the Docker commands associated with building and viewing images. In order to build the image, we use the below docker command.
docker build -t myangular_image .
This command builds the Dockerfile in your current directory. Notice the log that is produced as output: every instruction – FROM, RUN, COPY – is executed as a separate step, and each step creates a layer of the Docker image. These layers ensure that when you add more commands to your Dockerfile, Docker does not rebuild the entire image from scratch.
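As a quick sketch of this layer caching (assuming the Dockerfile from the previous section is in the current directory and Docker is installed), rebuilding the same image without changes reuses the cached layers:

```shell
# first build: every instruction runs as a fresh step
docker build -t myangular_image .

# second build with no changes: Docker reports the steps as cached
# (look for "Using cache" / "CACHED" in the log output)
docker build -t myangular_image .
```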
Let us understand each part of the above command in detail:
docker build is the command line utility to build an image that can be run anytime, anywhere. The -t flag indicates the tag for the Docker image. A Docker image tag is important because it also identifies the image's remote location when we try to push it to a remote repository. For instance, the tag datsabk/mydockerimage will point to the account id datsabk on Docker Hub and create an image named mydockerimage in that account. The . at the end of the command passes the path of the Dockerfile and the directory that Docker should consider as the root of the build context.
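Putting these pieces together, a build intended to be pushed later can be tagged with the account-qualified name directly (datsabk/mydockerimage here is the illustrative tag from the paragraph above):

```shell
# build the image and tag it as <dockerhub-account>/<image-name>
docker build -t datsabk/mydockerimage .

# list local images to confirm the tag was applied
docker image ls datsabk/mydockerimage
```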
Executing & Monitoring docker image
Now that we have the image ready, let us run it. For this, we use the command docker run -p 4201:4200 myangular_image. Let us understand each piece of this command. docker run is the command line utility with which you can execute any Docker image that you have built or pulled from a remote repository. The -p flag maps an internal port of the container to an external port on the host system. The argument 4201:4200 will expose port 4200 inside the container and connect it to port 4201 on the local system. The last argument is the name of the image you wish to execute.
Now if you open your browser and hit http://localhost:4201/ it will launch your dockerized application.
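If you prefer the terminal, the same check can be done with curl (this assumes the container from the previous command is still running and serving on the mapped port):

```shell
# fetch the front page of the dockerized Angular app via the mapped host port
curl http://localhost:4201/
```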
Let us say you wish to run a Docker image directly from the remote repository, Docker Hub. For instance, you wish to start a RabbitMQ server using Docker. As long as the image is public, all you need to do is execute the below command:
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
I have added some additional flags here to better explain the command. Notice that without them, when you execute docker run, it stays attached to the terminal with continuous log output, and on exiting the terminal the corresponding container stops running. To prevent this, we need to run the container in detached mode: the -d flag signals Docker to run the image in detached mode.
The next flag, --hostname, identifies the container within the machine; it is an alias you can use to access the web application running inside the container. The last flag in the command is --name. Docker normally assigns a fancy auto-generated name to each container. However, there can be scenarios where you wish to name containers specifically so that you can identify them later. To do so, you can use this flag and pass a name for the container.
Since we have the Docker container up and running now, let us understand how to view, stop and kill containers as needed. These simple commands are explained in the next section.
View, Stop and Kill running containers
Docker creates a unique hash for each container to ensure they can be identified independently. To view the running Docker containers, use the command docker container ls. This command lists all the running Docker containers. Let us understand the output further.
The container ID shown in the output is the unique identifier you can use to stop or kill the container. You can also use the ID to get more details about the container. The image attribute indicates the image that we ran. The command attribute tells you the command that was executed when the container started. For instance, for an Angular application, ng serve could be your command. The created attribute signifies how long ago the container was created, and status indicates the uptime of the corresponding container. The ports attribute shows the ports that you have mapped from inside the container to the host machine; multiple port mappings are possible. The last attribute is the fancy name that Docker auto-generates in case you do not provide a name with the --name flag.
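For illustration, the output of docker container ls for our Angular container looks roughly like this (the ID, timings and name below are made-up values for the sketch):

```shell
docker container ls
# CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                    NAMES
# 443d1b8a2f9c   myangular_image   "/bin/bash -c 'cd /h…"   5 minutes ago   Up 5 minutes   0.0.0.0:4201->4200/tcp   mystifying_nash
```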
Now we have multiple containers up and running. Let us try to stop a container and understand the command involved. To stop a container, you need to use the below command:
docker container stop (unique-identifier)
The unique identifier in the above command can be anything that identifies the container uniquely. For instance, the container ID is a unique identifier that can be used to stop the container.
Additionally, you can also use a prefix of the container ID. For instance, docker container stop 443 can stop the same container, as long as the prefix matches only one container. The name of the container is another unique identifier, which means docker container stop mystifying_nash can also stop the container.
The docker stop command can take from a few seconds to a few minutes to actually stop all of a container's processes. In the event that we want a container to terminate immediately, we use the docker kill utility. To kill a container, you can execute the below command:
docker container kill (container_id)
This sends a kill signal to that particular container, and Docker abruptly terminates it. A kill ends the process completely, and hence it results in the immediate destruction of the container.
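The difference can be sketched as follows, reusing the identifiers from the examples above (the ID prefix 443 and the name mystifying_nash are illustrative):

```shell
# graceful: sends SIGTERM, waits (10 seconds by default, adjustable with -t),
# then falls back to SIGKILL
docker container stop -t 30 443

# immediate: sends SIGKILL straight away
docker container kill mystifying_nash
```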
Check out the output of docker container ls now. You would see that all your containers are now killed or stopped. However, try the command docker container ls -a: the -a flag lists all containers, including the stopped and killed ones.
Publishing an image
A Docker image built locally can be used only within your system. For others to be able to pull and use it, the image needs to be published. In order to do so, you push the Docker image to an online registry like Docker Hub. The location to push an image to is identified by its name. By default, the Docker utility uses hub.docker.com as the destination. To push an image to Docker Hub, follow the below steps.
- Register on https://hub.docker.com/ and verify.
- Log in through the command line interface with docker login --username <username> --password <password>. This command logs in to your Docker Hub account with your username and password.
- Push the image to the hub with docker push username/imagename. This will push the Docker image with that name to Docker Hub.
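End to end, publishing the image built earlier could look like this (yourusername is a placeholder for your Docker Hub account id):

```shell
# log in to Docker Hub (you will be prompted for the password)
docker login --username yourusername

# tag the local image with the account-qualified name
docker tag myangular_image yourusername/myangular_image

# push it to Docker Hub
docker push yourusername/myangular_image
```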
In case you wish to push the image to a specific remote registry like AWS ECR or Azure Container Registry, you need to tag the image with the relevant URL. For example, the below command tags the image for an Azure Container Registry.
docker tag myangular_image abkregistry.azurecr.io/myangular_image
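Once tagged this way, the image is pushed to that registry instead of Docker Hub. Authentication against the registry is assumed here; for Azure this can be done with the Azure CLI (abkregistry is the registry name from the example above):

```shell
# authenticate against the Azure Container Registry (requires the Azure CLI)
az acr login --name abkregistry

# push the registry-qualified tag created above
docker push abkregistry.azurecr.io/myangular_image
```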
Docker commands for container and cache cleanup
In Docker, at some point one needs to clean up resources, as they occupy space on your machine's disk. These cached resources can be images, intermediate layers, as well as stopped or failed containers. If not cleaned regularly, the occupied space can reach gigabytes in no time. In this section, we discuss the various ways of cleaning up this data.
Prune Docker images using Docker command
One can clean up unused images with docker image prune. By default, it removes only dangling images – those that are not tagged and not referenced by any container. Passing the -a flag instead removes all images that are not used by any existing container.
Prune Docker containers using Docker command
When you stop a container, it does not get removed automatically; you need to remove it from disk yourself. docker container prune is the command used to remove all the stopped containers. The command prints the IDs of the removed containers along with the amount of space reclaimed.
Prune complete docker system using Docker command
When you want to remove all unused containers, networks, images (both dangling and unreferenced) and, optionally, volumes from your system, docker system prune is used.
In all the above prune commands, the --force flag can be used to bypass the confirmation prompt. A --filter flag can also be used to pass a filter expression. For example, docker image prune --filter "until=24h" will prune only the images created more than 24 hours ago.
Remove specific Docker images & container
There can be situations where you wish to remove specific Docker images instead of all of them. In this scenario, the docker rmi <image_name> command is used; it removes the image with the name or unique ID that you specify. Similarly, docker rm <container_id> removes the container with that ID.
This article discussed and summarized the major Docker commands you will need to work with Docker. Docker is a powerful concept that is widely used for simplified continuous deployments. It makes packaging applications simpler and eases the work of those who deploy them. For any further help or guidance, feel free to drop a comment below.