In this shot, we’re going to start using Docker.
I’m going to structure this shot in the order in which one would generally use the commands mentioned. Before we start, there are two key concepts you must be aware of: images and containers.
The simplest way to explain these two terms is this one line: “Images are blueprints for containers.”
A Docker image is built from your code and sets up all the dependencies required to run it. Images are the “movable” part of the “isolated movable environments” discussed in my previous shot. These images are then used to start containers.
To put it simply, containers are running instances of images. You can create multiple containers from the same image.
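As a quick illustration of this one-image-to-many-containers relationship, here is a hypothetical command sketch (the image name my-node-app and the container names are placeholders, not something we have built yet):

```shell
# Start two independent containers from the same image.
docker run -d --name app-one my-node-app
docker run -d --name app-two my-node-app

# Both containers appear here, each a separate running instance
# of the same image.
docker ps
```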
Images can either be downloaded from sites like Docker Hub or they can be tailor-made for your particular application using a Dockerfile.
Now that you know what images are, let me show you how you can build your own. For this, you’ll need to write a Dockerfile. Let’s say you have a simple server (in our case, a Node.js app) and you want to dockerize it. In the root folder of your app, add a file and name it Dockerfile (no extension). Now, paste the following code into it:
FROM node
COPY . .
RUN npm install
EXPOSE 80
CMD ["node", "server.js"]
Let’s analyze this Dockerfile; it will help you understand the basic way of writing one. Not all Dockerfiles will look like this, but after understanding this one, you’ll have a solid foundation that you can build on for your particular use case. Let’s start:
You must remember that this Dockerfile contains the information required to build our image. We will use this image to run containers at a later stage. This app requires Node.js and npm in order to run. Theoretically, we could write the instructions for installing Node.js and npm in this Dockerfile ourselves, but basing our image on the official node image is much easier. The official node image takes care of all of our needs and gives us an environment where node and npm are available for us to use. The FROM node instruction does exactly that. The node image that we use as a base comes from Docker Hub, so when you build the image for your application, the first step is to pull this node image from Docker Hub.
After pulling this image, we use the COPY . . instruction to copy our code from the local machine into the file system of the container. The first . refers to the directory where the Dockerfile is present (which is the location of the code we want to copy), and the second . is the destination inside the container’s file system.
One thing to note here: COPY copies everything in the build directory, including folders like node_modules if they exist, unless you exclude them with a .dockerignore file.
After copying the code, we want to install the dependencies, which for Node.js apps is done with the npm install command. We simply use the RUN keyword followed by the actual command to install all the dependencies.
Now, let’s assume that our Node.js app listens on port 80. However, port 80 is inside our Docker container and is not available outside it; you cannot simply go to localhost:80 and access it. The EXPOSE 80 instruction documents that the container listens on port 80; as we’ll see shortly, we still have to publish this port when starting the container in order to reach it from outside.
Finally, after setting all this up, we want to run node server.js to start the server. We use the CMD ["node", "server.js"] syntax to do this.
Since npm install and node server.js are both commands we would run in our terminal, you might be wondering why the way of specifying them in the Dockerfile is so different. This is because RUN executes as part of the image-building step, whereas CMD specifies what we want to run in our container. To put it in even simpler terms, we don’t want to run node server.js every time we build our image from the Dockerfile; we want to run node server.js when we start a container. This is where the two keywords RUN and CMD differ in functionality. It is very important to understand that building an image is not the same as running a container: images are built with one command, and containers are run from the built image with a different command.
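As a side note, once the RUN vs. CMD distinction is clear, you will often see a slightly refined version of this Dockerfile. The sketch below (assuming your app has a package.json) sets an explicit working directory and copies package.json before the rest of the code, so Docker can cache the npm install layer between builds:

```dockerfile
FROM node

# Work in a dedicated directory instead of the image root.
WORKDIR /app

# Copy only the dependency manifest first, so the npm install
# layer is reused as long as package.json doesn't change.
COPY package.json .
RUN npm install

# Now copy the rest of the application code.
COPY . .

EXPOSE 80
CMD ["node", "server.js"]
```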
Now that we have the Dockerfile for our app ready, let’s go about building our image and starting our first container.
Open a terminal in the directory where your Dockerfile is present and run:
docker build .
This is the command that will build your image. Once it is done building, you should see a long output with the following line at the end:
Successfully built abcd1234
In place of abcd1234, you’ll see the actual image ID. This is what you will use to spin up a container from this particular image. Before doing that, let’s go back to ports and exposing them.
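As a side note, image IDs are awkward to type and remember, so Docker also lets you give the image a human-readable name (a tag) at build time with the -t flag; my-node-app below is a hypothetical name:

```shell
# Build the image and tag it with a readable name.
docker build -t my-node-app .

# Later commands can use the name instead of the image ID.
docker run -p 3000:80 my-node-app
```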
Since our Node.js server listens on port 80 inside the container, we need to make this port accessible outside the container in order to use our app. We used the EXPOSE 80 instruction in our Dockerfile, but we also need to specify, when starting the container, which port on our local system to connect to port 80 of the container. To make this a bit clearer, let’s look at the command we run to start our container:
docker run -p 3000:80 abcd1234
Now, if you send requests to localhost:3000 from your browser or some other tool, you should get the response that your Node.js server sends. Had we simply used docker run abcd1234, we would not have been able to interact with our server, because we did not specify which port on our machine maps to port 80 of the container. The -p flag does that for us.
We could have chosen to open any port of our machine to connect with port 80 of the container. For example, if we wanted to use port 4000 for the Node.js app, we would run docker run -p 4000:80 abcd1234.
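To sanity-check such a mapping from a terminal, you could use a tool like curl (abcd1234 is still the placeholder image ID from the build output; -d just runs the container in the background):

```shell
# Map host port 4000 to container port 80, running in the background.
docker run -d -p 4000:80 abcd1234

# Requests to the host port are forwarded to the container.
curl http://localhost:4000
```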
Finally, I would like to end this shot by showing you how you can list your containers and stop running containers.
The command below will show you all the running containers:
docker ps
From the output of docker ps, you can grab the name or ID of a container and run the following to stop that particular container:
docker stop container_name_or_id
If you want to see all containers you ever ran and not just the currently running containers, you can use this command:
docker ps -a
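Putting these last few commands together, a typical session might look like the following hypothetical transcript (app-one is a placeholder container name):

```shell
docker ps             # see what is currently running
docker stop app-one   # stop a container by name or ID
docker ps -a          # the stopped container still appears here
```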
I hope that now you’re a bit more comfortable with actually using Docker. There is a lot to Docker, and this shot by no means covers everything about building images and running containers, but hopefully, it gives you a solid start.