Most of the VPSes I run are OpenVZ nodes, so I never really got around to understanding Docker and how to use it. In this article I share what I have learnt recently, and I hope that those who are not yet on the Docker bandwagon (or should I say the Docker ship?) get a basic understanding of what this technology is all about.
Docker vs. Other Virtualization Options
Whether it is OpenVZ or KVM, traditional virtual machines involve setting up an entire stack, from the operating system right up to the final application. This is often overkill: of the entire set of files and libraries on the VM’s OS, one may need just a fraction for the application in question.
As you can see from the above image, we have lost the bulky “Guest OS” layer. We also see that libraries and binaries are shared across applications, which means that if you plan to run two applications that use common Python libraries, you don’t have to install them twice (as you would with individual virtual machines).
Another area where Docker stands out is portability. As the name “container” suggests, it is easy to move a container from one location to another. In practice, this means working in a development environment, making changes and testing; when you are happy, you commit the container and ship it over to your production server. Even though this involves a number of steps, it is simpler than archiving multiple directories on the development machine, SCPing/rsyncing them to the production server, and unarchiving.
The other aspect of portability is that the hardware specs and capabilities of the development system and the production server may differ considerably, but as long as both run the Docker platform, your code will work with few, if any, changes.
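The commit-and-ship workflow described above can be sketched with a handful of Docker commands. This is only an illustrative sketch: the container name (`webapp-dev`), the image tag (`myapp:1.0`) and the production host are hypothetical, and the script skips itself gracefully when Docker or the example container is not present.

```shell
# Sketch of the commit-and-ship workflow; all names are hypothetical examples.
command -v docker >/dev/null 2>&1 || { echo "docker not installed; skipping"; exit 0; }
docker inspect webapp-dev >/dev/null 2>&1 || { echo "example container not present; skipping"; exit 0; }

# On the development machine: freeze the container's current state as an image
docker commit webapp-dev myapp:1.0

# Export the image to a tarball and copy it to the production server
docker save -o myapp-1.0.tar myapp:1.0
scp myapp-1.0.tar user@prod.example.com:/tmp/

# On the production server: load the image and run it
docker load -i /tmp/myapp-1.0.tar
docker run -d --name webapp myapp:1.0
```

Note that `docker save`/`docker load` move the image as a tarball; pushing to a registry is the more common route once you are past experimentation.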
History of Docker
Containers have been around for much longer than Docker. There have been projects such as OpenVZ and LXC (Linux Containers) that offer secure compartmentalization. Indeed, earlier versions of Docker used LXC as the default execution environment. With v0.9, Docker introduced its own libcontainer library, written in Go. libcontainer allowed Docker to standardize container creation, as it no longer had to depend on LXC, whose implementation may vary across Linux flavors.
Docker was originally created with Linux in mind, as the Linux kernel offered various resource isolation features such as cgroups and union-capable file systems (such as OverlayFS).
Fun fact: Did you know that the whale in Docker’s logo is called “Moby Dock”?
OpenVZ and Docker
Until OpenVZ 7, OpenVZ did not support Docker. This is because of the way OpenVZ virtualization works: all VMs on the server share the same kernel, and OpenVZ 6 used older kernels (version 2.6.32). If uname -r on your Linux VPS shows a value of 2.6.32-042stab105.4 or greater, you should be able to run Docker. OpenVZ 7 runs on a 3.10 kernel, which supports Docker out of the box.
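The kernel check above can be scripted. A minimal sketch, using sort -V to compare version strings against the 2.6.32-042stab105.4 threshold mentioned in the text (the function name is my own):

```shell
# Check whether this kernel is new enough for Docker on an OpenVZ 6 VPS.
# kernel_at_least VERSION MINIMUM -> success if VERSION >= MINIMUM.
kernel_at_least() {
    # sort -V orders version strings; if the minimum sorts first (or they
    # are equal), the current version is at least the minimum.
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

current="$(uname -r)"
if kernel_at_least "$current" "2.6.32-042stab105.4"; then
    echo "Kernel $current should be able to run Docker"
else
    echo "Kernel $current is too old for Docker"
fi
```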
KVM VPSes, on the other hand, allow each VPS to install or upgrade its own kernel, so even if the host server runs an older kernel, you can upgrade and start using Docker.
Getting Started
It feels a bit strange to put the “Getting Started” section at the end of this article, but in some ways it marks the beginning of your journey into the Docker world. Before we install the Docker platform, let us talk about two terms that are often used interchangeably but shouldn’t be – Image and Container.
An Image is the base version and, in many ways, the blueprint for your development. An image contains the essential file system and the execution parameters used when it runs. You can build on top of an image and save the new version as another image. To optimize space, the new image contains only the delta on top of its parent.
Containers, on the other hand, are running instances of an image. You start a container when you issue the run command. You can stop the container, and the changes made inside it (such as new files added) will remain in the container.
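The image/container distinction can be seen in a short session. A hedged sketch: it assumes the small alpine image is available locally (or can be pulled from Docker Hub), and the container and image names (demo, alpine-with-greeting) are examples of my own; the script skips itself when Docker is not available.

```shell
# Images vs. containers in practice; skips when Docker is unavailable.
command -v docker >/dev/null 2>&1 || { echo "docker not installed; skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running; skipping"; exit 0; }

# Run a container from the alpine image and create a file inside it
docker run --name demo alpine sh -c 'echo hello > /greeting.txt'

# The container has exited, but it (and the new file) still exists
docker ps -a --filter name=demo

# Commit the container's changes as a new image; only the delta is stored
docker commit demo alpine-with-greeting

# A fresh container from the new image sees the committed file
docker run --rm alpine-with-greeting cat /greeting.txt

# Clean up the example container and image
docker rm demo
docker rmi alpine-with-greeting
```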
Installation & Hello World
Docker CE is available through repositories for the various Linux flavors. Most of the steps below revolve around adding and enabling the repository before running a simple docker-ce install command.
On CentOS, version 7 is required and the centos-extras repository must be enabled. Begin by installing the required packages
# yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2
Add the Docker Repo
# yum-config-manager \
      --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE through
# yum install docker-ce
Once installed, start Docker
# systemctl start docker
You can execute your first container as
# docker run hello-world
On Ubuntu, Docker is supported on 64-bit versions from 14.04 onward. Begin by updating the apt package index
# apt-get update
Install additional packages
# apt-get install \
      apt-transport-https \
      ca-certificates \
      curl \
      software-properties-common
Add Docker’s GPG key
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add Docker CE’s stable repo
# add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) \
      stable"
Update the apt package repository again and begin installation
# apt-get update
# apt-get install docker-ce
After installation you should be able to run your first docker image
# docker run hello-world
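After hello-world runs, a few commands let you poke around and confirm everything is in place. A small sketch (it skips itself when Docker or its daemon is not available):

```shell
# Inspect the state Docker is left in after the hello-world run.
command -v docker >/dev/null 2>&1 || { echo "docker not installed; skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running; skipping"; exit 0; }

docker version   # client and daemon versions
docker images    # the hello-world image is now cached locally
docker ps -a     # the exited hello-world container is still listed
```

Note that the exited container lingers until you remove it with docker rm; docker run --rm is a handy flag for throwaway containers.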
Docker looks like it is here to stay, still going strong after the initial announcements and media coverage. Many applications are now available as Docker images, which speeds up the entire installation process. As you get more familiar with Docker, you will come across terms like the Docker Hub (a public repository of images) and more advanced concepts like container orchestration. Share your Docker journey in the comments below – I am interested to hear about it.