Tips for using the Docker container platform.
- Dockerfile: A specification (plan) for building a Docker image.
- Image: A template for Docker containers that serves a specific purpose. Essentially, the image provides the custom file system structure required by the application.
- Container: An instance (either running or stopped) of an image used to run a process/application. Containers are isolated from other processes on the host machine using Linux kernel namespaces and cgroups, which have been part of Linux for a long time.
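Here is a minimal sketch tying the three together; the directory name docker-demo and the image tag demo-image are just illustrative:
# Hypothetical example: a one-instruction Dockerfile, built into an image, then run as a container
mkdir -p ~/docker-demo && cd ~/docker-demo
cat > Dockerfile <<'EOF'
FROM ubuntu
CMD ["echo", "Hello from my image"]
EOF
docker build -t demo-image .   # Dockerfile -> image
docker run --rm demo-image     # image -> container (removed automatically on exit)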
| Command | Action |
|---|---|
| `docker run -it ubuntu bash` | Run (start) the Docker image named `ubuntu`, downloading it if necessary, and execute the `bash` application in the container in interactive (`-it`) mode. |
| `docker ps -a` | List all (`-a`) of the containers, running or not. |
| `docker start --attach container_name` | Launch ("reuse") the existing container named `container_name` and show its output (`--attach`). |
| `docker stop container_name` | Stop the running container named `container_name`. |
| `docker rm -f container_name` | Remove/delete the container named `container_name`, forcing (`-f`) deletion if the container is still running. |
| `docker image ls` | List the Docker images downloaded to the system. |
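As a quick illustration of how these commands fit together (the container name hello-demo is just an example, and the small hello-world image is used so the container exits immediately):
docker run --name hello-demo hello-world   # run once; the container prints a message and exits
docker ps -a                               # the exited container "hello-demo" is still listed
docker start --attach hello-demo           # reuse the same container and show its output again
docker rm hello-demo                       # remove the container when no longer needed
docker image ls                            # the hello-world image remains cached locally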
In this example, our HTML, CSS, etc. files are in the `/var/local/html` directory.
docker run -v /var/local/html:/usr/share/nginx/html:ro -p 8080:80 -d nginx
Here's what the various parts of the command mean:
- `-v /var/local/html:/usr/share/nginx/html:ro`: Maps the local `/var/local/html` directory with our web page resources to `/usr/share/nginx/html` in the container. Specifying `ro` tells Docker to mount it in read-only mode, meaning that the container can't/won't make any changes.
- `-p 8080:80`: Maps network service port 80 in the container to port 8080 on the host system (the system running the Docker instance). This means that you would access the web site at port 8080 from the host (e.g., http://127.0.0.1:8080/).
- `-d`: Detaches the container from the command line session. In other words, the container continues running in the background.
- `nginx`: The name of the Docker image to use for the container.
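For example, assuming the paths above, you can create a sample page and confirm that the container serves it (the container name web is just illustrative):
sudo mkdir -p /var/local/html
echo '<h1>Hello from nginx in Docker</h1>' | sudo tee /var/local/html/index.html
docker run -v /var/local/html:/usr/share/nginx/html:ro -p 8080:80 -d --name web nginx
curl http://127.0.0.1:8080/   # should return the HTML page created above
docker rm -f web              # stop and remove the container when finished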
To install Docker Engine from the official Docker repository, first remove any previously installed Docker packages:
sudo apt-get purge -y docker-engine docker docker.io containerd runc
sudo apt-get autoremove -y --purge docker-engine docker docker.io containerd runc
Next, install the prerequisites, add Docker's package signing key and repository, and confirm that the `docker-ce` package is available from it:
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-cache policy docker-ce
You should see output similar to the following. The most important aspect is that it references the https://download.docker.com/
repository.
docker-ce:
Installed: (none)
Candidate: 5:20.10.6~3-0~ubuntu-focal
Version table:
*** 5:20.10.6~3-0~ubuntu-focal 500
500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages
Install Docker and check the status of the service:
sudo apt-get install -y docker-ce
sudo systemctl status docker
You should see output similar to the following. The most important aspect is that it shows that the service is active (running).
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-05-06 14:21:00 CDT; 12s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 746864 (dockerd)
Tasks: 11
Memory: 68.2M
CGroup: /system.slice/docker.service
└─746864 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
sudo usermod -aG docker ${USER}
You will have to log out (not just close your shell/terminal window) to get permissions for the `docker` group. Of course, you can proceed without these permissions, but you'll need to enter your account password whenever you run the `docker` (or related) command.
As an alternative, you can open a sub-shell in which your user is already a member of the `docker` group with this command:
exec su -l ${USER}
After logging in, run `id -nG` to confirm that you are a member of the `docker` group.
docker run hello-world
You should see Docker pull down (or perhaps update, if you had Docker previously installed) the "Hello, World" Docker image and launch it. You'll see some output including the message `Hello from Docker!`, which confirms successful installation and configuration.
The Docker Compose tool allows you to build Docker applications made up of multiple containers and services from a single YAML configuration file, `docker-compose.yml`. Many packaged applications that use Docker require/expect Docker Compose to be installed. To install it, run:
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
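To confirm that Docker Compose is installed and on your PATH, check its version (the exact version string will vary):
docker-compose --version   # output similar to: docker-compose version 1.29.1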
To completely remove Docker from the system, run the following commands. NOTE: This will delete all Docker images, containers, volumes, and user-created configurations on your system.
dpkg -l | grep -i docker
sudo systemctl stop docker
sudo apt-get purge -y docker-engine docker docker.io docker-ce docker-ce-cli
sudo apt-get autoremove -y --purge docker-engine docker docker.io docker-ce
sudo rm -rf /usr/local/bin/docker-machine /etc/bash_completion.d/docker-machine*
sudo rm -rf /var/lib/docker /etc/docker
sudo rm /etc/apparmor.d/docker
sudo groupdel docker
sudo rm -rf /var/run/docker.sock
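After these steps, re-running the package query from the beginning of this procedure should return no Docker packages:
dpkg -l | grep -i docker   # no output means the packages were removed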
Docker recently changed its licensing model, and many developers must now use the new subscription-based model to use the Docker Desktop tool. However, developers frequently only use the command-line tools in their day-to-day work, so Docker Desktop is overkill for their needs anyway.
This article explains how to run Docker on Windows in a very simple command-line-only configuration. And it works on even relatively old Windows 10 systems, including those with Windows Subsystem for Linux (WSL) 1. Essentially, we will be running the Docker daemon (or "service", if you prefer) in a Virtualbox or Hyper-V virtual machine (VM) and accessing it from our Windows 10 host machine in WSL or PowerShell (or Windows Command Prompt).
To use this configuration, we will use the following environment.
- Windows 10
- WSL 1 or WSL 2 with Ubuntu 20.04
- Virtualbox 6.2 with Ubuntu 20.04 guest or Hyper-V with Ubuntu 20.04 guest

As noted, we can use either Virtualbox or Hyper-V as the virtualization platform. This allows Windows 10 Home users to follow this process, even though they do not have Hyper-V support on their platform.
Install Ubuntu 20.04 (or 18.04) in WSL according to the standard installation process. Here is a brief outline of the process.
- To install WSL, open PowerShell (or Windows Command Prompt) and run
wsl.exe --install
- By default, the WSL installation will install Ubuntu. You can also check for other available distributions and versions:
wsl.exe --list --online
- Then, you can install one of the available distributions from this list:
wsl.exe --install -d <distroname>
where `<distroname>` is the name from the earlier list, such as `Ubuntu-20.04`.
As explained earlier, you can use either the Virtualbox or Hyper-V virtualization platform, depending on your preference and what is supported in your environment. Simply follow the standard installation process for the selected tool. (Note that you cannot use both Virtualbox and Hyper-V simultaneously due to the Hyper-V architecture.)
After installing Virtualbox or Hyper-V, you will need to install Ubuntu (or another Debian-based) Linux as a guest operating system (OS) on the virtualization platform. You can install a standard GUI version or a very minimal command-line-only version. This guest OS will only run the Docker daemon (or "service") process, so the OS GUI is entirely optional.
Follow the standard installation process for either Virtualbox or Hyper-V for installing guest OSes. Make sure to allocate at least 2GB of RAM to the guest.
Once you've successfully installed Ubuntu as a guest OS in Virtualbox or Hyper-V, in that Ubuntu instance (and not the WSL instance yet!), install the Docker daemon application. Again, we will be running the Docker daemon in the guest virtual machine (VM) and accessing it from our Windows 10 host machine.
Update the Ubuntu packages.
sudo apt update
sudo apt upgrade -yy
Remove any existing Docker installation from the standard Ubuntu repositories. (If you just installed the Ubuntu guest OS, it's unlikely that any are installed, but it doesn't hurt to check.)
sudo apt remove docker docker-engine docker.io containerd runc -y
Configure the official Docker repository and install Docker from it.
source /etc/os-release
curl -fsSL https://download.docker.com/linux/${ID}/gpg | sudo apt-key add -
echo "deb [arch=amd64] https://download.docker.com/linux/${ID} ${VERSION_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
The installation process creates a `docker` group in Linux. We need to add our user to that group to allow us to run Docker commands without using `sudo`.
sudo usermod -a -G docker ${USER}
You must close the terminal window and open a new one (or log out and log back in, if you are using a console/command-prompt-only VM) to get a session in which your user belongs to the `docker` group. To confirm, run the `groups` command and ensure that `docker` is included in the list (it will probably be the last one).
To verify that everything is working properly, while still in our Linux guest VM, run `docker info`. You should see some output divided into `Client` and `Server` sections. See the `docker info` command documentation for details and examples.
Now that the Docker daemon is installed and working, we need to make it accessible outside of the Virtualbox or Hyper-V guest OS. For our case, to simplify things, we will configure it without encryption. Obviously, this involves some risk, but presumably we will only be accessing it from within the same machine. You can learn more about this in the Docker security documentation.
Create a systemd service directory for our configuration and create the daemon (service) configuration file.
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo nano /etc/systemd/system/docker.service.d/options.conf
In `options.conf`, add the following lines and save the file. Note that there are indeed two lines starting with `ExecStart=`.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2375
Refresh the `systemd` configuration and restart Docker.
sudo systemctl daemon-reload
sudo systemctl restart docker
Presumably, you won't receive any errors on restart. In any case, you can check that the Docker daemon restarted (is running) by running `sudo systemctl status docker`, if you like.
Basically, this configuration allows local connections from within the Virtualbox or Hyper-V guest OS VM via `-H unix://` and connections from any external client over TCP on port 2375 via `-H tcp://0.0.0.0:2375`.
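You can sanity-check the TCP endpoint from inside the guest VM itself; the Docker Engine API answers plain HTTP requests, such as the version query:
curl http://localhost:2375/version   # returns JSON describing the daemon if the TCP socket is working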
The final step involving the Ubuntu Linux guest OS in Virtualbox or Hyper-V is to determine its IPv4 address. We need this IP address to use on the host (WSL or PowerShell) to connect to the Docker daemon remotely.
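For example, inside the guest VM you can list its addresses, and then (one common approach, shown here as a sketch) point the Docker CLI on the Windows host at the daemon via the DOCKER_HOST environment variable; the 192.168.56.101 address below is hypothetical and will differ on your system:
hostname -I                                    # inside the guest VM: list its IPv4 addresses
export DOCKER_HOST=tcp://192.168.56.101:2375   # on the host (WSL shell): point the docker CLI at the guest
docker info                                    # the Server section should now come from the daemon in the guest VM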