May 04, 2020
These are my study logs from taking the Docker Mastery course on Udemy.
It covers the fundamentals of containers, images, and the use of Docker Compose.
docker version
# Verify the CLI can talk to the engine
docker info
# Shows most config values of the engine
docker <command> <sub-command>
docker container run --publish 80:80 --detach nginx
# Pulls the nginx image from Docker Hub if it's not cached locally.
# --detach runs the container in the background.
docker container ls
docker container stop ea
# Stop a container by ID; a unique prefix like "ea" is enough
docker container ls -a
# A new container is created on every docker container run
docker container run --publish 80:80 --detach --name webhost nginx
# Run with a name
docker container logs webhost
# Checks logs
docker container top webhost
# List the processes running inside the container
docker container rm -f webhost
# Force-remove a running container
docker container run --publish 8080:80 --name webhost -d nginx:1.11 nginx -T
# The command after the image name (nginx -T) overrides the default CMD from the image's Dockerfile
Containers are just processes.
Creating 3 different containers
docker container run --name nginx --publish 80:80 --detach nginx
docker container run --name httpd --publish 8080:80 --detach httpd
docker container run --name mysql --publish 3306:3306 --env MYSQL_RANDOM_ROOT_PASSWORD=yes --detach mysql
docker container logs mysql
# to check the generated root password
docker container top <container_id>
# List processes in one container
docker container inspect <container_ID>
# Gets JSON detail of one container
docker container stats
# Live stats for all running containers; keep an eye on CPU and memory usage
docker container run -it --name proxy nginx bash
docker container start -ai proxy
# Start a stopped container again; -a attaches, -i makes it interactive
docker container exec -it mysql bash
# Starts an additional process inside a running container and drops you into a shell
A virtual network can contain multiple containers, and they can communicate with each other.
docker container port <container>
# Quick port check
docker network ls
# List networks
docker network inspect <network>
# inspect network, and which containers are inside this network
docker network create my_app_net
docker network connect / docker network disconnect
# Attach or detach an existing container to/from a network
The container name works as a hostname, so if two containers are in the same virtual network, they can ping each other by name.
This built-in DNS only works on user-created networks (which use the bridge driver by default); the default bridge network doesn't resolve container names, and both containers must be running on the same Docker host.
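As a quick sketch of this (the network and container names here are made up):

```shell
docker network create demo_net
docker container run -d --name web1 --network demo_net nginx
docker container run -d --name web2 --network demo_net nginx
# web2 resolves web1 by container name through Docker's built-in DNS
docker container exec web2 ping -c 1 web1
```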
docker container run -d --name new_nginx --network my_app_net nginx
# specifying network when running container
docker network inspect my_app_net
docker network create search_net
docker container run -d --name search1 --network search_net --network-alias search elasticsearch:2
docker container run -d --name search2 --network search_net --network-alias search elasticsearch:2
docker container run --rm --net search_net alpine nslookup search
# Server: 127.0.0.11
# Address: 127.0.0.11:53
#
# Non-authoritative answer:
#
# Non-authoritative answer:
# Name: search
# Address: 172.20.0.3
# Name: search
# Address: 172.20.0.2
docker container run -it --name centos --net search_net centos
# From inside the centos container, curl search:9200 several times; the "name" field in the response should alternate between the two elasticsearch containers (round-robin DNS)
Consider a small base image like Alpine.
Images are layered.
docker image history <image>
docker image inspect <image>
docker image tag nginx kawamurakazushi/nginx
docker image push kawamurakazushi/nginx
docker image tag kawamurakazushi/nginx kawamurakazushi/nginx:testing
FROM
# base image to build on
ENV
# set environment variables
WORKDIR
# basically changes the working directory (use this instead of RUN cd)
COPY
# copy files from the local build context into the image
CMD
# default command run when the container starts
You should put the instructions that change the least at the top and the ones that change the most at the bottom, since Docker caches layers from the top down.
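A minimal sketch of that ordering (the copied file name here is hypothetical):

```dockerfile
# Changes rarely -> stays cached across builds
FROM nginx:latest
WORKDIR /usr/share/nginx/html
# Changes often -> keep near the bottom so only this layer is rebuilt
COPY index.html index.html
```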
docker image build -t customnginx .
If CMD is not specified, it is inherited from the FROM image.
docker image build -t nginx-with-html .
docker container run -p 80:80 --rm nginx-with-html
docker image prune
# Remove dangling images
docker system prune
# Remove stopped containers, dangling images, unused networks, and build cache
docker system prune -a
# Also remove all images not used by a container
docker system df
# Check disk usage
Volumes => a special location outside of the container's UFS, managed by Docker
Bind mounts => link a container path to a host path
The VOLUME instruction in a Dockerfile creates persistent data; the volume is not removed when the container is removed.
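A minimal sketch of the VOLUME instruction (the image and path here are arbitrary):

```dockerfile
FROM alpine
# Anything written under /data goes to a Docker-managed volume
# outside the container's UFS, and survives docker container rm
VOLUME /data
```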
docker volume ls
# DRIVER VOLUME NAME
# local 2fa2e747189d574821f59592e541822c1c96485e93d8640a8ee57b366b6c45fc
# local 3b385d7fa684c1664b37ac4eb025f8d8edb97b3ed95266f4a67462627b46d1ae
# local 3d49b042e6f5279fe27ee21d99698f3ef41fedecec88bbf01f87895f9c3c9c62
# ...
But it's quite hard to keep track of volumes without names, so use a named volume:
docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=true -v mysql-db:/var/lib/mysql mysql
docker volume ls
# DRIVER VOLUME NAME
# local 2fa2e747189d574821f59592e541822c1c96485e93d8640a8ee57b366b6c45fc
# local mysql-db
docker volume inspect mysql-db
# [
# {
# "CreatedAt": "2020-05-05T03:22:58Z",
# "Driver": "local",
# "Labels": null,
# "Mountpoint": "/var/lib/docker/volumes/mysql-db/_data",
# "Name": "mysql-db",
# "Options": null,
# "Scope": "local"
# }
# ]
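As a sketch of that persistence (the second container name is made up): the named volume outlives the container and can be reattached.

```shell
docker container rm -f mysql
# the named volume is still listed
docker volume ls
# a new container picks up the same data by mounting the same volume name
docker container run -d --name mysql2 -e MYSQL_ALLOW_EMPTY_PASSWORD=true -v mysql-db:/var/lib/mysql mysql
```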
Bind mounts map a host file or directory to a container file or directory. They can't be defined in a Dockerfile and must be specified at container run time.
docker container run -d --name nginx -p 80:80 -v $(pwd):/usr/share/nginx/html nginx
docker-compose up
# Set up volumes/networks and start all the containers
docker-compose down
# Stop and remove the containers and networks
version: "3"
services:
  drupal:
    image: "drupal"
    ports:
      - 8080:80
  psql:
    image: "postgres:11"
    environment:
      POSTGRES_DB: drupal
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
version: "2"
services:
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    ports:
      - "80:80"
  web:
    image: httpd
    volumes:
      - ./html:/usr/local/apache2/htdocs/
Use the --build option to rebuild the images from the Dockerfile again:
docker-compose up --build