Docker Deep Dive 2020 [2]


9. Deploying apps with Docker Compose

How to deploy multi-container applications using Docker Compose

Methods for Deploying container apps:

  • Docker Compose:
    • Deploys and manages multi-container applications on Docker nodes running in single-engine mode
  • Docker Stacks:
    • Deploys and manages multi-container apps on Docker nodes running in swarm mode

Micro-services: Modern cloud-native apps are made of multiple smaller services that interact to form a useful app. We call this pattern “micro-services”

Example (each of below is one service):

  • Web front-end
  • Ordering
  • Catalog
  • Backend database
  • Logging
  • Authentication
  • Authorization

Docker Compose lets you describe an entire app in a single declarative configuration file, and deploy it with a single command

❯ docker-compose --version
docker-compose version 1.27.4, build 40524192

JSON is a subset of YAML: any valid JSON document is also valid YAML.

❯ pwd
/Users/azat/Data/Containers/counter-app
❯ ls -l
total 40
-rw-r--r--  1 azat  admin  109 Jan 16 16:20 Dockerfile
-rw-r--r--  1 azat  admin  187 Jan 16 16:20 README.md
-rw-r--r--  1 azat  admin  599 Jan 16 16:20 app.py
-rw-r--r--  1 azat  admin  367 Jan 16 16:20 docker-compose.yml
-rw-r--r--  1 azat  admin   11 Jan 16 16:20 requirements.txt

Docker Compose file:

  • version:
    • mandatory
    • Version of the compose file format (basically the API) — should normally use the latest version
  • services:
    • The different application microservices (containerized apps)
  • networks:
    • bridge
    • overlay
      • attachable : true
  • volumes:
  • secrets
  • configs

version: "3.5"
services:
  web-fe:
    build: .
    command: python app.py
    ports:
      - target: 5000
        published: 5000
    networks:
      - counter-net
    volumes:
      - type: volume
        source: counter-vol
        target: /code
  redis:
    image: "redis:alpine"
    networks:
      counter-net:

networks:
  counter-net:

volumes:
  counter-vol:

  • build: . : build a new image from the Dockerfile in the current directory
  • command: python app.py — how to run the main app. This can also be set in the Dockerfile (CMD)
  • ports:
    • target : port inside the container
    • published : published (public) port on the host
  • networks : network(s) to attach the service to
  • volumes: tells Docker to mount the ‘counter-vol’ volume (source) to /code (target) in the container
    • source : the named volume (managed by Docker on the host)
    • target : location in the container

Note: Technically speaking, we don’t need the command: python app.py option. This is because the application’s Dockerfile already defines python app.py as the default app for the image. However, we’re showing it here so you know how it works. You can also use Compose to override CMD instructions set in Dockerfiles.
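For reference, the Dockerfile for an app like this might look roughly as follows (a sketch, not the book’s actual file — the base image and paths are assumptions):

```dockerfile
# Minimal image for a Python app (illustrative)
FROM python:3.9-alpine
# Copy the app code into the image
COPY . /code
WORKDIR /code
# Install the dependencies listed in requirements.txt
RUN pip install -r requirements.txt
# Default app for the image — this is what `command:` in the Compose file can override
CMD ["python", "app.py"]
```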

Deploying app with compose

File: docker-compose.yml

docker-compose up &

— >

  • Build or pulls required images
  • Creates required networks
  • Creates required volumes
  • Starts all required containers
docker-compose up -d 

— > -d flag to bring the app up in the background

docker-compose -f prod-equus-bass.yml up -d

— >

  • -f : specify the Compose file name (not needed when the file has the default name docker-compose.yml)
  • -d run in the background
❯ docker image ls
❯ docker container ls
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS          PORTS                    NAMES
d349875bf598   counter-app_web-fe   "python app.py"          37 minutes ago   Up 37 minutes   0.0.0.0:5000->5000/tcp   counter-app_web-fe_1
0cabc4ff675c   redis:alpine         "docker-entrypoint.s…"   37 minutes ago   Up 37 minutes   6379/tcp                 counter-app_redis_1
❯ docker network ls
NETWORK ID     NAME                      DRIVER    SCOPE
2a182a6b0bcd   bridge                    bridge    local
1c23527ac7cc   counter-app_counter-net   bridge    local
3cb8900f1c15   host                      host      local
0e1926106769   minikube                  bridge    local
0d92c2941210   none                      null      local
❯ docker volume ls
DRIVER    VOLUME NAME
local     counter-app_counter-vol
local     f39a5f8b44cb95ab2d5f672f37be83be466f306759e2987de9a46b1f9ddc93b6
local     minikube

image-20210116170352787

Managing an app with Compose

❯ docker-compose down
Stopping counter-app_web-fe_1 ...
Stopping counter-app_redis_1  ...
redis_1   | 1:signal-handler (1610795088) Received SIGTERM scheduling shutdown...
redis_1   | 1:M 16 Jan 2021 11:04:48.445 # User requested shutdown...
redis_1   | 1:M 16 Jan 2021 11:04:48.445 * Saving the final RDB snapshot before exiting.
Stopping counter-app_web-fe_1 ... done
counter-app_redis_1 exited with code 0
counter-app_web-fe_1 exited with code 0
Removing counter-app_web-fe_1 ... done
Removing counter-app_redis_1  ... done
Removing network counter-app_counter-net
[1]  + 41666 done       docker-compose up

It’s important to note that the counter-vol volume was not deleted

❯ docker-compose ps
Name   Command   State   Ports
------------------------------
❯ docker-compose up -d
Creating network "counter-app_counter-net" with the default driver
Creating counter-app_web-fe_1 ... done
Creating counter-app_redis_1  ... done
❯ docker-compose top
counter-app_redis_1
UID   PID    PPID   C   STIME   TTY     TIME         CMD
-------------------------------------------------------------
999   5015   4961   0   11:14   ?     00:00:00   redis-server

counter-app_web-fe_1
UID    PID    PPID   C   STIME   TTY     TIME                    CMD
------------------------------------------------------------------------------------
root   5028   4985   0   11:14   ?     00:00:00   python app.py
root   5149   5028   0   11:14   ?     00:00:00   /usr/local/bin/python /code/app.py
❯ docker-compose ps
        Name                      Command               State           Ports
--------------------------------------------------------------------------------------
counter-app_redis_1    docker-entrypoint.sh redis ...   Up      6379/tcp
counter-app_web-fe_1   python app.py                    Up      0.0.0.0:5000->5000/tcp
❯ docker-compose restart
Restarting counter-app_web-fe_1 ... done
Restarting counter-app_redis_1  ... done
❯ docker-compose stop
Stopping counter-app_web-fe_1 ... done
Stopping counter-app_redis_1  ... done
❯ docker container ls -a
CONTAINER ID   IMAGE                COMMAND                  CREATED              STATUS                      PORTS     NAMES
d9afb0a2f90f   counter-app_web-fe   "python app.py"          About a minute ago   Exited (0) 20 seconds ago             counter-app_web-fe_1
62907319e651   redis:alpine         "docker-entrypoint.s…"   About a minute ago   Exited (0) 20 seconds ago             counter-app_redis_1
❯ docker-compose down
Removing counter-app_web-fe_1 ... done
Removing counter-app_redis_1  ... done
Removing network counter-app_counter-net
❯ docker container ls -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Compose builds networks and volumes before deploying services.

image-20210116171756067

Deploying apps with compose - The commands

  • docker-compose up : expects:
    • docker-compose.yml or docker-compose.yaml
  • docker-compose stop
    • Stops containers
    • Does NOT delete containers
  • docker-compose rm
    • Deletes stopped containers
  • docker-compose restart
  • docker-compose ps
  • docker-compose top
  • docker-compose down :
    • Stops containers
    • Deletes containers
  • NOTE : Compose preserves volumes

10. Docker Swarm

Docker Swarm:

  • Enterprise-grade secure cluster of Docker hosts
  • Engine for orchestrating microservices

— > :

  • Define apps in declarative manifest files
  • Perform rolling updates
  • Perform rollbacks
  • Scaling

Docker Swarm vs Kubernetes:

  • Easier to configure
  • Easier to deploy
  • Excellent for small-to-medium businesses and application development

image-20210116174051660

A swarm consists of:

  • Docker Nodes:
    • Docker installed
    • Can communicate over network

The atomic unit of scheduling on a swarm is the service:

  • Think of a service as an enhanced container

Swarm cluster:

  • Manager node
    • Control plane of the cluster
  • Worker nodes
    • Accept work from manager node

Example :

image-20210116174501609

Ports that should be open for Docker Swarm:

  • 2377/tcp for secure client-to-swarm communication
  • 7946/tcp and udp for control plane gossip
  • 4789/udp for VXLAN-based overlay networks (Virtual Extensible LAN)

Initializing a swarm (the process of building a swarm):

  • Initialize the first manager node
  • Join additional manager nodes
  • Join worker nodes
  • done
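The steps above map onto commands roughly like this (the IP address is illustrative; the real join commands, with their tokens, are printed by docker swarm join-token):

```shell
# On the first manager node
docker swarm init --advertise-addr 10.22.22.10:2377

# Print the join commands (including tokens) for the other nodes
docker swarm join-token manager
docker swarm join-token worker

# Run the printed command on each joining node, e.g.
docker swarm join --token <token> 10.22.22.10:2377
```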

Docker nodes that are not part of a swarm are said to be in single-engine mode. Once they’re added to a swarm they’re automatically switched into swarm mode.

inet 10.22.22.10 netmask 0xffffff80 broadcast 10.22.22.127
❯ docker node ls
ID                            HOSTNAME         STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
nhcbt33c1v8bz9mxk4zedqx6z *   azt              Ready     Active         Leader           20.10.2
jzbm7bqavqcndy88zium23gfj     docker-desktop   Ready     Active                          20.10.2
p1swhwajjcpzo0vcaah4b22z0     rpi48            Ready     Active                          19.03.13
sx9r6mvdc4mh1ak5rnmtki1hf     ux32             Ready     Active                          20.10.0+dfsg2s
❯ docker network create -d overlay uber-net
04dqq32xybayqd5j6wzu4mxsz
❯ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
1eb8b8f691a9   bridge            bridge    local
d1a503020060   docker_gwbridge   bridge    local
d04af8a178f6   host              host      local
ywf8d7w1rmh8   ingress           overlay   swarm
77b174752c1e   none              null      local
04dqq32xybay   uber-net          overlay   swarm

image-20210119153934416

To update :

❯ docker service update --image nigelpoulton/tu-demo:v2 --update-parallelism 2 --update-delay 20s uber-svc
uber-svc

Docker Swarm - The commands

  • docker swarm init

  • docker swarm join-token
  • docker node ls
  • docker service create
  • docker service ls
  • docker service ps <service>
  • docker service inspect
  • docker service scale
  • docker service update
  • docker service logs
  • docker service rm

11. Docker Networking

networks are at the center of everything — no network, no app!

image-20210119162620403

CNM defines three major building blocks:

  • Sandboxes
    • Is an isolated network stack
    • Includes:
      • Ethernet interfaces
      • ports
      • Routing tables
      • DNS configs
  • Endpoints:
    • Virtual network interfaces : like veth
    • Responsible for making connection
    • Job : endpoint to connect a sandbox to a network
  • Networks:
    • Software implementation of a switch (802.1d bridge)

image-20210119163018282

image-20210119163059235

Container A has a single interface (endpoint) and is connected to Network A. Container B has two interfaces (endpoints) and is connected to Network A and Network B. The ==two containers will be able to communicate== because they are both connected to Network A. However, ==the two endpoints in Container B cannot communicate== with each other without the assistance of a layer 3 router.

Endpoints behave like regular network adapters, meaning they can only be connected to a single network.

image-20210119163311684

The CNM is the design doc, and libnetwork is the canonical implementation.

Actual relationship :

image-20210119163537478

Docker native drivers (local drivers):

  • Linux:
    • bridge:
      • Single host bridge networks:
        • Single host: only exists on single docker host and can only connect containers that are on the same host
        • Bridge: it’s an implementation of an 802.1d bridge (layer 2 switch).
    • overlay
    • macvlan
  • Windows:
    • nat
    • overlay
    • transparent
    • l2bridge

Single host bridge networks

image-20210119164401949
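A user-defined single-host bridge network can be created and used like this (the network and container names are illustrative):

```shell
# Create a new bridge network
docker network create -d bridge localnet

# Attach a container to it at run time
docker container run -d --name c1 --network localnet alpine sleep 1d

# The container should show up under "Containers" in the inspect output
docker network inspect localnet
```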

❯ docker network inspect
"docker network inspect" requires at least 1 argument.
See 'docker network inspect --help'.

Usage:  docker network inspect [OPTIONS] NETWORK [NETWORK...]

Display detailed information on one or more networks
❯ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "1eb8b8f691a9e896c151ed36b31227f9be635fff5ca9343d108b28122ab58375",
        "Created": "2021-01-19T09:48:55.392836442+06:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
❯ ip link show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:8f:36:4f:8c brd ff:ff:ff:ff:ff:ff
❯ docker network inspect bridge | grep bridge.name
            "com.docker.network.bridge.name": "docker0",

image-20210119165550590

image-20210119165614349

image-20210119165933261

image-20210119170019609

Port mappings let you map a container to a port on the Docker host. Any traffic hitting the Docker host on the configured port will be directed to the container.

image-20210119170245264

❯ docker container run -d --name web --publish 5000:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
a076a628af6f: Already exists
0732ab25fa22: Pull complete
d7f36f6fe38f: Pull complete
f72584a26f32: Pull complete
7125e4df9063: Pull complete
Digest: sha256:10b8cc432d56da8b61b070f4c7d2543a9ed17c2b23010b43af434fd40e2ca4aa
Status: Downloaded newer image for nginx:latest
67f03cb72e5692dc859b7ca566784067f20724804e4e41cccba59f3344d62dfe

image-20210119170443286

❯ docker port web
80/tcp -> 0.0.0.0:5000

Multi-host overlay Networks

Overlay networks are multi-host. They allow a single network to span multiple hosts so that containers on different hosts can communicate directly.

Connecting to existing networks

the containerized parts need a way to communicate with the non-containerized parts still running on existing physical networks and VLANs.

The built-in MACVLAN driver (transparent on Windows) was created with this in mind. It makes containers first-class citizens on the existing physical networks by giving each one its own MAC address and IP address
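Creating a MACVLAN network can be sketched like this (the subnet, gateway, and parent interface are assumptions about the underlying physical network):

```shell
docker network create -d macvlan \
  --subnet=10.0.0.0/24 \
  --gateway=10.0.0.1 \
  -o parent=eth0 \
  macvlan100

# Containers on this network get their own MAC and IP on the physical 10.0.0.0/24 network
docker container run -d --name mactainer1 --network macvlan100 alpine sleep 1d
```

Note that MACVLAN needs the host NIC in promiscuous mode, which many corporate networks and public cloud platforms do not allow.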

image-20210119170813353

image-20210119171004049

image-20210119171134393

image-20210119171210167

Service discovery

Service discovery allows all containers and Swarm services to locate each other by name. The only requirement is that they be on the same network.

image-20210119171348483

Every Swarm service and standalone container started with the --name flag will register its name and IP with the Docker DNS service.

However, service discovery is network-scoped.
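Name-based discovery can be verified with a quick sketch (names are arbitrary); because discovery is network-scoped, the ping only resolves because both containers are on the same network:

```shell
docker network create -d bridge net1
docker container run -d --name server --network net1 alpine sleep 1d

# "server" resolves via the embedded Docker DNS since both containers share net1
docker container run --rm --network net1 alpine ping -c 3 server
```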

Ingress load balancing

Swarm supports two publishing modes:

  • Ingress mode (default)
  • Host mode
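The two modes can be sketched as follows (service names and ports are illustrative):

```shell
# Ingress mode (default): port 5000 is reachable on EVERY swarm node
docker service create -d --name svc-ingress \
  --publish published=5000,target=80 nginx

# Host mode: port 5001 is reachable only on nodes actually running a replica
docker service create -d --name svc-host \
  --publish published=5001,target=80,mode=host nginx
```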

image-20210119171824193

Behind the scenes, ingress mode uses a layer 4 routing mesh called the Service Mesh or the Swarm Mode Service Mesh.

image-20210119172015470

image-20210119172154538

Docker Networking - The commands

  • docker network ls
  • docker network create
  • docker network inspect
  • docker network prune — delete all unused networks on a docker host
  • docker network rm

12. Docker overlay networking

image-20210121092640590

This is because swarm mode is a prerequisite for overlay networks.

image-20210121093733705

We can ping containers on an overlay network using the container name or the IP.

First and foremost, Docker overlay networking uses VXLAN tunnels to create virtual Layer 2 overlay networks. So, before we go any further, let’s do a quick VXLAN primer.

image-20210121102102541

image-20210121102312347

To accomplish this, a new sandbox (network namespace) was created on each host. As mentioned in the previous chapter, a sandbox is like a container, but instead of running an application, it runs an isolated network stack —one that’s sandboxed from the network stack of the host itself.

A virtual switch (a.k.a. virtual bridge) called Br0 is created inside the sandbox.

image-20210121102521452

image-20210121102551585

For this example, we’ll call the container on node1 “C1” and the container on node2 “C2”. And let’s assume C1 wants to ping C2 like we did in the practical example earlier in the chapter.

image-20210121102650357

ARP table — MAC address table

![Layer 2 vs Layer 3 Switch: Which One Do You Need? FS Community](https://img-en.fs.com/community/wp-content/uploads/2017/11/Layer-2-Layer-3-in-OSI-model.jpg)


Docker overlay Networking — The commands

  • docker network create
  • docker network ls
  • docker network inspect
  • docker network rm

13. Volumes and persistent data

Data :

  • Persistent
  • Non-persistent

image-20210122135832415

image-20210122140227141

image-20210122140437324

❯ docker volume inspect myvol
[
    {
        "CreatedAt": "2021-01-22T14:03:50+06:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/myvol/_data",
        "Name": "myvol",
        "Options": {},
        "Scope": "local"
    }
]

  • If you specify an existing volume, Docker will use the existing volume
  • If you specify a volume that doesn’t exist, Docker will create it for you

❯ docker volume ls
DRIVER    VOLUME NAME
local     bizvo
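Creating a volume and mounting it into a container can be sketched as follows (the volume and container names are illustrative):

```shell
docker volume create myvol

# --mount: if "myvol" didn't already exist, Docker would create it on the fly
docker container run -d --name voltainer \
  --mount source=myvol,target=/vol alpine sleep 1d
```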

Sharing storage across cluster nodes

image-20210122141341931

Volumes and persistent data — the commands

  • docker volume create
  • docker volume ls
  • docker volume inspect
  • docker volume prune
  • docker volume rm
  • docker plugin install
  • docker plugin ls

14. Deploying apps with Docker Stacks

image-20210122142014996

5 Services, 3 networks, 4 secrets, and 3 port mappings

stack.yaml:

  • version
  • services
  • secrets
  • networks

One difference between Docker Stacks and Docker Compose is that stacks do not support builds. This means all images have to be built prior to deploying the stack.
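A stack file service therefore references a pre-built image and adds a deploy: section; a minimal fragment might look like this (the image name and replica count are illustrative):

```yaml
version: "3.7"
services:
  web-fe:
    image: myrepo/web-fe:latest   # must be pre-built; stacks do not build
    deploy:
      replicas: 4
    networks:
      - front-tier
networks:
  front-tier:
```

The stack would be deployed with docker stack deploy -c stack.yaml mystack (which requires swarm mode).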

image-20210122144744180

15. Security in Docker

image-20210122150409595

Security on Docker:

  • Linux security technologies:
    • Namespaces
    • Control Groups
    • Capabilities
    • Mandatory Access Control
  • Docker platform security technologies:
    • Swarm mode
    • Image scanning
    • Docker content trust
    • Docker secrets

image-20210122150827046

Docker on Linux currently utilizes the following kernel namespaces:

  • Process ID (pid): Docker uses the pid namespace to provide isolated process trees for each container
  • Network (net): Docker uses the net namespace to provide each container its own isolated network stack
  • Filesystem/mount (mnt): Every container gets its own unique isolated root (/) filesystem
  • Inter-process Communication (ipc): Docker uses the ipc namespace for shared memory access within a container. It also isolates the container from shared memory outside of the container
  • User (user): Docker lets you use user namespaces to map users inside of a container to different users on the Linux host
  • UTS (uts): Docker uses the uts namespace to provide each container with its own hostname

Docker containers are an organized collection of namespaces.

Every container has its own pid, net, mnt, ipc, uts, and potentially user namespace.

image-20210122151119292

If namespaces are about isolation, control groups (cgroups) are about setting limits

Think of containers as similar to rooms in a hotel. While each room might appear isolated, every room shares a common set of infrastructure resources — things like water supply, electricity supply, shared swimming pool, shared gym, shared breakfast bar etc. Cgroups let us set limits so that (sticking with the hotel analogy) no single container can use all of the water or eat everything at the breakfast bar.

In the real world, not the hotel analogy, containers are isolated from each other but all share a common set of OS resources — things like CPU, RAM, network bandwidth, and disk I/O. Cgroups let us set limits on each of these so a single container cannot consume everything and cause a denial of service (DoS) attack.
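On the Docker CLI, cgroup limits surface as run-time flags; for example (the limits are illustrative):

```shell
# Cap the container at half a CPU core and 256MB of RAM
docker container run -d --name capped \
  --cpus 0.5 --memory 256m nginx
```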

image-20210122151724555

image-20210122152059744

image-20210122152124205

image-20210122152147541

image-20210122152248225

image-20210122152333224

16. What next

image-20210122152413252