Introductions

  • Hello!

  • On stage: Jérôme (@jpetazzo)

  • Backstage: Alexandre, Amy, Antoine, Aurélien (x2), Benji, David, Julien, Kostas, Nicolas, Thibault

  • The training will run from 9:30 to 13:00

  • There will be a break at (approximately) 11:00

  • You should ask questions! Lots of questions!

  • Use Mattermost to ask questions, get help, etc.

logistics.md

2/791

Exercises

  • At the end of each day, there is a series of exercises

  • To make the most out of the training, please try the exercises!

    (it will help to practice and memorize the content of the day)

  • We recommend taking at least one hour to work on the exercises

    (if you understood the content of the day, it will be much faster)

  • Each day will start with a quick review of the exercises of the previous day

logistics.md

3/791

A brief introduction

  • This was initially written by Jérôme Petazzoni to support in-person, instructor-led workshops and tutorials

  • Credit is also due to multiple contributors — thank you!

  • You can also follow along on your own, at your own pace

  • We included as much information as possible in these slides

  • We recommend having a mentor to help you ...

  • ... Or be comfortable spending some time reading the Kubernetes documentation ...

  • ... And looking for answers on StackOverflow and other outlets

k8s/intro.md

4/791

Accessing these slides now

  • We recommend that you open these slides in your browser:

    https://2022-02-enix.container.training/

  • Use arrows to move to next/previous slide

    (up, down, left, right, page up, page down)

  • Type a slide number + ENTER to go to that slide

  • The slide number is also visible in the URL bar

    (e.g. .../#123 for slide 123)

shared/about-slides.md

5/791

Accessing these slides later

shared/about-slides.md

6/791

These slides are open source

  • You are welcome to use, re-use, share these slides

  • These slides are written in Markdown

  • The sources of these slides are available in a public GitHub repository:

    https://github.com/jpetazzo/container.training

  • Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...

👇 Try it! The source file will be shown, and you can view it on GitHub, fork it, and edit it.

shared/about-slides.md

7/791

Extra details

  • This slide has a little magnifying glass in the top left corner

  • This magnifying glass indicates slides that provide extra details

  • Feel free to skip them if:

    • you are in a hurry

    • you are new to this and want to avoid cognitive overload

    • you want only the most essential information

  • You can review these slides another time if you want, they'll be waiting for you ☺

shared/about-slides.md

8/791

Chat room

  • We've set up a chat room that we will monitor during the workshop

  • Don't hesitate to use it to ask questions, or get help, or share feedback

  • The chat room will also be available after the workshop

  • Join the chat room: Mattermost

  • Say hi in the chat room!

shared/chat-room-im.md

9/791

Pre-requirements

  • Be comfortable with the UNIX command line

    • navigating directories

    • editing files

    • a little bit of bash-fu (environment variables, loops)

  • Some Docker knowledge

    • docker run, docker ps, docker build

    • ideally, you know how to write a Dockerfile and build it
      (even if it's a FROM line and a couple of RUN commands)

  • It's totally OK if you are not a Docker expert!
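
For reference, the kind of minimal Dockerfile mentioned above could be created and built like this (image and package names are purely illustrative):

    echo 'FROM alpine' > Dockerfile
    echo 'RUN apk add --no-cache curl' >> Dockerfile
    docker build -t my-test-image .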

shared/prereqs.md

10/791

Tell me and I forget.
Teach me and I remember.
Involve me and I learn.

Misattributed to Benjamin Franklin

(Probably inspired by Chinese Confucian philosopher Xunzi)

shared/prereqs.md

11/791

Hands-on sections

  • The whole workshop is hands-on

  • We are going to build, ship, and run containers!

  • You are invited to reproduce all the demos

  • All hands-on sections are clearly identified, like the gray rectangle below

shared/prereqs.md

12/791

Where are we going to run our containers?

shared/prereqs.md

13/791

You get a cluster of cloud VMs

  • Each person gets a private cluster of cloud VMs (not shared with anybody else)

  • They'll remain up for the duration of the workshop

  • You should have a little card with login+password+IP addresses

  • You can automatically SSH from one VM to another

  • The nodes have aliases: node1, node2, etc.

shared/prereqs.md

15/791

Why don't we run containers locally?

  • Installing this stuff can be hard on some machines

    (32-bit CPU or OS... Laptops without administrator access... etc.)

  • "The whole team downloaded all these container images from the WiFi!
    ... and it went great!"
    (Literally no-one ever)

  • All you need is a computer (or even a phone or tablet!), with:

    • an Internet connection

    • a web browser

    • an SSH client

shared/prereqs.md

16/791

SSH clients

shared/prereqs.md

17/791

What is this Mosh thing?

You don't have to use Mosh or even know about it to follow along.
We're just telling you about it because some of us think it's cool!

  • Mosh is "the mobile shell"

  • It is essentially SSH over UDP, with roaming features

  • It retransmits packets quickly, so it works great even on lossy connections

    (Like hotel or conference WiFi)

  • It has intelligent local echo, so it works great even on high-latency connections

    (Like hotel or conference WiFi)

  • It supports transparent roaming when your client IP address changes

    (Like when you hop from hotel to conference WiFi)

shared/prereqs.md

18/791

Using Mosh

  • To install it: (apt|yum|brew) install mosh

  • It has been pre-installed on the VMs that we are using

  • To connect to a remote machine: mosh user@host

    (It is going to establish an SSH connection, then hand off to UDP)

  • It requires UDP ports to be open

    (By default, it uses a UDP port between 60000 and 61000)
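
For example, to connect to one of our VMs (and, hypothetically, to pin the server-side UDP port if your firewall only allows a specific one):

    mosh user@A.B.C.D
    mosh -p 60001 user@A.B.C.D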

shared/prereqs.md

19/791

Connecting to our lab environment

  • Log into the first VM (node1) with your SSH client:

    ssh user@A.B.C.D

    (Replace user and A.B.C.D with the user and IP address provided to you)

You should see a prompt looking like this:

[A.B.C.D] (...) user@node1 ~
$

If anything goes wrong — ask for help!

shared/connecting.md

20/791

tailhist

  • The shell history of the instructor is available online in real time

  • Note the IP address of the instructor's virtual machine (A.B.C.D)

  • Open http://A.B.C.D:1088 in your browser and you should see the history

  • The history is updated in real time

    (using a WebSocket connection)

  • It should be green when the WebSocket is connected

    (if it turns red, reloading the page should fix it)

shared/connecting.md

21/791

Doing or re-doing the workshop on your own?

  • Use something like Play-With-Docker or Play-With-Kubernetes

    Zero setup effort; but environments are short-lived and might have limited resources

  • Create your own cluster (local or cloud VMs)

    Small setup effort; small cost; flexible environments

  • Create a bunch of clusters for you and your friends (instructions)

    Bigger setup effort; ideal for group training

shared/connecting.md

22/791

For a consistent Kubernetes experience ...

  • If you are using your own Kubernetes cluster, you can use jpetazzo/shpod

  • shpod provides a shell running in a pod on your own cluster

  • It comes with many tools pre-installed (helm, stern...)

  • These tools are used in many demos and exercises in these slides

  • shpod also gives you completion and a fancy prompt

  • It can also be used as an SSH server if needed

shared/connecting.md

23/791

We will (mostly) interact with node1 only

These remarks apply only when using multiple nodes, of course.

  • Unless instructed, all commands must be run from the first VM, node1

  • We will only check out/copy the code on node1

  • During normal operations, we do not need access to the other nodes

  • If we had to troubleshoot issues, we would use a combination of:

    • SSH (to access system logs, daemon status...)

    • Docker API (to check running containers and container engine status)

shared/connecting.md

24/791

Terminals

Once in a while, the instructions will say:
"Open a new terminal."

There are multiple ways to do this:

  • create a new window or tab on your machine, and SSH into the VM;

  • use screen or tmux on the VM and open a new window from there.

You are welcome to use the method that you feel the most comfortable with.

shared/connecting.md

25/791

Tmux cheat sheet

Tmux is a terminal multiplexer like screen.

You don't have to use it or even know about it to follow along.
But some of us like to use it to switch between terminals.
It has been preinstalled on your workshop nodes.

  • Ctrl-b c → creates a new window
  • Ctrl-b n → go to next window
  • Ctrl-b p → go to previous window
  • Ctrl-b " → split window top/bottom
  • Ctrl-b % → split window left/right
  • Ctrl-b Alt-1 → rearrange windows in columns
  • Ctrl-b Alt-2 → rearrange windows in rows
  • Ctrl-b arrows → navigate to other windows
  • Ctrl-b d → detach session
  • tmux attach → re-attach to session
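
If you like named sessions, these optional commands can help (the session name is arbitrary):

    tmux new -s workshop       # start a new session named "workshop"
    tmux attach -t workshop    # re-attach to that session later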

shared/connecting.md

26/791

Exercise — Deploy Dockercoins

  • Deploy the dockercoins application to our Kubernetes cluster

  • Connect components together

  • Expose the web UI and open it in a web browser to check that it works

exercises/k8sfundamentals-brief.md

27/791

Exercise — Local Cluster

  • Deploy a local Kubernetes cluster if you don't already have one

  • Deploy dockercoins on that cluster

  • Connect to the web UI in your browser

  • Scale up dockercoins

exercises/localcluster-brief.md

28/791

Exercise — Healthchecks

  • Add readiness and liveness probes to a web service

    (we will use the rng service in the dockercoins app)

  • See what happens when the load increases

    (spoiler alert: it involves timeouts!)

exercises/healthchecks-brief.md

29/791

Image separating from the next part

38/791

Our sample application

(automatically generated title slide)

39/791

Our sample application

  • We will clone the GitHub repository onto our node1

  • The repository also contains scripts and tools that we will use through the workshop

  • Clone the repository on node1:
    git clone https://github.com/jpetazzo/container.training

(You can also fork the repository on GitHub and clone your fork if you prefer that.)

shared/sampleapp.md

40/791

Downloading and running the application

Let's start this before we look around, as downloading will take a little time...

  • Go to the dockercoins directory, in the cloned repository:

    cd ~/container.training/dockercoins
  • Use Compose to build and run all containers:

    docker-compose up

Compose tells the Docker Engine to build all container images (pulling the corresponding base images), then starts all containers and displays their aggregated logs.
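
If you prefer to keep your terminal free, you can optionally start Compose in detached mode and follow the logs separately:

    docker-compose up -d
    docker-compose logs --tail=10 --follow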

shared/sampleapp.md

41/791

What's this application?

42/791

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢
43/791

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢

  • No, you can't buy coffee with DockerCoin

44/791

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢

  • No, you can't buy coffee with DockerCoin

  • How dockercoins works:

    • generate a few random bytes

    • hash these bytes

    • increment a counter (to keep track of speed)

    • repeat forever!

45/791

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢

  • No, you can't buy coffee with DockerCoin

  • How dockercoins works:

    • generate a few random bytes

    • hash these bytes

    • increment a counter (to keep track of speed)

    • repeat forever!

  • DockerCoin is not a cryptocurrency

    (the only common points are "randomness," "hashing," and "coins" in the name)

shared/sampleapp.md

46/791

DockerCoin in the microservices era

  • The dockercoins app is made of 5 services:

    • rng = web service generating random bytes

    • hasher = web service computing hash of POSTed data

    • worker = background process calling rng and hasher

    • webui = web interface to watch progress

    • redis = data store (holds a counter updated by worker)

  • These 5 services are visible in the application's Compose file, docker-compose.yml

shared/sampleapp.md

47/791

How dockercoins works

  • worker invokes web service rng to generate random bytes

  • worker invokes web service hasher to hash these bytes

  • worker does this in an infinite loop

  • every second, worker updates redis to indicate how many loops were done

  • webui queries redis, and computes and exposes "hashing speed" in our browser
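
As a rough sketch, one iteration of that loop is equivalent to the following shell commands (illustrative only: the real worker is written in Python, these would have to run from a container attached to the app's network with curl and redis-cli available, and the counter key name is an assumption):

    curl -s http://rng/32 -o /tmp/bytes                 # ask rng for 32 random bytes
    curl -s -H "Content-Type: application/octet-stream" --data-binary @/tmp/bytes http://hasher/
    redis-cli -h redis incr hashes                      # bump the counter in redis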

(See diagram on next slide!)

shared/sampleapp.md

48/791

Service discovery in container-land

How does each service find out the address of the other ones?

50/791

Service discovery in container-land

How does each service find out the address of the other ones?

  • We do not hard-code IP addresses in the code

  • We do not hard-code FQDNs in the code, either

  • We just connect to a service name, and container-magic does the rest

    (And by container-magic, we mean "a crafty, dynamic, embedded DNS server")

shared/sampleapp.md

51/791

Example in worker/worker.py

from redis import Redis
import requests

redis = Redis("redis")

def get_random_bytes():
    r = requests.get("http://rng/32")
    return r.content

def hash_bytes(data):
    r = requests.post("http://hasher/",
                      data=data,
                      headers={"Content-Type": "application/octet-stream"})

(Full source code available here)

shared/sampleapp.md

52/791
  • Containers can have network aliases (resolvable through DNS)

  • Compose file version 2+ makes each container reachable through its service name

  • Compose file version 1 required "links" sections to accomplish this

  • Network aliases are automatically namespaced

    • you can have multiple apps declaring and using a service named database

    • containers in the blue app will resolve database to the IP of the blue database

    • containers in the green app will resolve database to the IP of the green database
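
One way to see this name resolution in action, while the app is running (this relies on the worker image shipping Python, since the worker is written in Python):

    docker-compose exec worker python -c 'import socket; print(socket.gethostbyname("rng"))'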

shared/sampleapp.md

53/791

Show me the code!

  • You can check the GitHub repository with all the materials of this workshop:
    https://github.com/jpetazzo/container.training

  • The application is in the dockercoins subdirectory

  • The Compose file (docker-compose.yml) lists all 5 services

  • redis is using an official image from the Docker Hub

  • hasher, rng, worker, webui are each built from a Dockerfile

  • Each service's Dockerfile and source code is in its own directory

    (hasher is in the hasher directory, rng is in the rng directory, etc.)

shared/sampleapp.md

54/791

Compose file format version

This is relevant only if you have used Compose before 2016...

  • Compose 1.6 introduced support for a new Compose file format (aka "v2")

  • Services are no longer at the top level, but under a services section

  • There has to be a version key at the top level, with value "2" (as a string, not an integer)

  • Containers are placed on a dedicated network, making links unnecessary

  • There are other minor differences, but upgrade is easy and straightforward
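
As a reminder, a minimal v2 Compose file looks like the following (an illustrative skeleton, not the dockercoins file; written here as a shell here-document):

cat > docker-compose.yml <<'EOF'
version: "2"
services:
  redis:
    image: redis
  web:
    build: web
    ports:
      - "8000:80"
EOF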

shared/sampleapp.md

55/791

Our application at work

  • On the left-hand side, the "rainbow strip" shows the container names

  • On the right-hand side, we see the output of our containers

  • We can see the worker service making requests to rng and hasher

  • For rng and hasher, we see HTTP access logs

shared/sampleapp.md

56/791

Connecting to the web UI

  • "Logs are exciting and fun!" (No-one, ever)

  • The webui container exposes a web dashboard; let's view it

  • With a web browser, connect to node1 on port 8000

  • Remember: the nodeX aliases are valid only on the nodes themselves

  • In your browser, you need to enter the IP address of your node

A drawing area should show up, and after a few seconds, a blue graph will appear.

shared/sampleapp.md

57/791

Why does the speed seem irregular?

  • It looks like the speed is approximately 4 hashes/second

  • Or more precisely: 4 hashes/second, with regular dips down to zero

  • Why?

58/791

Why does the speed seem irregular?

  • It looks like the speed is approximately 4 hashes/second

  • Or more precisely: 4 hashes/second, with regular dips down to zero

  • Why?

  • The app actually has a constant, steady speed: 3.33 hashes/second
    (which corresponds to 1 hash every 0.3 seconds, for reasons)

  • Yes, and?

shared/sampleapp.md

59/791

The reason why this graph is not awesome

  • The worker doesn't update the counter after every loop, but up to once per second

  • The speed is computed by the browser, checking the counter about once per second

  • Between two consecutive updates, the counter will increase either by 4, or by 0

  • The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.

  • What can we conclude from this?

60/791

The reason why this graph is not awesome

  • The worker doesn't update the counter after every loop, but up to once per second

  • The speed is computed by the browser, checking the counter about once per second

  • Between two consecutive updates, the counter will increase either by 4, or by 0

  • The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.

  • What can we conclude from this?

  • "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme

shared/sampleapp.md

61/791

Stopping the application

  • If we interrupt Compose (with ^C), it will politely ask the Docker Engine to stop the app

  • The Docker Engine will send a TERM signal to the containers

  • If the containers do not exit in a timely manner, the Engine sends a KILL signal

  • Stop the application by hitting ^C
62/791

Stopping the application

  • If we interrupt Compose (with ^C), it will politely ask the Docker Engine to stop the app

  • The Docker Engine will send a TERM signal to the containers

  • If the containers do not exit in a timely manner, the Engine sends a KILL signal

  • Stop the application by hitting ^C

Some containers exit immediately, others take longer.

The containers that do not handle SIGTERM end up being killed after a 10s timeout. If we are very impatient, we can hit ^C a second time!

shared/sampleapp.md

63/791

Clean up

  • Before moving on, let's remove those containers
  • Tell Compose to remove everything:
    docker-compose down

shared/composedown.md

64/791

Image separating from the next part

65/791

Kubernetes concepts

(automatically generated title slide)

66/791

Kubernetes concepts

  • Kubernetes is a container management system

  • It runs and manages containerized applications on a cluster

67/791

Kubernetes concepts

  • Kubernetes is a container management system

  • It runs and manages containerized applications on a cluster

  • What does that really mean?

k8s/concepts-k8s.md

68/791

What can we do with Kubernetes?

  • Let's imagine that we have a 3-tier e-commerce app:

    • web frontend

    • API backend

    • database (that we will keep out of Kubernetes for now)

  • We have built images for our frontend and backend components

    (e.g. with Dockerfiles and docker build)

  • We are running them successfully with a local environment

    (e.g. with Docker Compose)

  • Let's see how we would deploy our app on Kubernetes!

k8s/concepts-k8s.md

69/791

Basic things we can ask Kubernetes to do

70/791

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3
71/791

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

72/791

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

73/791

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

  • Place a public load balancer in front of these containers

74/791

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

  • Place a public load balancer in front of these containers

  • It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers

75/791

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

  • Place a public load balancer in front of these containers

  • It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers

  • New release! Replace my containers with the new image atseashop/webfront:v1.4

76/791

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

  • Place a public load balancer in front of these containers

  • It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers

  • New release! Replace my containers with the new image atseashop/webfront:v1.4

  • Keep processing requests during the upgrade; update my containers one at a time

k8s/concepts-k8s.md

77/791

Other things that Kubernetes can do for us

  • Autoscaling

    (straightforward on CPU; more complex on other metrics)

  • Resource management and scheduling

    (reserve CPU/RAM for containers; placement constraints)

  • Advanced rollout patterns

    (blue/green deployment, canary deployment)
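
As a teaser, CPU-based autoscaling boils down to a single command (assuming a Deployment named web exists and the cluster has a metrics pipeline such as metrics-server):

    kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
    kubectl get hpa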

k8s/concepts-k8s.md

78/791

More things that Kubernetes can do for us

  • Batch jobs

    (one-off; parallel; also cron-style periodic execution)

  • Fine-grained access control

    (defining what can be done by whom on which resources)

  • Stateful services

    (databases, message queues, etc.)

  • Automating complex tasks with operators

    (e.g. database replication, failover, etc.)
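
For instance, one-off and periodic batch jobs can be created like this (names and schedule are arbitrary examples):

    kubectl create job hello --image=alpine -- echo hello
    kubectl create cronjob hello-cron --image=alpine --schedule="*/10 * * * *" -- echo hello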

k8s/concepts-k8s.md

79/791

Kubernetes architecture

k8s/concepts-k8s.md

80/791

Kubernetes architecture

  • Ha ha ha ha

  • OK, I was trying to scare you, it's much simpler than that ❤️

k8s/concepts-k8s.md

82/791

Credits

  • The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI

    (Courtesy of Yongbok Kim)

  • The second one is a simplified representation of a Kubernetes cluster

    (Courtesy of Imesh Gunaratne)

k8s/concepts-k8s.md

84/791

Kubernetes architecture: the nodes

  • The nodes executing our containers run a collection of services:

    • a container Engine (typically Docker)

    • kubelet (the "node agent")

    • kube-proxy (a necessary but not sufficient network component)

  • Nodes were formerly called "minions"

    (You might see that word in older articles or documentation)

k8s/concepts-k8s.md

85/791

Kubernetes architecture: the control plane

  • The Kubernetes logic (its "brains") is a collection of services:

    • the API server (our point of entry to everything!)

    • core services like the scheduler and controller manager

    • etcd (a highly available key/value store; the "database" of Kubernetes)

  • Together, these services form the control plane of our cluster

  • The control plane is also called the "master"

k8s/concepts-k8s.md

86/791

Running the control plane on special nodes

  • It is common to reserve a dedicated node for the control plane

    (Except for single-node development clusters, like when using minikube)

  • This node is then called a "master"

    (Yes, this is ambiguous: is the "master" a node, or the whole control plane?)

  • Normal applications are restricted from running on this node

    (By using a mechanism called "taints")

  • When high availability is required, each service of the control plane must be resilient

  • The control plane is then replicated on multiple nodes

    (This is sometimes called a "multi-master" setup)

k8s/concepts-k8s.md

88/791

Running the control plane outside containers

  • The services of the control plane can run in or out of containers

  • For instance: since etcd is a critical service, some people deploy it directly on a dedicated cluster (without containers)

    (This is illustrated on the first "super complicated" schema)

  • In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible

    (We only "see" a Kubernetes API endpoint)

  • In that case, there is no "master node"

For this reason, it is more accurate to say "control plane" rather than "master."

k8s/concepts-k8s.md

89/791

How many nodes should a cluster have?

  • There is no particular constraint

    (no need to have an odd number of nodes for quorum)

  • A cluster can have zero nodes

    (but then it won't be able to start any pods)

  • For testing and development, having a single node is fine

  • For production, make sure that you have extra capacity

    (so that your workload still fits if you lose a node or a group of nodes)

  • Kubernetes is tested with up to 5000 nodes

    (however, running a cluster of that size requires a lot of tuning)

k8s/concepts-k8s.md

97/791

Do we need to run Docker at all?

No!

98/791

Do we need to run Docker at all?

No!

  • By default, Kubernetes uses the Docker Engine to run containers

  • We can leverage other pluggable runtimes through the Container Runtime Interface

  • We could also use rkt ("Rocket") from CoreOS (deprecated)

k8s/concepts-k8s.md

99/791

Some runtimes available through CRI

  • containerd

    • maintained by Docker, IBM, and community
    • used by Docker Engine, microk8s, k3s, GKE; also standalone
    • comes with its own CLI, ctr
  • CRI-O:

    • maintained by Red Hat, SUSE, and community
    • used by OpenShift and Kubic
    • designed specifically as a minimal runtime for Kubernetes
  • And more

k8s/concepts-k8s.md

100/791

Do we need to run Docker at all?

Yes!

101/791

Do we need to run Docker at all?

Yes!

  • In this workshop, we run our app on a single node first

  • We will need to build images and ship them around

  • We can do these things without Docker
    (and get diagnosed with NIH¹ syndrome)

  • Docker is still the most stable container engine today
    (but other options are maturing very quickly)

¹Not Invented Here

k8s/concepts-k8s.md

102/791

Do we need to run Docker at all?

  • On our development environments, CI pipelines ... :

    Yes, almost certainly

  • On our production servers:

    Yes (today)

    Probably not (in the future)

More information about CRI on the Kubernetes blog

k8s/concepts-k8s.md

103/791

Interacting with Kubernetes

  • We will interact with our Kubernetes cluster through the Kubernetes API

  • The Kubernetes API is (mostly) RESTful

  • It allows us to create, read, update, delete resources

  • A few common resource types are:

    • node (a machine — physical or virtual — in our cluster)

    • pod (group of containers running together on a node)

    • service (stable network endpoint to connect to one or multiple containers)

k8s/concepts-k8s.md

104/791

Scaling

  • How would we scale the pod shown on the previous slide?

  • Do create additional pods

    • each pod can be on a different node

    • each pod will have its own IP address

  • Do not add more NGINX containers in the pod

    • all the NGINX containers would be on the same node

    • they would all have the same IP address
      (resulting in Address already in use errors)

k8s/concepts-k8s.md

106/791

Together or separate

  • Should we put e.g. a web application server and a cache together?
    ("cache" being something like e.g. Memcached or Redis)

  • Putting them in the same pod means:

    • they have to be scaled together

    • they can communicate very efficiently over localhost

  • Putting them in different pods means:

    • they can be scaled separately

    • they must communicate over remote IP addresses
      (incurring more latency, lower performance)

  • Both scenarios can make sense, depending on our goals

k8s/concepts-k8s.md

107/791

Credits

  • The first diagram is courtesy of Lucas Käldström, in this presentation

    • it's one of the best Kubernetes architecture diagrams available!
  • The second diagram is courtesy of Weave Works

    • a pod can have multiple containers working together

    • IP addresses are associated with pods, not with individual containers

Both diagrams used with permission.

108/791

:EN:- Kubernetes concepts :FR:- Kubernetes en théorie

k8s/concepts-k8s.md

Image separating from the next part

109/791

First contact with kubectl

(automatically generated title slide)

110/791

First contact with kubectl

  • kubectl is (almost) the only tool we'll need to talk to Kubernetes

  • It is a rich CLI tool around the Kubernetes API

    (Everything you can do with kubectl, you can do directly with the API)

  • On our machines, there is a ~/.kube/config file with:

    • the Kubernetes API address

    • the path to our TLS certificates used to authenticate

  • You can also use the --kubeconfig flag to pass a config file

  • Or directly --server, --user, etc.

  • kubectl can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"...
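
For example (the kubeconfig path below is hypothetical):

    kubectl config view --minify                         # show the configuration currently in use
    kubectl --kubeconfig=/path/to/kubeconfig get nodes   # use an alternate config file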

k8s/kubectlget.md

111/791

kubectl is the new SSH

  • We often start managing servers with SSH

    (installing packages, troubleshooting ...)

  • At scale, it becomes tedious, repetitive, error-prone

  • Instead, we use config management, central logging, etc.

  • In many cases, we still need SSH:

    • as the underlying access method (e.g. Ansible)

    • to debug tricky scenarios

    • to inspect and poke at things

k8s/kubectlget.md

112/791

The parallel with kubectl

  • We often start managing Kubernetes clusters with kubectl

    (deploying applications, troubleshooting ...)

  • At scale (with many applications or clusters), it becomes tedious, repetitive, error-prone

  • Instead, we use automated pipelines, observability tooling, etc.

  • In many cases, we still need kubectl:

    • to debug tricky scenarios

    • to inspect and poke at things

  • The Kubernetes API is always the underlying access method

k8s/kubectlget.md

113/791

kubectl get

  • Let's look at our Node resources with kubectl get!
  • Look at the composition of our cluster:

    kubectl get node
  • These commands are equivalent:

    kubectl get no
    kubectl get node
    kubectl get nodes

k8s/kubectlget.md

114/791

Obtaining machine-readable output

  • kubectl get can output JSON, YAML, or be directly formatted
  • Give us more info about the nodes:

    kubectl get nodes -o wide
  • Let's have some YAML:

    kubectl get no -o yaml

    See that kind: List at the end? It's the type of our result!

k8s/kubectlget.md

115/791

(Ab)using kubectl and jq

  • It's super easy to build custom reports
  • Show the capacity of all our nodes as a stream of JSON objects:
    kubectl get nodes -o json |
    jq ".items[] | {name:.metadata.name} + .status.capacity"

k8s/kubectlget.md

116/791

Exploring types and definitions

  • We can list all available resource types by running kubectl api-resources
    (In Kubernetes 1.10 and prior, this command used to be kubectl get)

  • We can view the definition for a resource type with:

    kubectl explain type
  • We can view the definition of a field in a resource, for instance:

    kubectl explain node.spec
  • Or get the full definition of all fields and sub-fields:

    kubectl explain node --recursive

k8s/kubectlget.md

117/791

Introspection vs. documentation

  • We can access the same information by reading the API documentation

  • The API documentation is usually easier to read, but:

    • it won't show custom types (like Custom Resource Definitions)

    • we need to make sure that we look at the correct version

  • kubectl api-resources and kubectl explain perform introspection

    (they communicate with the API server and obtain the exact type definitions)

k8s/kubectlget.md

118/791

Type names

  • The most common resource names have three forms:

    • singular (e.g. node, service, deployment)

    • plural (e.g. nodes, services, deployments)

    • short (e.g. no, svc, deploy)

  • Some resources do not have a short name

  • Endpoints only have a plural form

    (because even a single Endpoints resource is actually a list of endpoints)

k8s/kubectlget.md

119/791

Viewing details

  • We can use kubectl get -o yaml to see all available details

  • However, YAML output is often simultaneously too much and not enough

  • For instance, kubectl get node node1 -o yaml is:

    • too much information (e.g.: list of images available on this node)

    • not enough information (e.g.: doesn't show pods running on this node)

    • difficult to read for a human operator

  • For a comprehensive overview, we can use kubectl describe instead

k8s/kubectlget.md

120/791

kubectl describe

  • kubectl describe needs a resource type and (optionally) a resource name

  • It is possible to provide a resource name prefix

    (all matching objects will be displayed)

  • kubectl describe will retrieve some extra information about the resource

  • Look at the information available for node1 with one of the following commands:
    kubectl describe node/node1
    kubectl describe node node1

(We should notice a bunch of control plane pods.)

k8s/kubectlget.md

121/791

Listing running containers

  • Containers are manipulated through pods

  • A pod is a group of containers:

    • running together (on the same node)

    • sharing resources (RAM, CPU; but also network, volumes)

  • List pods on our cluster:
    kubectl get pods
122/791

Listing running containers

  • Containers are manipulated through pods

  • A pod is a group of containers:

    • running together (on the same node)

    • sharing resources (RAM, CPU; but also network, volumes)

  • List pods on our cluster:
    kubectl get pods

Where are the pods that we saw just a moment earlier?!?

k8s/kubectlget.md

123/791

Namespaces

  • Namespaces allow us to segregate resources
  • List the namespaces on our cluster with one of these commands:
    kubectl get namespaces
    kubectl get namespace
    kubectl get ns
124/791

Namespaces

  • Namespaces allow us to segregate resources
  • List the namespaces on our cluster with one of these commands:
    kubectl get namespaces
    kubectl get namespace
    kubectl get ns

You know what ... This kube-system thing looks suspicious.

In fact, I'm pretty sure it showed up earlier, when we did:

kubectl describe node node1

k8s/kubectlget.md

125/791

Accessing namespaces

  • By default, kubectl uses the default namespace

  • We can see resources in all namespaces with --all-namespaces

  • List the pods in all namespaces:

    kubectl get pods --all-namespaces
  • Since Kubernetes 1.14, we can also use -A as a shorter version:

    kubectl get pods -A

Here are our system pods!

k8s/kubectlget.md

126/791

What are all these control plane pods?

  • etcd is our etcd server

  • kube-apiserver is the API server

  • kube-controller-manager and kube-scheduler are other control plane components

  • coredns provides DNS-based service discovery (replacing kube-dns as of 1.11)

  • kube-proxy is the (per-node) component managing port mappings and such

  • weave is the (per-node) component managing the network overlay

  • the READY column indicates the number of containers in each pod

    (1 for most pods, but weave has 2, for instance)

k8s/kubectlget.md

127/791

Scoping another namespace

  • We can also look at a different namespace (other than default)
  • List only the pods in the kube-system namespace:
    kubectl get pods --namespace=kube-system
    kubectl get pods -n kube-system

k8s/kubectlget.md

128/791

Namespaces and other kubectl commands

  • We can use -n/--namespace with almost every kubectl command

  • Example:

    • kubectl create --namespace=X to create something in namespace X
  • We can use -A/--all-namespaces with most commands that manipulate multiple objects

  • Examples:

    • kubectl delete can delete resources across multiple namespaces

    • kubectl label can add/remove/update labels across multiple namespaces
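
A quick illustration (namespace and deployment names are arbitrary):

    kubectl create namespace staging
    kubectl create deployment web --image=nginx --namespace=staging
    kubectl get pods -n staging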

k8s/kubectlget.md

129/791

What about kube-public?

  • List the pods in the kube-public namespace:
    kubectl -n kube-public get pods

Nothing!

kube-public is created by kubeadm & used for security bootstrapping.

k8s/kubectlget.md

130/791

Exploring kube-public

  • The only interesting object in kube-public is a ConfigMap named cluster-info
  • List ConfigMap objects:

    kubectl -n kube-public get configmaps
  • Inspect cluster-info:

    kubectl -n kube-public get configmap cluster-info -o yaml

Note the selfLink URI: /api/v1/namespaces/kube-public/configmaps/cluster-info

We can use that!

k8s/kubectlget.md

131/791

Accessing cluster-info

  • Earlier, when trying to access the API server, we got a Forbidden message

  • But cluster-info is readable by everyone (even without authentication)

  • Retrieve cluster-info:
    curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info
  • We were able to access cluster-info (without auth)

  • It contains a kubeconfig file

k8s/kubectlget.md

132/791

Retrieving kubeconfig

  • We can easily extract the kubeconfig file from this ConfigMap
  • Display the content of kubeconfig:
    curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \
    | jq -r .data.kubeconfig
  • This file holds the canonical address of the API server, and the public key of the CA

  • This file does not hold client keys or tokens

  • This is not sensitive information, but allows us to establish trust

k8s/kubectlget.md

133/791

What about kube-node-lease?

  • Starting with Kubernetes 1.14, there is a kube-node-lease namespace

    (or in Kubernetes 1.13 if the NodeLease feature gate is enabled)

  • That namespace contains one Lease object per node

  • Node leases are a new way to implement node heartbeats

    (i.e. node regularly pinging the control plane to say "I'm alive!")

  • For more details, see KEP-0009 or the node controller documentation

k8s/kubectlget.md

134/791

Services

  • A service is a stable endpoint to connect to "something"

    (In the initial proposal, they were called "portals")

  • List the services on our cluster with one of these commands:
    kubectl get services
    kubectl get svc
135/791

Services

  • A service is a stable endpoint to connect to "something"

    (In the initial proposal, they were called "portals")

  • List the services on our cluster with one of these commands:
    kubectl get services
    kubectl get svc

There is already one service on our cluster: the Kubernetes API itself.

k8s/kubectlget.md

136/791

ClusterIP services

  • A ClusterIP service is internal, available from the cluster only

  • This is useful for introspection from within containers

  • Try to connect to the API:

    curl -k https://10.96.0.1
    • -k is used to skip certificate verification

    • Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by kubectl get svc

The command above should either time out, or show an authentication error. Why?

k8s/kubectlget.md

137/791

Time out

  • Connections to ClusterIP services only work from within the cluster

  • If we are outside the cluster, the curl command will probably time out

    (Because the IP address, e.g. 10.96.0.1, isn't routed properly outside the cluster)

  • This is the case with most "real" Kubernetes clusters

  • To try the connection from within the cluster, we can use shpod

k8s/kubectlget.md

138/791

Authentication error

This is what we should see when connecting from within the cluster:

$ curl -k https://10.96.0.1
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}

k8s/kubectlget.md

139/791

Explanations

  • We can see kind, apiVersion, metadata

  • These are typical of a Kubernetes API reply

  • Because we are talking to the Kubernetes API

  • The Kubernetes API tells us "Forbidden"

    (because it requires authentication)

  • The Kubernetes API is reachable from within the cluster

    (many apps integrating with Kubernetes will use this)

k8s/kubectlget.md

140/791

DNS integration

  • Each service also gets a DNS record

  • The Kubernetes DNS resolver is available from within pods

    (and sometimes, from within nodes, depending on configuration)

  • Code running in pods can connect to services using their name

    (e.g. https://kubernetes/...)
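
To check this from inside the cluster, we can start a throwaway pod and resolve a service name (the pod name is arbitrary):

    kubectl run dnstest --rm -it --restart=Never --image=alpine -- nslookup kubernetes.default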

141/791

:EN:- Getting started with kubectl :FR:- Se familiariser avec kubectl

k8s/kubectlget.md

Image separating from the next part

142/791

Running our first containers on Kubernetes

(automatically generated title slide)

143/791

Running our first containers on Kubernetes

  • First things first: we cannot run a container
144/791

Running our first containers on Kubernetes

  • First things first: we cannot run a container

  • We are going to run a pod, and in that pod there will be a single container

145/791

Running our first containers on Kubernetes

  • First things first: we cannot run a container

  • We are going to run a pod, and in that pod there will be a single container

  • In that container in the pod, we are going to run a simple ping command

k8s/kubectl-run.md

146/791

If you're running Kubernetes 1.17 (or older)...

  • This material assumes that you're running a recent version of Kubernetes

    (at least 1.19)

  • You can check your version number with kubectl version

    (look at the server part)

  • In Kubernetes 1.17 and older, kubectl run creates a Deployment

  • If you're running such an old version:

147/791

Starting a simple pod with kubectl run

  • kubectl run is convenient to start a single pod

  • We need to specify at least a name and the image we want to use

  • Optionally, we can specify the command to run in the pod

  • Let's ping the address of localhost, the loopback interface:
    kubectl run pingpong --image alpine ping 127.0.0.1

The output tells us that a Pod was created:

pod/pingpong created

k8s/kubectl-run.md

148/791

Viewing container output

  • Let's use the kubectl logs command

  • It takes a Pod name as argument

  • Unless specified otherwise, it will only show logs of the first container in the pod

    (Good thing there's only one in ours!)

  • View the result of our ping command:
    kubectl logs pingpong

k8s/kubectl-run.md

149/791

Streaming logs in real time

  • Just like docker logs, kubectl logs supports convenient options:

    • -f/--follow to stream logs in real time (à la tail -f)

    • --tail to indicate how many lines you want to see (from the end)

    • --since to get logs only after a given timestamp

  • View the latest logs of our ping command:

    kubectl logs pingpong --tail 1 --follow
  • Stop it with Ctrl-C

k8s/kubectl-run.md

150/791

Scaling our application

  • kubectl gives us a simple command to scale a workload:

    kubectl scale TYPE NAME --replicas=HOWMANY

  • Let's try it on our Pod, so that we have more Pods!

  • Try to scale the Pod:
    kubectl scale pod pingpong --replicas=3

🤔 We get the following error, what does that mean?

Error from server (NotFound): the server could not find the requested resource

k8s/kubectl-run.md

151/791

Scaling a Pod

  • We cannot "scale a Pod"

    (that's not completely true; we could give it more CPU/RAM)

  • If we want more Pods, we need to create more Pods

    (i.e. execute kubectl run multiple times)

  • There must be a better way!

    (spoiler alert: yes, there is a better way!)

k8s/kubectl-run.md

152/791

NotFound

  • What's the meaning of that error?

    Error from server (NotFound): the server could not find the requested resource
  • When we execute kubectl scale THAT-RESOURCE --replicas=THAT-MANY,
    it is like telling Kubernetes:

    go to THAT-RESOURCE and set the scaling button to position THAT-MANY

  • Pods do not have a "scaling button"

  • Try to execute the kubectl scale pod command with -v6

  • We see a PATCH request to /scale: that's the "scaling button"

    (technically it's called a subresource of the Pod)
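
To see it for yourself, re-run the scale command with extra verbosity and look for the PATCH request on the /scale subresource (and the 404 response):

    kubectl scale pod pingpong --replicas=3 -v6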

k8s/kubectl-run.md

153/791

Creating more pods

  • We are going to create a ReplicaSet

    (= set of replicas = set of identical pods)

  • In fact, we will create a Deployment, which itself will create a ReplicaSet

  • Why so many layers? We'll explain that shortly, don't worry!

k8s/kubectl-run.md

154/791

Creating a Deployment running ping

  • Let's create a Deployment instead of a single Pod
  • Create the Deployment; pay attention to the --:
    kubectl create deployment pingpong --image=alpine -- ping 127.0.0.1
  • The -- is used to separate:

    • "options/flags of kubectl create

    • command to run in the container

k8s/kubectl-run.md

155/791

What has been created?

  • Check the resources that were created:
    kubectl get all

Note: kubectl get all is a lie. It doesn't show everything.

(But it shows a lot of "usual suspects", i.e. commonly used resources.)

k8s/kubectl-run.md

156/791

There's a lot going on here!

NAME                            READY   STATUS    RESTARTS   AGE
pod/pingpong                    1/1     Running   0          4m17s
pod/pingpong-6ccbc77f68-kmgfn   1/1     Running   0          11s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h45

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pingpong   1/1     1            1           11s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/pingpong-6ccbc77f68   1         1         1       11s

Our new Pod is not named pingpong, but pingpong-xxxxxxxxxxx-yyyyy.

We have a Deployment named pingpong, and an extra ReplicaSet, too. What's going on?

k8s/kubectl-run.md

157/791

From Deployment to Pod

We have the following resources:

  • deployment.apps/pingpong

    This is the Deployment that we just created.

  • replicaset.apps/pingpong-xxxxxxxxxx

    This is a Replica Set created by this Deployment.

  • pod/pingpong-xxxxxxxxxx-yyyyy

    This is a pod created by the Replica Set.

Let's explain what these things are.

k8s/kubectl-run.md

158/791

Pod

  • Can have one or multiple containers

  • Runs on a single node

    (Pod cannot "straddle" multiple nodes)

  • Pods cannot be moved

    (e.g. in case of node outage)

  • Pods cannot be scaled horizontally

    (except by manually creating more Pods)

k8s/kubectl-run.md

159/791

Pod details

  • A Pod is not a process; it's an environment for containers

    • it cannot be "restarted"

    • it cannot "crash"

  • The containers in a Pod can crash

  • They may or may not get restarted

    (depending on Pod's restart policy)

  • If all containers exit successfully, the Pod ends in "Succeeded" phase

  • If some containers fail and don't get restarted, the Pod ends in "Failed" phase

k8s/kubectl-run.md

160/791

Replica Set

  • Set of identical (replicated) Pods

  • Defined by a pod template + number of desired replicas

  • If there are not enough Pods, the Replica Set creates more

    (e.g. in case of node outage; or simply when scaling up)

  • If there are too many Pods, the Replica Set deletes some

    (e.g. if a node was disconnected and comes back; or when scaling down)

  • We can scale up/down a Replica Set

    • we update the manifest of the Replica Set

    • as a consequence, the Replica Set controller creates/deletes Pods

k8s/kubectl-run.md

161/791

Deployment

  • Replica Sets control identical Pods

  • Deployments are used to roll out different Pods

    (different image, command, environment variables, ...)

  • When we update a Deployment with a new Pod definition:

    • a new Replica Set is created with the new Pod definition

    • that new Replica Set is progressively scaled up

    • meanwhile, the old Replica Set(s) is(are) scaled down

  • This is a rolling update, minimizing application downtime

  • When we scale up/down a Deployment, it scales up/down its Replica Set
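
We'll see rolling updates in detail later; as a teaser, they are typically triggered and monitored like this (the container is named alpine because kubectl create deployment derives the container name from the image):

    kubectl set image deployment/pingpong alpine=alpine:3.15
    kubectl rollout status deployment/pingpong
    kubectl rollout undo deployment/pingpong    # if something goes wrong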

k8s/kubectl-run.md

162/791

Can we scale now?

  • Let's try kubectl scale again, but on the Deployment!
  • Scale our pingpong deployment:

    kubectl scale deployment pingpong --replicas 3
  • Note that we could also write it like this:

    kubectl scale deployment/pingpong --replicas 3
  • Check that we now have multiple pods:

    kubectl get pods

k8s/kubectl-run.md

163/791

Scaling a Replica Set

  • What if we scale the Replica Set instead of the Deployment?

  • The Deployment would notice it right away and scale back to the initial level

  • The Replica Set makes sure that we have the right numbers of Pods

  • The Deployment makes sure that the Replica Set has the right size

    (conceptually, it delegates the management of the Pods to the Replica Set)

  • This might seem weird (why this extra layer?) but will soon make sense

    (when we will look at how rolling updates work!)

k8s/kubectl-run.md

164/791

Checking Deployment logs

  • kubectl logs needs a Pod name

  • But it can also work with a type/name

    (e.g. deployment/pingpong)

  • View the result of our ping command:
    kubectl logs deploy/pingpong --tail 2
  • It shows us the logs of the first Pod of the Deployment

  • We'll see later how to get the logs of all the Pods!

k8s/kubectl-run.md

165/791

Resilience

  • The deployment pingpong watches its replica set

  • The replica set ensures that the right number of pods are running

  • What happens if pods disappear?

  • In a separate window, watch the list of pods:
    watch kubectl get pods
  • Destroy the pod currently shown by kubectl logs:
    kubectl delete pod pingpong-xxxxxxxxxx-yyyyy

k8s/kubectl-run.md

166/791

What happened?

  • kubectl delete pod terminates the pod gracefully

    (sending it the TERM signal and waiting for it to shut down)

  • As soon as the pod is in "Terminating" state, the Replica Set replaces it

  • But we can still see the output of the "Terminating" pod in kubectl logs

  • Until 30 seconds later, when the grace period expires

  • The pod is then killed, and kubectl logs exits

k8s/kubectl-run.md

167/791

Deleting a standalone Pod

  • What happens if we delete a standalone Pod?

    (like the first pingpong Pod that we created)

  • Delete the Pod:
    kubectl delete pod pingpong
  • No replacement Pod gets created because there is no controller watching it

  • That's why we will rarely use standalone Pods in practice

    (except for e.g. punctual debugging or executing a short supervised task)

168/791

:EN:- Running pods and deployments :FR:- Créer un pod et un déploiement

k8s/kubectl-run.md

Image separating from the next part

169/791

Kubernetes network model

(automatically generated title slide)

170/791

Kubernetes network model

  • TL;DR:

    Our cluster (nodes and pods) is one big flat IP network.

171/791

Kubernetes network model

  • TL;DR:

    Our cluster (nodes and pods) is one big flat IP network.

  • In detail:

    • all nodes must be able to reach each other, without NAT

    • all pods must be able to reach each other, without NAT

    • pods and nodes must be able to reach each other, without NAT

    • each pod is aware of its IP address (no NAT)

    • pod IP addresses are assigned by the network implementation

  • Kubernetes doesn't mandate any particular implementation
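
To see the addresses that the network implementation assigned to our pods (and which node each pod runs on):

    kubectl get pods -o wide -A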

k8s/kubenet.md

172/791

Kubernetes network model: the good

  • Everything can reach everything

  • No address translation

  • No port translation

  • No new protocol

  • The network implementation can decide how to allocate addresses

  • IP addresses don't have to be "portable" from a node to another

    (We can use e.g. a subnet per node and use a simple routed topology)

  • The specification is simple enough to allow many various implementations

k8s/kubenet.md

173/791

Kubernetes network model: the less good

  • Everything can reach everything

    • if you want security, you need to add network policies

    • the network implementation that you use needs to support them

  • There are literally dozens of implementations out there

    (https://github.com/containernetworking/cni/ lists more than 25 plugins)

  • Pods have level 3 (IP) connectivity, but services are level 4 (TCP or UDP)

    (Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets)

  • kube-proxy is on the data path when connecting to a pod or container,
    and it's not particularly fast (relies on userland proxying or iptables)

k8s/kubenet.md

174/791

Kubernetes network model: in practice

  • The nodes that we are using have been set up to use Weave

  • We don't endorse Weave in a particular way, it just Works For Us

  • Don't worry about the warning about kube-proxy performance

  • Unless you:

    • routinely saturate 10G network interfaces
    • count packet rates in millions per second
    • run high-traffic VOIP or gaming platforms
    • do weird things that involve millions of simultaneous connections
      (in which case you're already familiar with kernel tuning)
  • If necessary, there are alternatives to kube-proxy; e.g. kube-router

k8s/kubenet.md

175/791

The Container Network Interface (CNI)

  • Most Kubernetes clusters use CNI "plugins" to implement networking

  • When a pod is created, Kubernetes delegates the network setup to these plugins

    (it can be a single plugin, or a combination of plugins, each doing one task)

  • Typically, CNI plugins will:

    • allocate an IP address (by calling an IPAM plugin)

    • add a network interface into the pod's network namespace

    • configure the interface as well as required routes etc.
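
On a typical node (e.g. node1), the CNI configuration and plugin binaries live in conventional locations (exact paths may vary with the installer):

    ls /etc/cni/net.d/
    ls /opt/cni/bin/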

k8s/kubenet.md

176/791

Multiple moving parts

  • The "pod-to-pod network" or "pod network":

    • provides communication between pods and nodes

    • is generally implemented with CNI plugins

  • The "pod-to-service network":

    • provides internal communication and load balancing

    • is generally implemented with kube-proxy (or e.g. kube-router)

  • Network policies:

    • provide firewalling and isolation

    • can be bundled with the "pod network" or provided by another component

k8s/kubenet.md

177/791

Even more moving parts

  • Inbound traffic can be handled by multiple components:

    • something like kube-proxy or kube-router (for NodePort services)

    • load balancers (ideally, connected to the pod network)

  • It is possible to use multiple pod networks in parallel

    (with "meta-plugins" like CNI-Genie or Multus)

  • Some solutions can fill multiple roles

    (e.g. kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy)

183/791

:EN:- The Kubernetes network model :FR:- Le modèle réseau de Kubernetes

k8s/kubenet.md

Image separating from the next part

184/791

Exposing containers

(automatically generated title slide)

185/791

Exposing containers

  • We can connect to our pods using their IP address

  • Then we need to figure out a lot of things:

    • how do we look up the IP address of the pod(s)?

    • how do we connect from outside the cluster?

    • how do we load balance traffic?

    • what if a pod fails?

  • Kubernetes has a resource type named Service

  • Services address all these questions!

k8s/kubectlexpose.md

186/791

Services in a nutshell

  • Services give us a stable endpoint to connect to a pod or a group of pods

  • An easy way to create a service is to use kubectl expose

  • If we have a deployment named my-little-deploy, we can run:

    kubectl expose deployment my-little-deploy --port=80

    ... and this will create a service with the same name (my-little-deploy)

  • Services are automatically added to an internal DNS zone

    (in the example above, our code can now connect to http://my-little-deploy/)

k8s/kubectlexpose.md

187/791

Advantages of services

  • We don't need to look up the IP address of the pod(s)

    (we resolve the IP address of the service using DNS)

  • There are multiple service types; some of them allow external traffic

    (e.g. LoadBalancer and NodePort)

  • Services provide load balancing

    (for both internal and external traffic)

  • Service addresses are independent from pods' addresses

    (when a pod fails, the service seamlessly sends traffic to its replacement)

k8s/kubectlexpose.md

188/791

Many kinds and flavors of service

  • There are different types of services:

    ClusterIP, NodePort, LoadBalancer, ExternalName

  • There are also headless services

  • Services can also have optional external IPs

  • There is also another resource type called Ingress

    (specifically for HTTP services)

  • Wow, that's a lot! Let's start with the basics ...

k8s/kubectlexpose.md

189/791

ClusterIP

  • It's the default service type

  • A virtual IP address is allocated for the service

    (in an internal, private range; e.g. 10.96.0.0/12)

  • This IP address is reachable only from within the cluster (nodes and pods)

  • Our code can connect to the service using the original port number

  • Perfect for internal communication, within the cluster
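
For reference, the service created earlier with kubectl expose deployment my-little-deploy --port=80 would look roughly like this in YAML (a simplified sketch; some generated fields are omitted):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-little-deploy
    spec:
      type: ClusterIP          # this is the default type, so it can be omitted
      selector:
        app: my-little-deploy
      ports:
      - port: 80               # port on which the service accepts connections
        targetPort: 80         # port on which the pods listen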

k8s/kubectlexpose.md

190/791

LoadBalancer

  • An external load balancer is allocated for the service

    (typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)

  • This is available only when the underlying infrastructure provides some kind of "load balancer as a service"

  • Each service of that type will typically cost a little bit of money

    (e.g. a few cents per hour on AWS or GCE)

  • Ideally, traffic would flow directly from the load balancer to the pods

  • In practice, it will often flow through a NodePort first

k8s/kubectlexpose.md

195/791

NodePort

  • A port number is allocated for the service

    (by default, in the 30000-32767 range)

  • That port is made available on all our nodes and anybody can connect to it

    (we can connect to any node on that port to reach the service)

  • Our code needs to be changed to connect to that new port number

  • Under the hood: kube-proxy sets up a bunch of iptables rules on our nodes

  • Sometimes, it's the only available option for external traffic

    (e.g. most clusters deployed with kubeadm or on-premises)
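
In YAML, a NodePort service looks roughly like this (a sketch; the names and the nodePort value are illustrative, and nodePort can be omitted to let Kubernetes allocate one):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-little-deploy
    spec:
      type: NodePort
      selector:
        app: my-little-deploy
      ports:
      - port: 80          # port of the ClusterIP side of the service
        targetPort: 80    # port on which the pods listen
        nodePort: 30080   # port opened on every node (must be in the NodePort range)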

k8s/kubectlexpose.md

212/791

Running containers with open ports

  • Since ping doesn't have anything to connect to, we'll have to run something else

  • We could use the nginx official image, but ...

    ... we wouldn't be able to tell the backends from each other!

  • We are going to use jpetazzo/color, a tiny HTTP server written in Go

  • jpetazzo/color listens on port 80

  • It serves a page showing the pod's name

    (this will be useful when checking load balancing behavior)

k8s/kubectlexpose.md

213/791

Creating a deployment for our HTTP server

  • We will create a deployment with kubectl create deployment

  • Then we will scale it with kubectl scale

  • In another window, watch the pods (to see when they are created):
    kubectl get pods -w
  • Create a deployment for this very lightweight HTTP server:

    kubectl create deployment blue --image=jpetazzo/color
  • Scale it to 10 replicas:

    kubectl scale deployment blue --replicas=10

k8s/kubectlexpose.md

214/791

Exposing our deployment

  • We'll create a default ClusterIP service
  • Expose the HTTP port of our server:

    kubectl expose deployment blue --port=80
  • Look up which IP address was allocated:

    kubectl get service

k8s/kubectlexpose.md

215/791

Services are layer 4 constructs

  • You can assign IP addresses to services, but they are still layer 4

    (i.e. a service is not an IP address; it's an IP address + protocol + port)

  • This is caused by the current implementation of kube-proxy

    (it relies on mechanisms that don't support layer 3)

  • As a result: you have to indicate the port number for your service

    (with some exceptions, like ExternalName or headless services, covered later)

k8s/kubectlexpose.md

216/791

Testing our service

  • We will now send a few HTTP requests to our pods
  • Let's obtain the IP address that was allocated for our service, programmatically:
    IP=$(kubectl get svc blue -o go-template --template '{{ .spec.clusterIP }}')
  • Send a few requests:
    curl http://$IP:80/
217/791

Testing our service

  • We will now send a few HTTP requests to our pods
  • Let's obtain the IP address that was allocated for our service, programmatically:
    IP=$(kubectl get svc blue -o go-template --template '{{ .spec.clusterIP }}')
  • Send a few requests:
    curl http://$IP:80/

Try it a few times! Our requests are load balanced across multiple pods.
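
For instance, a quick way to see the load balancing in action (just a simple shell loop, nothing Kubernetes-specific):

    for i in $(seq 10); do curl -s http://$IP/; done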

k8s/kubectlexpose.md

218/791

ExternalName

  • Services of type ExternalName are quite different

  • No load balancer (internal or external) is created

  • Only a DNS entry gets added to the DNS managed by Kubernetes

  • That DNS entry will just be a CNAME to a provided record

Example:

kubectl create service externalname k8s --external-name kubernetes.io

Creates a CNAME k8s pointing to kubernetes.io
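
The equivalent YAML would look roughly like this (sketch):

    apiVersion: v1
    kind: Service
    metadata:
      name: k8s
    spec:
      type: ExternalName
      externalName: kubernetes.io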

k8s/kubectlexpose.md

219/791

External IPs

  • We can add an External IP to a service, e.g.:

    kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4
  • 1.2.3.4 should be the address of one of our nodes

    (it could also be a virtual address, service address, or VIP, shared by multiple nodes)

  • Connections to 1.2.3.4:80 will be sent to our service

  • External IPs will also show up on services of type LoadBalancer

    (they will be added automatically by the process provisioning the load balancer)

k8s/kubectlexpose.md

220/791

Headless services

  • Sometimes, we want to access our scaled services directly:

    • if we want to save a tiny little bit of latency (typically less than 1ms)

    • if we need to connect over arbitrary ports (instead of a few fixed ones)

    • if we need to communicate over a protocol other than UDP or TCP

    • if we want to decide how to balance the requests client-side

    • ...

  • In that case, we can use a "headless service"

k8s/kubectlexpose.md

221/791

Creating a headless service

  • A headless service is obtained by setting the clusterIP field to None

    (Either with --cluster-ip=None, or by providing a custom YAML)

  • As a result, the service doesn't have a virtual IP address

  • Since there is no virtual IP address, there is no load balancer either

  • CoreDNS will return the pods' IP addresses as multiple A records

  • This gives us an easy way to discover all the replicas for a deployment
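
A minimal headless service manifest could look like this (a sketch; the name is hypothetical, and the selector assumes the blue deployment created earlier):

    apiVersion: v1
    kind: Service
    metadata:
      name: blue-headless
    spec:
      clusterIP: None       # this is what makes the service headless
      selector:
        app: blue
      ports:
      - port: 80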

k8s/kubectlexpose.md

222/791

Services and endpoints

  • A service has a number of "endpoints"

  • Each endpoint is a host + port where the service is available

  • The endpoints are maintained and updated automatically by Kubernetes

  • Check the endpoints that Kubernetes has associated with our blue service:
    kubectl describe service blue

In the output, there will be a line starting with Endpoints:.

That line will list a bunch of addresses in host:port format.

k8s/kubectlexpose.md

223/791

Viewing endpoint details

  • When we have many endpoints, our display commands truncate the list

    kubectl get endpoints
  • If we want to see the full list, we can use one of the following commands:

    kubectl describe endpoints blue
    kubectl get endpoints blue -o yaml
  • These commands will show us a list of IP addresses

  • These IP addresses should match the addresses of the corresponding pods:

    kubectl get pods -l app=blue -o wide

k8s/kubectlexpose.md

224/791

endpoints not endpoint

  • endpoints is the only resource that cannot be singular
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
  • This is because the type itself is plural (unlike every other resource)

  • There is no endpoint object: type Endpoints struct

  • The type doesn't represent a single endpoint, but a list of endpoints

k8s/kubectlexpose.md

225/791

The DNS zone

  • In the kube-system namespace, there should be a service named kube-dns

  • This is the internal DNS server that can resolve service names

  • The services we created live in the default namespace, so their domain names end with default.svc.cluster.local

  • Get the IP address of the internal DNS server:

    IP=$(kubectl -n kube-system get svc kube-dns -o jsonpath={.spec.clusterIP})
  • Resolve the cluster IP for the blue service:

    host blue.default.svc.cluster.local $IP
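
We can also check name resolution from within the cluster, e.g. with a throwaway pod (a sketch; the pod name and image are arbitrary):

    kubectl run dnstest --image=alpine --restart=Never --rm -it -- nslookup blue.default.svc.cluster.local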

k8s/kubectlexpose.md

226/791

Ingress

  • Ingresses are another type (kind) of resource

  • They are specifically for HTTP services

    (not TCP or UDP)

  • They can also handle TLS certificates, URL rewriting ...

  • They require an Ingress Controller to function

k8s/kubectlexpose.md

227/791

231/791

:EN:- Service discovery and load balancing :EN:- Accessing pods through services :EN:- Service types: ClusterIP, NodePort, LoadBalancer

:FR:- Exposer un service :FR:- Différents types de services : ClusterIP, NodePort, LoadBalancer :FR:- Utiliser CoreDNS pour la service discovery

k8s/kubectlexpose.md

Image separating from the next part

232/791

Shipping images with a registry

(automatically generated title slide)

233/791

Shipping images with a registry

  • Initially, our app was running on a single node

  • We could build and run in the same place

  • Therefore, we did not need to ship anything

  • Now that we want to run on a cluster, things are different

  • The easiest way to ship container images is to use a registry

k8s/shippingimages.md

234/791

How Docker registries work (a reminder)

  • What happens when we execute docker run alpine ?

  • If the Engine needs to pull the alpine image, it expands it into library/alpine

  • library/alpine is expanded into index.docker.io/library/alpine

  • The Engine communicates with index.docker.io to retrieve library/alpine:latest

  • To use something other than index.docker.io, we specify it in the image name

  • Examples:

    docker pull gcr.io/google-containers/alpine-with-bash:1.0
    docker build -t registry.mycompany.io:5000/myimage:awesome .
    docker push registry.mycompany.io:5000/myimage:awesome

k8s/shippingimages.md

235/791

Running DockerCoins on Kubernetes

  • Create one deployment for each component

    (hasher, redis, rng, webui, worker)

  • Expose deployments that need to accept connections

    (hasher, redis, rng, webui)

  • For redis, we can use the official redis image

  • For the 4 others, we need to build images and push them to some registry

k8s/shippingimages.md

236/791

Building and shipping images

  • There are many options!

  • Manually:

    • build locally (with docker build or otherwise)

    • push to the registry

  • Automatically:

    • build and test locally

    • when ready, commit and push to a code repository

    • the code repository notifies an automated build system

    • that system gets the code, builds it, pushes the image to the registry

k8s/shippingimages.md

237/791

Which registry do we want to use?

  • There are SAAS products like Docker Hub, Quay ...

  • Each major cloud provider has an option as well

    (ACR on Azure, ECR on AWS, GCR on Google Cloud...)

  • There are also commercial products to run our own registry

    (Docker EE, Quay...)

  • And open source options, too!

  • When picking a registry, pay attention to its build system

    (when it has one)

k8s/shippingimages.md

238/791

Building on the fly

  • Conceptually, it is possible to build images on the fly from a repository

  • Example: ctr.run

    (deprecated in August 2020, after being acquired by Datadog)

  • It did allow something like this:

    docker run ctr.run/github.com/jpetazzo/container.training/dockercoins/hasher
  • No alternative yet

    (free startup idea, anyone?)

239/791

:EN:- Shipping images to Kubernetes :FR:- Déployer des images sur notre cluster

k8s/shippingimages.md

Using images from the Docker Hub

  • For everyone's convenience, we took care of building DockerCoins images

  • We pushed these images to the Docker Hub, under the dockercoins user

  • These images are tagged with a version number, v0.1

  • The full image names are therefore:

    • dockercoins/hasher:v0.1

    • dockercoins/rng:v0.1

    • dockercoins/webui:v0.1

    • dockercoins/worker:v0.1

k8s/buildshiprun-dockerhub.md

240/791

Image separating from the next part

241/791

Exercise — Deploy Dockercoins

(automatically generated title slide)

242/791

Exercise — Deploy Dockercoins

  • We want to deploy the dockercoins app

  • There are 5 components in the app:

    hasher, redis, rng, webui, worker

  • We'll use one Deployment for each component

    (created with kubectl create deployment)

  • We'll connect them with Services

    (created with kubectl expose)

exercises/k8sfundamentals-details.md

243/791

Images

  • We'll use the following images:

    • hasher → dockercoins/hasher:v0.1

    • redis → redis

    • rng → dockercoins/rng:v0.1

    • webui → dockercoins/webui:v0.1

    • worker → dockercoins/worker:v0.1

  • All services should be internal services, except the web UI

    (since we want to be able to connect to the web UI from outside)

exercises/k8sfundamentals-details.md

244/791

Goal

  • We should be able to see the web UI in our browser

    (with the graph showing approximately 3-4 hashes/second)

exercises/k8sfundamentals-details.md

246/791

Hints

  • Make sure to expose services with the right ports

    (check the logs of the worker; they indicate the port numbers)

  • The web UI can be exposed with a NodePort or LoadBalancer Service

exercises/k8sfundamentals-details.md

247/791

Image separating from the next part

248/791

Running our application on Kubernetes

(automatically generated title slide)

249/791

Running our application on Kubernetes

  • We can now deploy our code (as well as a redis instance)
  • Deploy redis:

    kubectl create deployment redis --image=redis
  • Deploy everything else:

    kubectl create deployment hasher --image=dockercoins/hasher:v0.1
    kubectl create deployment rng --image=dockercoins/rng:v0.1
    kubectl create deployment webui --image=dockercoins/webui:v0.1
    kubectl create deployment worker --image=dockercoins/worker:v0.1

k8s/ourapponkube.md

250/791

Deploying other images

  • If we wanted to deploy images from another registry ...

  • ... Or with a different tag ...

  • ... We could use the following snippet:

REGISTRY=dockercoins
TAG=v0.1
for SERVICE in hasher rng webui worker; do
  kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done

k8s/ourapponkube.md

251/791

Is this working?

  • After waiting for the deployment to complete, let's look at the logs!

    (Hint: use kubectl get deploy -w to watch deployment events)

  • Look at some logs:
    kubectl logs deploy/rng
    kubectl logs deploy/worker
252/791

Is this working?

  • After waiting for the deployment to complete, let's look at the logs!

    (Hint: use kubectl get deploy -w to watch deployment events)

  • Look at some logs:
    kubectl logs deploy/rng
    kubectl logs deploy/worker

🤔 rng is fine ... But not worker.

253/791

Is this working?

  • After waiting for the deployment to complete, let's look at the logs!

    (Hint: use kubectl get deploy -w to watch deployment events)

  • Look at some logs:
    kubectl logs deploy/rng
    kubectl logs deploy/worker

🤔 rng is fine ... But not worker.

💡 Oh right! We forgot to expose.

k8s/ourapponkube.md

254/791

Connecting containers together

  • Three deployments need to be reachable by others: hasher, redis, rng

  • worker doesn't need to be exposed

  • webui will be dealt with later

  • Expose each deployment, specifying the right port:
    kubectl expose deployment redis --port 6379
    kubectl expose deployment rng --port 80
    kubectl expose deployment hasher --port 80

k8s/ourapponkube.md

255/791

Is this working yet?

  • The worker runs an infinite loop that retries 10 seconds after an error
  • Stream the worker's logs:

    kubectl logs deploy/worker --follow

    (Give it about 10 seconds to recover)

256/791

Is this working yet?

  • The worker runs an infinite loop that retries 10 seconds after an error
  • Stream the worker's logs:

    kubectl logs deploy/worker --follow

    (Give it about 10 seconds to recover)

We should now see the worker, well, working happily.

k8s/ourapponkube.md

257/791

Exposing services for external access

  • Now we would like to access the Web UI

  • We will expose it with a NodePort

    (just like we did for the registry)

  • Create a NodePort service for the Web UI:

    kubectl expose deploy/webui --type=NodePort --port=80
  • Check the port that was allocated:

    kubectl get svc

k8s/ourapponkube.md

258/791

Accessing the web UI

  • We can now connect to any node, on the allocated node port, to view the web UI
259/791

Accessing the web UI

  • We can now connect to any node, on the allocated node port, to view the web UI

Yes, this may take a little while to update. (Narrator: it was DNS.)

260/791

Accessing the web UI

  • We can now connect to any node, on the allocated node port, to view the web UI

Yes, this may take a little while to update. (Narrator: it was DNS.)

Alright, we're back to where we started, when we were running on a single node!

261/791

:EN:- Running our demo app on Kubernetes :FR:- Faire tourner l'application de démo sur Kubernetes

k8s/ourapponkube.md

Image separating from the next part

262/791

Labels and annotations

(automatically generated title slide)

263/791

Labels and annotations

  • Most Kubernetes resources can have labels and annotations

  • Both labels and annotations are arbitrary strings

    (with some limitations that we'll explain in a minute)

  • Both labels and annotations can be added, removed, changed, dynamically

  • This can be done with:

    • the kubectl edit command

    • the kubectl label and kubectl annotate

    • ... many other ways! (kubectl apply -f, kubectl patch, ...)

k8s/labels-annotations.md

264/791

Viewing labels and annotations

  • Let's see what we get when we create a Deployment
  • Create a Deployment:

    kubectl create deployment clock --image=jpetazzo/clock
  • Look at its annotations and labels:

    kubectl describe deployment clock

So, what do we get?

k8s/labels-annotations.md

265/791

Labels and annotations for our Deployment

  • We see one label:

    Labels: app=clock
  • This is added by kubectl create deployment

  • And one annotation:

    Annotations: deployment.kubernetes.io/revision: 1
  • This is to keep track of successive versions when doing rolling updates

k8s/labels-annotations.md

266/791
  • Let's look up the Pod that was created and check it too
  • Find the name of the Pod:

    kubectl get pods
  • Display its information:

    kubectl describe pod clock-xxxxxxxxxx-yyyyy

So, what do we get?

k8s/labels-annotations.md

267/791

Labels and annotations for our Pod

  • We see two labels:

    Labels: app=clock
    pod-template-hash=xxxxxxxxxx
  • app=clock comes from kubectl create deployment too

  • pod-template-hash was assigned by the Replica Set

    (when we will do rolling updates, each set of Pods will have a different hash)

  • There are no annotations:

    Annotations: <none>

k8s/labels-annotations.md

268/791

Selectors

  • A selector is an expression matching labels

  • It will restrict a command to the objects matching at least all these labels

  • List all the pods with at least app=clock:

    kubectl get pods --selector=app=clock
  • List all the pods with a label app, regardless of its value:

    kubectl get pods --selector=app

k8s/labels-annotations.md

269/791

Setting labels and annotations

  • The easiest method is to use kubectl label and kubectl annotate
  • Set a label on the clock Deployment:

    kubectl label deployment clock color=blue
  • Check it out:

    kubectl describe deployment clock

k8s/labels-annotations.md

270/791

Other ways to view labels

  • kubectl get gives us a couple of useful flags to check labels

  • kubectl get --show-labels shows all labels

  • kubectl get -L xyz shows the value of label xyz

  • List all the labels that we have on pods:

    kubectl get pods --show-labels
  • List the value of label app on these pods:

    kubectl get pods -L app

k8s/labels-annotations.md

271/791

More on selectors

  • If a selector has multiple labels, it means "match at least these labels"

    Example: --selector=app=frontend,release=prod

  • --selector can be abbreviated as -l (for labels)

    We can also use negative selectors

    Example: --selector=app!=clock

  • Selectors can be used with most kubectl commands

    Examples: kubectl delete, kubectl label, ...
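
For instance, this would delete the clock pods created earlier (their Deployment would immediately recreate them):

    kubectl delete pods -l app=clock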

k8s/labels-annotations.md

272/791

Other ways to view labels

  • We can use the --show-labels flag with kubectl get
  • Show labels for a bunch of objects:
    kubectl get --show-labels po,rs,deploy,svc,no

k8s/labels-annotations.md

273/791

Differences between labels and annotations

  • The key for both labels and annotations:

    • must start and end with a letter or digit

    • can also have . - _ (but not in first or last position)

    • can be up to 63 characters, or 253 + / + 63

  • Label values are up to 63 characters, with the same restrictions

  • Annotations values can have arbitrary characters (yes, even binary)

  • Maximum length isn't defined

    (dozens of kilobytes is fine, hundreds maybe not so much)

274/791

:EN:- Labels and annotations :FR:- Labels et annotations

k8s/labels-annotations.md

Image separating from the next part

275/791

Revisiting kubectl logs

(automatically generated title slide)

276/791

Revisiting kubectl logs

  • In this section, we assume that we have a Deployment with multiple Pods

    (e.g. pingpong that we scaled to at least 3 pods)

  • We will highlight some of the limitations of kubectl logs

k8s/kubectl-logs.md

277/791

Streaming logs of multiple pods

  • By default, kubectl logs shows us the output of a single Pod
  • Try to check the output of the Pods related to a Deployment:
    kubectl logs deploy/pingpong --tail 1 --follow

kubectl logs only shows us the logs of one of the Pods.

k8s/kubectl-logs.md

278/791

Viewing logs of multiple pods

  • When we specify a deployment name, only a single pod's logs are shown

  • We can view the logs of multiple pods by specifying a selector

  • If we check the pods created by the deployment, they all have the label app=pingpong

    (this is just a default label that gets added when using kubectl create deployment)

  • View the last line of log from all pods with the app=pingpong label:
    kubectl logs -l app=pingpong --tail 1

k8s/kubectl-logs.md

279/791

Streaming logs of multiple pods

  • Can we stream the logs of all our pingpong pods?
  • Combine -l and -f flags:
    kubectl logs -l app=pingpong --tail 1 -f

Note: combining -l and -f is only possible since Kubernetes 1.14!

Let's try to understand why ...

k8s/kubectl-logs.md

280/791

Streaming logs of many pods

  • Let's see what happens if we try to stream the logs for more than 5 pods
  • Scale up our deployment:

    kubectl scale deployment pingpong --replicas=8
  • Stream the logs:

    kubectl logs -l app=pingpong --tail 1 -f

We see a message like the following one:

error: you are attempting to follow 8 log streams,
but maximum allowed concurrency is 5,
use --max-log-requests to increase the limit

k8s/kubectl-logs.md

281/791

Why can't we stream the logs of many pods?

  • kubectl opens one connection to the API server per pod

  • For each pod, the API server opens one extra connection to the corresponding kubelet

  • If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server

  • This could easily put a lot of stress on the API server

  • Prior to Kubernetes 1.14, it was decided not to allow multiple connections

  • From Kubernetes 1.14, it is allowed, but limited to 5 connections

    (this can be changed with --max-log-requests)

  • For more details about the rationale, see PR #67573

k8s/kubectl-logs.md

282/791

Shortcomings of kubectl logs

  • We don't see which pod sent which log line

  • If pods are restarted / replaced, the log stream stops

  • If new pods are added, we don't see their logs

  • To stream the logs of multiple pods, we need to write a selector

  • There are external tools to address these shortcomings

    (e.g.: Stern)

k8s/kubectl-logs.md

283/791

kubectl logs -l ... --tail N

  • If we run this with Kubernetes 1.12, the last command shows multiple lines

  • This is a regression when --tail is used together with -l/--selector

  • It always shows the last 10 lines of output for each container

    (instead of the number of lines specified on the command line)

  • The problem was fixed in Kubernetes 1.13

See #70554 for details.

284/791

:EN:- Viewing logs with "kubectl logs" :FR:- Consulter les logs avec "kubectl logs"

k8s/kubectl-logs.md

Image separating from the next part

285/791

Accessing logs from the CLI

(automatically generated title slide)

286/791

Accessing logs from the CLI

  • The kubectl logs command has limitations:

    • it cannot stream logs from multiple pods at a time

    • when showing logs from multiple pods, it mixes them all together

  • We are going to see how to do it better

k8s/logs-cli.md

287/791

Doing it manually

  • We could (if we were so inclined) write a program or script that would:

    • take a selector as an argument

    • enumerate all pods matching that selector (with kubectl get -l ...)

    • fork one kubectl logs --follow ... command per container

    • annotate the logs (the output of each kubectl logs ... process) with their origin

    • preserve ordering by using kubectl logs --timestamps ... and merge the output

288/791

Doing it manually

  • We could (if we were so inclined) write a program or script that would:

    • take a selector as an argument

    • enumerate all pods matching that selector (with kubectl get -l ...)

    • fork one kubectl logs --follow ... command per container

    • annotate the logs (the output of each kubectl logs ... process) with their origin

    • preserve ordering by using kubectl logs --timestamps ... and merge the output

  • We could do it, but thankfully, others did it for us already!
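
For illustration, a naive version could look like this (just a sketch, taking a selector as its only argument; it prefixes each line with the pod name, but doesn't merge the streams by timestamp):

    #!/bin/sh
    # Stream the logs of every pod matching the selector passed in $1
    SELECTOR=$1
    for POD in $(kubectl get pods -l "$SELECTOR" -o name); do
      # Prefix each log line with the pod name; run each stream in the background
      kubectl logs --follow --timestamps "$POD" | sed -e "s|^|$POD |" &
    done
    # Wait for all the background kubectl processes
    wait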

k8s/logs-cli.md

289/791

Stern

Stern is an open source project originally by Wercker.

From the README:

Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.

The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.

Exactly what we need!

k8s/logs-cli.md

290/791

Checking if Stern is installed

  • Run stern (without arguments) to check if it's installed:

    $ stern
    Tail multiple pods and containers from Kubernetes
    Usage:
    stern pod-query [flags]
  • If it's missing, let's see how to install it

k8s/logs-cli.md

291/791

Installing Stern

  • Stern is written in Go

  • Go programs are usually very easy to install

    (no dependencies or extra libraries to install, etc.)

  • Binary releases are available here on GitHub

  • Stern is also available through most package managers

    (e.g. on macOS, we can brew install stern or sudo port install stern)

k8s/logs-cli.md

292/791

Using Stern

  • There are two ways to specify the pods whose logs we want to see:

    • -l followed by a selector expression (like with many kubectl commands)

    • with a "pod query," i.e. a regex used to match pod names

  • These two ways can be combined if necessary

  • View the logs for all the pingpong containers:
    stern pingpong

k8s/logs-cli.md

293/791

Stern convenient options

  • The --tail N flag shows the last N lines for each container

    (Instead of showing the logs since the creation of the container)

  • The -t / --timestamps flag shows timestamps

  • The --all-namespaces flag is self-explanatory

  • View what's up with the weave system containers:
    stern --tail 1 --timestamps --all-namespaces weave

k8s/logs-cli.md

294/791

Using Stern with a selector

  • When specifying a selector, we can omit the value for a label

  • This will match all objects having that label (regardless of the value)

  • Everything created with kubectl run has a label run

  • Everything created with kubectl create deployment has a label app

  • We can use that property to view the logs of all the pods created with kubectl create deployment

  • View the logs for all the things started with kubectl create deployment:
    stern -l app
295/791

:EN:- Viewing pod logs from the CLI :FR:- Consulter les logs des pods depuis la CLI

k8s/logs-cli.md

Image separating from the next part

296/791

Namespaces

(automatically generated title slide)

297/791

Namespaces

  • We would like to deploy another copy of DockerCoins on our cluster

  • We could rename all our deployments and services:

    hasher → hasher2, redis → redis2, rng → rng2, etc.

  • That would require updating the code

  • There has to be a better way!

298/791

Namespaces

  • We would like to deploy another copy of DockerCoins on our cluster

  • We could rename all our deployments and services:

    hasher → hasher2, redis → redis2, rng → rng2, etc.

  • That would require updating the code

  • There has to be a better way!

  • As hinted by the title of this section, we will use namespaces

k8s/namespaces.md

299/791

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

300/791

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

  • We cannot have two resources of the same kind with the same name

    (but it's OK to have an rng service, an rng deployment, and an rng daemon set)

301/791

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

  • We cannot have two resources of the same kind with the same name

    (but it's OK to have an rng service, an rng deployment, and an rng daemon set)

  • We cannot have two resources of the same kind with the same name in the same namespace

    (but it's OK to have e.g. two rng services in different namespaces)

302/791

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

  • We cannot have two resources of the same kind with the same name

    (but it's OK to have an rng service, an rng deployment, and an rng daemon set)

  • We cannot have two resources of the same kind with the same name in the same namespace

    (but it's OK to have e.g. two rng services in different namespaces)

  • Except for resources that exist at the cluster scope

    (these do not belong to a namespace)

k8s/namespaces.md

303/791

Uniquely identifying a resource

  • For namespaced resources:

    the tuple (kind, name, namespace) needs to be unique

  • For resources at the cluster scope:

    the tuple (kind, name) needs to be unique

  • List resource types again, and check the NAMESPACED column:
    kubectl api-resources

k8s/namespaces.md

304/791

Pre-existing namespaces

  • If we deploy a cluster with kubeadm, we have three or four namespaces:

    • default (for our applications)

    • kube-system (for the control plane)

    • kube-public (contains one ConfigMap for cluster discovery)

    • kube-node-lease (in Kubernetes 1.14 and later; contains Lease objects)

  • If we deploy differently, we may have different namespaces

k8s/namespaces.md

305/791

Creating namespaces

  • Let's see two identical methods to create a namespace
  • We can use kubectl create namespace:

    kubectl create namespace blue
  • Or we can construct a very minimal YAML snippet:

    kubectl apply -f- <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: blue
    EOF

k8s/namespaces.md

306/791

Using namespaces

  • We can pass a -n or --namespace flag to most kubectl commands:

    kubectl -n blue get svc
  • We can also change our current context

  • A context is a (user, cluster, namespace) tuple

  • We can manipulate contexts with the kubectl config command

k8s/namespaces.md

307/791

Viewing existing contexts

  • On our training environments, at this point, there should be only one context
  • View existing contexts to see the cluster name and the current user:
    kubectl config get-contexts
  • The current context (the only one!) is tagged with a *

  • What are NAME, CLUSTER, AUTHINFO, and NAMESPACE?

k8s/namespaces.md

308/791

What's in a context

  • NAME is an arbitrary string to identify the context

  • CLUSTER is a reference to a cluster

    (i.e. API endpoint URL, and optional certificate)

  • AUTHINFO is a reference to the authentication information to use

    (i.e. a TLS client certificate, token, or otherwise)

  • NAMESPACE is the namespace

    (empty string = default)

k8s/namespaces.md

309/791

Switching contexts

  • We want to use a different namespace

  • Solution 1: update the current context

    This is appropriate if we need to change just one thing (e.g. namespace or authentication).

  • Solution 2: create a new context and switch to it

    This is appropriate if we need to change multiple things and switch back and forth.

  • Let's go with solution 1!

k8s/namespaces.md

310/791

Updating a context

  • This is done through kubectl config set-context

  • We can update a context by passing its name, or the current context with --current

  • Update the current context to use the blue namespace:

    kubectl config set-context --current --namespace=blue
  • Check the result:

    kubectl config get-contexts

k8s/namespaces.md

311/791

Using our new namespace

  • Let's check that we are in our new namespace, then deploy a new copy of Dockercoins
  • Verify that the new context is empty:
    kubectl get all

k8s/namespaces.md

312/791

Deploying DockerCoins with YAML files

  • The GitHub repository jpetazzo/kubercoins contains everything we need!
  • Clone the kubercoins repository:

    cd ~
    git clone https://github.com/jpetazzo/kubercoins
  • Create all the DockerCoins resources:

    kubectl create -f kubercoins

If the argument to -f is a directory, all the files in that directory are processed.

The subdirectories are not processed, unless we also add the -R flag.

k8s/namespaces.md

313/791

Viewing the deployed app

  • Let's see if this worked correctly!
  • Retrieve the port number allocated to the webui service:

    kubectl get svc webui
  • Point our browser to http://X.X.X.X:3xxxx

If the graph shows up but stays at zero, give it a minute or two!

k8s/namespaces.md

314/791

Namespaces and isolation

  • Namespaces do not provide isolation

  • A pod in the green namespace can communicate with a pod in the blue namespace

  • A pod in the default namespace can communicate with a pod in the kube-system namespace

  • CoreDNS uses a different subdomain for each namespace

  • Example: from any pod in the cluster, you can connect to the Kubernetes API with:

    https://kubernetes.default.svc.cluster.local:443/
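
Likewise, from any namespace, the rng service that we just deployed in the blue namespace resolves as:

    rng.blue.svc.cluster.local

    (or just rng.blue, thanks to the DNS search list in the pods' resolv.conf)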

k8s/namespaces.md

315/791

Isolating pods

  • Actual isolation is implemented with network policies

  • Network policies are resources (like deployments, services, namespaces...)

  • Network policies specify which flows are allowed:

    • between pods

    • from pods to the outside world

    • and vice-versa

k8s/namespaces.md

316/791

Switch back to the default namespace

  • Let's make sure that we don't run future exercises and labs in the blue namespace
  • Switch back to the original context:
    kubectl config set-context --current --namespace=

Note: we could have used --namespace=default for the same result.

k8s/namespaces.md

317/791

Switching namespaces more easily

  • We can also use a little helper tool called kubens:

    # Switch to namespace foo
    kubens foo
    # Switch back to the previous namespace
    kubens -
  • On our clusters, kubens is called kns instead

    (so that it's even fewer keystrokes to switch namespaces)

k8s/namespaces.md

318/791

kubens and kubectx

  • With kubens, we can switch quickly between namespaces

  • With kubectx, we can switch quickly between contexts

  • Both tools are simple shell scripts available from https://github.com/ahmetb/kubectx

  • On our clusters, they are installed as kns and kctx

    (for brevity and to avoid completion clashes between kubectx and kubectl)

k8s/namespaces.md

319/791

kube-ps1

  • It's easy to lose track of our current cluster / context / namespace

  • kube-ps1 makes it easy to track these, by showing them in our shell prompt

  • It is installed on our training clusters, and when using shpod

  • It gives us a prompt looking like this one:

    [123.45.67.89] (kubernetes-admin@kubernetes:default) docker@node1 ~

    (The highlighted part is context:namespace, managed by kube-ps1)

  • Highly recommended if you work across multiple contexts or namespaces!

k8s/namespaces.md

320/791

Installing kube-ps1

  • It's a simple shell script available from https://github.com/jonmosco/kube-ps1

  • It needs to be installed in our profile/rc files

    (instructions differ depending on platform, shell, etc.)

  • Once installed, it defines aliases called kube_ps1, kubeon, kubeoff

    (to selectively enable/disable it when needed)

  • Pro-tip: install it on your machine during the next break!

321/791

:EN:- Organizing resources with Namespaces :FR:- Organiser les ressources avec des namespaces

k8s/namespaces.md

Image separating from the next part

322/791

Deploying with YAML

(automatically generated title slide)

323/791

Deploying with YAML

  • So far, we created resources with the following commands:

    • kubectl run

    • kubectl create deployment

    • kubectl expose

  • We can also create resources directly with YAML manifests

k8s/yamldeploy.md

324/791

kubectl apply vs create

  • kubectl create -f whatever.yaml

    • creates resources if they don't exist

    • if resources already exist, don't alter them
      (and display error message)

  • kubectl apply -f whatever.yaml

    • creates resources if they don't exist

    • if resources already exist, update them
      (to match the definition provided by the YAML file)

    • stores the manifest as an annotation in the resource

k8s/yamldeploy.md

325/791

Creating multiple resources

  • The manifest can contain multiple resources separated by ---
kind: ...
apiVersion: ...
metadata:
  name: ...
  ...
---
kind: ...
apiVersion: ...
metadata:
  name: ...
  ...

k8s/yamldeploy.md

326/791

Creating multiple resources

  • The manifest can also contain a list of resources
apiVersion: v1
kind: List
items:
- kind: ...
  apiVersion: ...
  ...
- kind: ...
  apiVersion: ...
  ...

k8s/yamldeploy.md

327/791

Deploying dockercoins with YAML

  • We provide a YAML manifest with all the resources for Dockercoins

    (Deployments and Services)

  • We can use it if we need to deploy or redeploy Dockercoins

  • Deploy or redeploy Dockercoins:
    kubectl apply -f ~/container.training/k8s/dockercoins.yaml

(If we deployed Dockercoins earlier, we will see warning messages, because the resources that we created lack the necessary annotation. We can safely ignore them.)

k8s/yamldeploy.md

328/791

Deleting resources

  • We can also use a YAML file to delete resources

  • kubectl delete -f ... will delete all the resources mentioned in a YAML file

    (useful to clean up everything that was created by kubectl apply -f ...)

  • The definitions of the resources don't matter

    (just their kind, apiVersion, and name)

k8s/yamldeploy.md

329/791

Pruning¹ resources

  • We can also tell kubectl to remove old resources

  • This is done with kubectl apply -f ... --prune

  • It will remove resources that don't exist in the YAML file(s)

  • But only if they were created with kubectl apply in the first place

    (technically, if they have an annotation kubectl.kubernetes.io/last-applied-configuration)

¹If English is not your first language: to prune means to remove dead or overgrown branches in a tree, to help it to grow.
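
For example (a sketch; the directory and the label selector are illustrative):

    kubectl apply -f ./manifests/ --prune -l app=dockercoins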

k8s/yamldeploy.md

330/791

YAML as source of truth

  • Imagine the following workflow:

    • do not use kubectl run, kubectl create deployment, kubectl expose ...

    • define everything with YAML

    • kubectl apply -f ... --prune --all that YAML

    • keep that YAML under version control

    • enforce all changes to go through that YAML (e.g. with pull requests)

  • Our version control system now has a full history of what we deploy

  • Comparable to "Infrastructure-as-Code", but for app deployments

k8s/yamldeploy.md

331/791

Specifying the namespace

  • When creating resources from YAML manifests, the namespace is optional

  • If we specify a namespace:

    • resources are created in the specified namespace

    • this is typical for things deployed only once per cluster

    • example: system components, cluster add-ons ...

  • If we don't specify a namespace:

    • resources are created in the current namespace

    • this is typical for things that may be deployed multiple times

    • example: applications (production, staging, feature branches ...)

332/791

:EN:- Deploying with YAML manifests :FR:- Déployer avec des manifests YAML

k8s/yamldeploy.md

Image separating from the next part

333/791

Declarative vs imperative

(automatically generated title slide)

334/791

Declarative vs imperative

  • Our container orchestrator puts a very strong emphasis on being declarative

  • Declarative:

    I would like a cup of tea.

  • Imperative:

    Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.

335/791

Declarative vs imperative

  • Our container orchestrator puts a very strong emphasis on being declarative

  • Declarative:

    I would like a cup of tea.

  • Imperative:

    Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.

  • Declarative seems simpler at first ...

336/791

Declarative vs imperative

  • Our container orchestrator puts a very strong emphasis on being declarative

  • Declarative:

    I would like a cup of tea.

  • Imperative:

    Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.

  • Declarative seems simpler at first ...

  • ... As long as you know how to brew tea

shared/declarative.md

337/791

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

338/791

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

339/791

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

    ²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.

340/791

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

    ²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.

    ³Ah, finally, containers! Something we know about. Let's get to work, shall we?

341/791

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

    ²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.

    ³Ah, finally, containers! Something we know about. Let's get to work, shall we?

Did you know there was an ISO standard specifying how to brew tea?

shared/declarative.md

342/791

Declarative vs imperative

  • Imperative systems:

    • simpler

    • if a task is interrupted, we have to restart from scratch

  • Declarative systems:

    • if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary

    • we need to be able to observe the system

    • ... and compute a "diff" between what we have and what we want

shared/declarative.md

343/791

Declarative vs imperative in Kubernetes

  • With Kubernetes, we cannot say: "run this container"

  • All we can do is write a spec and push it to the API server

    (by creating a resource like e.g. a Pod or a Deployment)

  • The API server will validate that spec (and reject it if it's invalid)

  • Then it will store it in etcd

  • A controller will "notice" that spec and act upon it

k8s/declarative.md

344/791

Reconciling state

  • Watch for the spec fields in the YAML files later!

  • The spec describes how we want the thing to be

  • Kubernetes will reconcile the current state with the spec
    (technically, this is done by a number of controllers)

  • When we want to change some resource, we update the spec

  • Kubernetes will then converge that resource

345/791

:EN:- Declarative vs imperative models :FR:- Modèles déclaratifs et impératifs

k8s/declarative.md

19,000 words

They say, "a picture is worth one thousand words."

The following 19 slides show what really happens when we run:

kubectl create deployment web --image=nginx

k8s/deploymentslideshow.md

346/791

Image separating from the next part

366/791

Authoring YAML

(automatically generated title slide)

367/791

Authoring YAML

  • We have already generated YAML implicitly, with e.g.:

    • kubectl run

    • kubectl create deployment (and a few other kubectl create variants)

    • kubectl expose

  • When and why do we need to write our own YAML?

  • How do we write YAML from scratch?

k8s/authoring-yaml.md

368/791

The limits of generated YAML

  • Many advanced (and even not-so-advanced) features require writing YAML:

    • pods with multiple containers

    • resource limits

    • healthchecks

    • DaemonSets, StatefulSets

    • and more!

  • How do we access these features?

k8s/authoring-yaml.md

369/791

Various ways to write YAML

  • Completely from scratch with our favorite editor

    (yeah, right)

  • Dump an existing resource with kubectl get -o yaml ...

    (it is recommended to clean up the result)

  • Ask kubectl to generate the YAML

    (with a kubectl create --dry-run=client -o yaml)

  • Use The Docs, Luke

    (the documentation almost always has YAML examples)

k8s/authoring-yaml.md

370/791

Generating YAML from scratch

  • Start with a namespace:

    kind: Namespace
    apiVersion: v1
    metadata:
      name: hello
  • We can use kubectl explain to see resource definitions:

    kubectl explain -r pod.spec
  • Not the easiest option!

k8s/authoring-yaml.md

371/791

Dump the YAML for an existing resource

  • kubectl get -o yaml works!

  • A lot of fields in metadata are not necessary

    (managedFields, resourceVersion, uid, creationTimestamp ...)

  • Most objects will have a status field that is not necessary

  • Default or empty values can also be removed for clarity

  • This can be done manually or with the kubectl-neat plugin

    kubectl get -o yaml ... | kubectl neat

k8s/authoring-yaml.md

372/791

Generating YAML without creating resources

  • We can use the --dry-run=client option
  • Generate the YAML for a Deployment without creating it:

    kubectl create deployment web --image nginx --dry-run=client -o yaml
  • Optionally clean it up with kubectl neat, too

k8s/authoring-yaml.md

373/791

Using --dry-run with kubectl apply

  • The --dry-run option can also be used with kubectl apply

  • However, it can be misleading (it doesn't do a "real" dry run)

  • Let's see what happens in the following scenario:

    • generate the YAML for a Deployment

    • tweak the YAML to transform it into a DaemonSet

    • apply that YAML to see what would actually be created

k8s/authoring-yaml.md

374/791

The limits of kubectl apply --dry-run=client

  • Generate the YAML for a deployment:

    kubectl create deployment web --image=nginx -o yaml > web.yaml
  • Change the kind in the YAML to make it a DaemonSet:

    sed -i s/Deployment/DaemonSet/ web.yaml
  • Ask kubectl what would be applied:

    kubectl apply -f web.yaml --dry-run=client --validate=false -o yaml

The resulting YAML doesn't represent a valid DaemonSet.

k8s/authoring-yaml.md

375/791

Server-side dry run

  • Since Kubernetes 1.13, we can use server-side dry run and diffs

  • Server-side dry run will do all the work, but not persist to etcd

    (all validation and mutation hooks will be executed)

  • Try the same YAML file as earlier, with server-side dry run:
    kubectl apply -f web.yaml --dry-run=server --validate=false -o yaml

The resulting YAML doesn't have the replicas field anymore.

Instead, it has the fields expected in a DaemonSet.

k8s/authoring-yaml.md

376/791

Advantages of server-side dry run

  • The YAML is verified much more extensively

  • The only step that is skipped is "write to etcd"

  • YAML that passes server-side dry run should apply successfully

    (unless the cluster state changes by the time the YAML is actually applied)

  • Validating or mutating hooks that have side effects can also be an issue

k8s/authoring-yaml.md

377/791

kubectl diff

  • Kubernetes 1.13 also introduced kubectl diff

  • kubectl diff does a server-side dry run, and shows differences

  • Try kubectl diff on the YAML that we tweaked earlier:
    kubectl diff -f web.yaml

Note: we don't need to specify --validate=false here.

k8s/authoring-yaml.md

378/791

Advantage of YAML

  • Using YAML (instead of kubectl create <kind>) allows us to be declarative

  • The YAML describes the desired state of our cluster and applications

  • YAML can be stored, versioned, archived (e.g. in git repositories)

  • To change resources, change the YAML files

    (instead of using kubectl edit/scale/label/etc.)

  • Changes can be reviewed before being applied

    (with code reviews, pull requests ...)

  • This workflow is sometimes called "GitOps"

    (there are tools like Weave Flux or GitKube to facilitate it)

k8s/authoring-yaml.md

379/791

YAML in practice

  • Get started with kubectl create deployment and kubectl expose

    (until you have something that works)

  • Then, run these commands again, but with -o yaml --dry-run=client

    (to generate and save YAML manifests)

  • Try to apply these manifests in a clean environment

    (e.g. a new Namespace)

  • Check that everything works; tweak and iterate if needed

  • Commit the YAML to a repo 💯🏆️

k8s/authoring-yaml.md

380/791

"Day 2" YAML

  • Don't hesitate to remove unused fields

    (e.g. creationTimestamp: null, most {} values...)

  • Check your YAML with:

    kube-score (installable with krew)

    kube-linter

  • Check live resources with tools like popeye

  • Remember that like all linters, they need to be configured for your needs!

381/791

:EN:- Techniques to write YAML manifests :FR:- Comment écrire des manifests YAML

k8s/authoring-yaml.md

Image separating from the next part

382/791

Setting up Kubernetes

(automatically generated title slide)

383/791

Setting up Kubernetes

  • Kubernetes is made of many components that require careful configuration

  • Secure operation typically requires TLS certificates and a local CA

    (certificate authority)

  • Setting up everything manually is possible, but rarely done

    (except for learning purposes)

  • Let's do a quick overview of available options!

k8s/setup-overview.md

384/791

Local development

  • Are you writing code that will eventually run on Kubernetes?

  • Then it's a good idea to have a development cluster!

  • Instead of shipping container images, we can test them on Kubernetes

  • Extremely useful when authoring or testing Kubernetes-specific objects

    (ConfigMaps, Secrets, StatefulSets, Jobs, RBAC, etc.)

  • Extremely convenient to quickly test/check what a particular thing looks like

    (e.g. what are the fields of a Deployment spec?)

k8s/setup-overview.md

385/791

One-node clusters

  • It's perfectly fine to work with a cluster that has only one node

  • It simplifies a lot of things:

    • pod networking doesn't even need CNI plugins, overlay networks, etc.

    • these clusters can be fully contained (no pun intended) in an easy-to-ship VM or container image

    • some of the security aspects may be simplified (different threat model)

    • images can be built directly on the node (we don't need to ship them with a registry)

  • Examples: Docker Desktop, k3d, KinD, MicroK8s, Minikube

    (some of these also support clusters with multiple nodes)

k8s/setup-overview.md

386/791

Managed clusters ("Turnkey Solutions")

  • Many cloud providers and hosting providers offer "managed Kubernetes"

  • The deployment and maintenance of the control plane is entirely managed by the provider

    (ideally, clusters can be spun up automatically through an API, CLI, or web interface)

  • Given the complexity of Kubernetes, this approach is strongly recommended

    (at least for your first production clusters)

  • After working for a while with Kubernetes, you will be better equipped to decide:

    • whether to operate it yourself or use a managed offering

    • which offering or which distribution works best for you and your needs

k8s/setup-overview.md

387/791

Node management

  • Most "Turnkey Solutions" offer fully managed control planes

    (including control plane upgrades, sometimes done automatically)

  • However, with most providers, we still need to take care of nodes

    (provisioning, upgrading, scaling the nodes)

  • Example with Amazon EKS "managed node groups":

    ...when bugs or issues are reported [...] you're responsible for deploying these patched AMI versions to your managed node groups.

k8s/setup-overview.md

388/791

Managed clusters differences

  • Most providers let you pick which Kubernetes version you want

    • some providers offer up-to-date versions

    • others lag significantly (sometimes by 2 or 3 minor versions)

  • Some providers offer multiple networking or storage options

  • Others will only support one, tied to their infrastructure

    (changing that is in theory possible, but might be complex or unsupported)

  • Some providers let you configure or customize the control plane

    (generally through Kubernetes "feature gates")

k8s/setup-overview.md

389/791

Choosing a provider

  • Pricing models differ from one provider to another

    • nodes are generally charged at their usual price

    • control plane may be free or incur a small nominal fee

  • Beyond pricing, there are huge differences in features between providers

  • The "major" providers are not always the best ones!

  • See this page for a list of available providers

k8s/setup-overview.md

390/791

Kubernetes distributions and installers

  • If you want to run Kubernetes yourselves, there are many options

    (free, commercial, proprietary, open source ...)

  • Some of them are installers, while some are complete platforms

  • Some of them leverage other well-known deployment tools

    (like Puppet, Terraform ...)

  • There are too many options to list them all

    (check this page for an overview!)

k8s/setup-overview.md

391/791

kubeadm

  • kubeadm is a tool, part of Kubernetes, that facilitates cluster setup

  • Many other installers and distributions use it (but not all of them)

  • It can also be used by itself

  • Excellent starting point to install Kubernetes on your own machines

    (virtual, physical, it doesn't matter)

  • It even supports highly available control planes, or "multi-master"

    (this is more complex, though, because it introduces the need for an API load balancer)

k8s/setup-overview.md

392/791

Manual setup

  • The resources below are mainly for educational purposes!

  • Kubernetes The Hard Way by Kelsey Hightower

    • step by step guide to install Kubernetes on Google Cloud

    • covers certificates, high availability ...

    • “Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.”

  • Deep Dive into Kubernetes Internals for Builders and Operators

    • conference presentation showing step-by-step control plane setup

    • emphasis on simplicity, not on security and availability

k8s/setup-overview.md

393/791

About our training clusters

  • How did we set up these Kubernetes clusters that we're using?
394/791

About our training clusters

  • How did we set up these Kubernetes clusters that we're using?

  • We used kubeadm on freshly installed VM instances running Ubuntu LTS

    1. Install Docker

    2. Install Kubernetes packages

    3. Run kubeadm init on the first node (it deploys the control plane on that node)

    4. Set up Weave (the overlay network) with a single kubectl apply command

    5. Run kubeadm join on the other nodes (with the token produced by kubeadm init)

    6. Copy the configuration file generated by kubeadm init

  • Check the prepare VMs README for more details

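A very rough sketch of those steps as shell commands, assuming Ubuntu nodes where Docker and the Kubernetes packages are already installed (placeholders are used for addresses and tokens, and the Weave manifest is not shown literally; the README above is the authoritative reference):

    # On node1: deploy the control plane
    sudo kubeadm init
    # Set up kubectl for our user (as suggested by kubeadm's output)
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # Set up the overlay network (weave.yaml obtained from the Weave Net documentation)
    kubectl apply -f weave.yaml
    # On each other node: join the cluster, using the values printed by 'kubeadm init'
    sudo kubeadm join X.X.X.X:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
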
k8s/setup-overview.md

395/791

kubeadm "drawbacks"

  • Doesn't set up Docker or any other container engine

    (this is by design, to give us choice)

  • Doesn't set up the overlay network

    (this is also by design, for the same reasons)

  • HA control plane requires some extra steps

  • Note that HA control plane also requires setting up a specific API load balancer

    (which is beyond the scope of kubeadm)

396/791

:EN:- Various ways to install Kubernetes :FR:- Survol des techniques d'installation de Kubernetes

k8s/setup-overview.md

Image separating from the next part

397/791

Running a local development cluster

(automatically generated title slide)

398/791

Running a local development cluster

  • Let's review some options to run Kubernetes locally

  • There is no "best option", it depends what you value:

    • ability to run on all platforms (Linux, Mac, Windows, other?)

    • ability to run clusters with multiple nodes

    • ability to run multiple clusters side by side

    • ability to run recent (or even, unreleased) versions of Kubernetes

    • availability of plugins

    • etc.

k8s/setup-devel.md

399/791

Docker Desktop

  • Available on Mac and Windows

  • Gives you one cluster with one node

  • Very easy to use if you are already using Docker Desktop:

    go to Docker Desktop preferences and enable Kubernetes

  • Ideal for Docker users who need good integration between both platforms

k8s/setup-devel.md

400/791

k3d

  • Based on K3s by Rancher Labs

  • Requires Docker

  • Runs Kubernetes nodes in Docker containers

  • Can deploy multiple clusters, with multiple nodes, and multiple master nodes

  • As of June 2020, two versions co-exist: stable (1.7) and beta (3.0)

  • They have different syntax and options, this can be confusing

    (but don't let that stop you!)

k8s/setup-devel.md

401/791

k3d in action

  • Install k3d (e.g. get the binary from https://github.com/rancher/k3d/releases)

  • Create a simple cluster:

    k3d cluster create petitcluster
  • Create a more complex cluster with a custom version:

    k3d cluster create groscluster \
    --image rancher/k3s:v1.18.9-k3s1 --servers 3 --agents 5

    (3 nodes for the control plane + 5 worker nodes)

  • Clusters are automatically added to .kube/config file

k8s/setup-devel.md

402/791

KinD

  • Kubernetes-in-Docker

  • Requires Docker (obviously!)

  • Deploying a single node cluster using the latest version is simple:

    kind create cluster
  • More advanced scenarios require writing a short config file

    (to define multiple nodes, multiple master nodes, set Kubernetes versions ...)

  • Can deploy multiple clusters

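For reference, such a config file could look like the sketch below (the node layout is just an example); it would then be passed to kind create cluster --config kind-config.yaml:

    # kind-config.yaml: one control plane node and two workers (example layout)
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker
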
k8s/setup-devel.md

403/791

Minikube

  • The "legacy" option!

    (note: this is not a bad thing, it means that it's very stable, has lots of plugins, etc.)

  • Supports many drivers

    (HyperKit, Hyper-V, KVM, VirtualBox, but also Docker and many others)

  • Can deploy a single cluster; recent versions can deploy multiple nodes

  • Great option if you want a "Kubernetes first" experience

    (i.e. if you don't already have Docker and/or don't want/need it)

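For instance, a recent Minikube can start a small multi-node cluster with a command along these lines (the driver and node count are just examples):

    minikube start --driver=virtualbox --nodes=2
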
k8s/setup-devel.md

404/791

MicroK8s

  • Available on Linux and, more recently, on Mac and Windows as well

  • The Linux version is installed through Snap

    (which is pre-installed on all recent versions of Ubuntu)

  • Also supports clustering (as in, multiple machines running MicroK8s)

  • DNS is not enabled by default; enable it with microk8s enable dns

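As an illustration, on Ubuntu the whole setup could look like this (a rough sketch; channel and add-on choices will vary):

    sudo snap install microk8s --classic
    microk8s enable dns
    microk8s kubectl get nodes
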
k8s/setup-devel.md

405/791

Rancher Desktop

  • Available on Mac and Windows

  • Runs a single cluster with a single node

  • Lets you pick the Kubernetes version that you want to use

    (and change it any time you like)

  • Emphasis on ease of use (like Docker Desktop)

  • Very young product (first release in May 2021)

  • Based on k3s and other proven components

k8s/setup-devel.md

406/791

VM with custom install

  • Choose your own adventure!

  • Pick any Linux distribution!

  • Build your cluster from scratch or use a Kubernetes installer!

  • Discover exotic CNI plugins and container runtimes!

  • The only limit is yourself, and the time you are willing to sink in!

407/791

:EN:- Kubernetes options for local development :FR:- Installation de Kubernetes pour travailler en local

k8s/setup-devel.md

Image separating from the next part

408/791

Controlling a Kubernetes cluster remotely

(automatically generated title slide)

409/791

Controlling a Kubernetes cluster remotely

  • kubectl can be used either on cluster instances or outside the cluster

  • Here, we are going to use kubectl from our local machine

k8s/localkubeconfig.md

410/791

Requirements

The commands in this chapter should be run on your local machine.

  • kubectl is officially available on Linux, macOS, Windows

    (and unofficially anywhere we can build and run Go binaries)

  • You may skip these commands if you are following along from:

    • a tablet or phone

    • a web-based terminal

    • an environment where you can't install and run new binaries

k8s/localkubeconfig.md

411/791

Installing kubectl

  • If you already have kubectl on your local machine, you can skip this
  • Download the kubectl binary from one of these links:

    Linux | macOS | Windows

  • On Linux and macOS, make the binary executable with chmod +x kubectl

    (And remember to run it with ./kubectl or move it to your $PATH)

Note: if you are following along with a different platform (e.g. Linux on an architecture different from amd64, or with a phone or tablet), installing kubectl might be more complicated (or even impossible) so feel free to skip this section.

k8s/localkubeconfig.md

412/791

Testing kubectl

  • Check that kubectl works correctly

    (before even trying to connect to a remote cluster!)

  • Ask kubectl to show its version number:
    kubectl version --client

The output should look like this:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0",
GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean",
BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc",
Platform:"darwin/amd64"}

k8s/localkubeconfig.md

413/791

Preserving the existing ~/.kube/config

  • If you already have a ~/.kube/config file, rename it

    (we are going to overwrite it in the following slides!)

  • If you never used kubectl on your machine before: nothing to do!

  • Make a copy of ~/.kube/config; if you are using macOS or Linux, you can do:

    cp ~/.kube/config ~/.kube/config.before.training
  • If you are using Windows, you will need to adapt this command

k8s/localkubeconfig.md

414/791

Copying the configuration file from node1

  • The ~/.kube/config file that is on node1 contains all the credentials we need

  • Let's copy it over!

  • Copy the file from node1; if you are using macOS or Linux, you can do:

    scp USER@X.X.X.X:.kube/config ~/.kube/config
    # Make sure to replace X.X.X.X with the IP address of node1,
    # and USER with the user name used to log into node1!
  • If you are using Windows, adapt these instructions to your SSH client

k8s/localkubeconfig.md

415/791

Updating the server address

  • There is a good chance that we need to update the server address

  • To know if it is necessary, run kubectl config view

  • Look for the server: address:

    • if it matches the public IP address of node1, you're good!

    • if it is anything else (especially a private IP address), update it!

  • To update the server address, run:

    kubectl config set-cluster kubernetes --server=https://X.X.X.X:6443
    # Make sure to replace X.X.X.X with the IP address of node1!

k8s/localkubeconfig.md

416/791

What if we get a certificate error?

  • Generally, the Kubernetes API uses a certificate that is valid for:

    • kubernetes
    • kubernetes.default
    • kubernetes.default.svc
    • kubernetes.default.svc.cluster.local
    • the ClusterIP address of the kubernetes service
    • the hostname of the node hosting the control plane (e.g. node1)
    • the IP address of the node hosting the control plane
  • On most clouds, the IP address of the node is an internal IP address

  • ... And we are going to connect over the external IP address

  • ... And that external IP address was not used when creating the certificate!

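To see exactly which names and addresses the certificate is valid for, we can inspect its Subject Alternative Names; one possible way with openssl (replace X.X.X.X with the address of node1):

    openssl s_client -connect X.X.X.X:6443 </dev/null 2>/dev/null |
      openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
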
k8s/localkubeconfig.md

417/791

Working around the certificate error

  • We need to tell kubectl to skip TLS verification

    (only do this with testing clusters, never in production!)

  • The following command will do the trick:

    kubectl config set-cluster kubernetes --insecure-skip-tls-verify

k8s/localkubeconfig.md

418/791

Checking that we can connect to the cluster

  • We can now run a couple of trivial commands to check that all is well
  • Check the versions of the local client and remote server:

    kubectl version
  • View the nodes of the cluster:

    kubectl get nodes

We can now use the cluster exactly as if we were logged into a node, except that it's remote.

419/791

:EN:- Working with remote Kubernetes clusters :FR:- Travailler avec des clusters distants

k8s/localkubeconfig.md

Image separating from the next part

420/791

Accessing internal services

(automatically generated title slide)

421/791

Accessing internal services

  • When we are logged in on a cluster node, we can access internal services

    (by virtue of the Kubernetes network model: all nodes can reach all pods and services)

  • When we are accessing a remote cluster, things are different

    (generally, our local machine won't have access to the cluster's internal subnet)

  • How can we temporarily access a service without exposing it to everyone?

422/791

Accessing internal services

  • When we are logged in on a cluster node, we can access internal services

    (by virtue of the Kubernetes network model: all nodes can reach all pods and services)

  • When we are accessing a remote cluster, things are different

    (generally, our local machine won't have access to the cluster's internal subnet)

  • How can we temporarily access a service without exposing it to everyone?

  • kubectl proxy: gives us access to the API, which includes a proxy for HTTP resources

  • kubectl port-forward: allows forwarding of TCP ports to arbitrary pods, services, ...

k8s/accessinternal.md

423/791

Suspension of disbelief

The labs and demos in this section assume that we have set up kubectl on our local machine in order to access a remote cluster.

We will therefore show how to access services and pods of the remote cluster, from our local machine.

You can also run these commands directly on the cluster (if you haven't installed and set up kubectl locally).

Running these commands directly on the cluster will be less useful (since you could access services and pods directly), but keep in mind that they will work anywhere, as long as you have installed and set up kubectl to communicate with your cluster.

k8s/accessinternal.md

424/791

kubectl proxy in theory

  • Running kubectl proxy gives us access to the entire Kubernetes API

  • The API includes routes to proxy HTTP traffic

  • These routes look like the following:

    /api/v1/namespaces/<namespace>/services/<service>/proxy

  • We just add the URI to the end of the request, for instance:

    /api/v1/namespaces/<namespace>/services/<service>/proxy/index.html

  • We can access services and pods this way

k8s/accessinternal.md

425/791

kubectl proxy in practice

  • Let's access the webui service through kubectl proxy
  • Run an API proxy in the background:

    kubectl proxy &
  • Access the webui service:

    curl localhost:8001/api/v1/namespaces/default/services/webui/proxy/index.html
  • Terminate the proxy:

    kill %1

k8s/accessinternal.md

426/791

kubectl port-forward in theory

  • What if we want to access a TCP service?

  • We can use kubectl port-forward instead

  • It will create a TCP relay to forward connections to a specific port

    (of a pod, service, deployment...)

  • The syntax is:

    kubectl port-forward service/name_of_service local_port:remote_port

  • If only one port number is specified, it is used for both local and remote ports

k8s/accessinternal.md

427/791

kubectl port-forward in practice

  • Let's access our remote Redis server
  • Forward connections from local port 10000 to remote port 6379:

    kubectl port-forward svc/redis 10000:6379 &
  • Connect to the Redis server:

    telnet localhost 10000
  • Issue a few commands, e.g. INFO server then QUIT

  • Terminate the port forwarder:
    kill %1
428/791

:EN:- Securely accessing internal services :FR:- Accès sécurisé aux services internes

:T: Accessing internal services from our local machine

:Q: What's the advantage of "kubectl port-forward" compared to a NodePort? :A: It can forward arbitrary protocols :A: It doesn't require Kubernetes API credentials :A: It offers deterministic load balancing (instead of random) :A: ✔️It doesn't expose the service to the public

:Q: What's the security concept behind "kubectl port-forward"? :A: ✔️We authenticate with the Kubernetes API, and it forwards connections on our behalf :A: It detects our source IP address, and only allows connections coming from it :A: It uses end-to-end mTLS (mutual TLS) to authenticate our connections :A: There is no security (as long as it's running, anyone can connect from anywhere)

k8s/accessinternal.md

Image separating from the next part

429/791

Accessing the API with kubectl proxy

(automatically generated title slide)

430/791

Accessing the API with kubectl proxy

  • The API requires us to authenticate¹

  • There are many authentication methods available, including:

    • TLS client certificates
      (that's what we've used so far)

    • HTTP basic password authentication
      (from a static file; not recommended)

    • various token mechanisms
      (detailed in the documentation)

¹OK, we lied. If you don't authenticate, you are considered to be user system:anonymous, which doesn't have any access rights by default.

k8s/kubectlproxy.md

431/791

Accessing the API directly

  • Let's see what happens if we try to access the API directly with curl
  • Retrieve the ClusterIP allocated to the kubernetes service:

    kubectl get svc kubernetes
  • Replace the IP below and try to connect with curl:

    curl -k https://10.96.0.1/

The API will tell us that user system:anonymous cannot access this path.

k8s/kubectlproxy.md

432/791

Authenticating to the API

If we wanted to talk to the API, we would need to:

  • extract our TLS key and certificate information from ~/.kube/config

    (the information is in PEM format, encoded in base64)

  • use that information to present our certificate when connecting

    (for instance, with openssl s_client -key ... -cert ... -connect ...)

  • figure out exactly which credentials to use

    (once we start juggling multiple clusters)

  • change that whole process if we're using another authentication method

🤔 There has to be a better way!

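Before we look at that better way, here is a sketch of what the manual method could look like, assuming a Linux shell and a kubeconfig that stores the credentials inline with a single user entry (adapt the jsonpath expressions to your file):

    # Extract the client certificate and key from the kubeconfig
    kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > client.crt
    kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > client.key
    # Present them to the API server (replace X.X.X.X; -k skips server certificate verification)
    curl --cert client.crt --key client.key -k https://X.X.X.X:6443/api
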
k8s/kubectlproxy.md

433/791

Using kubectl proxy for authentication

  • kubectl proxy runs a proxy in the foreground

  • This proxy lets us access the Kubernetes API without authentication

    (kubectl proxy adds our credentials on the fly to the requests)

  • This proxy lets us access the Kubernetes API over plain HTTP

  • This is a great tool to learn and experiment with the Kubernetes API

  • ... And for serious uses as well (suitable for one-shot scripts)

  • For unattended use, it's better to create a service account

k8s/kubectlproxy.md

434/791

Trying kubectl proxy

  • Let's start kubectl proxy and then do a simple request with curl!
  • Start kubectl proxy in the background:

    kubectl proxy &
  • Access the API's default route:

    curl localhost:8001
  • Terminate the proxy:
    kill %1

The output is a list of available API routes.

k8s/kubectlproxy.md

435/791

OpenAPI (fka Swagger)

  • The Kubernetes API serves an OpenAPI Specification

    (OpenAPI was formerly known as Swagger)

  • OpenAPI has many advantages

    (generate client library code, generate test code ...)

  • For us, this means we can explore the API with Swagger UI

    (for instance with the Swagger UI add-on for Firefox)

k8s/kubectlproxy.md

436/791

kubectl proxy is intended for local use

  • By default, the proxy listens on port 8001

    (But this can be changed, or we can tell kubectl proxy to pick a port)

  • By default, the proxy binds to 127.0.0.1

    (Making it unreachable from other machines, for security reasons)

  • By default, the proxy only accepts connections from:

    ^localhost$,^127\.0\.0\.1$,^\[::1\]$

  • This is great when running kubectl proxy locally

  • Not-so-great when you want to connect to the proxy from a remote machine

k8s/kubectlproxy.md

437/791

Running kubectl proxy on a remote machine

  • If we wanted to connect to the proxy from another machine, we would need to:

    • bind to INADDR_ANY instead of 127.0.0.1

    • accept connections from any address

  • This is achieved with:

    kubectl proxy --port=8888 --address=0.0.0.0 --accept-hosts=.*

Do not do this on a real cluster: it opens full unauthenticated access!

k8s/kubectlproxy.md

438/791

Security considerations

  • Running kubectl proxy openly is a huge security risk

  • It is slightly better to run the proxy where you need it

    (and copy credentials, e.g. ~/.kube/config, to that place)

  • It is even better to use a limited account with reduced permissions

k8s/kubectlproxy.md

439/791

Good to know ...

  • kubectl proxy also gives access to all internal services

  • Specifically, services are exposed as such:

    /api/v1/namespaces/<namespace>/services/<service>/proxy
  • We can use kubectl proxy to access an internal service in a pinch

    (or, for non-HTTP services, kubectl port-forward)

  • This is not very useful when running kubectl directly on the cluster

    (since we could connect to the services directly anyway)

  • But it is very powerful as soon as you run kubectl from a remote machine

k8s/kubectlproxy.md

440/791

Image separating from the next part

441/791

Exercise — Local Cluster

(automatically generated title slide)

442/791

Exercise — Local Cluster

  • We want to have our own local Kubernetes cluster

    (we can use Docker Desktop, KinD, minikube... anything will do!)

  • Then we want to run a copy of dockercoins on that cluster

  • We want to be able to connect to the web UI

    (we can expose the port, or use port-forward, or whatever)

exercises/localcluster-details.md

443/791

Goal

  • Be able to see the dockercoins web UI running on our local cluster

exercises/localcluster-details.md

444/791

Hints

  • On a Mac or Windows machine:

    the easiest solution is probably Docker Desktop

  • On a Linux machine:

    the easiest solution is probably KinD or k3d

  • To connect to the web UI:

    kubectl port-forward is probably the easiest solution

exercises/localcluster-details.md

445/791

Bonus

  • If you already have a local Kubernetes cluster:

    try to run another one!

  • Try to use another method than kubectl port-forward

exercises/localcluster-details.md

446/791

Image separating from the next part

447/791

Scaling our demo app

(automatically generated title slide)

448/791

Scaling our demo app

  • Our ultimate goal is to get more DockerCoins

    (i.e. increase the number of loops per second shown on the web UI)

  • Let's look at the architecture again:

    DockerCoins architecture

  • The loop is done in the worker; perhaps we could try adding more workers?

k8s/scalingdockercoins.md

449/791

Adding another worker

  • All we have to do is scale the worker Deployment
  • Open a new terminal to keep an eye on our pods:
    kubectl get pods -w
  • Now, create more worker replicas:
    kubectl scale deployment worker --replicas=2

After a few seconds, the graph in the web UI should go up.

k8s/scalingdockercoins.md

450/791

Adding more workers

  • If 2 workers give us 2x speed, what about 3 workers?
  • Scale the worker Deployment further:
    kubectl scale deployment worker --replicas=3

The graph in the web UI should go up again.

(This is looking great! We're gonna be RICH!)

k8s/scalingdockercoins.md

451/791

Adding even more workers

  • Let's see if 10 workers give us 10x speed!
  • Scale the worker Deployment to a bigger number:
    kubectl scale deployment worker --replicas=10
452/791

Adding even more workers

  • Let's see if 10 workers give us 10x speed!
  • Scale the worker Deployment to a bigger number:
    kubectl scale deployment worker --replicas=10

The graph will peak at 10 hashes/second.

(We can add as many workers as we want: we will never go past 10 hashes/second.)

k8s/scalingdockercoins.md

453/791

Didn't we briefly exceed 10 hashes/second?

  • It may look like it, because the web UI shows instant speed

  • The instant speed can briefly exceed 10 hashes/second

  • The average speed cannot

  • The instant speed can be biased because of how it's computed

k8s/scalingdockercoins.md

454/791

Why instant speed is misleading

  • The instant speed is computed client-side by the web UI

  • The web UI checks the hash counter once per second
    (and does a classic (h2-h1)/(t2-t1) speed computation)

  • The counter is updated once per second by the workers

  • These timings are not exact
    (e.g. the web UI check interval is client-side JavaScript)

  • Sometimes, between two web UI counter measurements,
    the workers are able to update the counter twice

  • During that cycle, the instant speed will appear to be much bigger
    (but it will be compensated by lower instant speed before and after)

k8s/scalingdockercoins.md

455/791

Why are we stuck at 10 hashes per second?

  • If this was high-quality, production code, we would have instrumentation

    (Datadog, Honeycomb, New Relic, statsd, Sumologic, ...)

  • It's not!

  • Perhaps we could benchmark our web services?

    (with tools like ab, or even simpler, httping)

k8s/scalingdockercoins.md

456/791

Benchmarking our web services

  • We want to check hasher and rng

  • We are going to use httping

  • It's just like ping, but using HTTP GET requests

    (it measures how long it takes to perform one GET request)

  • It's used like this:

    httping [-c count] http://host:port/path
  • Or even simpler:

    httping ip.ad.dr.ess
  • We will use httping on the ClusterIP addresses of our services

k8s/scalingdockercoins.md

457/791

Obtaining ClusterIP addresses

  • We can simply check the output of kubectl get services

  • Or do it programmatically, as in the example below

  • Retrieve the IP addresses:
    HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}})
    RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}})

Now we can access the IP addresses of our services through $HASHER and $RNG.

k8s/scalingdockercoins.md

458/791

Checking hasher and rng response times

  • Check the response times for both services:
    httping -c 3 $HASHER
    httping -c 3 $RNG
  • hasher is fine (it should take a few milliseconds to reply)

  • rng is not (it should take about 700 milliseconds if there are 10 workers)

  • Something is wrong with rng, but ... what?

459/791

:EN:- Scaling up our demo app :FR:- Scale up de l'application de démo

k8s/scalingdockercoins.md

Let's draw hasty conclusions

  • The bottleneck seems to be rng

  • What if we don't have enough entropy and can't generate enough random numbers?

  • We need to scale out the rng service on multiple machines!

Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.

(In fact, the code of rng uses /dev/urandom, which never runs out of entropy...
...and is just as good as /dev/random.)

shared/hastyconclusions.md

460/791

Image separating from the next part

461/791

Daemon sets

(automatically generated title slide)

462/791

Daemon sets

  • We want to scale rng in a way that is different from how we scaled worker

  • We want one (and exactly one) instance of rng per node

  • We do not want two instances of rng on the same node

  • We will do that with a daemon set

k8s/daemonset.md

463/791

Why not a deployment?

  • Can't we just do kubectl scale deployment rng --replicas=...?
464/791

Why not a deployment?

  • Can't we just do kubectl scale deployment rng --replicas=...?

  • Nothing guarantees that the rng containers will be distributed evenly

  • If we add nodes later, they will not automatically run a copy of rng

  • If we remove (or reboot) a node, one rng container will restart elsewhere

    (and we will end up with two instances of rng on the same node)

  • By contrast, a daemon set will start one pod per node and keep it that way

    (as nodes are added or removed)

k8s/daemonset.md

465/791

Daemon sets in practice

  • Daemon sets are great for cluster-wide, per-node processes:

    • kube-proxy

    • weave (our overlay network)

    • monitoring agents

    • hardware management tools (e.g. SCSI/FC HBA agents)

    • etc.

  • They can also be restricted to run only on some nodes

k8s/daemonset.md

466/791

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.19, the CLI cannot create daemon sets
467/791

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.19, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

468/791

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.19, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

  • But any kind of resource can always be created by providing a YAML description:

    kubectl apply -f foo.yaml
469/791

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.19, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

  • But any kind of resource can always be created by providing a YAML description:

    kubectl apply -f foo.yaml
  • How do we create the YAML file for our daemon set?
470/791

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.19, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

  • But any kind of resource can always be created by providing a YAML description:

    kubectl apply -f foo.yaml
  • How do we create the YAML file for our daemon set?

471/791

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.19, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

  • But any kind of resource can always be created by providing a YAML description:

    kubectl apply -f foo.yaml
  • How do we create the YAML file for our daemon set?

k8s/daemonset.md

472/791

Creating the YAML file for our daemon set

  • Let's start with the YAML file for the current rng resource
  • Dump the rng resource in YAML:

    kubectl get deploy/rng -o yaml >rng.yml
  • Edit rng.yml

k8s/daemonset.md

473/791

"Casting" a resource to another

  • What if we just changed the kind field?

    (It can't be that easy, right?)

  • Change kind: Deployment to kind: DaemonSet
  • Save, quit

  • Try to create our new resource:

    kubectl apply -f rng.yml
474/791

"Casting" a resource to another

  • What if we just changed the kind field?

    (It can't be that easy, right?)

  • Change kind: Deployment to kind: DaemonSet
  • Save, quit

  • Try to create our new resource:

    kubectl apply -f rng.yml

We all knew this couldn't be that easy, right?

k8s/daemonset.md

475/791

Understanding the problem

  • The core of the error is:
    error validating data:
    [ValidationError(DaemonSet.spec):
    unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec,
    ...
476/791

Understanding the problem

  • The core of the error is:
    error validating data:
    [ValidationError(DaemonSet.spec):
    unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec,
    ...
  • Obviously, it doesn't make sense to specify a number of replicas for a daemon set
477/791

Understanding the problem

  • The core of the error is:
    error validating data:
    [ValidationError(DaemonSet.spec):
    unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec,
    ...
  • Obviously, it doesn't make sense to specify a number of replicas for a daemon set

  • Workaround: fix the YAML

    • remove the replicas field
    • remove the strategy field (which defines the rollout mechanism for a deployment)
    • remove the progressDeadlineSeconds field (also used by the rollout mechanism)
    • remove the status: {} line at the end
478/791

Understanding the problem

  • The core of the error is:
    error validating data:
    [ValidationError(DaemonSet.spec):
    unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec,
    ...
  • Obviously, it doesn't make sense to specify a number of replicas for a daemon set

  • Workaround: fix the YAML

    • remove the replicas field
    • remove the strategy field (which defines the rollout mechanism for a deployment)
    • remove the progressDeadlineSeconds field (also used by the rollout mechanism)
    • remove the status: {} line at the end
  • Or, we could also ...

k8s/daemonset.md

479/791

Use the --force, Luke

  • We could also tell Kubernetes to ignore these errors and try anyway

  • The --force flag's actual name is --validate=false

  • Try to load our YAML file and ignore errors:
    kubectl apply -f rng.yml --validate=false
480/791

Use the --force, Luke

  • We could also tell Kubernetes to ignore these errors and try anyway

  • The --force flag's actual name is --validate=false

  • Try to load our YAML file and ignore errors:
    kubectl apply -f rng.yml --validate=false

🎩✨🐇

481/791

Use the --force, Luke

  • We could also tell Kubernetes to ignore these errors and try anyway

  • The --force flag's actual name is --validate=false

  • Try to load our YAML file and ignore errors:
    kubectl apply -f rng.yml --validate=false

🎩✨🐇

Wait ... Now, can it be that easy?

k8s/daemonset.md

482/791

Checking what we've done

  • Did we transform our deployment into a daemonset?
  • Look at the resources that we have now:
    kubectl get all
483/791

Checking what we've done

  • Did we transform our deployment into a daemonset?
  • Look at the resources that we have now:
    kubectl get all

We have two resources called rng:

  • the deployment that was existing before

  • the daemon set that we just created

We also have one too many pods.
(The pod corresponding to the deployment still exists.)

k8s/daemonset.md

484/791

deploy/rng and ds/rng

  • You can have different resource types with the same name

    (i.e. a deployment and a daemon set both named rng)

  • We still have the old rng deployment

    NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/rng   1         1         1            1           18m
  • But now we have the new rng daemon set as well

    NAME                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/rng   2         2         2       2            2           <none>          9s

k8s/daemonset.md

485/791

Too many pods

  • If we check with kubectl get pods, we see:

    • one pod for the deployment (named rng-xxxxxxxxxx-yyyyy)

    • one pod per node for the daemon set (named rng-zzzzz)

    NAME                   READY   STATUS    RESTARTS   AGE
    rng-54f57d4d49-7pt82   1/1     Running   0          11m
    rng-b85tm              1/1     Running   0          25s
    rng-hfbrr              1/1     Running   0          25s
    [...]
486/791

Too many pods

  • If we check with kubectl get pods, we see:

    • one pod for the deployment (named rng-xxxxxxxxxx-yyyyy)

    • one pod per node for the daemon set (named rng-zzzzz)

    NAME                   READY   STATUS    RESTARTS   AGE
    rng-54f57d4d49-7pt82   1/1     Running   0          11m
    rng-b85tm              1/1     Running   0          25s
    rng-hfbrr              1/1     Running   0          25s
    [...]

The daemon set created one pod per node, except on the master node.

The master node has taints preventing pods from running there.

(To schedule a pod on this node anyway, the pod will require appropriate tolerations.)

(Off by one? We don't run these pods on the node hosting the control plane.)

k8s/daemonset.md

487/791

Is this working?

  • Look at the web UI
488/791

Is this working?

  • Look at the web UI

  • The graph should now go above 10 hashes per second!

489/791

Is this working?

  • Look at the web UI

  • The graph should now go above 10 hashes per second!

  • It looks like the newly created pods are serving traffic correctly

  • How and why did this happen?

    (We didn't do anything special to add them to the rng service load balancer!)

k8s/daemonset.md

490/791

Image separating from the next part

491/791

Labels and selectors

(automatically generated title slide)

492/791

Labels and selectors

  • The rng service is load balancing requests to a set of pods

  • That set of pods is defined by the selector of the rng service

  • Check the selector in the rng service definition:
    kubectl describe service rng
  • The selector is app=rng

  • It means "all the pods having the label app=rng"

    (They can have additional labels as well, that's OK!)

k8s/daemonset.md

493/791

Selector evaluation

  • We can use selectors with many kubectl commands

  • For instance, with kubectl get, kubectl logs, kubectl delete ... and more

  • Get the list of pods matching selector app=rng:
    kubectl get pods -l app=rng
    kubectl get pods --selector app=rng

But ... why do these pods (in particular, the new ones) have this app=rng label?

k8s/daemonset.md

494/791

Where do labels come from?

  • When we create a deployment with kubectl create deployment rng,
    this deployment gets the label app=rng

  • The replica sets created by this deployment also get the label app=rng

  • The pods created by these replica sets also get the label app=rng

  • When we created the daemon set from the deployment, we re-used the same spec

  • Therefore, the pods created by the daemon set get the same labels

Note: when we use kubectl run stuff, the label is run=stuff instead.

k8s/daemonset.md

495/791

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

496/791

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

    It would be re-created immediately (by the replica set or the daemon set)

497/791

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

    It would be re-created immediately (by the replica set or the daemon set)

  • What would happen if we removed the app=rng label from that pod?

498/791

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

    It would be re-created immediately (by the replica set or the daemon set)

  • What would happen if we removed the app=rng label from that pod?

    It would also be re-created immediately

499/791

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

    It would be re-created immediately (by the replica set or the daemon set)

  • What would happen if we removed the app=rng label from that pod?

    It would also be re-created immediately

    Why?!?

k8s/daemonset.md

500/791

Selectors for replica sets and daemon sets

  • The "mission" of a replica set is:

    "Make sure that there is the right number of pods matching this spec!"

  • The "mission" of a daemon set is:

    "Make sure that there is a pod matching this spec on each node!"

501/791

Selectors for replica sets and daemon sets

  • The "mission" of a replica set is:

    "Make sure that there is the right number of pods matching this spec!"

  • The "mission" of a daemon set is:

    "Make sure that there is a pod matching this spec on each node!"

  • In fact, replica sets and daemon sets do not check pod specifications

  • They merely have a selector, and they look for pods matching that selector

  • Yes, we can fool them by manually creating pods with the "right" labels

  • Bottom line: if we remove our app=rng label ...

    ... The pod "disappears" for its parent, which re-creates another pod to replace it

k8s/daemonset.md

502/791

Isolation of replica sets and daemon sets

  • Since both the rng daemon set and the rng replica set use app=rng ...

    ... Why don't they "find" each other's pods?

503/791

Isolation of replica sets and daemon sets

  • Since both the rng daemon set and the rng replica set use app=rng ...

    ... Why don't they "find" each other's pods?

  • Replica sets have a more specific selector, visible with kubectl describe

    (It looks like app=rng,pod-template-hash=abcd1234)

  • Daemon sets also have a more specific selector, but it's invisible

    (It looks like app=rng,controller-revision-hash=abcd1234)

  • As a result, each controller only "sees" the pods it manages

k8s/daemonset.md

504/791

Removing a pod from the load balancer

  • Currently, the rng service is defined by the app=rng selector

  • The only way to remove a pod is to remove or change the app label

  • ... But that will cause another pod to be created instead!

  • What's the solution?

505/791

Removing a pod from the load balancer

  • Currently, the rng service is defined by the app=rng selector

  • The only way to remove a pod is to remove or change the app label

  • ... But that will cause another pod to be created instead!

  • What's the solution?

  • We need to change the selector of the rng service!

  • Let's add another label to that selector (e.g. active=yes)

k8s/daemonset.md

506/791

Selectors with multiple labels

  • If a selector specifies multiple labels, they are understood as a logical AND

    (in other words: the pods must match all the labels)

  • We cannot have a logical OR

    (e.g. app=api AND (release=prod OR release=preprod))

  • We can, however, apply as many extra labels as we want to our pods:

    • use selector app=api AND prod-or-preprod=yes

    • add prod-or-preprod=yes to both sets of pods

  • We will see later that in other places, we can use more advanced selectors

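In a Service manifest, such a two-label selector would be written like this (using the hypothetical labels from the example above):

    spec:
      selector:
        app: api
        prod-or-preprod: "yes"
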
k8s/daemonset.md

507/791

The plan

  1. Add the label active=yes to all our rng pods

  2. Update the selector for the rng service to also include active=yes

  3. Toggle traffic to a pod by manually adding/removing the active label

  4. Profit!

Note: if we swap steps 1 and 2, it will cause a short service disruption, because there will be a period of time during which the service selector won't match any pod. During that time, requests to the service will time out. By doing things in the order above, we guarantee that there won't be any interruption.

k8s/daemonset.md

508/791

Adding labels to pods

  • We want to add the label active=yes to all pods that have app=rng

  • We could edit each pod one by one with kubectl edit ...

  • ... Or we could use kubectl label to label them all

  • kubectl label can use selectors itself

  • Add active=yes to all pods that have app=rng:
    kubectl label pods -l app=rng active=yes

k8s/daemonset.md

509/791

Updating the service selector

  • We need to edit the service specification

  • Reminder: in the service definition, we will see app: rng in two places

    • the label of the service itself (we don't need to touch that one)

    • the selector of the service (that's the one we want to change)

  • Update the service to add active: yes to its selector:
    kubectl edit service rng
510/791

Updating the service selector

  • We need to edit the service specification

  • Reminder: in the service definition, we will see app: rng in two places

    • the label of the service itself (we don't need to touch that one)

    • the selector of the service (that's the one we want to change)

  • Update the service to add active: yes to its selector:
    kubectl edit service rng

... And then we get the weirdest error ever. Why?

k8s/daemonset.md

511/791

When the YAML parser is being too smart

  • YAML parsers try to help us:

    • xyz is the string "xyz"

    • 42 is the integer 42

    • yes is the boolean value true

  • If we want the string "42" or the string "yes", we have to quote them

  • So we have to use active: "yes"

For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!

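A few illustrative examples of how values get interpreted:

    answer: 42        # the integer 42
    answer: "42"      # the string "42"
    active: yes       # the boolean true
    active: "yes"     # the string "yes" (what we want here)
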
k8s/daemonset.md

512/791

Updating the service selector, take 2

  • Update the YAML manifest of the service

  • Add active: "yes" to its selector

This time it should work!

If we did everything correctly, the web UI shouldn't show any change.

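After the edit, the selector part of the Service spec (not its metadata labels) should look roughly like this:

    spec:
      selector:
        app: rng
        active: "yes"
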
k8s/daemonset.md

513/791

Updating labels

  • We want to disable the pod that was created by the deployment

  • All we have to do, is remove the active label from that pod

  • To identify that pod, we can use its name

  • ... Or rely on the fact that it's the only one with a pod-template-hash label

  • Good to know:

    • kubectl label ... foo= doesn't remove a label (it sets it to an empty string)

    • to remove label foo, use kubectl label ... foo-

    • to change an existing label, we would need to add --overwrite

k8s/daemonset.md

514/791

Removing a pod from the load balancer

  • In one window, check the logs of that pod:
    POD=$(kubectl get pod -l app=rng,pod-template-hash -o name)
    kubectl logs --tail 1 --follow $POD
    (We should see a steady stream of HTTP logs)
  • In another window, remove the label from the pod:
    kubectl label pod -l app=rng,pod-template-hash active-
    (The stream of HTTP logs should stop immediately)

There might be a slight change in the web UI (since we removed a bit of capacity from the rng service). If we remove more pods, the effect should be more visible.

k8s/daemonset.md

515/791

Updating the daemon set

  • If we scale up our cluster by adding new nodes, the daemon set will create more pods

  • These pods won't have the active=yes label

  • If we want these pods to have that label, we need to edit the daemon set spec

  • We can do that with e.g. kubectl edit daemonset rng

k8s/daemonset.md

516/791

We've put resources in your resources

  • Reminder: a daemon set is a resource that creates more resources!

  • There is a difference between:

    • the label(s) of a resource (in the metadata block in the beginning)

    • the selector of a resource (in the spec block)

    • the label(s) of the resource(s) created by the first resource (in the template block)

  • We would need to update the selector and the template

    (metadata labels are not mandatory)

  • The template must match the selector

    (i.e. the resource will refuse to create resources that it will not select)

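Here is a trimmed-down sketch of what this looks like for a daemon set similar to our rng one (most fields are omitted, and the image tag is only indicative):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: rng
      labels:                # label(s) of the daemon set itself
        app: rng
    spec:
      selector:
        matchLabels:         # the selector
          app: rng
          active: "yes"
      template:
        metadata:
          labels:            # label(s) of the pods it creates (must match the selector)
            app: rng
            active: "yes"
        spec:
          containers:
          - name: rng
            image: dockercoins/rng:v0.1
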
k8s/daemonset.md

517/791

Labels and debugging

  • When a pod is misbehaving, we can delete it: another one will be recreated

  • But we can also change its labels

  • It will be removed from the load balancer (it won't receive traffic anymore)

  • Another pod will be recreated immediately

  • But the problematic pod is still here, and we can inspect and debug it

  • We can even re-add it to the rotation if necessary

    (Very useful to troubleshoot intermittent and elusive bugs)

k8s/daemonset.md

518/791

Labels and advanced rollout control

  • Conversely, we can add pods matching a service's selector

  • These pods will then receive requests and serve traffic

  • Examples:

    • one-shot pod with all debug flags enabled, to collect logs

    • pods created automatically, but added to rotation in a second step
      (by setting their label accordingly)

  • This gives us building blocks for canary and blue/green deployments

k8s/daemonset.md

519/791

Advanced label selectors

  • As indicated earlier, service selectors are limited to a logical AND

  • But in many other places in the Kubernetes API, we can use complex selectors

    (e.g. Deployment, ReplicaSet, DaemonSet, NetworkPolicy ...)

  • These allow extra operations; specifically:

    • checking for presence (or absence) of a label

    • checking if a label is (or is not) in a given set

  • Relevant documentation:

    Service spec, LabelSelector spec, label selector doc

k8s/daemonset.md

520/791

Example of advanced selector

theSelector:
  matchLabels:
    app: portal
    component: api
  matchExpressions:
  - key: release
    operator: In
    values: [ production, preproduction ]
  - key: signed-off-by
    operator: Exists

This selector matches pods that meet all the indicated conditions.

operator can be In, NotIn, Exists, DoesNotExist.

A nil selector matches nothing, a {} selector matches everything.
(Because that means "match all pods that meet at least zero condition".)

k8s/daemonset.md

521/791

Services and Endpoints

  • Each Service has a corresponding Endpoints resource

    (see kubectl get endpoints or kubectl get ep)

  • That Endpoints resource is used by various controllers

    (e.g. kube-proxy when setting up iptables rules for ClusterIP services)

  • These Endpoints are populated (and updated) with the Service selector

  • We can update the Endpoints manually, but our changes will get overwritten

  • ... Except if the Service selector is empty!

k8s/daemonset.md

522/791

Empty Service selector

  • If a service selector is empty, Endpoints don't get updated automatically

    (but we can still set them manually)

  • This lets us create Services pointing to arbitrary destinations

    (potentially outside the cluster; or things that are not in pods)

  • Another use-case: the kubernetes service in the default namespace

    (its Endpoints are maintained automatically by the API server)

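For instance, a hypothetical Service pointing at an external database could be sketched like this (names, port, and IP address are made up):

    apiVersion: v1
    kind: Service
    metadata:
      name: external-db
    spec:                    # no selector: the Endpoints won't be managed automatically
      ports:
      - port: 5432
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: external-db      # must have the same name as the Service
    subsets:
    - addresses:
      - ip: 192.0.2.10       # an address outside the cluster (example value)
      ports:
      - port: 5432
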
523/791

:EN:- Scaling with Daemon Sets :FR:- Utilisation de Daemon Sets

k8s/daemonset.md

Image separating from the next part

524/791

Rolling updates

(automatically generated title slide)

525/791

Rolling updates

  • By default (without rolling updates), when a scaled resource is updated:

    • new pods are created

    • old pods are terminated

    • ... all at the same time

    • if something goes wrong, ¯\_(ツ)_/¯

k8s/rollout.md

526/791

Rolling updates

  • With rolling updates, when a Deployment is updated, it happens progressively

  • The Deployment controls multiple Replica Sets

  • Each Replica Set is a group of identical Pods

    (with the same image, arguments, parameters ...)

  • During the rolling update, we have at least two Replica Sets:

    • the "new" set (corresponding to the "target" version)

    • at least one "old" set

  • We can have multiple "old" sets

    (if we start another update before the first one is done)

k8s/rollout.md

527/791

Update strategy

  • Two parameters determine the pace of the rollout: maxUnavailable and maxSurge

  • They can be specified in absolute number of pods, or percentage of the replicas count

  • At any given time ...

    • there will always be at least replicas-maxUnavailable pods available

    • there will never be more than replicas+maxSurge pods in total

    • there will therefore be up to maxUnavailable+maxSurge pods being updated

  • We have the possibility of rolling back to the previous version
    (if the update fails or is unsatisfactory in any way)

k8s/rollout.md

528/791

Checking current rollout parameters

  • Recall how we build custom reports with kubectl and jq:
  • Show the rollout plan for our deployments:
    kubectl get deploy -o json |
    jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate"

k8s/rollout.md

529/791

Rolling updates in practice

  • As of Kubernetes 1.8, we can do rolling updates with:

    deployments, daemonsets, statefulsets

  • Editing one of these resources will automatically result in a rolling update

  • Rolling updates can be monitored with the kubectl rollout subcommand

k8s/rollout.md

530/791

Rolling out the new worker service

  • Let's monitor what's going on by opening a few terminals, and run:
    kubectl get pods -w
    kubectl get replicasets -w
    kubectl get deployments -w
  • Update worker either with kubectl edit, or by running:
    kubectl set image deploy worker worker=dockercoins/worker:v0.2
531/791

Rolling out the new worker service

  • Let's monitor what's going on by opening a few terminals, and run:
    kubectl get pods -w
    kubectl get replicasets -w
    kubectl get deployments -w
  • Update worker either with kubectl edit, or by running:
    kubectl set image deploy worker worker=dockercoins/worker:v0.2

That rollout should be pretty quick. What shows in the web UI?

k8s/rollout.md

532/791

Give it some time

  • At first, it looks like nothing is happening (the graph remains at the same level)

  • According to kubectl get deploy -w, the deployment was updated really quickly

  • But kubectl get pods -w tells a different story

  • The old pods are still here, and they stay in Terminating state for a while

  • Eventually, they are terminated; and then the graph decreases significantly

  • This delay is due to the fact that our worker doesn't handle signals

  • Kubernetes sends a "polite" shutdown request to the worker, which ignores it

  • After a grace period, Kubernetes gets impatient and kills the container

    (The grace period is 30 seconds, but can be changed if needed)

k8s/rollout.md

533/791

Rolling out something invalid

  • What happens if we make a mistake?
  • Update worker by specifying a non-existent image:

    kubectl set image deploy worker worker=dockercoins/worker:v0.3
  • Check what's going on:

    kubectl rollout status deploy worker
534/791

Rolling out something invalid

  • What happens if we make a mistake?
  • Update worker by specifying a non-existent image:

    kubectl set image deploy worker worker=dockercoins/worker:v0.3
  • Check what's going on:

    kubectl rollout status deploy worker

Our rollout is stuck. However, the app is not dead.

(After a minute, it will stabilize to be 20-25% slower.)

k8s/rollout.md

535/791

What's going on with our rollout?

  • Why is our app a bit slower?

  • Because MaxUnavailable=25%

    ... So the rollout terminated 2 replicas out of 10 available

  • Okay, but why do we see 5 new replicas being rolled out?

  • Because MaxSurge=25%

    ... So in addition to replacing 2 replicas, the rollout is also starting 3 more

  • It rounded down the number of MaxUnavailable pods conservatively,
    but the total number of pods being rolled out is allowed to be 25% + 25% = 50%

k8s/rollout.md

536/791

The nitty-gritty details

  • We start with 10 pods running for the worker deployment

  • Current settings: MaxUnavailable=25% and MaxSurge=25%

  • When we start the rollout:

    • two replicas are taken down (as per MaxUnavailable=25%)
    • two others are created (with the new version) to replace them
    • three others are created (with the new version, as per MaxSurge=25%)
  • Now we have 8 replicas up and running, and 5 being deployed

  • Our rollout is stuck at this point!

k8s/rollout.md

537/791

Checking the dashboard during the bad rollout

If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.

  • Connect to the dashboard that we deployed earlier

  • Check that we have failures in Deployments, Pods, and Replica Sets

  • Can we see the reason for the failure?

k8s/rollout.md

538/791

Recovering from a bad rollout

  • We could push some v0.3 image

    (the pod retry logic will eventually catch it and the rollout will proceed)

  • Or we could invoke a manual rollback

  • Cancel the deployment and wait for the dust to settle:
    kubectl rollout undo deploy worker
    kubectl rollout status deploy worker

k8s/rollout.md

539/791

Rolling back to an older version

  • We reverted to v0.2

  • But this version still has a performance problem

  • How can we get back to the previous version?

k8s/rollout.md

540/791

Multiple "undos"

  • What happens if we try kubectl rollout undo again?
  • Try it:

    kubectl rollout undo deployment worker
  • Check the web UI, the list of pods ...

🤔 That didn't work.

k8s/rollout.md

541/791

Multiple "undos" don't work

  • If we see successive versions as a stack:

    • kubectl rollout undo doesn't "pop" the last element from the stack

    • it copies the N-1th element to the top

  • Multiple "undos" just swap back and forth between the last two versions!

  • Go back to v0.2 again:
    kubectl rollout undo deployment worker

k8s/rollout.md

542/791

In this specific scenario

  • Our version numbers are easy to guess

  • What if we had used git hashes?

  • What if we had changed other parameters in the Pod spec?

k8s/rollout.md

543/791

Listing versions

  • We can list successive versions of a Deployment with kubectl rollout history
  • Look at our successive versions:
    kubectl rollout history deployment worker

We don't see all revisions.

We might see something like 1, 4, 5.

(Depending on how many "undos" we did before.)

k8s/rollout.md

544/791

Explaining deployment revisions

  • These revisions correspond to our Replica Sets

  • This information is stored in the Replica Set annotations

  • Check the annotations for our replica sets:
    kubectl describe replicasets -l app=worker | grep -A3 ^Annotations

k8s/rollout.md

545/791

What about the missing revisions?

  • The missing revisions are stored in another annotation:

    deployment.kubernetes.io/revision-history

  • These are not shown in kubectl rollout history

  • We could easily reconstruct the full list with a script

    (if we wanted to!)

k8s/rollout.md

546/791

Rolling back to an older version

  • kubectl rollout undo can work with a revision number
  • Roll back to the "known good" deployment version:

    kubectl rollout undo deployment worker --to-revision=1
  • Check the web UI or the list of pods

k8s/rollout.md

547/791

Changing rollout parameters

  • We want to:

    • revert to v0.1
    • be conservative on availability (always have desired number of available workers)
    • go slow on rollout speed (update only one pod at a time)
    • give some time to our workers to "warm up" before starting more

The corresponding changes can be expressed in the following YAML snippet:

spec:
  template:
    spec:
      containers:
      - name: worker
        image: dockercoins/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 10

k8s/rollout.md

548/791

Applying changes through a YAML patch

  • We could use kubectl edit deployment worker

  • But we could also use kubectl patch with the exact YAML shown before

  • Apply all our changes and wait for them to take effect:
    kubectl patch deployment worker -p "
    spec:
      template:
        spec:
          containers:
          - name: worker
            image: dockercoins/worker:v0.1
      strategy:
        rollingUpdate:
          maxUnavailable: 0
          maxSurge: 1
      minReadySeconds: 10
    "
    kubectl rollout status deployment worker
    kubectl get deploy -o json worker |
      jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"
549/791

:EN:- Rolling updates :EN:- Rolling back a bad deployment

:FR:- Mettre à jour un déploiement :FR:- Concept de rolling update et rollback :FR:- Paramétrer la vitesse de déploiement

k8s/rollout.md

Image separating from the next part

550/791

Healthchecks

(automatically generated title slide)

551/791

Healthchecks

  • Containers can have healthchecks

  • There are three kinds of healthchecks, corresponding to very different use-cases:

    • liveness = detect when a container is "dead" and needs to be restarted

    • readiness = detect when a container is ready to serve traffic

    • startup = detect if a container has finished booting

  • These healthchecks are optional (we can use none, all, or some of them)

  • Different probes are available (HTTP request, TCP connection, program execution)

  • Let's see the difference and how to use them!

k8s/healthchecks.md

552/791

Liveness probe

This container is dead, we don't know how to fix it, other than restarting it.

  • Indicates if the container is dead or alive

  • A dead container cannot come back to life

  • If the liveness probe fails, the container is killed (destroyed)

    (to make really sure that it's really dead; no zombies or undeads!)

  • What happens next depends on the pod's restartPolicy:

    • Never: the container is not restarted

    • OnFailure or Always: the container is restarted

k8s/healthchecks.md

553/791

When to use a liveness probe

  • To indicate failures that can't be recovered

    • deadlocks (causing all requests to time out)

    • internal corruption (causing all requests to error)

  • Anything where our incident response would be "just restart/reboot it"

Do not use liveness probes for problems that can't be fixed by a restart

  • Otherwise we just restart our pods for no reason, creating useless load

k8s/healthchecks.md

554/791

Readiness probe (1)

Make sure that a container is ready before continuing a rolling update.

  • Indicates if the container is ready to handle traffic

  • When doing a rolling update, the Deployment controller waits for Pods to be ready

    (a Pod is ready when all the containers in the Pod are ready)

  • Improves reliability and safety of rolling updates:

    • don't roll out a broken version (that doesn't pass readiness checks)

    • don't lose processing capacity during a rolling update

k8s/healthchecks.md

555/791

Readiness probe (2)

Temporarily remove a container (overloaded or otherwise) from a Service load balancer.

  • A container can mark itself "not ready" temporarily

    (e.g. if it's overloaded or needs to reload/restart/garbage collect...)

  • If a container becomes "unready" it might be ready again soon

  • If the readiness probe fails:

    • the container is not killed

    • if the pod is a member of a service, it is temporarily removed

    • it is re-added as soon as the readiness probe passes again

k8s/healthchecks.md

556/791

When to use a readiness probe

  • To indicate failure due to an external cause

    • database is down or unreachable

    • mandatory auth or other backend service unavailable

  • To indicate temporary failure or unavailability

    • application can only service N parallel connections

    • runtime is busy doing garbage collection or initial data load

  • To redirect new connections to other Pods

    (e.g. fail the readiness probe when the Pod's load is too high)

k8s/healthchecks.md

557/791

Dependencies

  • If a web server depends on a database to function, and the database is down:

    • the web server's liveness probe should succeed

    • the web server's readiness probe should fail

  • Same thing for any hard dependency (without which the container can't work)

Do not fail liveness probes for problems that are external to the container

k8s/healthchecks.md

558/791

Timing and thresholds

  • Probes are executed at intervals of periodSeconds (default: 10)

  • The timeout for a probe is set with timeoutSeconds (default: 1)

If a probe takes longer than that, it is considered a failure

  • A probe is considered successful after successThreshold successes (default: 1)

  • A probe is considered failing after failureThreshold failures (default: 3)

  • A probe can have an initialDelaySeconds parameter (default: 0)

  • Kubernetes will wait that amount of time before running the probe for the first time

    (this is important to avoid killing services that take a long time to start)
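
For reference, here is what these parameters look like in a container spec (a sketch; the /healthz path and port 8080 are placeholders, and the values shown are the defaults listed above):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 0
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3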

k8s/healthchecks.md

559/791

Startup probe

The container takes too long to start, and is killed by the liveness probe!

  • By default, probes (including liveness) start immediately

  • With the default probe interval and failure threshold:

    a container must respond in less than 30 seconds, or it will be killed!

  • There are two ways to avoid that:

    • set initialDelaySeconds (a fixed, rigid delay)

    • use a startupProbe

  • Kubernetes will run only the startup probe, and when it succeeds, run the other probes
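
As an illustration, a startup probe along these lines (path and port are placeholders) gives the container up to 30×10 = 300 seconds to start before the other probes kick in:

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  failureThreshold: 30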

k8s/healthchecks.md

560/791

When to use a startup probe

  • For containers that take a long time to start

    (more than 30 seconds)

  • Especially if that time can vary a lot

    (e.g. fast in dev, slow in prod, or the other way around)

k8s/healthchecks.md

561/791

Different types of probes

  • HTTP request

    • specify URL of the request (and optional headers)

    • any status code between 200 and 399 indicates success

  • TCP connection

    • the probe succeeds if the TCP port is open
  • arbitrary exec

    • a command is executed in the container

    • exit status of zero indicates success
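
As an example, a TCP readiness probe could look like this (a sketch; port 6379 is just a placeholder, e.g. for a Redis container):

readinessProbe:
  tcpSocket:
    port: 6379
  periodSeconds: 10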

k8s/healthchecks.md

562/791

Benefits of using probes

  • Rolling updates proceed when containers are actually ready

    (as opposed to merely started)

  • Containers in a broken state get killed and restarted

    (instead of serving errors or timeouts)

  • Unavailable backends get removed from load balancer rotation

    (thus improving response times across the board)

  • If a probe is not defined, it's as if there was an "always successful" probe

k8s/healthchecks.md

563/791

Example: HTTP probe

Here is a pod manifest using an HTTP liveness probe:

apiVersion: v1
kind: Pod
metadata:
  name: healthy-app
spec:
  containers:
  - name: myapp
    image: myregistry.io/myapp:v1.0
    livenessProbe:
      httpGet:
        path: /health
        port: 80
      periodSeconds: 5

If the backend serves an error, or takes longer than 1s, 3 times in a row, it gets killed.

k8s/healthchecks.md

564/791

Example: exec probe

Here is a pod template for a Redis server:

apiVersion: v1
kind: Pod
metadata:
  name: redis-with-liveness
spec:
  containers:
  - name: redis
    image: redis
    livenessProbe:
      exec:
        command: ["redis-cli", "ping"]

If the Redis process becomes unresponsive, it will be killed.

k8s/healthchecks.md

565/791

Questions to ask before adding healthchecks

  • Do we want liveness, readiness, both?

    (sometimes, we can use the same check, but with different failure thresholds)

  • Do we have existing HTTP endpoints that we can use?

  • Do we need to add new endpoints, or perhaps use something else?

  • Are our healthchecks likely to use resources and/or slow down the app?

  • Do they depend on additional services?

    (this can be particularly tricky, see next slide)

k8s/healthchecks.md

566/791

Healthchecks and dependencies

  • Liveness checks should not be influenced by the state of external services

  • All checks should reply quickly (by default, less than 1 second)

  • Otherwise, they are considered to fail

  • This might require checking the health of dependencies asynchronously

    (e.g. if a database or API might be healthy but still take more than 1 second to reply, we should check the status asynchronously and report a cached status)

k8s/healthchecks.md

567/791

Healthchecks for workers

(In that context, worker = process that doesn't accept connections)

  • Readiness is useful mostly for rolling updates

    (because workers aren't backends for a service)

  • Liveness may help us restart a broken worker, but how can we check it?

  • Embedding an HTTP server is a (potentially expensive) option

  • Using a "lease" file can be relatively easy:

    • touch a file during each iteration of the main loop

    • check the timestamp of that file from an exec probe

  • Writing logs (and checking them from the probe) also works
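
Here is a sketch of the "lease" file technique (the /tmp/lease path is arbitrary; this assumes the worker's main loop runs something like `touch /tmp/lease` at each iteration):

livenessProbe:
  exec:
    command: ["sh", "-c", "find /tmp/lease -mmin -1 | grep -q ."]
  periodSeconds: 30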

568/791

:EN:- Using healthchecks to improve availability :FR:- Utiliser des healthchecks pour améliorer la disponibilité

k8s/healthchecks.md

Image separating from the next part

569/791

The Kubernetes dashboard

(automatically generated title slide)

570/791

The Kubernetes dashboard

  • Kubernetes resources can also be viewed with a web dashboard

  • Dashboard users need to authenticate

    (typically with a token)

  • The dashboard should be exposed over HTTPS

    (to prevent interception of the aforementioned token)

  • Ideally, this requires obtaining a proper TLS certificate

    (for instance, with Let's Encrypt)

k8s/dashboard.md

571/791

Three ways to install the dashboard

  • Our k8s directory has no less than three manifests!

  • dashboard-recommended.yaml

    (purely internal dashboard; user must be created manually)

  • dashboard-with-token.yaml

    (dashboard exposed with NodePort; creates an admin user for us)

  • dashboard-insecure.yaml aka YOLO

    (dashboard exposed over HTTP; gives root access to anonymous users)

k8s/dashboard.md

572/791

dashboard-insecure.yaml

  • This will allow anyone to deploy anything on your cluster

    (without any authentication whatsoever)

  • Do not use this, except maybe on a local cluster

    (or a cluster that you will destroy a few minutes later)

  • On "normal" clusters, use dashboard-with-token.yaml instead!

k8s/dashboard.md

573/791

What's in the manifest?

  • The dashboard itself

  • An HTTP/HTTPS unwrapper (using socat)

  • The guest/admin account

  • Create all the dashboard resources, with the following command:
    kubectl apply -f ~/container.training/k8s/dashboard-insecure.yaml

k8s/dashboard.md

574/791

Connecting to the dashboard

  • Check which port the dashboard is on:
    kubectl get svc dashboard

You'll want the 3xxxx port.

The dashboard will then ask you which authentication you want to use.

k8s/dashboard.md

575/791

Dashboard authentication

  • We have three authentication options at this point:

    • token (associated with a role that has appropriate permissions)

    • kubeconfig (e.g. using the ~/.kube/config file from node1)

    • "skip" (use the dashboard "service account")

  • Let's use "skip": we're logged in!

Remember, we just added a backdoor to our Kubernetes cluster!

k8s/dashboard.md

577/791

Closing the backdoor

  • Seriously, don't leave that thing running!
  • Remove what we just created:
    kubectl delete -f ~/container.training/k8s/dashboard-insecure.yaml

k8s/dashboard.md

578/791

The risks

k8s/dashboard.md

579/791

dashboard-with-token.yaml

  • This is a less risky way to deploy the dashboard

  • It's not completely secure, either:

    • we're using a self-signed certificate

    • this is subject to eavesdropping attacks

  • Using kubectl port-forward or kubectl proxy is even better

k8s/dashboard.md

580/791

What's in the manifest?

  • The dashboard itself (but exposed with a NodePort)

  • A ServiceAccount with cluster-admin privileges

    (named kubernetes-dashboard:cluster-admin)

  • Create all the dashboard resources, with the following command:
    kubectl apply -f ~/container.training/k8s/dashboard-with-token.yaml

k8s/dashboard.md

581/791

Obtaining the token

  • The manifest creates a ServiceAccount

  • Kubernetes will automatically generate a token for that ServiceAccount

  • Display the token:
    kubectl --namespace=kubernetes-dashboard \
    describe secret cluster-admin-token

The token should start with eyJ... (it's a JSON Web Token).

Note that the secret name will actually be cluster-admin-token-xxxxx.
(But kubectl prefix matches are great!)

k8s/dashboard.md

582/791

Connecting to the dashboard

  • Check which port the dashboard is on:
    kubectl get svc --namespace=kubernetes-dashboard

You'll want the 3xxxx port.

The dashboard will then ask you which authentication you want to use.

k8s/dashboard.md

583/791

Dashboard authentication

  • Select "token" authentication

  • Copy paste the token (starting with eyJ...) obtained earlier

  • We're logged in!

k8s/dashboard.md

584/791

Other dashboards

k8s/dashboard.md

585/791

Image separating from the next part

586/791

Security implications of kubectl apply

(automatically generated title slide)

587/791

Security implications of kubectl apply

  • When we do kubectl apply -f <URL>, we create arbitrary resources

  • Resources can be evil; imagine a deployment that ...

    • starts bitcoin miners on the whole cluster

    • hides in a non-default namespace

    • bind-mounts our nodes' filesystem

    • inserts SSH keys in the root account (on the node)

    • encrypts our data and ransoms it

    • ☠️☠️☠️

k8s/dashboard.md

594/791

kubectl apply is the new curl | sh

  • curl | sh is convenient

  • It's safe if you use HTTPS URLs from trusted sources

  • kubectl apply -f is convenient

  • It's safe if you use HTTPS URLs from trusted sources

  • Example: the official setup instructions for most pod networks

  • It introduces new failure modes

    (for instance, if you try to apply YAML from a link that's no longer valid)

597/791

:EN:- The Kubernetes dashboard :FR:- Le dashboard Kubernetes

k8s/dashboard.md

Image separating from the next part

598/791

k9s

(automatically generated title slide)

599/791

k9s

  • Somewhere in between CLI and GUI (or web UI), we can find the magic land of TUI

  • Some folks love them, some folks hate them, some are indifferent ...

  • But it's nice to have different options!

  • Let's see one particular TUI for Kubernetes: k9s

k8s/k9s.md

600/791

Installing k9s

  • If you are using a training cluster or the shpod image, k9s is pre-installed

  • Otherwise, it can be installed easily (see the example below)

  • We don't need to set up or configure anything

    (it will use the same configuration as kubectl and other well-behaved clients)

  • Just run k9s to fire it up!
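
For instance, assuming Homebrew is available on your machine (single-binary releases are also published on the k9s GitHub releases page):

    brew install k9s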

k8s/k9s.md

601/791

What kind of resource do we want to see?

  • Press : to change the type of resource to view

  • Then type, for instance, ns or namespace or nam[TAB], then [ENTER]

  • Use the arrows to move down to e.g. kube-system, and press [ENTER]

  • Or, type /kub or /sys to filter the output, and press [ENTER] twice

    (once to exit the filter, once to enter the namespace)

  • We now see the pods in kube-system!

k8s/k9s.md

602/791

Interacting with pods

  • l to view logs

  • d to describe

  • s to get a shell (won't work if sh isn't available in the container image)

  • e to edit

  • shift-f to define port forwarding

  • ctrl-k to kill

  • [ESC] to get out or get back

k8s/k9s.md

603/791

Quick navigation between namespaces

  • On top of the screen, we should see shortcuts like this:

    <0> all
    <1> kube-system
    <2> default
  • Pressing the corresponding number switches to that namespace

    (or shows resources across all namespaces with 0)

  • Locate a namespace with a copy of DockerCoins, and go there!

k8s/k9s.md

604/791

Interacting with Deployments

  • View Deployments (type : deploy [ENTER])

  • Select e.g. worker

  • Scale it with s

  • View its aggregated logs with l

k8s/k9s.md

605/791

Exit

  • Exit at any time with Ctrl-C

  • k9s will "remember" where you were

    (and go back there next time you run it)

k8s/k9s.md

606/791

Pros

  • Very convenient to navigate through resources

    (hopping from a deployment, to its pod, to another namespace, etc.)

  • Very convenient to quickly view logs of e.g. init containers

  • Very convenient to get a (quasi) realtime view of resources

    (if we use watch kubectl get a lot, we will probably like k9s)

k8s/k9s.md

607/791

Cons

  • Doesn't promote automation / scripting

    (if you repeat the same things over and over, there is a scripting opportunity)

  • Not all features are available

    (e.g. executing arbitrary commands in containers)

k8s/k9s.md

608/791

Conclusion

Try it out, and see if it makes you more productive!

609/791

:EN:- The k9s TUI :FR:- L'interface texte k9s

k8s/k9s.md

Image separating from the next part

610/791

Tilt

(automatically generated title slide)

611/791

Tilt

  • What does a development workflow look like?

    • make changes

    • test / see these changes

    • repeat!

  • What does it look like, with containers?

    🤔

k8s/tilt.md

612/791

Basic Docker workflow

  • Preparation

    • write Dockerfiles
  • Iteration

    • edit code
    • docker build
    • docker run
    • test
    • docker stop

Straightforward when we have a single container.

k8s/tilt.md

613/791

Docker workflow with volumes

  • Preparation

    • write Dockerfiles
    • docker build + docker run
  • Iteration

    • edit code
    • test

Note: only works with interpreted languages.
(Compiled languages require extra work.)

k8s/tilt.md

614/791

Docker workflow with Compose

  • Preparation

    • write Dockerfiles + Compose file
    • docker-compose up
  • Iteration

    • edit code
    • test
    • docker-compose up (as needed)

Simplifies complex scenarios (multiple containers).
Facilitates updating images.

k8s/tilt.md

615/791

Basic Kubernetes workflow

  • Preparation

    • write Dockerfiles
    • write Kubernetes YAML
    • set up container registry
  • Iteration

    • edit code
    • build images
    • push images
    • update Kubernetes resources

Seems simple enough, right?

k8s/tilt.md

616/791

Basic Kubernetes workflow

  • Preparation

    • write Dockerfiles
    • write Kubernetes YAML
    • set up container registry
  • Iteration

    • edit code
    • build images
    • push images
    • update Kubernetes resources

Ah, right ...

k8s/tilt.md

617/791

We need a registry

  • Remember "build, ship, and run"

  • Registries are involved in the "ship" phase

  • With Docker, we were building and running on the same node

  • We didn't need a registry!

  • With Kubernetes, though ...

k8s/tilt.md

618/791

Special case of single node clusters

  • If our Kubernetes has only one node ...

  • ... We can build directly on that node ...

  • ... We don't need to push images ...

  • ... We don't need to run a registry!

  • Examples: Docker Desktop, Minikube ...

k8s/tilt.md

619/791

When we have more than one node

  • Which registry should we use?

    (Docker Hub, Quay, cloud-based, self-hosted ...)

  • Should we use a single registry, or one per cluster or environment?

  • Which tags and credentials should we use?

    (in particular when using a shared registry!)

  • How do we provision that registry and its users?

  • How do we adjust our Kubernetes YAML manifests?

    (e.g. to inject image names and tags)

k8s/tilt.md

620/791

More questions

  • The whole cycle (build+push+update) is expensive

  • If we have many services, how do we update only the ones we need?

  • Can we take shortcuts?

    (e.g. synchronized files without going through a whole build+push+update cycle)

k8s/tilt.md

621/791

Tilt

  • Tilt is a tool to address all these questions

  • There are other similar tools (e.g. Skaffold)

  • We arbitrarily decided to focus on that one

k8s/tilt.md

622/791

Tilt in practice

  • The dockercoins directory in our repository has a Tiltfile

  • That Tiltfile includes definitions for the DockerCoins app, including:

    • building the images for the app

    • Kubernetes manifests to deploy the app

    • a self-hosted registry to host the app image

  • Let's try it out!

k8s/tilt.md

623/791

Running Tilt locally

These instructions are valid only if you run Tilt on your local machine.

If you are running Tilt on a remote machine or in a Pod, see next slide.

k8s/tilt.md

624/791

Running Tilt on a remote machine

  • If Tilt runs remotely, we can't access http://localhost:10350

  • Our Tiltfile includes an ngrok tunnel, let's use that

  • Start Tilt:

    tilt up
  • The ngrok URL should appear in the Tilt output

    (something like https://xxxx-aa-bb-cc-dd.ngrok.io/)

  • Open that URL in your browser

Note: it's also possible to run tilt up --host=0.0.0.0.

k8s/tilt.md

625/791

Kubernetes contexts

  • Tilt is designed to run in dev environments

  • It will try to figure out if we're really in a dev environment:

    • if Tilt thinks that we are on a local dev cluster, it will start

    • otherwise, it will give us a warning and it won't continue

  • In the latter case, we need to add one line to the Tiltfile

    (to tell Tilt "it's okay, you can run safely in this environment!")

  • If this happens, add the line to the Tiltfile

    (Tilt will tell you exactly what to add!)

  • We don't need to restart Tilt, it will detect the change immediately

k8s/tilt.md

626/791

What's in our Tiltfile?

  • Kubernetes manifests for a local registry

  • Kubernetes manifests for DockerCoins

  • Instructions indicating how to build DockerCoins' images

  • A tiny bit of sugar

    (telling Tilt which registry to use)

k8s/tilt.md

627/791

How does it work?

  • Tilt keeps track of dependencies between files and resources

    (a bit like a make that would run continuously)

  • It automatically alters some resources

    (for instance, it updates the images used in our Kubernetes manifests)

  • That's it!

(And of course, it provides a great web UI, lots of libraries, etc.)

k8s/tilt.md

628/791

What happens when we edit a file (1/2)

  • Let's change e.g. worker/worker.py

  • Thanks to this line,

    docker_build('dockercoins/worker', 'worker')

    ... Tilt watches the worker directory and uses it to build dockercoins/worker

  • Thanks to this line,

    default_registry('localhost:30555')

    ... Tilt actually renames dockercoins/worker to localhost:30555/dockercoins_worker

  • Tilt will tag the image with something like tilt-xxxxxxxxxx

k8s/tilt.md

629/791

What happens when we edit a file (2/2)

  • Thanks to this line,

    k8s_yaml('../k8s/dockercoins.yaml')

    ... Tilt is aware of our Kubernetes resources

  • The worker Deployment uses dockercoins/worker, so it must be updated

  • dockercoins/worker becomes localhost:30555/dockercoins_worker:tilt-xxx

  • The worker Deployment gets updated on the Kubernetes cluster

  • All these operations (and their log output) are visible in the Tilt UI

k8s/tilt.md

630/791

Configuration file format

  • The Tiltfile is written in Starlark

    (essentially a subset of Python)

  • Tilt monitors the Tiltfile too

    (so it reloads it immediately when we change it)
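
Putting together the calls shown in the previous slides, a heavily simplified Tiltfile for our setup is essentially three lines (this sketch only builds the worker image; the actual Tiltfile in the repository does a bit more):

docker_build('dockercoins/worker', 'worker')
default_registry('localhost:30555')
k8s_yaml('../k8s/dockercoins.yaml')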

k8s/tilt.md

631/791

Tilt "killer features"

  • Dependency engine

    (build or run only what's necessary)

  • Ability to watch resources

    (execute actions immediately, without explicitly running a command)

  • Rich library of function and helpers

    (build container images, manipulate YAML manifests...)

  • Convenient UI (web; TUI also available)

    (provides immediate feedback and logs)

  • Extensibility!

632/791

:EN:- Development workflow with Tilt :FR:- Développer avec Tilt

k8s/tilt.md

Image separating from the next part

633/791

Exercise — Healthchecks

(automatically generated title slide)

634/791

Exercise — Healthchecks

  • We want to add healthchecks to the rng service in dockercoins

  • The rng service exhibits an interesting behavior under load:

    its latency increases (which will cause probes to time out!)

  • We want to see:

    • what happens when the readiness probe fails

    • what happens when the liveness probe fails

    • how to set "appropriate" probes and probe parameters

exercises/healthchecks-details.md

635/791

Setup

  • First, deploy a new copy of dockercoins

    (for instance, in a brand new namespace)

  • Pro tip #1: ping (e.g. with httping) the rng service at all times

    • it should initially show a few milliseconds latency

    • that will increase when we scale up

    • it will also let us detect when the service goes "boom"

  • Pro tip #2: also keep an eye on the web UI

exercises/healthchecks-details.md

636/791

Readiness

  • Add a readiness probe to rng

    • this requires editing the pod template in the Deployment manifest

    • use a simple HTTP check on the / route of the service

    • keep all other parameters (timeouts, thresholds...) at their default values

  • Check what happens when deploying an invalid image for rng (e.g. alpine)

(If the probe was set up correctly, the app will continue to work, because Kubernetes won't switch over the traffic to the alpine containers, because they don't pass the readiness probe.)

exercises/healthchecks-details.md

637/791

Readiness under load

  • Then roll back rng to the original image

  • Check what happens when we scale up the worker Deployment to 15+ workers

    (get the latency above 1 second)

(We should now observe intermittent unavailability of the service, i.e. every 30 seconds it will be unreachable for a bit, then come back, then go away again, etc.)

exercises/healthchecks-details.md

638/791

Liveness

  • Now replace the readiness probe with a liveness probe

  • What happens now?

(At first the behavior looks the same as with the readiness probe: service becomes unreachable, then reachable again, etc.; but there is a significant difference behind the scenes. What is it?)

exercises/healthchecks-details.md

639/791

Readiness and liveness

  • Bonus questions!

  • What happens if we enable both probes at the same time?

  • What strategies can we use so that both probes are useful?

exercises/healthchecks-details.md

640/791

Image separating from the next part

641/791

Exposing HTTP services with Ingress resources

(automatically generated title slide)

642/791

Exposing HTTP services with Ingress resources

  • HTTP services are typically exposed on port 80

    (and 443 for HTTPS)

  • NodePort services are great, but they are not on port 80

    (by default, they use port range 30000-32767)

  • How can we get many HTTP services on port 80? 🤔

k8s/ingress.md

643/791

Various ways to expose something on port 80

  • Service with type: LoadBalancer

    costs a little bit of money; not always available

  • Service with one (or multiple) ExternalIP

    requires public nodes; limited by number of nodes

  • Service with hostPort or hostNetwork

    same limitations as ExternalIP; even harder to manage

  • Ingress resources

    addresses all these limitations, yay!

k8s/ingress.md

644/791

LoadBalancer vs Ingress

  • Service with type: LoadBalancer

    • requires a particular controller (e.g. CCM, MetalLB)
    • if TLS is desired, it has to be implemented by the app
    • works for any TCP protocol (not just HTTP)
    • doesn't interpret the HTTP protocol (no fancy routing)
    • costs a bit of money for each service
  • Ingress

    • requires an ingress controller
    • can implement TLS transparently for the app
    • only supports HTTP
    • can do content-based routing (e.g. per URI)
    • lower cost per service
      (exact pricing depends on provider's model)

k8s/ingress.md

645/791

Ingress resources

  • Kubernetes API resource (kubectl get ingress/ingresses/ing)

  • Designed to expose HTTP services

  • Requires an ingress controller

    (otherwise, resources can be created, but nothing happens)

  • Some ingress controllers are based on existing load balancers

    (HAProxy, NGINX...)

  • Some are standalone, and sometimes designed for Kubernetes

    (Contour, Traefik...)

  • Note: there is no "default" or "official" ingress controller!

k8s/ingress.md

646/791

Ingress standard features

  • Load balancing

  • SSL termination

  • Name-based virtual hosting

  • URI routing

    (e.g. /api → api-service, /static → assets-service)

k8s/ingress.md

647/791

Ingress extended features

(Not always supported; supported through annotations, CRDs, etc.)

  • Routing with other headers or cookies

  • A/B testing

  • Canary deployment

  • etc.

k8s/ingress.md

648/791

Principle of operation

  • Step 1: deploy an ingress controller

    (one-time setup)

  • Step 2: create Ingress resources

    • maps a domain and/or path to a Kubernetes Service

    • the controller watches ingress resources and sets up a LB

  • Step 3: set up DNS

    • associate DNS entries with the load balancer address

k8s/ingress.md

649/791

Special cases

  • GKE has "GKE Ingress", a custom ingress controller

    (enabled by default)

  • EKS has "AWS ALB Ingress Controller" as well

    (not enabled by default, requires extra setup)

  • They leverage cloud-specific HTTP load balancers

    (GCP HTTP LB, AWS ALB)

  • They typically incur a cost per ingress resource

k8s/ingress.md

650/791

Single or multiple LoadBalancer

  • Most ingress controllers will create a LoadBalancer Service

    (and will receive all HTTP/HTTPS traffic through it)

  • We need to point our DNS entries to the IP address of that LB

  • Some rare ingress controllers will allocate one LB per ingress resource

    (example: the GKE Ingress and ALB Ingress mentioned previously)

  • This leads to increased costs

  • Note that it's possible to have multiple "rules" per ingress resource

    (this will reduce costs but may be less convenient to manage)

k8s/ingress.md

651/791

Ingress in action

  • We will deploy the Traefik ingress controller

    • this is an arbitrary choice

    • maybe motivated by the fact that Traefik releases are named after cheeses

  • For DNS, we will use nip.io

    • *.1.2.3.4.nip.io resolves to 1.2.3.4
  • We will create ingress resources for various HTTP services

k8s/ingress.md

652/791

Deploying pods listening on port 80

  • We want our ingress load balancer to be available on port 80

  • The best way to do that would be with a LoadBalancer service

    ... but it requires support from the underlying infrastructure

  • Instead, we are going to use the hostNetwork mode on the Traefik pods

  • Let's see what this hostNetwork mode is about ...

k8s/ingress.md

653/791

Without hostNetwork

  • Normally, each pod gets its own network namespace

    (sometimes called sandbox or network sandbox)

  • An IP address is assigned to the pod

  • This IP address is routed/connected to the cluster network

  • All containers of that pod are sharing that network namespace

    (and therefore using the same IP address)

k8s/ingress.md

654/791

With hostNetwork: true

  • No network namespace gets created

  • The pod is using the network namespace of the host

  • It "sees" (and can use) the interfaces (and IP addresses) of the host

  • The pod can receive outside traffic directly, on any port

  • Downside: with most network plugins, network policies won't work for that pod

    • most network policies work at the IP address level

    • filtering that pod = filtering traffic from the node
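
As a sketch, this is where hostNetwork goes in a pod spec (nginx is just a stand-in image here):

spec:
  hostNetwork: true
  containers:
  - name: web
    image: nginx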

k8s/ingress.md

655/791

Other techniques to expose port 80

k8s/ingress.md

656/791

Running Traefik

  • The Traefik documentation recommends to use a Helm chart

  • For simplicity, we're going to use a custom YAML manifest

  • Our manifest will:

    • use a Daemon Set so that each node can accept connections

    • enable hostNetwork

    • add a toleration so that Traefik also runs on all nodes

  • We could do the same with the official Helm chart

k8s/ingress.md

657/791

Taints and tolerations

  • A taint is an attribute added to a node

  • It prevents pods from running on the node

  • ... Unless they have a matching toleration

  • When deploying with kubeadm:

    • a taint is placed on the node dedicated to the control plane

    • the pods running the control plane have a matching toleration
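
For illustration, here is how we could taint a node ourselves (the dedicated=database key/value is a made-up example):

    kubectl taint nodes node2 dedicated=database:NoSchedule

A pod that should run there anyway would carry the matching toleration in its spec:

tolerations:
- key: dedicated
  operator: Equal
  value: database
  effect: NoSchedule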

k8s/ingress.md

658/791

Checking taints on our nodes

  • Check our nodes specs:
    kubectl get node node1 -o json | jq .spec
    kubectl get node node2 -o json | jq .spec

We should see a result only for node1 (the one with the control plane):

"taints": [
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/master"
}
]

k8s/ingress.md

659/791

Understanding a taint

  • The key can be interpreted as:

    • a reservation for a special set of pods
      (here, this means "this node is reserved for the control plane")

    • an error condition on the node
      (for instance: "disk full," do not start new pods here!)

  • The effect can be:

    • NoSchedule (don't run new pods here)

    • PreferNoSchedule (try not to run new pods here)

    • NoExecute (don't run new pods and evict running pods)

k8s/ingress.md

660/791

Checking tolerations on the control plane

  • Check tolerations for CoreDNS:
    kubectl -n kube-system get deployments coredns -o json |
    jq .spec.template.spec.tolerations

The result should include:

{
  "effect": "NoSchedule",
  "key": "node-role.kubernetes.io/master"
}

It means: "bypass the exact taint that we saw earlier on node1."

k8s/ingress.md

661/791

Special tolerations

  • Check tolerations on kube-proxy:
    kubectl -n kube-system get ds kube-proxy -o json |
    jq .spec.template.spec.tolerations

The result should include:

{
  "operator": "Exists"
}

This one is a special case that means "ignore all taints and run anyway."

k8s/ingress.md

662/791

Running Traefik on our cluster

  • Apply the YAML:
    kubectl apply -f ~/container.training/k8s/traefik.yaml

k8s/ingress.md

663/791

Checking that Traefik runs correctly

  • If Traefik started correctly, we now have a web server listening on each node
  • Check that Traefik is serving 80/tcp:
    curl localhost

We should get a 404 page not found error.

This is normal: we haven't provided any ingress rule yet.

k8s/ingress.md

664/791

Setting up DNS

  • To make our lives easier, we will use nip.io

  • Check out http://red.A.B.C.D.nip.io

    (replacing A.B.C.D with the IP address of node1)

  • We should get the same 404 page not found error

    (meaning that our DNS is "set up properly", so to speak!)

k8s/ingress.md

665/791

Traefik web UI

  • Traefik provides a web dashboard

  • With the current install method, it's listening on port 8080

  • Go to http://node1:8080 (replacing node1 with its IP address)

k8s/ingress.md

666/791

Setting up host-based routing ingress rules

  • We are going to use the jpetazzo/color image

  • This image contains a simple static HTTP server on port 80

  • We will run 3 deployments (red, green, blue)

  • We will create 3 services (one for each deployment)

  • Then we will create 3 ingress rules (one for each service)

  • We will route <color>.A.B.C.D.nip.io to the corresponding deployment

k8s/ingress.md

667/791

Running colorful web servers

  • Run all three deployments:

    kubectl create deployment red --image=jpetazzo/color
    kubectl create deployment green --image=jpetazzo/color
    kubectl create deployment blue --image=jpetazzo/color
  • Create a service for each of them:

    kubectl expose deployment red --port=80
    kubectl expose deployment green --port=80
    kubectl expose deployment blue --port=80

k8s/ingress.md

668/791

Creating ingress resources

  • Before Kubernetes 1.19, we must use YAML manifests

    (see example on next slide)

  • Since Kubernetes 1.19, we can use kubectl create ingress

    kubectl create ingress red \
    --rule=red.A.B.C.D.nip.io/*=red:80
  • We can specify multiple rules per resource

    kubectl create ingress rgb \
    --rule=red.A.B.C.D.nip.io/*=red:80 \
    --rule=green.A.B.C.D.nip.io/*=green:80 \
    --rule=blue.A.B.C.D.nip.io/*=blue:80

k8s/ingress.md

669/791

Pay attention to the *!

  • The * is important:

    --rule=red.A.B.C.D.nip.io/*=red:80
  • It means "all URIs below that path"

  • Without the *, it means "only that exact path"

    (if we omit it, requests for e.g. red.A.B.C.D.nip.io/hello will 404)

k8s/ingress.md

670/791

Ingress resources in YAML

Here is a minimal host-based ingress resource:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: red
spec:
  rules:
  - host: red.A.B.C.D.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: red
          servicePort: 80

(It is in k8s/ingress.yaml.)

k8s/ingress.md

671/791

Ingress API version

  • The YAML on the previous slide uses apiVersion: networking.k8s.io/v1beta1

  • Starting with Kubernetes 1.19, networking.k8s.io/v1 is available

  • However, with Kubernetes 1.19 (and later), we can use kubectl create ingress

  • We chose to keep an "old" (deprecated!) YAML example for folks still using older versions of Kubernetes

  • If we want to see "modern" YAML, we can use -o yaml --dry-run=client:

    kubectl create ingress red -o yaml --dry-run=client \
    --rule=red.A.B.C.D.nip.io/*=red:80

k8s/ingress.md

672/791

Creating ingress resources

  • Create the ingress resources with kubectl create ingress

    (or use the YAML manifests if using Kubernetes 1.18 or older)

  • Make sure to update the hostnames!

  • Check that you can connect to the exposed web apps

k8s/ingress.md

673/791

Using multiple ingress controllers

  • You can have multiple ingress controllers active simultaneously

    (e.g. Traefik and NGINX)

  • You can even have multiple instances of the same controller

    (e.g. one for internal, another for external traffic)

  • To indicate which ingress controller should be used by a given Ingress resource:

    • before Kubernetes 1.18, use the kubernetes.io/ingress.class annotation

    • since Kubernetes 1.18, use the ingressClassName field
      (which should refer to an existing IngressClass resource)
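
As a sketch, on a recent cluster the spec fragment selecting a controller looks like this (the class name internal-nginx is hypothetical and must match an existing IngressClass on the cluster):

spec:
  ingressClassName: internal-nginx
  rules:
  # ... the usual rules go here ...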

k8s/ingress.md

674/791

Ingress shortcomings

k8s/ingress.md

675/791

Ingress in the future

  • The Gateway API SIG might be the future of Ingress

  • It proposes new resources:

    GatewayClass, Gateway, HTTPRoute, TCPRoute...

  • It is still in alpha stage

k8s/ingress.md

676/791

Vendor-specific example

  • Let's see how to implement canary releases

  • The example here will use Traefik v1

    (which is obsolete)

  • It won't work on your Kubernetes cluster!

    (unless you're running an oooooold version of Kubernetes)

    (and an equally oooooooold version of Traefik)

  • We've left it here just as an example!

k8s/ingress.md

677/791

Canary releases

  • A canary release (or canary launch or canary deployment) is a release that will process only a small fraction of the workload

  • After deploying the canary, we compare its metrics to the normal release

  • If the metrics look good, the canary will progressively receive more traffic

    (until it gets 100% and becomes the new normal release)

  • If the metrics aren't good, the canary is automatically removed

  • When we deploy a bad release, only a tiny fraction of traffic is affected

k8s/ingress.md

678/791

Various ways to implement canary

  • Example 1: canary for a microservice

    • 1% of all requests (sampled randomly) are sent to the canary
    • the remaining 99% are sent to the normal release
  • Example 2: canary for a web app

    • 1% of users are sent to the canary web site
    • the remaining 99% are sent to the normal release
  • Example 3: canary for shipping physical goods

    • 1% of orders are shipped with the canary process
    • the remaining 99% are shipped with the normal process
  • We're going to implement example 1 (per-request routing)

k8s/ingress.md

679/791

Canary releases with Traefik v1

  • We need to deploy the canary and expose it with a separate service

  • Then, in the Ingress resource, we need:

    • multiple paths entries (one for each service, canary and normal)

    • an extra annotation indicating the weight of each service

  • If we want, we can send requests to more than 2 services

k8s/ingress.md

680/791

The Ingress resource

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rgb
  annotations:
    traefik.ingress.kubernetes.io/service-weights: |
      red: 50%
      green: 25%
      blue: 25%
spec:
  rules:
  - host: rgb.A.B.C.D.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: red
          servicePort: 80
      - path: /
        backend:
          serviceName: green
          servicePort: 80
      - path: /
        backend:
          serviceName: blue
          servicePort: 80

k8s/ingress.md

681/791

Other ingress controllers

Just to illustrate how different things are ...

  • With the NGINX ingress controller:

    • define two ingress resources
      (specifying rules with the same host+path)

    • add nginx.ingress.kubernetes.io/canary annotations on each

  • With Linkerd2:

    • define two services

    • define an extra service for the weighted aggregate of the two

    • define a TrafficSplit (this is a CRD introduced by the SMI spec)
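
For the NGINX option, the second (canary) Ingress would carry annotations along these lines (a sketch; 10% is an arbitrary weight):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"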

k8s/ingress.md

682/791

We need more than that

What we saw is just one of the multiple building blocks that we need to achieve a canary release.

We also need:

  • metrics (latency, performance ...) for our releases

  • automation to alter canary weights

    (increase canary weight if metrics look good; decrease otherwise)

  • a mechanism to manage the lifecycle of the canary releases

    (create them, promote them, delete them ...)

For inspiration, check flagger by Weave.

683/791

:EN:- The Ingress resource :FR:- La ressource ingress

k8s/ingress.md

Image separating from the next part

684/791

Ingress and TLS certificates

(automatically generated title slide)

685/791

Ingress and TLS certificates

  • Most ingress controllers support TLS connections

    (in a way that is standard across controllers)

  • The TLS key and certificate are stored in a Secret

  • The Secret is then referenced in the Ingress resource:

    spec:
      tls:
      - secretName: XXX
        hosts:
        - YYY
      rules:
      - ZZZ

k8s/ingress-tls.md

686/791

Obtaining a certificate

  • In the next section, we will need a TLS key and certificate

  • These usually come in PEM format:

    -----BEGIN CERTIFICATE-----
    MIIDATCCAemg...
    ...
    -----END CERTIFICATE-----
  • We will see how to generate a self-signed certificate

    (easy, fast, but won't be recognized by web browsers)

  • We will also see how to obtain a certificate from Let's Encrypt

    (requires the cluster to be reachable through a domain name)

k8s/ingress-tls.md

687/791

In production ...

  • A very popular option is to use the cert-manager operator

  • It's a flexible, modular approach to automated certificate management

  • For simplicity, in this section, we will use certbot

  • The method shown here works well for one-time certs, but lacks:

    • automation

    • renewal

k8s/ingress-tls.md

688/791

Which domain to use

  • If you're doing this in a training:

    the instructor will tell you what to use

  • If you're doing this on your own Kubernetes cluster:

    you should use a domain that points to your cluster

  • More precisely:

    you should use a domain that points to your ingress controller

  • If you don't have a domain name, you can use nip.io

    (if your ingress controller is on 1.2.3.4, you can use whatever.1.2.3.4.nip.io)

k8s/ingress-tls.md

689/791

Setting $DOMAIN

  • We will use $DOMAIN in the following section

  • Let's set it now

  • Set the DOMAIN environment variable:
    export DOMAIN=...

k8s/ingress-tls.md

690/791

Choose your adventure!

  • We present 3 methods to obtain a certificate

  • We suggest using method 1 (self-signed certificate)

    • it's the simplest and fastest method

    • it doesn't rely on other components

  • You're welcome to try methods 2 and 3 (leveraging certbot)

    • they're great if you want to understand "how the sausage is made"

    • they require some hacks (make sure port 80 is available)

    • they won't be used in production (cert-manager is better)

k8s/ingress-tls.md

691/791

Method 1, self-signed certificate

  • Thanks to openssl, generating a self-signed cert is just one command away!
  • Generate a key and certificate:
    openssl req \
    -newkey rsa -nodes -keyout privkey.pem \
    -x509 -days 30 -subj /CN=$DOMAIN/ -out cert.pem

This will create two files, privkey.pem and cert.pem.

k8s/ingress-tls.md

692/791

Method 2, Let's Encrypt with certbot

  • certbot is an ACME client

    (Automatic Certificate Management Environment)

  • We can use it to obtain certificates from Let's Encrypt

  • It needs to listen to port 80

    (to complete the HTTP-01 challenge)

  • If port 80 is already taken by our ingress controller, see method 3

k8s/ingress-tls.md

693/791

HTTP-01 challenge

  • certbot contacts Let's Encrypt, asking for a cert for $DOMAIN

  • Let's Encrypt gives a token to certbot

  • Let's Encrypt then tries to access the following URL:

    http://$DOMAIN/.well-known/acme-challenge/<token>

  • That URL needs to be routed to certbot

  • Once Let's Encrypt gets the response from certbot, it issues the certificate

k8s/ingress-tls.md

694/791

Running certbot

  • There is a very convenient container image, certbot/certbot

  • Let's use a volume to get easy access to the generated key and certificate

  • Obtain a certificate from Let's Encrypt:
    EMAIL=your.address@example.com
    docker run --rm -p 80:80 -v $PWD/letsencrypt:/etc/letsencrypt \
    certbot/certbot certonly \
    -m $EMAIL \
    --standalone --agree-tos -n \
    --domain $DOMAIN \
    --test-cert

This will get us a "staging" certificate. Remove --test-cert to obtain a real certificate.

k8s/ingress-tls.md

695/791

Copying the key and certificate

  • If everything went fine:

    • the key and certificate files are in letsencrypt/live/$DOMAIN

    • they are owned by root

  • Grant ourselves permissions on these files:

    sudo chown -R $USER letsencrypt
  • Copy the certificate and key to the current directory:

    cp letsencrypt/live/$DOMAIN/{cert,privkey}.pem .

k8s/ingress-tls.md

696/791

Method 3, certbot with Ingress

  • Sometimes, we can't simply listen to port 80:

    • we might already have an ingress controller there
    • our nodes might be on an internal network
  • But we can define an Ingress to route the HTTP-01 challenge to certbot!

  • Our Ingress needs to route all requests to /.well-known/acme-challenge to certbot

  • There are at least two ways to do that:

    • run certbot in a Pod (and extract the cert+key when it's done)
    • run certbot in a container on a node (and manually route traffic to it)
  • We're going to use the second option

    (mostly because it will give us an excuse to tinker with Endpoints resources!)

k8s/ingress-tls.md

697/791

The plan

  • We need the following resources:

    • an Endpoints¹ listing a hard-coded IP address and port
      (where our certbot container will be listening)

    • a Service corresponding to that Endpoints

    • an Ingress sending requests to /.well-known/acme-challenge/* to that Service
      (we don't even need to include a domain name in it)

  • Then we need to start certbot so that it's listening on the right address+port

¹Endpoints is always plural, because even a single resource is a list of endpoints.
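
As a rough sketch only (the real manifest is ~/container.training/k8s/certbot.yaml; the resource name and the A.B.C.D address below are placeholders), the Endpoints and Service could look like this:

apiVersion: v1
kind: Endpoints
metadata:
  name: certbot
subsets:
- addresses:
  - ip: A.B.C.D
  ports:
  - port: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: certbot
spec:
  ports:
  - port: 80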

k8s/ingress-tls.md

698/791

Creating resources

  • We prepared a YAML file to create the three resources

  • However, the Endpoints needs to be adapted to put the current node's address

  • Edit ~/container.training/k8s/certbot.yaml

    (replace A.B.C.D with the current node's address)

  • Create the resources:

    kubectl apply -f ~/container.training/k8s/certbot.yaml

k8s/ingress-tls.md

699/791

Obtaining the certificate

  • Now we can run certbot, listening on the port listed in the Endpoints

    (i.e. 8000)

  • Run certbot:
    EMAIL=your.address@example.com
    docker run --rm -p 8000:80 -v $PWD/letsencrypt:/etc/letsencrypt \
    certbot/certbot certonly \
    -m $EMAIL \
    --standalone --agree-tos -n \
    --domain $DOMAIN \
    --test-cert

This is using the staging environment. Remove --test-cert to get a production certificate.

k8s/ingress-tls.md

700/791

Copying the certificate

  • Just like in the previous method, the certificate is in letsencrypt/live/$DOMAIN

    (and owned by root)

  • Grant ourselves permissions on these files:

    sudo chown -R $USER letsencrypt
  • Copy the certificate and key to the current directory:

    cp letsencrypt/live/$DOMAIN/{cert,privkey}.pem .

k8s/ingress-tls.md

701/791

Creating the Secret

  • We now have two files:

    • privkey.pem (the private key)

    • cert.pem (the certificate)

  • We can create a Secret to hold them

  • Create the Secret:
    kubectl create secret tls $DOMAIN --cert=cert.pem --key=privkey.pem

k8s/ingress-tls.md

702/791

Ingress with TLS

  • To enable TLS for an Ingress, we need to add a tls section to the Ingress:

    spec:
      tls:
      - secretName: DOMAIN
        hosts:
        - DOMAIN
      rules: ...
  • The list of hosts will be used by the ingress controller

    (to know which certificate to use with SNI)

  • Of course, the name of the secret can be different

    (here, for clarity and convenience, we set it to match the domain)

k8s/ingress-tls.md

703/791

kubectl create ingress

  • We can also create an Ingress using TLS directly

  • To do it, add ,tls=secret-name to an Ingress rule

  • Example:

    kubectl create ingress hello \
    --rule=hello.example.com/*=hello:80,tls=hello
  • The domain will automatically be inferred from the rule

k8s/ingress-tls.md

704/791

About the ingress controller

  • Many ingress controllers can use different "stores" for keys and certificates

  • Our ingress controller needs to be configured to use secrets

    (as opposed to, e.g., obtaining certificates directly with Let's Encrypt)

k8s/ingress-tls.md

705/791

Using the certificate

  • Add the tls section to an existing Ingress

  • If you need to see what the tls section should look like, you can:

    • kubectl explain ingress.spec.tls

    • kubectl create ingress --dry-run=client -o yaml ...

    • check ~/container.training/k8s/ingress.yaml for inspiration

    • read the docs

  • Check that the URL now works over https

    (it might take a minute to be picked up by the ingress controller)

k8s/ingress-tls.md

706/791

Discussion

To repeat something mentioned earlier ...

  • The methods presented here are for educational purpose only

  • In most production scenarios, the certificates will be obtained automatically

  • A very popular option is to use the cert-manager operator

k8s/ingress-tls.md

707/791

Security

  • Since TLS certificates are stored in Secrets...

  • ...It means that our Ingress controller must be able to read Secrets

  • A vulnerability in the Ingress controller can have dramatic consequences

  • See CVE-2021-25742 for an example

  • This can be mitigated by limiting which Secrets the controller can access

    (RBAC rules can specify resource names)

  • Downside: each TLS secret must explicitly be listed in RBAC

    (but that's better than a full cluster compromise, isn't it?)

708/791

:EN:- Ingress and TLS :FR:- Certificats TLS et ingress

k8s/ingress-tls.md

Image separating from the next part

709/791

Volumes

(automatically generated title slide)

710/791

Volumes

  • Volumes are special directories that are mounted in containers

  • Volumes can have many different purposes:

    • share files and directories between containers running on the same machine

    • share files and directories between containers and their host

    • centralize configuration information in Kubernetes and expose it to containers

    • manage credentials and secrets and expose them securely to containers

    • store persistent data for stateful services

    • access storage systems (like Ceph, EBS, NFS, Portworx, and many others)

k8s/volumes.md

711/791

Kubernetes volumes vs. Docker volumes

  • Kubernetes and Docker volumes are very similar

    (the Kubernetes documentation says otherwise ...
    but it refers to Docker 1.7, which was released in 2015!)

  • Docker volumes allow us to share data between containers running on the same host

  • Kubernetes volumes allow us to share data between containers in the same pod

  • Both Docker and Kubernetes volumes enable access to storage systems

  • Kubernetes volumes are also used to expose configuration and secrets

  • Docker has specific concepts for configuration and secrets
    (but under the hood, the technical implementation is similar)

  • If you're not familiar with Docker volumes, you can safely ignore this slide!

k8s/volumes.md

712/791

Volumes ≠ Persistent Volumes

  • Volumes and Persistent Volumes are related, but very different!

  • Volumes:

    • appear in Pod specifications (we'll see that in a few slides)

    • do not exist as API resources (cannot do kubectl get volumes)

  • Persistent Volumes:

    • are API resources (can do kubectl get persistentvolumes)

    • correspond to concrete volumes (e.g. on a SAN, EBS, etc.)

    • cannot be associated with a Pod directly; but through a Persistent Volume Claim

    • won't be discussed further in this section

k8s/volumes.md

713/791

Adding a volume to a Pod

  • We will start with the simplest Pod manifest we can find

  • We will add a volume to that Pod manifest

  • We will mount that volume in a container in the Pod

  • By default, this volume will be an emptyDir

    (an empty directory)

  • It will "shadow" the directory where it's mounted

k8s/volumes.md

714/791

Our basic Pod

apiVersion: v1
kind: Pod
metadata:
  name: nginx-without-volume
spec:
  containers:
  - name: nginx
    image: nginx

This is an MVP! (Minimum Viable Pod 😉)

It runs a single NGINX container.

k8s/volumes.md

715/791

Trying the basic pod

  • Create the Pod:
    kubectl create -f ~/container.training/k8s/nginx-1-without-volume.yaml
  • Get its IP address:

    IPADDR=$(kubectl get pod nginx-without-volume -o jsonpath={.status.podIP})
  • Send a request with curl:

    curl $IPADDR

(We should see the "Welcome to NGINX" page.)

k8s/volumes.md

716/791

Adding a volume

  • We need to add the volume in two places:

    • at the Pod level (to declare the volume)

    • at the container level (to mount the volume)

  • We will declare a volume named www

  • No type is specified, so it will default to emptyDir

    (as the name implies, it will be initialized as an empty directory at pod creation)

  • In that pod, there is also a container named nginx

  • That container mounts the volume www to path /usr/share/nginx/html/

k8s/volumes.md

717/791

The Pod with a volume

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-volume
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/

k8s/volumes.md

718/791

Trying the Pod with a volume

  • Create the Pod:
    kubectl create -f ~/container.training/k8s/nginx-2-with-volume.yaml
  • Get its IP address:

    IPADDR=$(kubectl get pod nginx-with-volume -o jsonpath={.status.podIP})
  • Send a request with curl:

    curl $IPADDR

(We should now see a "403 Forbidden" error page.)

k8s/volumes.md

719/791

Populating the volume with another container

  • Let's add another container to the Pod

  • Let's mount the volume in both containers

  • That container will populate the volume with static files

  • NGINX will then serve these static files

  • To populate the volume, we will clone the Spoon-Knife repository

k8s/volumes.md

720/791

Sharing a volume between two containers

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-git
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
  restartPolicy: OnFailure

k8s/volumes.md

721/791

Sharing a volume, explained

  • We added another container to the pod

  • That container mounts the www volume on a different path (/www)

  • It uses the alpine image

  • When started, it installs git and clones the octocat/Spoon-Knife repository

    (that repository contains a tiny HTML website)

  • As a result, NGINX now serves this website

k8s/volumes.md

722/791

Trying the shared volume

  • This one will be time-sensitive!

  • We need to catch the Pod IP address as soon as it's created

  • Then send a request to it as fast as possible

  • Watch the pods (so that we can catch the Pod IP address)
    kubectl get pods -o wide --watch

k8s/volumes.md

723/791

Shared volume in action

  • Create the pod:
    kubectl create -f ~/container.training/k8s/nginx-3-with-git.yaml
  • As soon as we see its IP address, access it:
    curl $IP
  • A few seconds later, the state of the pod will change; access it again:
    curl $IP

The first time, we should see "403 Forbidden".

The second time, we should see the HTML file from the Spoon-Knife repository.

k8s/volumes.md

724/791

Explanations

  • Both containers are started at the same time

  • NGINX starts very quickly

    (it can serve requests immediately)

  • But at this point, the volume is empty

    (NGINX serves "403 Forbidden")

  • The other container installs git and clones the repository

    (this takes a bit longer)

  • When the other container is done, the volume holds the repository

    (NGINX serves the HTML file)

k8s/volumes.md

725/791

The devil is in the details

  • The default restartPolicy is Always

  • This would cause our git container to run again ... and again ... and again

    (with an exponential back-off delay, as explained in the documentation)

  • That's why we specified restartPolicy: OnFailure

k8s/volumes.md

726/791

Inconsistencies

  • There is a short period of time during which the website is not available

    (because the git container hasn't done its job yet)

  • With a bigger website, we could get inconsistent results

    (where only a part of the content is ready)

  • In real applications, this could cause incorrect results

  • How can we avoid that?

k8s/volumes.md

727/791

Init Containers

  • We can define containers that should execute before the main ones

  • They will be executed in order

    (instead of in parallel)

  • They must all succeed before the main containers are started

  • This is exactly what we need here!

  • Let's see one in action

See the Init Containers documentation for all the details.

k8s/volumes.md

728/791

Defining Init Containers

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-init
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  initContainers:
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/

k8s/volumes.md

729/791

Trying the init container

  • Create the pod:

    kubectl create -f ~/container.training/k8s/nginx-4-with-init.yaml
  • Try to send HTTP requests as soon as the pod comes up

  • This time, instead of "403 Forbidden" we get a "connection refused"

  • NGINX doesn't start until the git container has done its job

  • We never get inconsistent results

    (a "half-ready" container)

k8s/volumes.md

730/791

Other uses of init containers

  • Load content

  • Generate configuration (or certificates)

  • Database migrations

  • Waiting for other services to be up (see the sketch after this list)

    (to avoid a flurry of connection errors in the main container)

  • etc.
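
For instance, here is a minimal sketch of an init container waiting for a hypothetical db Service to show up in cluster DNS before the main container starts (the names, images, and command are purely illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: wait-for-db-example
    spec:
      initContainers:
      - name: wait-for-db
        image: busybox
        # Loop until the (hypothetical) "db" Service name resolves
        command: [ "sh", "-c", "until nslookup db; do echo waiting for db; sleep 2; done" ]
      containers:
      - name: app
        image: nginx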

k8s/volumes.md

731/791

Volume lifecycle

  • The lifecycle of a volume is linked to the pod's lifecycle

  • This means that a volume is created when the pod is created

  • This is mostly relevant for emptyDir volumes (an explicit example follows this list)

    (other volumes, like remote storage, are not "created" but rather "attached")

  • A volume survives across container restarts

  • A volume is destroyed (or, for remote storage, detached) when the pod is destroyed
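
An emptyDir volume can also be declared explicitly; here is a minimal sketch (the Memory medium and sizeLimit are optional extras):

    volumes:
    - name: scratch
      emptyDir: {}          # same as the implicit default used earlier
    - name: cache-in-ram
      emptyDir:
        medium: Memory      # backed by RAM (tmpfs) instead of node-local disk
        sizeLimit: 64Mi     # optional cap on how much data the volume can hold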

732/791

:EN:- Sharing data between containers with volumes :EN:- When and how to use Init Containers

:FR:- Partager des données grâce aux volumes :FR:- Quand et comment utiliser un Init Container

k8s/volumes.md

Image separating from the next part

733/791

Managing configuration

(automatically generated title slide)

734/791

Managing configuration

  • Some applications need to be configured (obviously!)

  • There are many ways for our code to pick up configuration:

    • command-line arguments

    • environment variables

    • configuration files

    • configuration servers (getting configuration from a database, an API...)

    • ... and more (because programmers can be very creative!)

  • How can we do these things with containers and Kubernetes?

k8s/configuration.md

735/791

Passing configuration to containers

  • There are many ways to pass configuration to code running in a container:

    • baking it into a custom image

    • command-line arguments

    • environment variables

    • injecting configuration files

    • exposing it over the Kubernetes API

    • configuration servers

  • Let's review these different strategies!

k8s/configuration.md

736/791

Baking custom images

  • Put the configuration in the image

    (it can be in a configuration file, but also in ENV or CMD instructions)

  • It's easy! It's simple!

  • Unfortunately, it also has downsides:

    • multiplication of images

    • different images for dev, staging, prod ...

    • minor reconfigurations require a whole build/push/pull cycle

  • Avoid this approach unless you really don't have time to explore other options

k8s/configuration.md

737/791

Command-line arguments

  • Indicate what should run in the container

  • Pass command and/or args in the container options in a Pod's template

  • Both command and args are arrays

  • Example (source):

    args:
    - "agent"
    - "-bootstrap-expect=3"
    - "-retry-join=provider=k8s label_selector=\"app=consul\" namespace=\"$(NS)\""
    - "-client=0.0.0.0"
    - "-data-dir=/consul/data"
    - "-server"
    - "-ui"

k8s/configuration.md

738/791

args or command?

  • Use command to override the ENTRYPOINT defined in the image

  • Use args to keep the ENTRYPOINT defined in the image

    (the parameters specified in args are added to the ENTRYPOINT)

  • When in doubt, use command

  • It is also possible to use both command and args

    (they will be strung together, just like ENTRYPOINT and CMD; see the sketch below)

  • See the documentation for details on how they interact
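
For instance, here is a quick sketch using both (the image and values are purely illustrative):

    containers:
    - name: greeter
      image: alpine
      command: [ "echo" ]                 # overrides the image's ENTRYPOINT
      args: [ "hello", "from", "args" ]   # appended after command, like CMD after ENTRYPOINT

This container runs echo hello from args and exits.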

k8s/configuration.md

739/791

Command-line arguments, pros & cons

  • Works great when options are passed directly to the running program

    (otherwise, a wrapper script can work around the issue)

  • Works great when there aren't too many parameters

    (to avoid a 20-line args array)

  • Requires documentation and/or understanding of the underlying program

    ("which parameters and flags do I need, again?")

  • Well-suited for mandatory parameters (without default values)

  • Not ideal when we need to pass a real configuration file anyway

k8s/configuration.md

740/791

Environment variables

  • Pass options through the env map in the container specification

  • Example:

    env:
    - name: ADMIN_PORT
      value: "8080"
    - name: ADMIN_AUTH
      value: Basic
    - name: ADMIN_CRED
      value: "admin:0pensesame!"

value must be a string! Make sure that numbers and fancy strings are quoted.

🤔 Why this weird {name: xxx, value: yyy} scheme? It will be revealed soon!

k8s/configuration.md

741/791

The downward API

  • In the previous example, environment variables have fixed values

  • We can also use a mechanism called the downward API

  • The downward API allows exposing pod or container information

    • either through special files (we won't show that for now)

    • or through environment variables

  • The value of these environment variables is computed when the container is started

  • Remember: environment variables won't (can't) change after container start

  • Let's see a few concrete examples!

k8s/configuration.md

742/791

Exposing the pod's namespace

- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
  • Useful to generate FQDN of services

    (in some contexts, a short name is not enough)

  • For instance, the two commands should be equivalent:

    curl api-backend
    curl api-backend.$MY_POD_NAMESPACE.svc.cluster.local

k8s/configuration.md

743/791

Exposing the pod's IP address

- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
  • Useful if we need to know our IP address

    (we could also read it from eth0, but this is more solid)

k8s/configuration.md

744/791

Exposing the container's resource limits

- name: MY_MEM_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: test-container
      resource: limits.memory
  • Useful for runtimes where memory is garbage collected

  • Example: the JVM

    (the memory available to the JVM should be set with the -Xmx flag)

  • Best practice: set a memory limit, and pass it to the runtime

  • Note: recent versions of the JVM can do this automatically

    (see JDK-8146115 and this blog post for detailed examples)

k8s/configuration.md

745/791

More about the downward API

  • This documentation page gives more details about these environment variables

  • And this one explains the other way to use the downward API

    (through files that get created in the container filesystem; a sketch follows this list)

  • That second link also includes a list of all the fields that can be used with the downward API
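
As a teaser, here is a minimal sketch of that file-based flavor (the volume name, mount path, and file names are arbitrary):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-files-example
      labels:
        app: demo
    spec:
      containers:
      - name: main
        image: nginx
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels        # creates /etc/podinfo/labels
            fieldRef:
              fieldPath: metadata.labels
          - path: namespace     # creates /etc/podinfo/namespace
            fieldRef:
              fieldPath: metadata.namespace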

k8s/configuration.md

746/791

Environment variables, pros and cons

  • Works great when the running program expects these variables

  • Works great for optional parameters with reasonable defaults

    (since the container image can provide these defaults)

  • Sort of auto-documented

    (we can see which environment variables are defined in the image, and their values)

  • Can be (ab)used with longer values ...

  • ... You can put an entire Tomcat configuration file in an environment variable ...

  • ... But should you?

(Do it if you really need to, we're not judging! But we'll see better ways.)

k8s/configuration.md

747/791

Injecting configuration files

  • Sometimes, there is no way around it: we need to inject a full config file

  • Kubernetes provides a mechanism for that purpose: configmaps

  • A configmap is a Kubernetes resource that exists in a namespace

  • Conceptually, it's a key/value map

    (values are arbitrary strings)

  • We can think about them in (at least) two different ways:

    • as holding entire configuration file(s)

    • as holding individual configuration parameters

Note: to hold sensitive information, we can use "Secrets", which are another type of resource behaving very much like configmaps. We'll cover them just after!

k8s/configuration.md

748/791

Configmaps storing entire files

  • In this case, each key/value pair corresponds to a configuration file

  • Key = name of the file

  • Value = content of the file

  • There can be one key/value pair, or as many as necessary

    (for complex apps with multiple configuration files)

  • Examples:

    # Create a configmap with a single key, "app.conf"
    kubectl create configmap my-app-config --from-file=app.conf
    # Create a configmap with a single key, "app.conf" but another file
    kubectl create configmap my-app-config --from-file=app.conf=app-prod.conf
    # Create a configmap with multiple keys (one per file in the config.d directory)
    kubectl create configmap my-app-config --from-file=config.d/

k8s/configuration.md

749/791

Configmaps storing individual parameters

  • In this case, each key/value pair corresponds to a parameter

  • Key = name of the parameter

  • Value = value of the parameter

  • Examples:

    # Create a configmap with two keys
    kubectl create cm my-app-config \
    --from-literal=foreground=red \
    --from-literal=background=blue
    # Create a configmap from a file containing key=val pairs
    kubectl create cm my-app-config \
    --from-env-file=app.conf

k8s/configuration.md

750/791

Exposing configmaps to containers

  • Configmaps can be exposed as plain files in the filesystem of a container

    • this is achieved by declaring a volume and mounting it in the container

    • this is particularly effective for configmaps containing whole files

  • Configmaps can be exposed as environment variables in the container

    • this is achieved with the downward API

    • this is particularly effective for configmaps containing individual parameters

  • Let's see how to do both!

k8s/configuration.md

751/791

Example: HAProxy configuration

  • We are going to deploy HAProxy, a popular load balancer

  • It expects to find its configuration in a specific place:

    /usr/local/etc/haproxy/haproxy.cfg

  • We will create a ConfigMap holding the configuration file

  • Then we will mount that ConfigMap in a Pod running HAProxy

k8s/configuration.md

752/791

Blue/green load balancing

  • In this example, we will deploy two versions of our app:

    • the "blue" version in the blue namespace

    • the "green" version in the green namespace

  • In both namespaces, we will have a Deployment and a Service

    (both named color)

  • We want to load balance traffic between both namespaces

    (we can't do that with a simple service selector: these don't cross namespaces)

k8s/configuration.md

753/791

Deploying the app

  • We're going to use the image jpetazzo/color

    (it is a simple "HTTP echo" server showing which pod served the request)

  • We can create each Namespace, Deployment, and Service by hand, or...

  • We can deploy the app with a YAML manifest:
    kubectl apply -f ~/container.training/k8s/rainbow.yaml

k8s/configuration.md

754/791

Testing the app

  • Reminder: Service x in Namespace y is available through:

    x.y, x.y.svc, x.y.svc.cluster.local

  • Since the cluster.local suffix can change, we'll use x.y.svc

  • Check that the app is up and running:
    kubectl run --rm -it --restart=Never --image=nixery.dev/curl my-test-pod \
    curl color.blue.svc

k8s/configuration.md

755/791

Creating the HAProxy configuration

Here is the file that we will use, k8s/haproxy.cfg:

global
    daemon

defaults
    mode tcp
    timeout connect 5s
    timeout client 50s
    timeout server 50s

listen very-basic-load-balancer
    bind *:80
    server blue color.blue.svc:80
    server green color.green.svc:80
    # Note: the services above must exist,
    # otherwise HAproxy won't start.

k8s/configuration.md

756/791

Creating the ConfigMap

  • Create a ConfigMap named haproxy and holding the configuration file:

    kubectl create configmap haproxy --from-file=~/container.training/k8s/haproxy.cfg
  • Check what our configmap looks like:

    kubectl get configmap haproxy -o yaml

k8s/configuration.md

757/791

Using the ConfigMap

Here is k8s/haproxy.yaml, a Pod manifest using that ConfigMap:

apiVersion: v1
kind: Pod
metadata:
  name: haproxy
spec:
  volumes:
  - name: config
    configMap:
      name: haproxy
  containers:
  - name: haproxy
    image: haproxy:1
    volumeMounts:
    - name: config
      mountPath: /usr/local/etc/haproxy/

k8s/configuration.md

758/791

Creating the Pod

  • Create the HAProxy Pod:
    kubectl apply -f ~/container.training/k8s/haproxy.yaml
  • Check the IP address allocated to the pod:
    kubectl get pod haproxy -o wide
    IP=$(kubectl get pod haproxy -o json | jq -r .status.podIP)

k8s/configuration.md

759/791

Testing our load balancer

  • If everything went well, we should see a perfect round robin

    (one request to blue, one request to green, one request to blue, etc.)

  • Send a few requests:
    for i in $(seq 10); do
      curl $IP
    done

k8s/configuration.md

760/791

Exposing configmaps with the downward API

  • We are going to run a Docker registry on a custom port

  • By default, the registry listens on port 5000

  • This can be changed by setting environment variable REGISTRY_HTTP_ADDR

  • We are going to store the port number in a configmap

  • Then we will expose that configmap as a container environment variable

k8s/configuration.md

761/791

Creating the configmap

  • Our configmap will have a single key, http.addr:

    kubectl create configmap registry --from-literal=http.addr=0.0.0.0:80
  • Check our configmap:

    kubectl get configmap registry -o yaml

k8s/configuration.md

762/791

Using the configmap

We are going to use the following pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: registry
spec:
  containers:
  - name: registry
    image: registry
    env:
    - name: REGISTRY_HTTP_ADDR
      valueFrom:
        configMapKeyRef:
          name: registry
          key: http.addr

k8s/configuration.md

763/791

Using the configmap

  • Create the registry pod:
    kubectl apply -f ~/container.training/k8s/registry.yaml
  • Check the IP address allocated to the pod:

    kubectl get pod registry -o wide
    IP=$(kubectl get pod registry -o json | jq -r .status.podIP)
  • Confirm that the registry is available on port 80:

    curl $IP/v2/_catalog
764/791

:EN:- Managing application configuration :EN:- Exposing configuration with the downward API :EN:- Exposing configuration with Config Maps

:FR:- Gérer la configuration des applications :FR:- Configuration au travers de la downward API :FR:- Configurer les applications avec des Config Maps

k8s/configuration.md

Image separating from the next part

765/791

Managing secrets

(automatically generated title slide)

766/791

Managing secrets

  • Sometimes our code needs sensitive information:

    • passwords

    • API tokens

    • TLS keys

    • ...

  • Secrets can be used for that purpose

  • Secrets and ConfigMaps are very similar

k8s/secrets.md

767/791

Similarities between ConfigMap and Secrets

  • ConfigMaps and Secrets are key-value maps

    (a Secret can contain zero, one, or many key-value pairs)

  • They can both be exposed with the downward API or volumes

  • They can both be created with YAML or with a CLI command

    (kubectl create configmap / kubectl create secret)

k8s/secrets.md

768/791

ConfigMap and Secrets are different resources

  • They can have different RBAC permissions

    (e.g. the default view role can read ConfigMaps but not Secrets)

  • They indicate a different intent:

    "You should use secrets for things which are actually secret like API keys, credentials, etc., and use config map for not-secret configuration data."

    "In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."

    (Source: the author of both features)

k8s/secrets.md

769/791

Secrets have an optional type

  • The type indicates which keys must exist in the secrets, for instance:

    kubernetes.io/tls requires tls.crt and tls.key

    kubernetes.io/basic-auth requires username and password

    kubernetes.io/ssh-auth requires ssh-privatekey

    kubernetes.io/dockerconfigjson requires .dockerconfigjson

    kubernetes.io/service-account-token requires token, namespace, ca.crt

    (the whole list is in the documentation)

  • This is merely for our (human) convenience:

    “Ah yes, this secret is a ...”
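
For example, here is a minimal sketch of a typed Secret (the name and values are placeholders; stringData is explained a few slides later):

    apiVersion: v1
    kind: Secret
    metadata:
      name: basic-auth-example
    type: kubernetes.io/basic-auth
    stringData:
      username: admin        # key required by this type
      password: t0p-secret   # key required by this type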

k8s/secrets.md

770/791

Accessing private repositories

  • Let's see how to access an image on a private registry!

  • These images are protected by a username + password

    (on some registries, it's token + password, but it's the same thing)

  • To access a private image, we need to:

    • create a secret

    • reference that secret in a Pod template

    • or reference that secret in a ServiceAccount used by a Pod

k8s/secrets.md

771/791

In practice

  • Let's try to access an image on a private registry!

    • image = docker-registry.enix.io/jpetazzo/private:latest
    • user = reader
    • password = VmQvqdtXFwXfyy4Jb5DR
  • Create a Deployment using that image:

    kubectl create deployment priv \
    --image=docker-registry.enix.io/jpetazzo/private
  • Check that the Pod won't start:

    kubectl get pods --selector=app=priv

k8s/secrets.md

772/791

Creating a secret

  • Let's create a secret with the information provided earlier
  • Create the registry secret:
    kubectl create secret docker-registry enix \
    --docker-server=docker-registry.enix.io \
    --docker-username=reader \
    --docker-password=VmQvqdtXFwXfyy4Jb5DR

Why do we have to specify the registry address?

If we use multiple sets of credentials for different registries, it prevents leaking the credentials of one registry to another registry.

k8s/secrets.md

773/791

Using the secret

  • The first way to use a secret is to add it to imagePullSecrets

    (in the spec section of a Pod template)

  • Patch the priv Deployment that we created earlier:
    kubectl patch deploy priv --patch='
    spec:
      template:
        spec:
          imagePullSecrets:
          - name: enix
    '

k8s/secrets.md

774/791

Checking the results

  • Confirm that our Pod can now start correctly:
    kubectl get pods --selector=app=priv

k8s/secrets.md

775/791

Another way to use the secret

  • We can add the secret to the ServiceAccount

  • This is convenient to automatically use credentials for all pods

    (as long as they're using a specific ServiceAccount, of course)

  • Add the secret to the ServiceAccount:
    kubectl patch serviceaccount default --patch='
    imagePullSecrets:
    - name: enix
    '

k8s/secrets.md

776/791

Secrets are displayed with base64 encoding

  • When shown with e.g. kubectl get secrets -o yaml, secrets are base64-encoded

  • Likewise, when defining it with YAML, data values are base64-encoded

  • Example:

    kind: Secret
    apiVersion: v1
    metadata:
      name: pin-codes
    data:
      onetwothreefour: MTIzNA==
      zerozerozerozero: MDAwMA==
  • Keep in mind that this is just encoding, not encryption

  • It is very easy to automatically extract and decode secrets
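
For instance, a quick sketch using the pin-codes Secret shown above:

    # Extract one key of the Secret and decode it
    kubectl get secret pin-codes -o jsonpath={.data.onetwothreefour} | base64 -d
    # ... prints: 1234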

k8s/secrets.md

777/791

Using stringData

  • When creating a Secret, it is possible to bypass base64

  • Just use stringData instead of data:

    kind: Secret
    apiVersion: v1
    metadata:
      name: pin-codes
    stringData:
      onetwothreefour: "1234"
      zerozerozerozero: "0000"
  • It will show up as base64 if you kubectl get -o yaml

  • No type was specified, so it defaults to Opaque

k8s/secrets.md

778/791

Encryption at rest

  • It is possible to encrypt secrets at rest

  • This means that secrets will be safe if someone ...

    • steals our etcd servers

    • steals our backups

    • snoops, e.g., the iSCSI link between our etcd servers and our SAN

  • However, starting the API server will now require human intervention

    (to provide the decryption keys)

  • This is only for extremely regulated environments (military, nation states...)
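
For the curious: encryption at rest is enabled by pointing the API server (via its --encryption-provider-config flag) at a configuration file that looks roughly like this sketch:

    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
    - resources: [ "secrets" ]
      providers:
      - aescbc:
          keys:
          - name: key1
            secret: <base64-encoded 32-byte key>   # keep this file itself secret!
      - identity: {}   # fallback so pre-existing, unencrypted Secrets can still be read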

k8s/secrets.md

779/791

Immutable ConfigMaps and Secrets

  • Since Kubernetes 1.19, it is possible to mark a ConfigMap or Secret as immutable

    kubectl patch configmap xyz --patch='{"immutable": true}'
  • This brings performance improvements when using lots of ConfigMaps and Secrets

    (lots = tens of thousands)

  • Once a ConfigMap or Secret has been marked as immutable:

    • its content cannot be changed anymore
    • the immutable field can't be changed back either
    • the only way to change it is to delete and re-create it
    • Pods using it will have to be re-created as well
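
In a YAML manifest, immutability is a top-level field; a minimal sketch:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: xyz
    data:
      color: blue
    immutable: true   # once applied, the data above can no longer be modified
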
780/791

:EN:- Handling passwords and tokens safely

:FR:- Manipulation de mots de passe, clés API etc.

k8s/secrets.md

Image separating from the next part

781/791

Executing batch jobs

(automatically generated title slide)

782/791

Executing batch jobs

  • Deployments are great for stateless web apps

    (as well as workers that keep running forever)

  • Pods (by themselves) are great for one-off tasks whose outcome we don't mind losing

    (because they don't get automatically restarted if something goes wrong)

  • Jobs are great for "long" background work

    ("long" being at least minutes or hours)

  • CronJobs are great to schedule Jobs at regular intervals

    (just like the classic UNIX cron daemon with its crontab files)

k8s/batch-jobs.md

783/791

Creating a Job

  • A Job will create a Pod

  • If the Pod fails, the Job will create another one

  • The Job will keep trying until:

    • either a Pod succeeds,

    • or we hit the backoff limit of the Job (default=6)

  • Create a Job that has a 50% chance of success:
    kubectl create job flipcoin --image=alpine -- sh -c 'exit $(($RANDOM%2))'

k8s/batch-jobs.md

784/791

Our Job in action

  • Our Job will create a Pod named flipcoin-xxxxx

  • If the Pod succeeds, the Job stops

  • If the Pod fails, the Job creates another Pod

  • Check the status of the Pod(s) created by the Job:
    kubectl get pods --selector=job-name=flipcoin

k8s/batch-jobs.md

785/791

More advanced jobs

  • We can specify a number of "completions" (default=1)

  • This indicates how many times the Job must be executed

  • We can specify the "parallelism" (default=1)

  • This indicates how many Pods should be running in parallel

  • These options cannot be specified with kubectl create job

    (we have to write our own YAML manifest to use them)
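
Such a manifest could look like this minimal sketch (names and values are arbitrary):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: flipcoin-batch
    spec:
      completions: 5      # run Pods until 5 of them have succeeded
      parallelism: 2      # run at most 2 Pods at the same time
      backoffLimit: 6     # give up after 6 failed attempts (this is the default)
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: flipcoin
            image: alpine
            command: [ "sh", "-c", "exit $(($RANDOM%2))" ]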

k8s/batch-jobs.md

786/791

Scheduling periodic background work

  • A Cron Job is a Job that will be executed at specific intervals

    (the name comes from the traditional cronjobs executed by the UNIX crond)

  • It requires a schedule, represented as five space-separated fields:

    • minute [0,59]
    • hour [0,23]
    • day of the month [1,31]
    • month of the year [1,12]
    • day of the week ([0,6] with 0=Sunday)
  • * means "all valid values"; /N means "every N"

  • Example: */3 * * * * means "every three minutes"

  • The website https://crontab.guru/ can help to create cron schedules!

k8s/batch-jobs.md

787/791

Creating a Cron Job

  • Let's create a simple job to be executed every three minutes

  • Careful: make sure that the job terminates!

    (the Cron Job will not wait for a previous Job to finish before starting the next one)

  • Create the Cron Job:

    kubectl create cronjob every3mins --schedule="*/3 * * * *" \
    --image=alpine -- sleep 10
  • Check the resource that was created:

    kubectl get cronjobs

k8s/batch-jobs.md

788/791

Cron Jobs in action

  • At the specified schedule, the Cron Job will create a Job

  • The Job will create a Pod

  • The Job will make sure that the Pod completes

    (re-creating another one if it fails, for instance if its node fails)

  • Check the Jobs that are created:
    kubectl get jobs

(It will take a few minutes before the first job is scheduled.)

k8s/batch-jobs.md

789/791

Setting a time limit

  • It is possible to set a time limit (or deadline) for a job

  • This is done with the field spec.activeDeadlineSeconds

    (by default, it is unlimited)

  • When the job is older than this time limit, all its pods are terminated

  • Note that there can also be a spec.activeDeadlineSeconds field in pods!

  • They can be set independently, and have different effects:

    • the deadline of the job will stop the entire job

    • the deadline of the pod will only stop an individual pod
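
For instance, here is a minimal sketch of a Job that gets terminated if it runs for more than 60 seconds (names and values are arbitrary):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: deadline-example
    spec:
      activeDeadlineSeconds: 60   # the whole Job (and its Pods) is stopped after 60 seconds
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: sleeper
            image: alpine
            command: [ "sleep", "600" ]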

790/791

:EN:- Running batch and cron jobs :FR:- Tâches périodiques (cron) et traitement par lots (batch)

k8s/batch-jobs.md

That's all, folks!
Questions?

end

shared/thankyou.md

791/791
