Hello!
On stage: Jérôme (@jpetazzo)
Backstage: Alexandre, Amy, Antoine, Aurélien (x2), Benji, David, Julien, Kostas, Nicolas, Thibault
The training will run from 9:30 to 13:00
There will be a break at (approximately) 11:00
You should ask questions! Lots of questions!
Use Mattermost to ask questions, get help, etc.
At the end of each day, there is a series of exercises
To make the most out of the training, please try the exercises!
(it will help to practice and memorize the content of the day)
We recommend taking at least one hour to work on the exercises
(if you understood the content of the day, it will be much faster)
Each day will start with a quick review of the exercises of the previous day
This was initially written by Jérôme Petazzoni to support in-person, instructor-led workshops and tutorials
Credit is also due to multiple contributors — thank you!
You can also follow along on your own, at your own pace
We included as much information as possible in these slides
We recommend having a mentor to help you ...
... Or be comfortable spending some time reading the Kubernetes documentation ...
... And looking for answers on StackOverflow and other outlets
We recommend that you open these slides in your browser:
Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
Type a slide number + ENTER to go to that slide
The slide number is also visible in the URL bar
(e.g. .../#123 for slide 123)
Slides will remain online so you can review them later if needed
(let's say we'll keep them online at least 1 year, how about that?)
You can download the slides using that URL:
https://2022-02-enix.container.training/slides.zip
(then open the file 2.yml.html)
You will find new versions of these slides on:
You are welcome to use, re-use, share these slides
These slides are written in Markdown
The sources of these slides are available in a public GitHub repository:
Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
👇 Try it! The source file will be shown and you can view it on GitHub and fork and edit it.
This slide has a little magnifying glass in the top left corner
This magnifying glass indicates slides that provide extra details
Feel free to skip them if:
you are in a hurry
you are new to this and want to avoid cognitive overload
you want only the most essential information
You can review these slides another time if you want, they'll be waiting for you ☺
We've set up a chat room that we will monitor during the workshop
Don't hesitate to use it to ask questions, or get help, or share feedback
The chat room will also be available after the workshop
Join the chat room: Mattermost
Say hi in the chat room!
Be comfortable with the UNIX command line
navigating directories
editing files
a little bit of bash-fu (environment variables, loops)
Some Docker knowledge
docker run, docker ps, docker build
ideally, you know how to write a Dockerfile and build it
(even if it's a FROM line and a couple of RUN commands)
It's totally OK if you are not a Docker expert!
Tell me and I forget.
Teach me and I remember.
Involve me and I learn.
Misattributed to Benjamin Franklin
(Probably inspired by Chinese Confucian philosopher Xunzi)
The whole workshop is hands-on
We are going to build, ship, and run containers!
You are invited to reproduce all the demos
All hands-on sections are clearly identified, like the gray rectangle below
This is the stuff you're supposed to do!
Go to https://2022-02-enix.container.training/ to view these slides
Each person gets a private cluster of cloud VMs (not shared with anybody else)
They'll remain up for the duration of the workshop
You should have a little card with login+password+IP addresses
You can automatically SSH from one VM to another
The nodes have aliases: node1, node2, etc.
Installing this stuff can be hard on some machines
(32-bit CPUs or OSes... Laptops without administrator access... etc.)
"The whole team downloaded all these container images from the WiFi!
... and it went great!" (Literally no-one ever)
All you need is a computer (or even a phone or tablet!), with:
an Internet connection
a web browser
an SSH client
On Linux, OS X, FreeBSD... you are probably all set
On Windows, get one of these:
On Android, JuiceSSH (Play Store) works pretty well
Nice-to-have: Mosh instead of SSH, if your Internet connection tends to lose packets
You don't have to use Mosh or even know about it to follow along.
We're just telling you about it because some of us think it's cool!
Mosh is "the mobile shell"
It is essentially SSH over UDP, with roaming features
It retransmits packets quickly, so it works great even on lossy connections
(Like hotel or conference WiFi)
It has intelligent local echo, so it works great even in high-latency connections
(Like hotel or conference WiFi)
It supports transparent roaming when your client IP address changes
(Like when you hop from hotel to conference WiFi)
To install it: (apt|yum|brew) install mosh
It has been pre-installed on the VMs that we are using
To connect to a remote machine: mosh user@host
(It is going to establish an SSH connection, then hand off to UDP)
It requires UDP ports to be open
(By default, it uses a UDP port between 60000 and 61000)
Log into the first VM (node1) with your SSH client:
ssh user@A.B.C.D
(Replace user and A.B.C.D with the user and IP address provided to you)
You should see a prompt looking like this:
[A.B.C.D] (...) user@node1 ~$
If anything goes wrong — ask for help!
tailhist
The shell history of the instructor is available online in real time
Note the IP address of the instructor's virtual machine (A.B.C.D)
Open http://A.B.C.D:1088 in your browser and you should see the history
The history is updated in real time
(using a WebSocket connection)
It should be green when the WebSocket is connected
(if it turns red, reloading the page should fix it)
Use something like Play-With-Docker or Play-With-Kubernetes
Zero setup effort; but environments are short-lived and might have limited resources
Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
Create a bunch of clusters for you and your friends (instructions)
Bigger setup effort; ideal for group training
If you are using your own Kubernetes cluster, you can use jpetazzo/shpod
shpod provides a shell running in a pod on your own cluster
It comes with many tools pre-installed (helm, stern...)
These tools are used in many demos and exercises in these slides
shpod also gives you completion and a fancy prompt
It can also be used as an SSH server if needed
These remarks apply only when using multiple nodes, of course.
Unless instructed, all commands must be run from the first VM, node1
We will only check out/copy the code on node1
During normal operations, we do not need access to the other nodes
If we had to troubleshoot issues, we would use a combination of:
SSH (to access system logs, daemon status...)
Docker API (to check running containers and container engine status)
Once in a while, the instructions will say:
"Open a new terminal."
There are multiple ways to do this:
create a new window or tab on your machine, and SSH into the VM;
use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
Tmux is a terminal multiplexer like screen.
You don't have to use it or even know about it to follow along.
But some of us like to use it to switch between terminals.
It has been preinstalled on your workshop nodes.
Deploy the dockercoins application to our Kubernetes cluster
Connect components together
Expose the web UI and open it in a web browser to check that it works
(See exercises/k8sfundamentals-brief.md in the repository)
Deploy a local Kubernetes cluster if you don't already have one
Deploy dockercoins on that cluster
Connect to the web UI in your browser
Scale up dockercoins
(See exercises/localcluster-brief.md in the repository)
Add readiness and liveness probes to a web service
(we will use the rng service in the dockercoins app)
See what happens when the load increases
(spoiler alert: it involves timeouts!)
(See exercises/healthchecks-brief.md in the repository)
Our sample application
(automatically generated title slide)
We will clone the GitHub repository onto our node1
The repository also contains scripts and tools that we will use through the workshop
Clone the repository on node1:
git clone https://github.com/jpetazzo/container.training
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
Let's start this before we look around, as downloading will take a little time...
Go to the dockercoins directory, in the cloned repository:
cd ~/container.training/dockercoins
Use Compose to build and run all containers:
docker-compose up
Compose tells Docker to build all container images (pulling the corresponding base images), then starts all containers, and displays aggregated logs.
It is a DockerCoin miner! 💰🐳📦🚢
No, you can't buy coffee with DockerCoin
How dockercoins works:
generate a few random bytes
hash these bytes
increment a counter (to keep track of speed)
repeat forever!
DockerCoin is not a cryptocurrency
(the only common points are "randomness," "hashing," and "coins" in the name)
The dockercoins app is made of 5 services:
rng = web service generating random bytes
hasher = web service computing hash of POSTed data
worker = background process calling rng and hasher
webui = web interface to watch progress
redis = data store (holds a counter updated by worker)
These 5 services are visible in the application's Compose file, docker-compose.yml
worker invokes web service rng to generate random bytes
worker invokes web service hasher to hash these bytes
worker does this in an infinite loop
every second, worker updates redis to indicate how many loops were done
webui queries redis, and computes and exposes "hashing speed" in our browser
(See diagram on next slide!)
How does each service find out the address of the other ones?
We do not hard-code IP addresses in the code
We do not hard-code FQDNs in the code, either
We just connect to a service name, and container-magic does the rest
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
worker/worker.py
redis = Redis("redis")

def get_random_bytes():
    r = requests.get("http://rng/32")
    return r.content

def hash_bytes(data):
    r = requests.post("http://hasher/",
                      data=data,
                      headers={"Content-Type": "application/octet-stream"})
(Full source code available here)
Containers can have network aliases (resolvable through DNS)
Compose file version 2+ makes each container reachable through its service name
Compose file version 1 required "links" sections to accomplish this
Network aliases are automatically namespaced
you can have multiple apps declaring and using a service named database
containers in the blue app will resolve database to the IP of the blue database
containers in the green app will resolve database to the IP of the green database
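For instance, we can check this from the running app (a quick sketch, assuming the dockercoins Compose project is up and that the worker image includes the usual busybox networking tools):
# Resolve the "rng" service name from inside the worker container
docker-compose exec worker nslookup rng
# Or simply check connectivity by name
docker-compose exec worker ping -c 1 rng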
You can check the GitHub repository with all the materials of this workshop:
https://github.com/jpetazzo/container.training
The application is in the dockercoins subdirectory
The Compose file (docker-compose.yml) lists all 5 services
redis is using an official image from the Docker Hub
hasher, rng, worker, webui are each built from a Dockerfile
Each service's Dockerfile and source code is in its own directory
(hasher is in the hasher directory, rng is in the rng directory, etc.)
This is relevant only if you have used Compose before 2016...
Compose 1.6 introduced support for a new Compose file format (aka "v2")
Services are no longer at the top level, but under a services section
There has to be a version key at the top level, with value "2"
(as a string, not an integer)
Containers are placed on a dedicated network, making links unnecessary
There are other minor differences, but upgrade is easy and straightforward
On the left-hand side, the "rainbow strip" shows the container names
On the right-hand side, we see the output of our containers
We can see the worker service making requests to rng and hasher
For rng and hasher, we see HTTP access logs
"Logs are exciting and fun!" (No-one, ever)
The webui container exposes a web dashboard; let's view it
With a web browser, connect to node1 on port 8000
Remember: the nodeX aliases are valid only on the nodes themselves
In your browser, you need to enter the IP address of your node
A drawing area should show up, and after a few seconds, a blue graph will appear.
It looks like the speed is approximately 4 hashes/second
Or more precisely: 4 hashes/second, with regular dips down to zero
Why?
The app actually has a constant, steady speed: 3.33 hashes/second
(which corresponds to 1 hash every 0.3 seconds, for reasons)
Yes, and?
The worker doesn't update the counter after every loop, but up to once per second
The speed is computed by the browser, checking the counter about once per second
Between two consecutive updates, the counter will increase either by 4, or by 0
The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.
What can we conclude from this?
If we interrupt Compose (with ^C), it will politely ask the Docker Engine to stop the app
The Docker Engine will send a TERM signal to the containers
If the containers do not exit in a timely manner, the Engine sends a KILL signal
Stop the application by hitting ^C
Some containers exit immediately, others take longer.
The containers that do not handle SIGTERM end up being killed after a 10s timeout. If we are very impatient, we can hit ^C a second time!
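If we don't want to wait for the full 10-second grace period, we can also stop the app with a shorter timeout (a small sketch; -t/--timeout tells Compose how long to wait before killing the containers):
# Stop all containers, waiting at most 3 seconds before sending KILL
docker-compose stop -t 3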
docker-compose down
Kubernetes concepts
(automatically generated title slide)
Kubernetes is a container management system
It runs and manages containerized applications on a cluster
What does that really mean?
Let's imagine that we have a 3-tier e-commerce app:
web frontend
API backend
database (that we will keep out of Kubernetes for now)
We have built images for our frontend and backend components
(e.g. with Dockerfiles and docker build)
We are running them successfully with a local environment
(e.g. with Docker Compose)
Let's see how we would deploy our app on Kubernetes!
Start 5 containers using image atseashop/api:v1.3
Place an internal load balancer in front of these containers
Start 10 containers using image atseashop/webfront:v1.3
Place a public load balancer in front of these containers
It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers
New release! Replace my containers with the new image atseashop/webfront:v1.4
Keep processing requests during the upgrade; update my containers one at a time
Autoscaling
(straightforward on CPU; more complex on other metrics)
Resource management and scheduling
(reserve CPU/RAM for containers; placement constraints)
Advanced rollout patterns
(blue/green deployment, canary deployment)
Batch jobs
(one-off; parallel; also cron-style periodic execution)
Fine-grained access control
(defining what can be done by whom on which resources)
Stateful services
(databases, message queues, etc.)
Automating complex tasks with operators
(e.g. database replication, failover, etc.)
Ha ha ha ha
OK, I was trying to scare you, it's much simpler than that ❤️
The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI
(Courtesy of Yongbok Kim)
The second one is a simplified representation of a Kubernetes cluster
(Courtesy of Imesh Gunaratne)
The nodes executing our containers run a collection of services:
a container Engine (typically Docker)
kubelet (the "node agent")
kube-proxy (a necessary but not sufficient network component)
Nodes were formerly called "minions"
(You might see that word in older articles or documentation)
The Kubernetes logic (its "brains") is a collection of services:
the API server (our point of entry to everything!)
core services like the scheduler and controller manager
etcd (a highly available key/value store; the "database" of Kubernetes)
Together, these services form the control plane of our cluster
The control plane is also called the "master"
It is common to reserve a dedicated node for the control plane
(Except for single-node development clusters, like when using minikube)
This node is then called a "master"
(Yes, this is ambiguous: is the "master" a node, or the whole control plane?)
Normal applications are restricted from running on this node
(By using a mechanism called "taints")
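For instance, on a kubeadm cluster of this vintage, we could inspect (and, if we really insisted, remove) that taint; this is only a sketch, and the exact taint key depends on the Kubernetes version:
# Show the taints on node1
kubectl describe node node1 | grep Taints
# Allow normal workloads on node1 (not recommended outside of test clusters)
kubectl taint nodes node1 node-role.kubernetes.io/master:NoSchedule-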
When high availability is required, each service of the control plane must be resilient
The control plane is then replicated on multiple nodes
(This is sometimes called a "multi-master" setup)
The services of the control plane can run in or out of containers
For instance: since etcd is a critical service, some people deploy it directly on a dedicated cluster (without containers)
(This is illustrated on the first "super complicated" schema)
In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible
(We only "see" a Kubernetes API endpoint)
In that case, there is no "master node"
For this reason, it is more accurate to say "control plane" rather than "master."
There is no particular constraint
(no need to have an odd number of nodes for quorum)
A cluster can have zero nodes
(but then it won't be able to start any pods)
For testing and development, having a single node is fine
For production, make sure that you have extra capacity
(so that your workload still fits if you lose a node or a group of nodes)
Kubernetes is tested with up to 5000 nodes
(however, running a cluster of that size requires a lot of tuning)
No!
By default, Kubernetes uses the Docker Engine to run containers
We can leverage other pluggable runtimes through the Container Runtime Interface
We could also use rkt ("Rocket") from CoreOS (now deprecated)
or lower-level tools like ctr (the CLI that ships with containerd)
Yes!
In this workshop, we run our app on a single node first
We will need to build images and ship them around
We can do these things without Docker
(and get diagnosed with NIH syndrome, i.e. "Not Invented Here")
Docker is still the most stable container engine today
(but other options are maturing very quickly)
On our development environments, CI pipelines ... :
Yes, almost certainly
On our production servers:
Yes (today)
Probably not (in the future)
More information about CRI on the Kubernetes blog
We will interact with our Kubernetes cluster through the Kubernetes API
The Kubernetes API is (mostly) RESTful
It allows us to create, read, update, delete resources
A few common resource types are:
node (a machine — physical or virtual — in our cluster)
pod (group of containers running together on a node)
service (stable network endpoint to connect to one or multiple containers)
How would we scale the pod shown on the previous slide?
Do create additional pods
each pod can be on a different node
each pod will have its own IP address
Do not add more NGINX containers in the pod
all the NGINX containers would be on the same node
they would all have the same IP address
(resulting in "Address already in use" errors)
Should we put e.g. a web application server and a cache together?
("cache" being something like e.g. Memcached or Redis)
Putting them in the same pod means:
they have to be scaled together
they can communicate very efficiently over localhost
Putting them in different pods means:
they can be scaled separately
they must communicate over remote IP addresses
(incurring more latency, lower performance)
Both scenarios can make sense, depending on our goals
The first diagram is courtesy of Lucas Käldström, in this presentation
The second diagram is courtesy of Weave Works
a pod can have multiple containers working together
IP addresses are associated with pods, not with individual containers
Both diagrams used with permission.
:EN:- Kubernetes concepts :FR:- Kubernetes en théorie
First contact with kubectl
(automatically generated title slide)
kubectl
kubectl is (almost) the only tool we'll need to talk to Kubernetes
It is a rich CLI tool around the Kubernetes API
(Everything you can do with kubectl, you can do directly with the API)
On our machines, there is a ~/.kube/config file with:
the Kubernetes API address
the path to our TLS certificates used to authenticate
You can also use the --kubeconfig flag to pass a config file
Or directly --server, --user, etc.
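For example (a sketch; the config path and server address below are placeholders):
# Use an alternate kubeconfig file
kubectl --kubeconfig=/path/to/other/config get nodes
# Or point kubectl directly at an API server
kubectl --server=https://A.B.C.D:6443 --insecure-skip-tls-verify get nodes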
kubectl can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"...
kubectl is the new SSH
We often start managing servers with SSH
(installing packages, troubleshooting ...)
At scale, it becomes tedious, repetitive, error-prone
Instead, we use config management, central logging, etc.
In many cases, we still need SSH:
as the underlying access method (e.g. Ansible)
to debug tricky scenarios
to inspect and poke at things
We often start managing Kubernetes clusters with kubectl
(deploying applications, troubleshooting ...)
At scale (with many applications or clusters), it becomes tedious, repetitive, error-prone
Instead, we use automated pipelines, observability tooling, etc.
In many cases, we still need kubectl:
to debug tricky scenarios
to inspect and poke at things
The Kubernetes API is always the underlying access method
kubectl get
Let's look at Node resources with kubectl get!
Look at the composition of our cluster:
kubectl get node
These commands are equivalent:
kubectl get no
kubectl get node
kubectl get nodes
kubectl get can output JSON, YAML, or be directly formatted
Give us more info about the nodes:
kubectl get nodes -o wide
Let's have some YAML:
kubectl get no -o yaml
See that kind: List at the end? It's the type of our result!
kubectl and jq
kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity"
We can list all available resource types by running kubectl api-resources
(In Kubernetes 1.10 and prior, this command used to be kubectl get)
We can view the definition for a resource type with:
kubectl explain type
We can view the definition of a field in a resource, for instance:
kubectl explain node.spec
Or get the full definition of all fields and sub-fields:
kubectl explain node --recursive
We can access the same information by reading the API documentation
The API documentation is usually easier to read, but:
it won't show custom types (like Custom Resource Definitions)
we need to make sure that we look at the correct version
kubectl api-resources and kubectl explain perform introspection
(they communicate with the API server and obtain the exact type definitions)
The most common resource names have three forms:
singular (e.g. node, service, deployment)
plural (e.g. nodes, services, deployments)
short (e.g. no, svc, deploy)
Some resources do not have a short name
Endpoints only have a plural form
(because even a single Endpoints resource is actually a list of endpoints)
We can use kubectl get -o yaml to see all available details
However, YAML output is often simultaneously too much and not enough
For instance, kubectl get node node1 -o yaml is:
too much information (e.g.: list of images available on this node)
not enough information (e.g.: doesn't show pods running on this node)
difficult to read for a human operator
For a comprehensive overview, we can use kubectl describe instead
kubectl describe
kubectl describe needs a resource type and (optionally) a resource name
It is possible to provide a resource name prefix
(all matching objects will be displayed)
kubectl describe will retrieve some extra information about the resource
Look at the information available for node1 with one of the following commands:
kubectl describe node/node1
kubectl describe node node1
(We should notice a bunch of control plane pods.)
Containers are manipulated through pods
A pod is a group of containers:
running together (on the same node)
sharing resources (RAM, CPU; but also network, volumes)
kubectl get pods
Where are the pods that we saw just a moment earlier?!?
kubectl get namespaces
kubectl get namespace
kubectl get ns
You know what ... This kube-system thing looks suspicious.
In fact, I'm pretty sure it showed up earlier, when we did:
kubectl describe node node1
By default, kubectl uses the default namespace
We can see resources in all namespaces with --all-namespaces
List the pods in all namespaces:
kubectl get pods --all-namespaces
Since Kubernetes 1.14, we can also use -A as a shorter version:
kubectl get pods -A
Here are our system pods!
etcd is our etcd server
kube-apiserver is the API server
kube-controller-manager and kube-scheduler are other control plane components
coredns provides DNS-based service discovery (replacing kube-dns as of 1.11)
kube-proxy is the (per-node) component managing port mappings and such
weave is the (per-node) component managing the network overlay
the READY column indicates the number of containers in each pod
(1 for most pods, but weave has 2, for instance)
We can also look at a different namespace (other than default)
List the pods in the kube-system namespace:
kubectl get pods --namespace=kube-system
kubectl get pods -n kube-system
Namespaces and kubectl commands
We can use -n/--namespace with almost every kubectl command
Example:
kubectl create --namespace=X to create something in namespace X
We can use -A/--all-namespaces with most commands that manipulate multiple objects
Examples:
kubectl delete can delete resources across multiple namespaces
kubectl label can add/remove/update labels across multiple namespaces
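A couple of examples combining these flags with commands we already know (nothing new here, just the flags above in action):
# List deployments in the kube-system namespace
kubectl -n kube-system get deployments
# List recent events across all namespaces
kubectl get events -A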
What about kube-public?
List the pods in the kube-public namespace:
kubectl -n kube-public get pods
Nothing!
kube-public is created by kubeadm and used for security bootstrapping.
Exploring kube-public
The main thing in the kube-public namespace is a ConfigMap named cluster-info
List ConfigMap objects:
kubectl -n kube-public get configmaps
Inspect cluster-info:
kubectl -n kube-public get configmap cluster-info -o yaml
Note the selfLink URI: /api/v1/namespaces/kube-public/configmaps/cluster-info
We can use that!
Accessing cluster-info
Earlier, when trying to access the API server, we got a Forbidden message
But cluster-info is readable by everyone (even without authentication)
Retrieve cluster-info:
curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info
We were able to access cluster-info (without auth)
It contains a kubeconfig file
Retrieving kubeconfig
We can extract the kubeconfig file from this ConfigMap
Display the kubeconfig:
curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \
  | jq -r .data.kubeconfig
This file holds the canonical address of the API server, and the public key of the CA
This file does not hold client keys or tokens
This is not sensitive information, but allows us to establish trust
What about kube-node-lease?
Starting with Kubernetes 1.14, there is a kube-node-lease namespace
(or in Kubernetes 1.13 if the NodeLease feature gate is enabled)
That namespace contains one Lease object per node
Node leases are a new way to implement node heartbeats
(i.e. node regularly pinging the control plane to say "I'm alive!")
For more details, see KEP-0009 or the node controller documentation
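We can peek at these Lease objects directly (assuming our cluster is recent enough to have them):
# One Lease object per node
kubectl -n kube-node-lease get leases
# Look at the details (including renew time) of node1's lease
kubectl -n kube-node-lease get lease node1 -o yaml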
A service is a stable endpoint to connect to "something"
(In the initial proposal, they were called "portals")
kubectl get services
kubectl get svc
There is already one service on our cluster: the Kubernetes API itself.
A ClusterIP service is internal, available from the cluster only
This is useful for introspection from within containers
Try to connect to the API:
curl -k https://10.96.0.1
-k is used to skip certificate verification
Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by kubectl get svc
The command above should either time out, or show an authentication error. Why?
Connections to ClusterIP services only work from within the cluster
If we are outside the cluster, the curl command will probably time out
(Because the IP address, e.g. 10.96.0.1, isn't routed properly outside the cluster)
This is the case with most "real" Kubernetes clusters
To try the connection from within the cluster, we can use shpod
This is what we should see when connecting from within the cluster:
$ curl -k https://10.96.0.1
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
We can see kind, apiVersion, metadata
These are typical of a Kubernetes API reply
Because we are talking to the Kubernetes API
The Kubernetes API tells us "Forbidden"
(because it requires authentication)
The Kubernetes API is reachable from within the cluster
(many apps integrating with Kubernetes will use this)
Each service also gets a DNS record
The Kubernetes DNS resolver is available from within pods
(and sometimes, from within nodes, depending on configuration)
Code running in pods can connect to services using their name
(e.g. https://kubernetes/...)
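To see this in action, we can run a throwaway pod and resolve a service name from inside it (a sketch relying on the nslookup bundled in the alpine image):
# Resolve the "kubernetes" service from inside a pod (the pod is removed afterwards)
kubectl run dnstest --image=alpine --restart=Never --rm -it -- nslookup kubernetes.default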
:EN:- Getting started with kubectl :FR:- Se familiariser avec kubectl
Running our first containers on Kubernetes
(automatically generated title slide)
First things first: we cannot run a container
We are going to run a pod, and in that pod there will be a single container
In that container in the pod, we are going to run a simple ping command
This material assumes that you're running a recent version of Kubernetes
(at least 1.19)
You can check your version number with kubectl version
(look at the server part)
In Kubernetes 1.17 and older, kubectl run creates a Deployment
If you're running such an old version:
it's obsolete and no longer maintained
Kubernetes 1.17 is EOL since January 2021
upgrade NOW!
kubectl run
kubectl run is convenient to start a single pod
We need to specify at least a name and the image we want to use
Optionally, we can specify the command to run in the pod
Let's ping localhost, the loopback interface:
kubectl run pingpong --image alpine ping 127.0.0.1
The output tells us that a Pod was created:
pod/pingpong created
Let's use the kubectl logs command
It takes a Pod name as argument
Unless specified otherwise, it will only show logs of the first container in the pod
(Good thing there's only one in ours!)
View the result of our ping command:
kubectl logs pingpong
Just like docker logs, kubectl logs supports convenient options:
-f/--follow to stream logs in real time (à la tail -f)
--tail to indicate how many lines you want to see (from the end)
--since to get logs only after a given timestamp
View the latest logs of our ping command:
kubectl logs pingpong --tail 1 --follow
Stop it with Ctrl-C
kubectl gives us a simple command to scale a workload:
kubectl scale TYPE NAME --replicas=HOWMANY
Let's try it on our Pod, so that we have more Pods!
kubectl scale pod pingpong --replicas=3
🤔 We get the following error, what does that mean?
Error from server (NotFound): the server could not find the requested resource
We cannot "scale a Pod"
(that's not completely true; we could give it more CPU/RAM)
If we want more Pods, we need to create more Pods
(i.e. execute kubectl run multiple times)
There must be a better way!
(spoiler alert: yes, there is a better way!)
NotFound
What's the meaning of that error?
Error from server (NotFound): the server could not find the requested resource
When we execute kubectl scale THAT-RESOURCE --replicas=THAT-MANY, it is like telling Kubernetes:
go to THAT-RESOURCE and set the scaling button to position THAT-MANY
Pods do not have a "scaling button"
Try to execute the kubectl scale pod command with -v6
We see a PATCH request to /scale: that's the "scaling button"
(technically it's called a subresource of the Pod)
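Here is what that looks like (verbosity level 6 makes kubectl print the HTTP requests it sends to the API server):
# The output should show a request to the Pod's /scale subresource, failing with NotFound
kubectl scale pod pingpong --replicas=3 -v6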
We are going to create a ReplicaSet
(= set of replicas = set of identical pods)
In fact, we will create a Deployment, which itself will create a ReplicaSet
Why so many layers? We'll explain that shortly, don't worry!
Create a Deployment that will run ping, using -- to separate the kubectl options from the command:
kubectl create deployment pingpong --image=alpine -- ping 127.0.0.1
The -- is used to separate:
the options/flags of kubectl create deployment
the command to run in the container
kubectl get all
Note: kubectl get all is a lie. It doesn't show everything.
(But it shows a lot of "usual suspects", i.e. commonly used resources.)
NAME                            READY   STATUS    RESTARTS   AGE
pod/pingpong                    1/1     Running   0          4m17s
pod/pingpong-6ccbc77f68-kmgfn   1/1     Running   0          11s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h45

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pingpong   1/1     1            1           11s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/pingpong-6ccbc77f68   1         1         1       11s
Our new Pod is not named pingpong, but pingpong-xxxxxxxxxx-yyyyy.
We have a Deployment named pingpong, and an extra ReplicaSet, too. What's going on?
We have the following resources:
deployment.apps/pingpong
This is the Deployment that we just created.
replicaset.apps/pingpong-xxxxxxxxxx
This is a Replica Set created by this Deployment.
pod/pingpong-xxxxxxxxxx-yyyyy
This is a pod created by the Replica Set.
Let's explain what these things are.
Can have one or multiple containers
Runs on a single node
(Pod cannot "straddle" multiple nodes)
Pods cannot be moved
(e.g. in case of node outage)
Pods cannot be scaled horizontally
(except by manually creating more Pods)
A Pod is not a process; it's an environment for containers
it cannot be "restarted"
it cannot "crash"
The containers in a Pod can crash
They may or may not get restarted
(depending on Pod's restart policy)
If all containers exit successfully, the Pod ends in "Succeeded" phase
If some containers fail and don't get restarted, the Pod ends in "Failed" phase
Set of identical (replicated) Pods
Defined by a pod template + number of desired replicas
If there are not enough Pods, the Replica Set creates more
(e.g. in case of node outage; or simply when scaling up)
If there are too many Pods, the Replica Set deletes some
(e.g. if a node was disconnected and comes back; or when scaling down)
We can scale up/down a Replica Set
we update the manifest of the Replica Set
as a consequence, the Replica Set controller creates/deletes Pods
Replica Sets control identical Pods
Deployments are used to roll out different Pods
(different image, command, environment variables, ...)
When we update a Deployment with a new Pod definition:
a new Replica Set is created with the new Pod definition
that new Replica Set is progressively scaled up
meanwhile, the old Replica Set(s) is(are) scaled down
This is a rolling update, minimizing application downtime
When we scale up/down a Deployment, it scales up/down its Replica Set
Let's use kubectl scale again, but on the Deployment!
Scale our pingpong deployment:
kubectl scale deployment pingpong --replicas 3
Note that we could also write it like this:
kubectl scale deployment/pingpong --replicas 3
Check that we now have multiple pods:
kubectl get pods
What if we scale the Replica Set instead of the Deployment?
The Deployment would notice it right away and scale back to the initial level
The Replica Set makes sure that we have the right number of Pods
The Deployment makes sure that the Replica Set has the right size
(conceptually, it delegates the management of the Pods to the Replica Set)
This might seem weird (why this extra layer?) but will soon make sense
(when we look at how rolling updates work!)
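We can already see this chain of ownership (a quick check; the ReplicaSet name includes a generated hash):
# List the ReplicaSets created by our Deployments
kubectl get replicasets
# The ReplicaSet reports which Deployment controls it
kubectl describe replicaset pingpong | grep 'Controlled By'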
kubectl logs needs a Pod name
But it can also work with a type/name
(e.g. deployment/pingpong)
View the logs of our ping command:
kubectl logs deploy/pingpong --tail 2
It shows us the logs of the first Pod of the Deployment
We'll see later how to get the logs of all the Pods!
The deployment pingpong watches its replica set
The replica set ensures that the right number of pods are running
What happens if pods disappear?
In a separate window, watch the list of pods:
watch kubectl get pods
Destroy the pod currently shown by kubectl logs:
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
kubectl delete pod terminates the pod gracefully
(sending it the TERM signal and waiting for it to shut down)
As soon as the pod is in "Terminating" state, the Replica Set replaces it
But we can still see the output of the "Terminating" pod in kubectl logs
Until 30 seconds later, when the grace period expires
The pod is then killed, and kubectl logs exits
What happens if we delete a standalone Pod?
(like the first pingpong Pod that we created)
kubectl delete pod pingpong
No replacement Pod gets created because there is no controller watching it
That's why we will rarely use standalone Pods in practice
(except for e.g. punctual debugging or executing a short supervised task)
:EN:- Running pods and deployments :FR:- Créer un pod et un déploiement
Kubernetes network model
(automatically generated title slide)
TL;DR:
Our cluster (nodes and pods) is one big flat IP network.
In detail:
all nodes must be able to reach each other, without NAT
all pods must be able to reach each other, without NAT
pods and nodes must be able to reach each other, without NAT
each pod is aware of its IP address (no NAT)
pod IP addresses are assigned by the network implementation
Kubernetes doesn't mandate any particular implementation
Everything can reach everything
No address translation
No port translation
No new protocol
The network implementation can decide how to allocate addresses
IP addresses don't have to be "portable" from one node to another
(We can use e.g. a subnet per node and use a simple routed topology)
The specification is simple enough to allow many different implementations
Everything can reach everything
if you want security, you need to add network policies
the network implementation that you use needs to support them
There are literally dozens of implementations out there
(https://github.com/containernetworking/cni/ lists more than 25 plugins)
Pods have level 3 (IP) connectivity, but services are level 4 (TCP or UDP)
(Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets)
kube-proxy is on the data path when connecting to a pod or container,
and it's not particularly fast (relies on userland proxying or iptables)
The nodes that we are using have been set up to use Weave
We don't endorse Weave in a particular way, it just Works For Us
Don't worry about the warning about kube-proxy performance
Unless you have extreme network performance requirements (saturated 10G links, millions of packets per second...)
If necessary, there are alternatives to kube-proxy; e.g. kube-router
Most Kubernetes clusters use CNI "plugins" to implement networking
When a pod is created, Kubernetes delegates the network setup to these plugins
(it can be a single plugin, or a combination of plugins, each doing one task)
Typically, CNI plugins will:
allocate an IP address (by calling an IPAM plugin)
add a network interface into the pod's network namespace
configure the interface as well as required routes etc.
The "pod-to-pod network" or "pod network":
provides communication between pods and nodes
is generally implemented with CNI plugins
The "pod-to-service network":
provides internal communication and load balancing
is generally implemented with kube-proxy (or e.g. kube-router)
Network policies:
provide firewalling and isolation
can be bundled with the "pod network" or provided by another component
Inbound traffic can be handled by multiple components:
something like kube-proxy or kube-router (for NodePort services)
load balancers (ideally, connected to the pod network)
It is possible to use multiple pod networks in parallel
(with "meta-plugins" like CNI-Genie or Multus)
Some solutions can fill multiple roles
(e.g. kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy)
:EN:- The Kubernetes network model :FR:- Le modèle réseau de Kubernetes
Exposing containers
(automatically generated title slide)
We can connect to our pods using their IP address
Then we need to figure out a lot of things:
how do we look up the IP address of the pod(s)?
how do we connect from outside the cluster?
how do we load balance traffic?
what if a pod fails?
Kubernetes has a resource type named Service
Services address all these questions!
Services give us a stable endpoint to connect to a pod or a group of pods
An easy way to create a service is to use kubectl expose
If we have a deployment named my-little-deploy, we can run:
kubectl expose deployment my-little-deploy --port=80
... and this will create a service with the same name (my-little-deploy)
Services are automatically added to an internal DNS zone
(in the example above, our code can now connect to http://my-little-deploy/)
We don't need to look up the IP address of the pod(s)
(we resolve the IP address of the service using DNS)
There are multiple service types; some of them allow external traffic
(e.g. LoadBalancer and NodePort)
Services provide load balancing
(for both internal and external traffic)
Service addresses are independent from pods' addresses
(when a pod fails, the service seamlessly sends traffic to its replacement)
There are different types of services:
ClusterIP, NodePort, LoadBalancer, ExternalName
There are also headless services
Services can also have optional external IPs
There is also another resource type called Ingress
(specifically for HTTP services)
Wow, that's a lot! Let's start with the basics ...
ClusterIP
It's the default service type
A virtual IP address is allocated for the service
(in an internal, private range; e.g. 10.96.0.0/12)
This IP address is reachable only from within the cluster (nodes and pods)
Our code can connect to the service using the original port number
Perfect for internal communication, within the cluster
LoadBalancer
An external load balancer is allocated for the service
(typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)
This is available only when the underlying infrastructure provides some kind of "load balancer as a service"
Each service of that type will typically cost a little bit of money
(e.g. a few cents per hour on AWS or GCE)
Ideally, traffic would flow directly from the load balancer to the pods
In practice, it will often flow through a NodePort first
NodePort
A port number is allocated for the service
(by default, in the 30000-32767 range)
That port is made available on all our nodes and anybody can connect to it
(we can connect to any node on that port to reach the service)
Our code needs to be changed to connect to that new port number
Under the hood: kube-proxy sets up a bunch of iptables rules on our nodes
Sometimes, it's the only available option for external traffic
(e.g. most clusters deployed with kubeadm or on-premises)
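As a sketch (reusing the hypothetical my-little-deploy Deployment from earlier), creating a NodePort service and finding the allocated port could look like this:
# Create a NodePort service for the deployment
kubectl expose deployment my-little-deploy --type=NodePort --port=80
# Retrieve the allocated port (in the 30000-32767 range by default)
kubectl get service my-little-deploy -o jsonpath={.spec.ports[0].nodePort}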
Since ping doesn't have anything to connect to, we'll have to run something else
We could use the nginx official image, but ...
... we wouldn't be able to tell the backends from each other!
We are going to use jpetazzo/color, a tiny HTTP server written in Go
jpetazzo/color listens on port 80
It serves a page showing the pod's name
(this will be useful when checking load balancing behavior)
We will create a deployment with kubectl create deployment
Then we will scale it with kubectl scale
kubectl get pods -w
Create a deployment for this very lightweight HTTP server:
kubectl create deployment blue --image=jpetazzo/color
Scale it to 10 replicas:
kubectl scale deployment blue --replicas=10
Creating a ClusterIP service
Expose the HTTP port of our server:
kubectl expose deployment blue --port=80
Look up which IP address was allocated:
kubectl get service
You can assign IP addresses to services, but they are still layer 4
(i.e. a service is not an IP address; it's an IP address + protocol + port)
This is caused by the current implementation of kube-proxy
(it relies on mechanisms that don't support layer 3)
As a result: you have to indicate the port number for your service
(with some exceptions, like ExternalName or headless services, covered later)
IP=$(kubectl get svc blue -o go-template --template '{{ .spec.clusterIP }}')
curl http://$IP:80/
Try it a few times! Our requests are load balanced across multiple pods.
ExternalName
Services of type ExternalName are quite different
No load balancer (internal or external) is created
Only a DNS entry gets added to the DNS managed by Kubernetes
That DNS entry will just be a CNAME to a provided record
Example:
kubectl create service externalname k8s --external-name kubernetes.io
Creates a CNAME k8s pointing to kubernetes.io
We can add an External IP to a service, e.g.:
kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4
1.2.3.4 should be the address of one of our nodes
(it could also be a virtual address, service address, or VIP, shared by multiple nodes)
Connections to 1.2.3.4:80 will be sent to our service
External IPs will also show up on services of type LoadBalancer
(they will be added automatically by the process provisioning the load balancer)
Sometimes, we want to access our scaled services directly:
if we want to save a tiny little bit of latency (typically less than 1ms)
if we need to connect over arbitrary ports (instead of a few fixed ones)
if we need to communicate over another protocol than UDP or TCP
if we want to decide how to balance the requests client-side
...
In that case, we can use a "headless service"
A headless service is obtained by setting the clusterIP field to None
(Either with --cluster-ip=None, or by providing a custom YAML)
As a result, the service doesn't have a virtual IP address
Since there is no virtual IP address, there is no load balancer either
CoreDNS will return the pods' IP addresses as multiple A records
This gives us an easy way to discover all the replicas for a deployment
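A minimal sketch with our blue deployment (the blue-headless name is just for illustration; the --cluster-ip flag of kubectl expose does the trick):
# Create a headless service for the blue deployment
kubectl expose deployment blue --port=80 --name=blue-headless --cluster-ip=None
# From a pod in the cluster (e.g. shpod), the name resolves to multiple A records
nslookup blue-headless.default.svc.cluster.local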
A service has a number of "endpoints"
Each endpoint is a host + port where the service is available
The endpoints are maintained and updated automatically by Kubernetes
Look at the endpoints of the blue service:
kubectl describe service blue
In the output, there will be a line starting with Endpoints:
That line will list a bunch of addresses in host:port format.
When we have many endpoints, our display commands truncate the list
kubectl get endpoints
If we want to see the full list, we can use one of the following commands:
kubectl describe endpoints blue
kubectl get endpoints blue -o yaml
These commands will show us a list of IP addresses
These IP addresses should match the addresses of the corresponding pods:
kubectl get pods -l app=blue -o wide
endpoints, not endpoint
endpoints is the only resource that cannot be singular:
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
This is because the type itself is plural (unlike every other resource)
There is no endpoint object: type Endpoints struct
The type doesn't represent a single endpoint, but a list of endpoints
In the kube-system namespace, there should be a service named kube-dns
This is the internal DNS server that can resolve service names
The default domain name for the service we created is default.svc.cluster.local
Get the IP address of the internal DNS server:
IP=$(kubectl -n kube-system get svc kube-dns -o jsonpath={.spec.clusterIP})
Resolve the cluster IP for the blue service:
host blue.default.svc.cluster.local $IP
Ingress
Ingresses are another type (kind) of resource
They are specifically for HTTP services
(not TCP or UDP)
They can also handle TLS certificates, URL rewriting ...
They require an Ingress Controller to function
:EN:- Service discovery and load balancing :EN:- Accessing pods through services :EN:- Service types: ClusterIP, NodePort, LoadBalancer
:FR:- Exposer un service :FR:- Différents types de services : ClusterIP, NodePort, LoadBalancer :FR:- Utiliser CoreDNS pour la service discovery
Shipping images with a registry
(automatically generated title slide)
Initially, our app was running on a single node
We could build and run in the same place
Therefore, we did not need to ship anything
Now that we want to run on a cluster, things are different
The easiest way to ship container images is to use a registry
What happens when we execute docker run alpine?
If the Engine needs to pull the alpine image, it expands it into library/alpine
library/alpine is expanded into index.docker.io/library/alpine
The Engine communicates with index.docker.io to retrieve library/alpine:latest
To use something other than index.docker.io, we specify it in the image name
Examples:
docker pull gcr.io/google-containers/alpine-with-bash:1.0
docker build -t registry.mycompany.io:5000/myimage:awesome .
docker push registry.mycompany.io:5000/myimage:awesome
Create one deployment for each component
(hasher, redis, rng, webui, worker)
Expose deployments that need to accept connections
(hasher, redis, rng, webui)
For redis, we can use the official redis image
For the 4 others, we need to build images and push them to some registry
There are many options!
Manually:
build locally (with docker build or otherwise)
push to the registry
Automatically:
build and test locally
when ready, commit and push to a code repository
the code repository notifies an automated build system
that system gets the code, builds it, pushes the image to the registry
There are SaaS products like Docker Hub, Quay ...
Each major cloud provider has an option as well
(ACR on Azure, ECR on AWS, GCR on Google Cloud...)
There are also commercial products to run our own registry
(Docker EE, Quay...)
And open source options, too!
When picking a registry, pay attention to its build system
(when it has one)
Conceptually, it is possible to build images on the fly from a repository
Example: ctr.run
(deprecated in August 2020, after being acquired by Datadog)
It did allow something like this:
docker run ctr.run/github.com/jpetazzo/container.training/dockercoins/hasher
No alternative yet
(free startup idea, anyone?)
:EN:- Shipping images to Kubernetes :FR:- Déployer des images sur notre cluster
For everyone's convenience, we took care of building DockerCoins images
We pushed these images to the DockerHub, under the dockercoins user
These images are tagged with a version number, v0.1
The full image names are therefore:
dockercoins/hasher:v0.1
dockercoins/rng:v0.1
dockercoins/webui:v0.1
dockercoins/worker:v0.1
Exercise — Deploy Dockercoins
(automatically generated title slide)
We want to deploy the dockercoins app
There are 5 components in the app:
hasher, redis, rng, webui, worker
We'll use one Deployment for each component
(created with kubectl create deployment)
We'll connect them with Services
(created with kubectl expose)
(See exercises/k8sfundamentals-details.md in the repository)
We'll use the following images:
hasher → dockercoins/hasher:v0.1
redis → redis
rng → dockercoins/rng:v0.1
webui → dockercoins/webui:v0.1
worker → dockercoins/worker:v0.1
All services should be internal services, except the web UI
(since we want to be able to connect to the web UI from outside)
We should be able to see the web UI in our browser
(with the graph showing approximately 3-4 hashes/second)
Make sure to expose services with the right ports
(check the logs of the worker; they indicate the port numbers)
The web UI can be exposed with a NodePort or LoadBalancer Service
Running our application on Kubernetes
(automatically generated title slide)
Deploy redis:
kubectl create deployment redis --image=redis
Deploy everything else:
kubectl create deployment hasher --image=dockercoins/hasher:v0.1
kubectl create deployment rng --image=dockercoins/rng:v0.1
kubectl create deployment webui --image=dockercoins/webui:v0.1
kubectl create deployment worker --image=dockercoins/worker:v0.1
If we wanted to deploy images from another registry ...
... Or with a different tag ...
... We could use the following snippet:
REGISTRY=dockercoins
TAG=v0.1
for SERVICE in hasher rng webui worker; do
  kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
After waiting for the deployment to complete, let's look at the logs!
(Hint: use kubectl get deploy -w to watch deployment events)
kubectl logs deploy/rng
kubectl logs deploy/worker
🤔 rng is fine ... But not worker.
💡 Oh right! We forgot to expose.
Three deployments need to be reachable by others: hasher, redis, rng
worker doesn't need to be exposed
webui will be dealt with later
kubectl expose deployment redis --port 6379
kubectl expose deployment rng --port 80
kubectl expose deployment hasher --port 80
worker has an infinite loop that retries 10 seconds after an error
Stream the worker's logs:
kubectl logs deploy/worker --follow
(Give it about 10 seconds to recover)
We should now see the worker, well, working happily.
Now we would like to access the Web UI
We will expose it with a NodePort
(just like we did for the registry)
Create a NodePort service for the Web UI:
kubectl expose deploy/webui --type=NodePort --port=80
Check the port that was allocated:
kubectl get svc
Yes, this may take a little while to update. (Narrator: it was DNS.)
Alright, we're back to where we started, when we were running on a single node!
:EN:- Running our demo app on Kubernetes :FR:- Faire tourner l'application de démo sur Kubernetes
Labels and annotations
(automatically generated title slide)
Most Kubernetes resources can have labels and annotations
Both labels and annotations are arbitrary strings
(with some limitations that we'll explain in a minute)
Both labels and annotations can be added, removed, changed, dynamically
This can be done with:
the kubectl edit command
the kubectl label and kubectl annotate commands
... many other ways! (kubectl apply -f, kubectl patch, ...)
Create a Deployment:
kubectl create deployment clock --image=jpetazzo/clock
Look at its annotations and labels:
kubectl describe deployment clock
So, what do we get?
We see one label:
Labels: app=clock
This is added by kubectl create deployment
And one annotation:
Annotations: deployment.kubernetes.io/revision: 1
This is to keep track of successive versions when doing rolling updates
Find the name of the Pod:
kubectl get pods
Display its information:
kubectl describe pod clock-xxxxxxxxxx-yyyyy
So, what do we get?
We see two labels:
Labels: app=clock pod-template-hash=xxxxxxxxxx
app=clock comes from kubectl create deployment too
pod-template-hash was assigned by the Replica Set
(when we do rolling updates, each set of Pods will have a different hash)
There are no annotations:
Annotations: <none>
A selector is an expression matching labels
It will restrict a command to the objects matching at least all these labels
List all the pods with at least app=clock:
kubectl get pods --selector=app=clock
List all the pods with a label app, regardless of its value:
kubectl get pods --selector=app
kubectl label and kubectl annotate
Set a label on the clock Deployment:
kubectl label deployment clock color=blue
Check it out:
kubectl describe deployment clock
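kubectl annotate works the same way; for instance, with a made-up annotation key (example.com/purpose is only an illustration, not something used elsewhere in this training):
kubectl annotate deployment clock example.com/purpose="demo for the labels chapter"
kubectl describe deployment clock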
kubectl get gives us a couple of useful flags to check labels
kubectl get --show-labels shows all labels
kubectl get -L xyz shows the value of label xyz
List all the labels that we have on pods:
kubectl get pods --show-labels
List the value of label app on these pods:
kubectl get pods -L app
If a selector has multiple labels, it means "match at least these labels"
Example: --selector=app=frontend,release=prod
--selector can be abbreviated as -l (for labels)
We can also use negative selectors
Example: --selector=app!=clock
Selectors can be used with most kubectl commands
Examples: kubectl delete, kubectl label, ...
Using the --show-labels flag with kubectl get:
kubectl get --show-labels po,rs,deploy,svc,no
The key for both labels and annotations:
must start and end with a letter or digit
can also have . - _ (but not in first or last position)
can be up to 63 characters, or 253 + / + 63
Label values are up to 63 characters, with the same restrictions
Annotations values can have arbitrary characters (yes, even binary)
Maximum length isn't defined
(dozens of kilobytes is fine, hundreds maybe not so much)
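For example, a key using the prefixed form (the prefix counts toward the 253-character limit, the name after the / toward the 63-character limit); app.kubernetes.io/ is a commonly used prefix, shown here purely as an illustration:
kubectl label deployment clock app.kubernetes.io/component=demo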
:EN:- Labels and annotations :FR:- Labels et annotations
Revisiting kubectl logs
(automatically generated title slide)
kubectl logs
In this section, we assume that we have a Deployment with multiple Pods
(e.g. pingpong that we scaled to at least 3 pods)
We will highlight some of the limitations of kubectl logs
kubectl logs shows us the output of a single Pod:
kubectl logs deploy/pingpong --tail 1 --follow
kubectl logs only shows us the logs of one of the Pods.
When we specify a deployment name, only one single pod's logs are shown
We can view the logs of multiple pods by specifying a selector
If we check the pods created by the deployment, they all have the label app=pingpong
(this is just a default label that gets added when using kubectl create deployment)
View the last line of logs of the pods with the app=pingpong label:
kubectl logs -l app=pingpong --tail 1
What if we want to stream the logs of all our pingpong pods?
Combine the -l and -f flags:
kubectl logs -l app=pingpong --tail 1 -f
Note: combining -l and -f is only possible since Kubernetes 1.14!
Let's try to understand why ...
Scale up our deployment:
kubectl scale deployment pingpong --replicas=8
Stream the logs:
kubectl logs -l app=pingpong --tail 1 -f
We see a message like the following one:
error: you are attempting to follow 8 log streams,
but maximum allowed concurrency is 5,
use --max-log-requests to increase the limit
kubectl opens one connection to the API server per pod
For each pod, the API server opens one extra connection to the corresponding kubelet
If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server
This could easily put a lot of stress on the API server
Prior to Kubernetes 1.14, it was decided not to allow multiple connections
From Kubernetes 1.14, it is allowed, but limited to 5 connections
(this can be changed with --max-log-requests)
For more details about the rationale, see PR #67573
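For instance, to follow all 8 replicas, we could raise the limit explicitly (a sketch based on the flag mentioned in the error message above):
kubectl logs -l app=pingpong --tail 1 -f --max-log-requests 10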
kubectl logs
We don't see which pod sent which log line
If pods are restarted / replaced, the log stream stops
If new pods are added, we don't see their logs
To stream the logs of multiple pods, we need to write a selector
There are external tools to address these shortcomings
(e.g.: Stern)
kubectl logs -l ... --tail N
If we run this with Kubernetes 1.12, the last command shows multiple lines
This is a regression when --tail is used together with -l/--selector
It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
The problem was fixed in Kubernetes 1.13
See #70554 for details.
:EN:- Viewing logs with "kubectl logs" :FR:- Consulter les logs avec "kubectl logs"
Accessing logs from the CLI
(automatically generated title slide)
The kubectl logs command has limitations:
it cannot stream logs from multiple pods at a time
when showing logs from multiple pods, it mixes them all together
We are going to see how to do it better
We could (if we were so inclined) write a program or script that would:
take a selector as an argument
enumerate all pods matching that selector (with kubectl get -l ...)
fork one kubectl logs --follow ... command per container
annotate the logs (the output of each kubectl logs ... process) with their origin
preserve ordering by using kubectl logs --timestamps ... and merge the output
We could do it, but thankfully, others did it for us already!
Stern is an open source project originally by Wercker.
From the README:
Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.
The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.
Exactly what we need!
Run stern (without arguments) to check if it's installed:
$ stern
Tail multiple pods and containers from Kubernetes
Usage:
  stern pod-query [flags]
If it's missing, let's see how to install it
Stern is written in Go
Go programs are usually very easy to install
(no dependencies or extra libraries to install, etc.)
Binary releases are available here on GitHub
Stern is also available through most package managers
(e.g. on macOS, we can brew install stern or sudo port install stern)
There are two ways to specify the pods whose logs we want to see:
-l followed by a selector expression (like with many kubectl commands)
with a "pod query," i.e. a regex used to match pod names
These two ways can be combined if necessary
stern pingpong
The --tail N flag shows the last N lines for each container
(instead of showing the logs since the creation of the container)
The -t / --timestamps flag shows timestamps
The --all-namespaces flag is self-explanatory
View the last line of logs, with timestamps, of all the weave system containers:
stern --tail 1 --timestamps --all-namespaces weave
When specifying a selector, we can omit the value for a label
This will match all objects having that label (regardless of the value)
Everything created with kubectl run has a label run
Everything created with kubectl create deployment has a label app
We can use that property to view the logs of all the pods created with kubectl create deployment:
stern -l app
:EN:- Viewing pod logs from the CLI :FR:- Consulter les logs des pods depuis la CLI
Namespaces
(automatically generated title slide)
We would like to deploy another copy of DockerCoins on our cluster
We could rename all our deployments and services:
hasher → hasher2, redis → redis2, rng → rng2, etc.
That would require updating the code
There has to be a better way!
As hinted by the title of this section, we will use namespaces
We cannot have two resources with the same name
(or can we...?)
We cannot have two resources of the same kind with the same name
(but it's OK to have an rng service, an rng deployment, and an rng daemon set)
We cannot have two resources of the same kind with the same name in the same namespace
(but it's OK to have e.g. two rng services in different namespaces)
Except for resources that exist at the cluster scope
(these do not belong to a namespace)
For namespaced resources:
the tuple (kind, name, namespace) needs to be unique
For resources at the cluster scope:
the tuple (kind, name) needs to be unique
To see which resource kinds exist, and whether they are namespaced or cluster-scoped: kubectl api-resources
If we deploy a cluster with kubeadm, we have three or four namespaces:
default (for our applications)
kube-system (for the control plane)
kube-public (contains one ConfigMap for cluster discovery)
kube-node-lease (in Kubernetes 1.14 and later; contains Lease objects)
If we deploy differently, we may have different namespaces
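To see which namespaces exist on our cluster:
kubectl get namespaces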
We can use kubectl create namespace:
kubectl create namespace blue
Or we can construct a very minimal YAML snippet:
kubectl apply -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: blue
EOF
We can pass a -n or --namespace flag to most kubectl commands:
kubectl -n blue get svc
We can also change our current context
A context is a (user, cluster, namespace) tuple
We can manipulate contexts with the kubectl config command
kubectl config get-contexts
The current context (the only one!) is tagged with a *
What are NAME, CLUSTER, AUTHINFO, and NAMESPACE?
NAME is an arbitrary string to identify the context
CLUSTER is a reference to a cluster
(i.e. API endpoint URL, and optional certificate)
AUTHINFO is a reference to the authentication information to use
(i.e. a TLS client certificate, token, or otherwise)
NAMESPACE is the namespace
(empty string = default)
We want to use a different namespace
Solution 1: update the current context
This is appropriate if we need to change just one thing (e.g. namespace or authentication).
Solution 2: create a new context and switch to it
This is appropriate if we need to change multiple things and switch back and forth.
Let's go with solution 1!
This is done through kubectl config set-context
We can update a context by passing its name, or the current context with --current
Update the current context to use the blue namespace:
kubectl config set-context --current --namespace=blue
Check the result:
kubectl config get-contexts
kubectl get all
jpetazzo/kubercoins contains everything we need!
Clone the kubercoins repository:
cd ~
git clone https://github.com/jpetazzo/kubercoins
Create all the DockerCoins resources:
kubectl create -f kubercoins
If the argument behind -f is a directory, all the files in that directory are processed.
The subdirectories are not processed, unless we also add the -R flag.
Retrieve the port number allocated to the webui service:
kubectl get svc webui
Point our browser to http://X.X.X.X:3xxxx
If the graph shows up but stays at zero, give it a minute or two!
Namespaces do not provide isolation
A pod in the green namespace can communicate with a pod in the blue namespace
A pod in the default namespace can communicate with a pod in the kube-system namespace
CoreDNS uses a different subdomain for each namespace
Example: from any pod in the cluster, you can connect to the Kubernetes API with:
https://kubernetes.default.svc.cluster.local:443/
Actual isolation is implemented with network policies
Network policies are resources (like deployments, services, namespaces...)
Network policies specify which flows are allowed:
between pods
from pods to the outside world
and vice-versa
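As a very minimal sketch (not something we deploy in this training), a policy denying all ingress traffic to every pod in the current namespace could look like this:
kubectl apply -f- <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF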
Switch back to the default namespace (leaving the blue namespace):
kubectl config set-context --current --namespace=
Note: we could have used --namespace=default for the same result.
We can also use a little helper tool called kubens:
# Switch to namespace foo
kubens foo
# Switch back to the previous namespace
kubens -
On our clusters, kubens is called kns instead
(so that it's even fewer keystrokes to switch namespaces)
kubens and kubectx
With kubens, we can switch quickly between namespaces
With kubectx, we can switch quickly between contexts
Both tools are simple shell scripts available from https://github.com/ahmetb/kubectx
On our clusters, they are installed as kns and kctx
(for brevity and to avoid completion clashes between kubectx and kubectl)
kube-ps1
It's easy to lose track of our current cluster / context / namespace
kube-ps1 makes it easy to track these, by showing them in our shell prompt
It is installed on our training clusters, and when using shpod
It gives us a prompt looking like this one:
[123.45.67.89] (kubernetes-admin@kubernetes:default) docker@node1 ~
(The highlighted part is context:namespace, managed by kube-ps1)
Highly recommended if you work across multiple contexts or namespaces!
kube-ps1
It's a simple shell script available from https://github.com/jonmosco/kube-ps1
It needs to be installed in our profile/rc files
(instructions differ depending on platform, shell, etc.)
Once installed, it defines aliases called kube_ps1, kubeon, kubeoff
(to selectively enable/disable it when needed)
Pro-tip: install it on your machine during the next break!
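For instance, on bash, the setup boils down to something like this (a sketch; the path is hypothetical and should point to wherever you cloned or installed kube-ps1):
# in ~/.bashrc
source ~/kube-ps1/kube-ps1.sh
PS1='[\u@\h \W $(kube_ps1)]\$ '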
:EN:- Organizing resources with Namespaces :FR:- Organiser les ressources avec des namespaces
Deploying with YAML
(automatically generated title slide)
So far, we created resources with the following commands:
kubectl run
kubectl create deployment
kubectl expose
We can also create resources directly with YAML manifests
kubectl apply vs create
kubectl create -f whatever.yaml
creates resources if they don't exist
if resources already exist, doesn't alter them
(and displays an error message)
kubectl apply -f whatever.yaml
creates resources if they don't exist
if resources already exist, updates them
(to match the definition provided by the YAML file)
stores the manifest as an annotation in the resource
A single manifest can contain multiple resources separated by ---:
kind: ...
apiVersion: ...
metadata:
  name: ...
...
---
kind: ...
apiVersion: ...
metadata:
  name: ...
...
It can also be a List of resources:
apiVersion: v1
kind: List
items:
- kind: ...
  apiVersion: ...
  ...
- kind: ...
  apiVersion: ...
  ...
We provide a YAML manifest with all the resources for Dockercoins
(Deployments and Services)
We can use it if we need to deploy or redeploy Dockercoins
kubectl apply -f ~/container.training/k8s/dockercoins.yaml
(If we deployed Dockercoins earlier, we will see warning messages, because the resources that we created lack the necessary annotation. We can safely ignore them.)
We can also use a YAML file to delete resources
kubectl delete -f ...
will delete all the resources mentioned in a YAML file
(useful to clean up everything that was created by kubectl apply -f ...
)
The definitions of the resources don't matter
(just their kind, apiVersion, and name)
We can also tell kubectl to remove old resources
This is done with kubectl apply -f ... --prune
It will remove resources that don't exist in the YAML file(s)
But only if they were created with kubectl apply in the first place
(technically, if they have an annotation kubectl.kubernetes.io/last-applied-configuration)
¹If English is not your first language: to prune means to remove dead or overgrown branches in a tree, to help it to grow.
Imagine the following workflow:
do not use kubectl run, kubectl create deployment, kubectl expose ...
define everything with YAML
apply that YAML with kubectl apply -f ... --prune --all
keep that YAML under version control
enforce all changes to go through that YAML (e.g. with pull requests)
Our version control system now has a full history of what we deploy
This is comparable to "Infrastructure-as-Code", but for app deployments
When creating resources from YAML manifests, the namespace is optional
If we specify a namespace:
resources are created in the specified namespace
this is typical for things deployed only once per cluster
example: system components, cluster add-ons ...
If we don't specify a namespace:
resources are created in the current namespace
this is typical for things that may be deployed multiple times
example: applications (production, staging, feature branches ...)
:EN:- Deploying with YAML manifests :FR:- Déployer avec des manifests YAML
Declarative vs imperative
(automatically generated title slide)
Our container orchestrator puts a very strong emphasis on being declarative
Declarative:
I would like a cup of tea.
Imperative:
Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.
Declarative seems simpler at first ...
... As long as you know how to brew tea
What declarative would really be:
I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.
¹An infusion is obtained by letting the object steep a few minutes in hot² water.
²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.
³Ah, finally, containers! Something we know about. Let's get to work, shall we?
Did you know there was an ISO standard specifying how to brew tea?
Imperative systems:
simpler
if a task is interrupted, we have to restart from scratch
Declarative systems:
if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary
we need to be able to observe the system
... and compute a "diff" between what we have and what we want
With Kubernetes, we cannot say: "run this container"
All we can do is write a spec and push it to the API server
(by creating a resource like e.g. a Pod or a Deployment)
The API server will validate that spec (and reject it if it's invalid)
Then it will store it in etcd
A controller will "notice" that spec and act upon it
Watch for the spec
fields in the YAML files later!
The spec describes how we want the thing to be
Kubernetes will reconcile the current state with the spec
(technically, this is done by a number of controllers)
When we want to change some resource, we update the spec
Kubernetes will then converge that resource
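For instance, this is roughly the smallest spec we can write: "I want one Pod running an nginx container." Everything else (scheduling, restarting, networking) is reconciled by Kubernetes:
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx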
:EN:- Declarative vs imperative models :FR:- Modèles déclaratifs et impératifs
They say, "a picture is worth one thousand words."
The following 19 slides show what really happens when we run:
kubectl create deployment web --image=nginx
Authoring YAML
(automatically generated title slide)
We have already generated YAML implicitly, with e.g.:
kubectl run
kubectl create deployment
(and a few other kubectl create variants)
kubectl expose
When and why do we need to write our own YAML?
How do we write YAML from scratch?
Many advanced (and even not-so-advanced) features require writing YAML:
pods with multiple containers
resource limits
healthchecks
DaemonSets, StatefulSets
and more!
How do we access these features?
Completely from scratch with our favorite editor
(yeah, right)
Dump an existing resource with kubectl get -o yaml ...
(it is recommended to clean up the result)
Ask kubectl to generate the YAML
(with a kubectl create --dry-run=client -o yaml)
Use The Docs, Luke
(the documentation almost always has YAML examples)
Start with a namespace:
kind: Namespace
apiVersion: v1
metadata:
  name: hello
We can use kubectl explain to see resource definitions:
kubectl explain -r pod.spec
Not the easiest option!
kubectl get -o yaml works!
A lot of fields in metadata are not necessary
(managedFields, resourceVersion, uid, creationTimestamp ...)
Most objects will have a status field that is not necessary
Default or empty values can also be removed for clarity
This can be done manually or with the kubectl-neat plugin
kubectl get -o yaml ... | kubectl neat
The --dry-run=client option
Generate the YAML for a Deployment without creating it:
kubectl create deployment web --image nginx -o yaml --dry-run=client
Optionally clean it up with kubectl neat, too
--dry-run with kubectl apply
The --dry-run option can also be used with kubectl apply
However, it can be misleading (it doesn't do a "real" dry run)
Let's see what happens in the following scenario:
generate the YAML for a Deployment
tweak the YAML to transform it into a DaemonSet
apply that YAML to see what would actually be created
kubectl apply --dry-run=client
Generate the YAML for a deployment:
kubectl create deployment web --image=nginx -o yaml > web.yaml
Change the kind in the YAML to make it a DaemonSet:
sed -i s/Deployment/DaemonSet/ web.yaml
Ask kubectl what would be applied:
kubectl apply -f web.yaml --dry-run=client --validate=false -o yaml
The resulting YAML doesn't represent a valid DaemonSet.
Since Kubernetes 1.13, we can use server-side dry run and diffs
Server-side dry run will do all the work, but not persist to etcd
(all validation and mutation hooks will be executed)
kubectl apply -f web.yaml --dry-run=server --validate=false -o yaml
The resulting YAML doesn't have the replicas field anymore.
Instead, it has the fields expected in a DaemonSet.
The YAML is verified much more extensively
The only step that is skipped is "write to etcd"
YAML that passes server-side dry run should apply successfully
(unless the cluster state changes by the time the YAML is actually applied)
Validating or mutating hooks that have side effects can also be an issue
kubectl diff
Kubernetes 1.13 also introduced kubectl diff
kubectl diff does a server-side dry run, and shows differences
Run kubectl diff on the YAML that we tweaked earlier:
kubectl diff -f web.yaml
Note: we don't need to specify --validate=false here.
Using YAML (instead of kubectl create <kind>) allows us to be declarative
The YAML describes the desired state of our cluster and applications
YAML can be stored, versioned, archived (e.g. in git repositories)
To change resources, change the YAML files
(instead of using kubectl edit/scale/label/etc.)
Changes can be reviewed before being applied
(with code reviews, pull requests ...)
This workflow is sometimes called "GitOps"
(there are tools like Weave Flux or GitKube to facilitate it)
Get started with kubectl create deployment and kubectl expose
(until you have something that works)
Then, run these commands again, but with -o yaml --dry-run=client
(to generate and save YAML manifests)
Try to apply these manifests in a clean environment
(e.g. a new Namespace)
Check that everything works; tweak and iterate if needed
Commit the YAML to a repo 💯🏆️
Don't hesitate to remove unused fields
(e.g. creationTimestamp: null, most {} values...)
Check your YAML with:
kube-score (installable with krew)
Check live resources with tools like popeye
Remember that like all linters, they need to be configured for your needs!
:EN:- Techniques to write YAML manifests :FR:- Comment écrire des manifests YAML
Setting up Kubernetes
(automatically generated title slide)
Kubernetes is made of many components that require careful configuration
Secure operation typically requires TLS certificates and a local CA
(certificate authority)
Setting up everything manually is possible, but rarely done
(except for learning purposes)
Let's do a quick overview of available options!
Are you writing code that will eventually run on Kubernetes?
Then it's a good idea to have a development cluster!
Instead of shipping containers images, we can test them on Kubernetes
Extremely useful when authoring or testing Kubernetes-specific objects
(ConfigMaps, Secrets, StatefulSets, Jobs, RBAC, etc.)
Extremely convenient to quickly test/check what a particular thing looks like
(e.g. what are the fields of a Deployment spec?)
It's perfectly fine to work with a cluster that has only one node
It simplifies a lot of things:
pod networking doesn't even need CNI plugins, overlay networks, etc.
these clusters can be fully contained (no pun intended) in an easy-to-ship VM or container image
some of the security aspects may be simplified (different threat model)
images can be built directly on the node (we don't need to ship them with a registry)
Examples: Docker Desktop, k3d, KinD, MicroK8s, Minikube
(some of these also support clusters with multiple nodes)
Many cloud providers and hosting providers offer "managed Kubernetes"
The deployment and maintenance of the control plane is entirely managed by the provider
(ideally, clusters can be spun up automatically through an API, CLI, or web interface)
Given the complexity of Kubernetes, this approach is strongly recommended
(at least for your first production clusters)
After working for a while with Kubernetes, you will be better equipped to decide:
whether to operate it yourself or use a managed offering
which offering or which distribution works best for you and your needs
Most "Turnkey Solutions" offer fully managed control planes
(including control plane upgrades, sometimes done automatically)
However, with most providers, we still need to take care of nodes
(provisioning, upgrading, scaling the nodes)
Example with Amazon EKS "managed node groups":
...when bugs or issues are reported [...] you're responsible for deploying these patched AMI versions to your managed node groups.
Most providers let you pick which Kubernetes version you want
some providers offer up-to-date versions
others lag significantly (sometimes by 2 or 3 minor versions)
Some providers offer multiple networking or storage options
Others will only support one, tied to their infrastructure
(changing that is in theory possible, but might be complex or unsupported)
Some providers let you configure or customize the control plane
(generally through Kubernetes "feature gates")
Pricing models differ from one provider to another
nodes are generally charged at their usual price
control plane may be free or incur a small nominal fee
Beyond pricing, there are huge differences in features between providers
The "major" providers are not always the best ones!
See this page for a list of available providers
If you want to run Kubernetes yourselves, there are many options
(free, commercial, proprietary, open source ...)
Some of them are installers, while some are complete platforms
Some of them leverage other well-known deployment tools
(like Puppet, Terraform ...)
There are too many options to list them all
(check this page for an overview!)
kubeadm is a tool that is part of Kubernetes and facilitates cluster setup
Many other installers and distributions use it (but not all of them)
It can also be used by itself
Excellent starting point to install Kubernetes on your own machines
(virtual, physical, it doesn't matter)
It even supports highly available control planes, or "multi-master"
(this is more complex, though, because it introduces the need for an API load balancer)
The resources below are mainly for educational purposes!
Kubernetes The Hard Way by Kelsey Hightower
step by step guide to install Kubernetes on Google Cloud
covers certificates, high availability ...
“Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.”
Deep Dive into Kubernetes Internals for Builders and Operators
conference presentation showing step-by-step control plane setup
emphasis on simplicity, not on security and availability
How did we set up these Kubernetes clusters that we're using?
We used kubeadm on freshly installed VM instances running Ubuntu LTS
Install Docker
Install Kubernetes packages
Run kubeadm init on the first node (it deploys the control plane on that node)
Set up Weave (the overlay network) with a single kubectl apply command
Run kubeadm join on the other nodes (with the token produced by kubeadm init)
Copy the configuration file generated by kubeadm init
Check the prepare VMs README for more details
kubeadm "drawbacks"
Doesn't set up Docker or any other container engine
(this is by design, to give us choice)
Doesn't set up the overlay network
(this is also by design, for the same reasons)
HA control plane requires some extra steps
Note that HA control plane also requires setting up a specific API load balancer
(which is beyond the scope of kubeadm)
:EN:- Various ways to install Kubernetes :FR:- Survol des techniques d'installation de Kubernetes
Running a local development cluster
(automatically generated title slide)
Let's review some options to run Kubernetes locally
There is no "best option", it depends what you value:
ability to run on all platforms (Linux, Mac, Windows, other?)
ability to run clusters with multiple nodes
ability to run multiple clusters side by side
ability to run recent (or even, unreleased) versions of Kubernetes
availability of plugins
etc.
Available on Mac and Windows
Gives you one cluster with one node
Very easy to use if you are already using Docker Desktop:
go to Docker Desktop preferences and enable Kubernetes
Ideal for Docker users who need good integration between both platforms
Based on K3s by Rancher Labs
Requires Docker
Runs Kubernetes nodes in Docker containers
Can deploy multiple clusters, with multiple nodes, and multiple master nodes
As of June 2020, two versions co-exist: stable (1.7) and beta (3.0)
They have different syntax and options, this can be confusing
(but don't let that stop you!)
Install k3d (e.g. get the binary from https://github.com/rancher/k3d/releases)
Create a simple cluster:
k3d cluster create petitcluster
Create a more complex cluster with a custom version:
k3d cluster create groscluster \
  --image rancher/k3s:v1.18.9-k3s1 --servers 3 --agents 5
(3 nodes for the control plane + 5 worker nodes)
Clusters are automatically added to the .kube/config file
Kubernetes-in-Docker
Requires Docker (obviously!)
Deploying a single node cluster using the latest version is simple:
kind create cluster
More advanced scenarios require writing a short config file
(to define multiple nodes, multiple master nodes, set Kubernetes versions ...)
Can deploy multiple clusters
The "legacy" option!
(note: this is not a bad thing, it means that it's very stable, has lots of plugins, etc.)
Supports many drivers
(HyperKit, Hyper-V, KVM, VirtualBox, but also Docker and many others)
Can deploy a single cluster; recent versions can deploy multiple nodes
Great option if you want a "Kubernetes first" experience
(i.e. if you don't already have Docker and/or don't want/need it)
Available on Linux, and since recently, on Mac and Windows as well
The Linux version is installed through Snap
(which is pre-installed on all recent versions of Ubuntu)
Also supports clustering (as in, multiple machines running MicroK8s)
DNS is not enabled by default; enable it with microk8s enable dns
Available on Mac and Windows
Runs a single cluster with a single node
Lets you pick the Kubernetes version that you want to use
(and change it any time you like)
Emphasis on ease of use (like Docker Desktop)
Very young product (first release in May 2021)
Based on k3s and other proven components
Choose your own adventure!
Pick any Linux distribution!
Build your cluster from scratch or use a Kubernetes installer!
Discover exotic CNI plugins and container runtimes!
The only limit is yourself, and the time you are willing to sink in!
:EN:- Kubernetes options for local development :FR:- Installation de Kubernetes pour travailler en local
Controlling a Kubernetes cluster remotely
(automatically generated title slide)
kubectl can be used either on cluster instances or outside the cluster
Here, we are going to use kubectl from our local machine
The commands in this chapter should be run on your local machine.
kubectl is officially available on Linux, macOS, Windows
(and unofficially anywhere we can build and run Go binaries)
You may skip these commands if you are following along from:
a tablet or phone
a web-based terminal
an environment where you can't install and run new binaries
Installing kubectl
If you already have kubectl on your local machine, you can skip this
Note: if you are following along with a different platform (e.g. Linux on an architecture different from amd64, or with a phone or tablet), installing kubectl might be more complicated (or even impossible) so feel free to skip this section.
Testing kubectl
Check that kubectl works correctly
(before even trying to connect to a remote cluster!)
Ask kubectl to show its version number:
kubectl version --client
The output should look like this:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0",
GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean",
BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc",
Platform:"darwin/amd64"}
~/.kube/config
If you already have a ~/.kube/config file, rename it
(we are going to overwrite it in the following slides!)
If you never used kubectl on your machine before: nothing to do!
Make a copy of ~/.kube/config; if you are using macOS or Linux, you can do:
cp ~/.kube/config ~/.kube/config.before.training
If you are using Windows, you will need to adapt this command
node1
The ~/.kube/config file that is on node1 contains all the credentials we need
Let's copy it over!
Copy the file from node1; if you are using macOS or Linux, you can do:
scp USER@X.X.X.X:.kube/config ~/.kube/config
# Make sure to replace X.X.X.X with the IP address of node1,
# and USER with the user name used to log into node1!
If you are using Windows, adapt these instructions to your SSH client
There is a good chance that we need to update the server address
To know if it is necessary, run kubectl config view
Look for the server: address:
if it matches the public IP address of node1, you're good!
if it is anything else (especially a private IP address), update it!
To update the server address, run:
kubectl config set-cluster kubernetes --server=https://X.X.X.X:6443
# Make sure to replace X.X.X.X with the IP address of node1!
Generally, the Kubernetes API uses a certificate that is valid for:
kubernetes
kubernetes.default
kubernetes.default.svc
kubernetes.default.svc.cluster.local
the ClusterIP address of the kubernetes service
the name of the node hosting the control plane (e.g. node1)
On most clouds, the IP address of the node is an internal IP address
... And we are going to connect over the external IP address
... And that external IP address was not used when creating the certificate!
We need to tell kubectl to skip TLS verification
(only do this with testing clusters, never in production!)
The following command will do the trick:
kubectl config set-cluster kubernetes --insecure-skip-tls-verify
Check the versions of the local client and remote server:
kubectl version
View the nodes of the cluster:
kubectl get nodes
We can now use the cluster exactly as if we were logged into a node, except that it's remote.
:EN:- Working with remote Kubernetes clusters :FR:- Travailler avec des clusters distants
Accessing internal services
(automatically generated title slide)
When we are logged in on a cluster node, we can access internal services
(by virtue of the Kubernetes network model: all nodes can reach all pods and services)
When we are accessing a remote cluster, things are different
(generally, our local machine won't have access to the cluster's internal subnet)
How can we temporarily access a service without exposing it to everyone?
kubectl proxy: gives us access to the API, which includes a proxy for HTTP resources
kubectl port-forward: allows forwarding of TCP ports to arbitrary pods, services, ...
The labs and demos in this section assume that we have set up kubectl on our local machine in order to access a remote cluster.
We will therefore show how to access services and pods of the remote cluster, from our local machine.
You can also run these commands directly on the cluster (if you haven't installed and set up kubectl locally).
Running commands locally will be less useful (since you could access services and pods directly), but keep in mind that these commands will work anywhere as long as you have installed and set up kubectl to communicate with your cluster.
kubectl proxy in theory
Running kubectl proxy gives us access to the entire Kubernetes API
The API includes routes to proxy HTTP traffic
These routes look like the following:
/api/v1/namespaces/<namespace>/services/<service>/proxy
We just add the URI to the end of the request, for instance:
/api/v1/namespaces/<namespace>/services/<service>/proxy/index.html
We can access services and pods this way
kubectl proxy in practice
Let's access the webui service through kubectl proxy
Run an API proxy in the background:
kubectl proxy &
Access the webui service:
curl localhost:8001/api/v1/namespaces/default/services/webui/proxy/index.html
Terminate the proxy:
kill %1
kubectl port-forward in theory
What if we want to access a TCP service?
We can use kubectl port-forward instead
It will create a TCP relay to forward connections to a specific port
(of a pod, service, deployment...)
The syntax is:
kubectl port-forward service/name_of_service local_port:remote_port
If only one port number is specified, it is used for both local and remote ports
kubectl port-forward in practice
Forward connections from local port 10000 to remote port 6379:
kubectl port-forward svc/redis 10000:6379 &
Connect to the Redis server:
telnet localhost 10000
Issue a few commands, e.g. INFO server, then QUIT
kill %1
:EN:- Securely accessing internal services :FR:- Accès sécurisé aux services internes
:T: Accessing internal services from our local machine
:Q: What's the advantage of "kubectl port-forward" compared to a NodePort? :A: It can forward arbitrary protocols :A: It doesn't require Kubernetes API credentials :A: It offers deterministic load balancing (instead of random) :A: ✔️It doesn't expose the service to the public
:Q: What's the security concept behind "kubectl port-forward"? :A: ✔️We authenticate with the Kubernetes API, and it forwards connections on our behalf :A: It detects our source IP address, and only allows connections coming from it :A: It uses end-to-end mTLS (mutual TLS) to authenticate our connections :A: There is no security (as long as it's running, anyone can connect from anywhere)
Accessing the API with kubectl proxy
(automatically generated title slide)
kubectl proxy
The API requires us to authenticate¹
There are many authentication methods available, including:
TLS client certificates
(that's what we've used so far)
HTTP basic password authentication
(from a static file; not recommended)
various token mechanisms
(detailed in the documentation)
¹OK, we lied. If you don't authenticate, you are considered to be user system:anonymous, which doesn't have any access rights by default.
curl
Retrieve the ClusterIP allocated to the kubernetes service:
kubectl get svc kubernetes
Replace the IP below and try to connect with curl:
curl -k https://10.96.0.1/
The API will tell us that user system:anonymous cannot access this path.
If we wanted to talk to the API, we would need to:
extract our TLS key and certificate information from ~/.kube/config
(the information is in PEM format, encoded in base64)
use that information to present our certificate when connecting
(for instance, with openssl s_client -key ... -cert ... -connect ...)
figure out exactly which credentials to use
(once we start juggling multiple clusters)
change that whole process if we're using another authentication method
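For illustration only, here is a rough sketch of that manual process (assuming the kubeconfig embeds the certificate data in base64, which is the common case; field paths may differ in your file):
# extract the client certificate and key from the kubeconfig
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > client.crt
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > client.key
# use them to authenticate with curl (IP from the earlier example)
curl -k --cert client.crt --key client.key https://10.96.0.1/api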
🤔 There has to be a better way!
kubectl proxy for authentication
kubectl proxy runs a proxy in the foreground
This proxy lets us access the Kubernetes API without authentication
(kubectl proxy adds our credentials on the fly to the requests)
This proxy lets us access the Kubernetes API over plain HTTP
This is a great tool to learn and experiment with the Kubernetes API
... And for serious uses as well (suitable for one-shot scripts)
For unattended use, it's better to create a service account
Trying kubectl proxy
Let's start kubectl proxy and then do a simple request with curl!
Start kubectl proxy in the background:
kubectl proxy &
Access the API's default route:
curl localhost:8001
kill %1
The output is a list of available API routes.
The Kubernetes API serves an OpenAPI Specification
(OpenAPI was formerly known as Swagger)
OpenAPI has many advantages
(generate client library code, generate test code ...)
For us, this means we can explore the API with Swagger UI
(for instance with the Swagger UI add-on for Firefox)
kubectl proxy is intended for local use
By default, the proxy listens on port 8001
(But this can be changed, or we can tell kubectl proxy to pick a port)
By default, the proxy binds to 127.0.0.1
(Making it unreachable from other machines, for security reasons)
By default, the proxy only accepts connections from:
^localhost$,^127\.0\.0\.1$,^\[::1\]$
This is great when running kubectl proxy locally
Not-so-great when you want to connect to the proxy from a remote machine
kubectl proxy on a remote machine
If we wanted to connect to the proxy from another machine, we would need to:
bind to INADDR_ANY instead of 127.0.0.1
accept connections from any address
This is achieved with:
kubectl proxy --port=8888 --address=0.0.0.0 --accept-hosts=.*
Do not do this on a real cluster: it opens full unauthenticated access!
Running kubectl proxy openly is a huge security risk
It is slightly better to run the proxy where you need it
(and copy credentials, e.g. ~/.kube/config, to that place)
It is even better to use a limited account with reduced permissions
kubectl proxy also gives access to all internal services
Specifically, services are exposed as such:
/api/v1/namespaces/<namespace>/services/<service>/proxy
We can use kubectl proxy to access an internal service in a pinch
(or, for non-HTTP services, kubectl port-forward)
This is not very useful when running kubectl directly on the cluster
(since we could connect to the services directly anyway)
But it is very powerful as soon as you run kubectl from a remote machine
Exercise — Local Cluster
(automatically generated title slide)
We want to have our own local Kubernetes cluster
(we can use Docker Desktop, KinD, minikube... anything will do!)
Then we want to run a copy of dockercoins on that cluster
We want to be able to connect to the web UI
(we can expose the port, or use port-forward, or whatever)
exercises/localcluster-details.md
On a Mac or Windows machine:
the easiest solution is probably Docker Desktop
On a Linux machine:
the easiest solution is probably KinD or k3d
To connect to the web UI:
kubectl port-forward is probably the easiest solution
exercises/localcluster-details.md
If you already have a local Kubernetes cluster:
try to run another one!
Try to use another method than kubectl port-forward
exercises/localcluster-details.md
Scaling our demo app
(automatically generated title slide)
Our ultimate goal is to get more DockerCoins
(i.e. increase the number of loops per second shown on the web UI)
Let's look at the architecture again:
The loop is done in the worker; perhaps we could try adding more workers?
Scaling the worker Deployment
Watch the pods:
kubectl get pods -w
Increase the number of worker replicas:
kubectl scale deployment worker --replicas=2
After a few seconds, the graph in the web UI should show up.
Scale the worker Deployment further:
kubectl scale deployment worker --replicas=3
The graph in the web UI should go up again.
(This is looking great! We're gonna be RICH!)
Scale the worker Deployment to a bigger number:
kubectl scale deployment worker --replicas=10
The graph will peak at 10 hashes/second.
(We can add as many workers as we want: we will never go past 10 hashes/second.)
It may look like it, because the web UI shows instant speed
The instant speed can briefly exceed 10 hashes/second
The average speed cannot
The instant speed can be biased because of how it's computed
The instant speed is computed client-side by the web UI
The web UI checks the hash counter once per second
(and does a classic (h2-h1)/(t2-t1) speed computation)
The counter is updated once per second by the workers
These timings are not exact
(e.g. the web UI check interval is client-side JavaScript)
Sometimes, between two web UI counter measurements,
the workers are able to update the counter twice
During that cycle, the instant speed will appear to be much bigger
(but it will be compensated by lower instant speed before and after)
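A quick worked example (made-up numbers): if the web UI reads a counter value of 100 at t=0s and 102 at t=1s because two counter updates happened to land within that window, it computes (102-100)/(1-0) = 2 hashes/second for that sample, even if the sustained rate is 1 hash/second; the next sample will read correspondingly lower.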
If this was high-quality, production code, we would have instrumentation
(Datadog, Honeycomb, New Relic, statsd, Sumologic, ...)
It's not!
Perhaps we could benchmark our web services?
(with tools like ab, or even simpler, httping)
We want to check hasher and rng
We are going to use httping
It's just like ping, but using HTTP GET requests
(it measures how long it takes to perform one GET request)
It's used like this:
httping [-c count] http://host:port/path
Or even simpler:
httping ip.ad.dr.ess
We will use httping on the ClusterIP addresses of our services
We can simply check the output of kubectl get services
Or do it programmatically, as in the example below
HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}})
RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}})
Now we can access the IP addresses of our services through $HASHER and $RNG.
Check the response times of hasher and rng:
httping -c 3 $HASHER
httping -c 3 $RNG
hasher is fine (it should take a few milliseconds to reply)
rng is not (it should take about 700 milliseconds if there are 10 workers)
Something is wrong with rng, but ... what?
:EN:- Scaling up our demo app :FR:- Scale up de l'application de démo
The bottleneck seems to be rng
What if we don't have enough entropy and can't generate enough random numbers?
We need to scale out the rng service on multiple machines!
Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
(In fact, the code of rng uses /dev/urandom, which never runs out of entropy...
...and is just as good as /dev/random.)
Daemon sets
(automatically generated title slide)
We want to scale rng in a way that is different from how we scaled worker
We want one (and exactly one) instance of rng per node
We do not want two instances of rng on the same node
We will do that with a daemon set
Can't we just do kubectl scale deployment rng --replicas=... ?
Nothing guarantees that the rng containers will be distributed evenly
If we add nodes later, they will not automatically run a copy of rng
If we remove (or reboot) a node, one rng container will restart elsewhere
(and we will end up with two instances of rng on the same node)
By contrast, a daemon set will start one pod per node and keep it that way
(as nodes are added or removed)
Daemon sets are great for cluster-wide, per-node processes:
kube-proxy
weave (our overlay network)
monitoring agents
hardware management tools (e.g. SCSI/FC HBA agents)
etc.
They can also be restricted to run only on some nodes
Unfortunately, as of Kubernetes 1.19, the CLI cannot create daemon sets
More precisely: it doesn't have a subcommand to create a daemon set
But any kind of resource can always be created by providing a YAML description:
kubectl apply -f foo.yaml
How do we create the YAML file for our daemon set?
option 1: read the docs
option 2: vi our way out of it
Dump the rng resource in YAML:
kubectl get deploy/rng -o yaml >rng.yml
Edit rng.yml
What if we just changed the kind field?
(It can't be that easy, right?)
Change kind: Deployment to kind: DaemonSet
Save, quit
Try to create our new resource:
kubectl apply -f rng.yml
We all knew this couldn't be that easy, right!
error validating data:
[ValidationError(DaemonSet.spec): unknown field "replicas"
in io.k8s.api.extensions.v1beta1.DaemonSetSpec, ...
Obviously, it doesn't make sense to specify a number of replicas for a daemon set
Workaround: fix the YAML
remove the replicas field
remove the strategy field (which defines the rollout mechanism for a deployment)
remove the progressDeadlineSeconds field (also used by the rollout mechanism)
remove the status: {} line at the end
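For reference, after trimming, the top of rng.yml should look roughly like this (a sketch; the apiVersion stays whatever the dump contained, and the pod template below spec.template is left unchanged):
kind: DaemonSet
metadata:
  name: rng
spec:
  selector:
    matchLabels:
      app: rng
  template:
    # (unchanged pod template: metadata, labels, containers ...)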
line at the enderror validating data:[ValidationError(DaemonSet.spec):unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec,...
Obviously, it doesn't make sense to specify a number of replicas for a daemon set
Workaround: fix the YAML
replicas
fieldstrategy
field (which defines the rollout mechanism for a deployment)progressDeadlineSeconds
field (also used by the rollout mechanism)status: {}
line at the endOr, we could also ...
Use the --force, Luke
We could also tell Kubernetes to ignore these errors and try anyway
The --force flag's actual name is --validate=false
kubectl apply -f rng.yml --validate=false
🎩✨🐇
Wait ... Now, can it be that easy?
Did we transform our deployment into a daemonset?
kubectl get all
We have two resources called rng:
the deployment that already existed
the daemon set that we just created
We also have one too many pods.
(The pod corresponding to the deployment still exists.)
deploy/rng and ds/rng
You can have different resource types with the same name
(i.e. a deployment and a daemon set both named rng)
We still have the old rng deployment
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rng   1         1         1            1           18m
But now we have the new rng daemon set as well
NAME                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/rng   2         2         2       2            2           <none>          9s
If we check with kubectl get pods, we see:
one pod for the deployment (named rng-xxxxxxxxxx-yyyyy)
one pod per node for the daemon set (named rng-zzzzz)
NAME                   READY   STATUS    RESTARTS   AGE
rng-54f57d4d49-7pt82   1/1     Running   0          11m
rng-b85tm              1/1     Running   0          25s
rng-hfbrr              1/1     Running   0          25s
[...]
The daemon set created one pod per node, except on the master node.
The master node has taints preventing pods from running there.
(To schedule a pod on this node anyway, the pod will require appropriate tolerations.)
(Off by one? We don't run these pods on the node hosting the control plane.)
Look at the web UI
The graph should now go above 10 hashes per second!
It looks like the newly created pods are serving traffic correctly
How and why did this happen?
(We didn't do anything special to add them to the rng service load balancer!)
Labels and selectors
(automatically generated title slide)
The rng service is load balancing requests to a set of pods
That set of pods is defined by the selector of the rng service
Check the rng service definition:
kubectl describe service rng
The selector is app=rng
It means "all the pods having the label app=rng"
(They can have additional labels as well, that's OK!)
We can use selectors with many kubectl commands
For instance, with kubectl get, kubectl logs, kubectl delete ... and more
List the pods with the label app=rng:
kubectl get pods -l app=rng
kubectl get pods --selector app=rng
But ... why do these pods (in particular, the new ones) have this app=rng label?
When we create a deployment with kubectl create deployment rng, this deployment gets the label app=rng
The replica sets created by this deployment also get the label app=rng
The pods created by these replica sets also get the label app=rng
When we created the daemon set from the deployment, we re-used the same spec
Therefore, the pods created by the daemon set get the same labels
Note: when we use kubectl run stuff, the label is run=stuff instead.
We would like to remove a pod from the load balancer
What would happen if we removed that pod, with kubectl delete pod ...?
It would be re-created immediately (by the replica set or the daemon set)
What would happen if we removed the app=rng label from that pod?
It would also be re-created immediately
Why?!?
The "mission" of a replica set is:
"Make sure that there is the right number of pods matching this spec!"
The "mission" of a daemon set is:
"Make sure that there is a pod matching this spec on each node!"
In fact, replica sets and daemon sets do not check pod specifications
They merely have a selector, and they look for pods matching that selector
Yes, we can fool them by manually creating pods with the "right" labels
Bottom line: if we remove our app=rng
label ...
... The pod "disappears" for its parent, which re-creates another pod to replace it
Since both the rng
daemon set and the rng
replica set use app=rng
...
... Why don't they "find" each other's pods?
Replica sets have a more specific selector, visible with kubectl describe
(It looks like app=rng,pod-template-hash=abcd1234
)
Daemon sets also have a more specific selector, but it's invisible
(It looks like app=rng,controller-revision-hash=abcd1234
)
As a result, each controller only "sees" the pods it manages
Currently, the rng
service is defined by the app=rng
selector
The only way to remove a pod is to remove or change the app
label
... But that will cause another pod to be created instead!
What's the solution?
We need to change the selector of the rng
service!
Let's add another label to that selector (e.g. active=yes
)
If a selector specifies multiple labels, they are understood as a logical AND
(in other words: the pods must match all the labels)
We cannot have a logical OR
(e.g. app=api AND (release=prod OR release=preprod)
)
We can, however, apply as many extra labels as we want to our pods:
use selector app=api AND prod-or-preprod=yes
add prod-or-preprod=yes
to both sets of pods
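For illustration, here is a hedged sketch of a Service whose selector combines two labels (the service name, port, and the prod-or-preprod label are just examples, not part of DockerCoins):
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
    prod-or-preprod: "yes"
  ports:
  - port: 80
Only pods carrying both labels would receive traffic from this service.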
We will see later that in other places, we can use more advanced selectors
Add the label active=yes
to all our rng
pods
Update the selector for the rng
service to also include active=yes
Toggle traffic to a pod by manually adding/removing the active
label
Profit!
Note: if we swap steps 1 and 2, it will cause a short service disruption, because there will be a period of time during which the service selector won't match any pod. During that time, requests to the service will time out. By doing things in the order above, we guarantee that there won't be any interruption.
We want to add the label active=yes
to all pods that have app=rng
We could edit each pod one by one with kubectl edit
...
... Or we could use kubectl label
to label them all
kubectl label
can use selectors itself
Add the label active=yes to all pods that have app=rng:
kubectl label pods -l app=rng active=yes
We need to edit the service specification
Reminder: in the service definition, we will see app: rng
in two places
the label of the service itself (we don't need to touch that one)
the selector of the service (that's the one we want to change)
Add active: yes to its selector:
kubectl edit service rng
... And then we get the weirdest error ever. Why?
YAML parsers try to help us:
xyz
is the string "xyz"
42
is the integer 42
yes
is the boolean value true
If we want the string "42"
or the string "yes"
, we have to quote them
So we have to use active: "yes"
For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!
Update the YAML manifest of the service
Add active: "yes"
to its selector
This time it should work!
If we did everything correctly, the web UI shouldn't show any change.
We want to disable the pod that was created by the deployment
All we have to do, is remove the active
label from that pod
To identify that pod, we can use its name
... Or rely on the fact that it's the only one with a pod-template-hash
label
Good to know:
kubectl label ... foo=
doesn't remove a label (it sets it to an empty string)
to remove label foo
, use kubectl label ... foo-
to change an existing label, we would need to add --overwrite
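For instance (the pod name mypod and the label foo are hypothetical):
kubectl label pod mypod foo=bar
# change an existing label (fails without --overwrite)
kubectl label pod mypod foo=baz --overwrite
# remove the label foo (note the trailing dash)
kubectl label pod mypod foo-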
POD=$(kubectl get pod -l app=rng,pod-template-hash -o name)
kubectl logs --tail 1 --follow $POD
(We should see a steady stream of HTTP logs)
kubectl label pod -l app=rng,pod-template-hash active-
(The stream of HTTP logs should stop immediately)
There might be a slight change in the web UI (since we removed a bit of capacity from the rng service). If we remove more pods, the effect should be more visible.
If we scale up our cluster by adding new nodes, the daemon set will create more pods
These pods won't have the active=yes
label
If we want these pods to have that label, we need to edit the daemon set spec
We can do that with e.g. kubectl edit daemonset rng
Reminder: a daemon set is a resource that creates more resources!
There is a difference between:
the label(s) of a resource (in the metadata
block in the beginning)
the selector of a resource (in the spec
block)
the label(s) of the resource(s) created by the first resource (in the template
block)
We would need to update the selector and the template
(metadata labels are not mandatory)
The template must match the selector
(i.e. the resource will refuse to create resources that it will not select)
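As a sketch, here is where each of these lives in a daemon set manifest (the image tag is illustrative, not necessarily the exact one used in this training):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rng
  labels:
    app: rng              # label of the daemon set itself (optional)
spec:
  selector:
    matchLabels:
      app: rng            # selector: which pods this daemon set manages
  template:
    metadata:
      labels:
        app: rng          # labels given to the pods it creates
        active: "yes"     # the template must include all the selector labels
    spec:
      containers:
      - name: rng
        image: dockercoins/rng:v0.1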
When a pod is misbehaving, we can delete it: another one will be recreated
But we can also change its labels
It will be removed from the load balancer (it won't receive traffic anymore)
Another pod will be recreated immediately
But the problematic pod is still here, and we can inspect and debug it
We can even re-add it to the rotation if necessary
(Very useful to troubleshoot intermittent and elusive bugs)
Conversely, we can add pods matching a service's selector
These pods will then receive requests and serve traffic
Examples:
one-shot pod with all debug flags enabled, to collect logs
pods created automatically, but added to rotation in a second step
(by setting their label accordingly)
This gives us building blocks for canary and blue/green deployments
As indicated earlier, service selectors are limited to a AND
But in many other places in the Kubernetes API, we can use complex selectors
(e.g. Deployment, ReplicaSet, DaemonSet, NetworkPolicy ...)
These allow extra operations; specifically:
checking for presence (or absence) of a label
checking if a label is (or is not) in a given set
Relevant documentation:
selector:
  matchLabels:
    app: portal
    component: api
  matchExpressions:
  - key: release
    operator: In
    values: [ production, preproduction ]
  - key: signed-off-by
    operator: Exists
This selector matches pods that meet all the indicated conditions.
operator
can be In
, NotIn
, Exists
, DoesNotExist
.
A nil
selector matches nothing, a {}
selector matches everything.
(Because that means "match all pods that meet at least zero condition".)
Each Service has a corresponding Endpoints resource
(see kubectl get endpoints
or kubectl get ep
)
That Endpoints resource is used by various controllers
(e.g. kube-proxy
when setting up iptables
rules for ClusterIP services)
These Endpoints are populated (and updated) with the Service selector
We can update the Endpoints manually, but our changes will get overwritten
... Except if the Service selector is empty!
If a service selector is empty, Endpoints don't get updated automatically
(but we can still set them manually)
This lets us create Services pointing to arbitrary destinations
(potentially outside the cluster; or things that are not in pods)
Another use-case: the kubernetes
service in the default
namespace
(its Endpoints are maintained automatically by the API server)
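As a hedged sketch of the "selector-less Service" pattern (name, port, and IP address are made up), we would create a Service without a selector and manage its Endpoints ourselves:
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db        # must match the Service name
subsets:
- addresses:
  - ip: 192.168.1.100      # a destination outside the cluster
  ports:
  - port: 5432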
:EN:- Scaling with Daemon Sets :FR:- Utilisation de Daemon Sets
Rolling updates
(automatically generated title slide)
By default (without rolling updates), when a scaled resource is updated:
new pods are created
old pods are terminated
... all at the same time
if something goes wrong, ¯\_(ツ)_/¯
With rolling updates, when a Deployment is updated, it happens progressively
The Deployment controls multiple Replica Sets
Each Replica Set is a group of identical Pods
(with the same image, arguments, parameters ...)
During the rolling update, we have at least two Replica Sets:
the "new" set (corresponding to the "target" version)
at least one "old" set
We can have multiple "old" sets
(if we start another update before the first one is done)
Two parameters determine the pace of the rollout: maxUnavailable
and maxSurge
They can be specified in absolute number of pods, or percentage of the replicas
count
At any given time ...
there will always be at least replicas
-maxUnavailable
pods available
there will never be more than replicas
+maxSurge
pods in total
there will therefore be up to maxUnavailable
+maxSurge
pods being updated
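These parameters live in the deployment spec; a minimal sketch (values are illustrative):
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%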
We have the possibility of rolling back to the previous version
(if the update fails or is unsatisfactory in any way)
We can check our deployments' rollout parameters with kubectl and jq:
kubectl get deploy -o json | jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate"
As of Kubernetes 1.8, we can do rolling updates with:
deployments
, daemonsets
, statefulsets
Editing one of these resources will automatically result in a rolling update
Rolling updates can be monitored with the kubectl rollout
subcommand
Watch what happens to the worker service:
kubectl get pods -w
kubectl get replicasets -w
kubectl get deployments -w
Update worker either with kubectl edit, or by running:
kubectl set image deploy worker worker=dockercoins/worker:v0.2
That rollout should be pretty quick. What shows in the web UI?
At first, it looks like nothing is happening (the graph remains at the same level)
According to kubectl get deploy -w
, the deployment
was updated really quickly
But kubectl get pods -w
tells a different story
The old pods
are still here, and they stay in Terminating
state for a while
Eventually, they are terminated; and then the graph decreases significantly
This delay is due to the fact that our worker doesn't handle signals
Kubernetes sends a "polite" shutdown request to the worker, which ignores it
After a grace period, Kubernetes gets impatient and kills the container
(The grace period is 30 seconds, but can be changed if needed)
Update worker
by specifying a non-existent image:
kubectl set image deploy worker worker=dockercoins/worker:v0.3
Check what's going on:
kubectl rollout status deploy worker
Our rollout is stuck. However, the app is not dead.
(After a minute, it will stabilize to be 20-25% slower.)
Why is our app a bit slower?
Because MaxUnavailable=25%
... So the rollout terminated 2 replicas out of 10 available
Okay, but why do we see 5 new replicas being rolled out?
Because MaxSurge=25%
... So in addition to replacing 2 replicas, the rollout is also starting 3 more
It rounded down the number of MaxUnavailable pods conservatively,
but the total number of pods being rolled out is allowed to be 25+25=50%
We start with 10 pods running for the worker
deployment
Current settings: MaxUnavailable=25% and MaxSurge=25%
When we start the rollout:
Now we have 8 replicas up and running, and 5 being deployed
Our rollout is stuck at this point!
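A quick back-of-the-envelope check (assuming Kubernetes rounds maxUnavailable down and maxSurge up):
maxUnavailable = 25% of 10 = 2.5, rounded down to 2, so at least 10 - 2 = 8 pods must stay available
maxSurge = 25% of 10 = 2.5, rounded up to 3, so at most 10 + 3 = 13 pods can exist in total
The rollout can therefore replace 2 pods and start 3 extra ones: 5 new pods, all stuck on the broken image.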
If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.
Connect to the dashboard that we deployed earlier
Check that we have failures in Deployments, Pods, and Replica Sets
Can we see the reason for the failure?
We could push some v0.3
image
(the pod retry logic will eventually catch it and the rollout will proceed)
Or we could invoke a manual rollback
kubectl rollout undo deploy worker
kubectl rollout status deploy worker
We reverted to v0.2
But this version still has a performance problem
How can we get back to the previous version?
kubectl rollout undo again?
Try it:
kubectl rollout undo deployment worker
Check the web UI, the list of pods ...
🤔 That didn't work.
If we see successive versions as a stack:
kubectl rollout undo
doesn't "pop" the last element from the stack
it copies the N-1th element to the top
Multiple "undos" just swap back and forth between the last two versions!
kubectl rollout undo deployment worker
Our version numbers are easy to guess
What if we had used git hashes?
What if we had changed other parameters in the Pod spec?
kubectl rollout history
kubectl rollout history deployment worker
We don't see all revisions.
We might see something like 1, 4, 5.
(Depending on how many "undos" we did before.)
These revisions correspond to our Replica Sets
This information is stored in the Replica Set annotations
kubectl describe replicasets -l app=worker | grep -A3 ^Annotations
The missing revisions are stored in another annotation:
deployment.kubernetes.io/revision-history
These are not shown in kubectl rollout history
We could easily reconstruct the full list with a script
(if we wanted to!)
kubectl rollout undo can work with a revision number
Roll back to the "known good" deployment version:
kubectl rollout undo deployment worker --to-revision=1
Check the web UI or the list of pods
We want to roll back to the v0.1 image, and make the rollout more conservative
The corresponding changes can be expressed in the following YAML snippet:
spec:
  template:
    spec:
      containers:
      - name: worker
        image: dockercoins/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 10
We could use kubectl edit deployment worker
But we could also use kubectl patch
with the exact YAML shown before
kubectl patch deployment worker -p "
spec:
  template:
    spec:
      containers:
      - name: worker
        image: dockercoins/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 10
"
kubectl rollout status deployment worker
kubectl get deploy -o json worker | jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"
:EN:- Rolling updates :EN:- Rolling back a bad deployment
:FR:- Mettre à jour un déploiement :FR:- Concept de rolling update et rollback :FR:- Paramétrer la vitesse de déploiement
Healthchecks
(automatically generated title slide)
Containers can have healthchecks
There are three kinds of healthchecks, corresponding to very different use-cases:
liveness = detect when a container is "dead" and needs to be restarted
readiness = detect when a container is ready to serve traffic
startup = detect if a container has finished booting
These healthchecks are optional (we can use none, all, or some of them)
Different probes are available (HTTP request, TCP connection, program execution)
Let's see the difference and how to use them!
This container is dead, we don't know how to fix it, other than restarting it.
Indicates if the container is dead or alive
A dead container cannot come back to life
If the liveness probe fails, the container is killed (destroyed)
(to make really sure that it's really dead; no zombies or undeads!)
What happens next depends on the pod's restartPolicy
:
Never
: the container is not restarted
OnFailure
or Always
: the container is restarted
To indicate failures that can't be recovered
deadlocks (causing all requests to time out)
internal corruption (causing all requests to error)
Anything where our incident response would be "just restart/reboot it"
Do not use liveness probes for problems that can't be fixed by a restart
Make sure that a container is ready before continuing a rolling update.
Indicates if the container is ready to handle traffic
When doing a rolling update, the Deployment controller waits for Pods to be ready
(a Pod is ready when all the containers in the Pod are ready)
Improves reliability and safety of rolling updates:
don't roll out a broken version (that doesn't pass readiness checks)
don't lose processing capacity during a rolling update
Temporarily remove a container (overloaded or otherwise) from a Service load balancer.
A container can mark itself "not ready" temporarily
(e.g. if it's overloaded or needs to reload/restart/garbage collect...)
If a container becomes "unready" it might be ready again soon
If the readiness probe fails:
the container is not killed
if the pod is a member of a service, it is temporarily removed
it is re-added as soon as the readiness probe passes again
To indicate failure due to an external cause
database is down or unreachable
mandatory auth or other backend service unavailable
To indicate temporary failure or unavailability
application can only service N parallel connections
runtime is busy doing garbage collection or initial data load
To redirect new connections to other Pods
(e.g. fail the readiness probe when the Pod's load is too high)
If a web server depends on a database to function, and the database is down:
the web server's liveness probe should succeed
the web server's readiness probe should fail
Same thing for any hard dependency (without which the container can't work)
Do not fail liveness probes for problems that are external to the container
Probes are executed at intervals of periodSeconds
(default: 10)
The timeout for a probe is set with timeoutSeconds
(default: 1)
If a probe takes longer than that, it is considered as a FAIL
A probe is considered successful after successThreshold
successes (default: 1)
A probe is considered failing after failureThreshold
failures (default: 3)
A probe can have an initialDelaySeconds
parameter (default: 0)
Kubernetes will wait that amount of time before running the probe for the first time
(this is important to avoid killing services that take a long time to start)
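Putting these parameters together, a sketch of a probe with explicit timings (the endpoint and the values are just examples):
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 2
  successThreshold: 1
  failureThreshold: 3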
The container takes too long to start, and is killed by the liveness probe!
By default, probes (including liveness) start immediately
With the default probe interval and failure threshold:
a container must respond in less than 30 seconds, or it will be killed!
There are two ways to avoid that:
set initialDelaySeconds
(a fixed, rigid delay)
use a startupProbe
Kubernetes will run only the startup probe, and when it succeeds, run the other probes
For containers that take a long time to start
(more than 30 seconds)
Especially if that time can vary a lot
(e.g. fast in dev, slow in prod, or the other way around)
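A minimal sketch of a startup probe (endpoint and values are illustrative); with failureThreshold: 30 and periodSeconds: 10, the container gets up to 300 seconds to come up before the other probes take over:
startupProbe:
  httpGet:
    path: /
    port: 80
  failureThreshold: 30
  periodSeconds: 10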
HTTP request
specify URL of the request (and optional headers)
any status code between 200 and 399 indicates success
TCP connection
arbitrary exec
a command is executed in the container
exit status of zero indicates success
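For instance, a TCP probe merely checks that a connection can be established (the port number is an example):
livenessProbe:
  tcpSocket:
    port: 6379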
Rolling updates proceed when containers are actually ready
(as opposed to merely started)
Containers in a broken state get killed and restarted
(instead of serving errors or timeouts)
Unavailable backends get removed from load balancer rotation
(thus improving response times across the board)
If a probe is not defined, it's as if there was an "always successful" probe
Here is an example of a pod manifest using an HTTP liveness probe:
apiVersion: v1
kind: Pod
metadata:
  name: healthy-app
spec:
  containers:
  - name: myapp
    image: myregistry.io/myapp:v1.0
    livenessProbe:
      httpGet:
        path: /health
        port: 80
      periodSeconds: 5
If the backend serves an error, or takes longer than 1s, 3 times in a row, it gets killed.
Here is a pod template for a Redis server:
apiVersion: v1
kind: Pod
metadata:
  name: redis-with-liveness
spec:
  containers:
  - name: redis
    image: redis
    livenessProbe:
      exec:
        command: ["redis-cli", "ping"]
If the Redis process becomes unresponsive, it will be killed.
Do we want liveness, readiness, both?
(sometimes, we can use the same check, but with different failure thresholds)
Do we have existing HTTP endpoints that we can use?
Do we need to add new endpoints, or perhaps use something else?
Are our healthchecks likely to use resources and/or slow down the app?
Do they depend on additional services?
(this can be particularly tricky, see next slide)
Liveness checks should not be influenced by the state of external services
All checks should reply quickly (by default, less than 1 second)
Otherwise, they are considered to fail
This might require checking the health of dependencies asynchronously
(e.g. if a database or API might be healthy but still take more than 1 second to reply, we should check the status asynchronously and report a cached status)
(In that context, worker = process that doesn't accept connections)
Readiness is useful mostly for rolling updates
(because workers aren't backends for a service)
Liveness may help us restart a broken worker, but how can we check it?
Embedding an HTTP server is a (potentially expensive) option
Using a "lease" file can be relatively easy:
touch a file during each iteration of the main loop
check the timestamp of that file from an exec probe
Writing logs (and checking them from the probe) also works
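A hedged sketch of the lease file idea (the path /tmp/alive and the one-minute threshold are made up): the worker's main loop would touch /tmp/alive at each iteration, and the probe fails if the file has not been updated recently:
livenessProbe:
  exec:
    command:
    - sh
    - -c
    # succeeds only if /tmp/alive was modified less than one minute ago
    - find /tmp/alive -mmin -1 | grep .
  periodSeconds: 30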
:EN:- Using healthchecks to improve availability :FR:- Utiliser des healthchecks pour améliorer la disponibilité
The Kubernetes dashboard
(automatically generated title slide)
Kubernetes resources can also be viewed with a web dashboard
Dashboard users need to authenticate
(typically with a token)
The dashboard should be exposed over HTTPS
(to prevent interception of the aforementioned token)
Ideally, this requires obtaining a proper TLS certificate
(for instance, with Let's Encrypt)
Our k8s
directory has no less than three manifests!
dashboard-recommended.yaml
(purely internal dashboard; user must be created manually)
dashboard-with-token.yaml
(dashboard exposed with NodePort; creates an admin user for us)
dashboard-insecure.yaml
aka YOLO
(dashboard exposed over HTTP; gives root access to anonymous users)
dashboard-insecure.yaml
This will allow anyone to deploy anything on your cluster
(without any authentication whatsoever)
Do not use this, except maybe on a local cluster
(or a cluster that you will destroy a few minutes later)
On "normal" clusters, use dashboard-with-token.yaml
instead!
The dashboard itself
An HTTP/HTTPS unwrapper (using socat
)
The guest/admin account
kubectl apply -f ~/container.training/k8s/dashboard-insecure.yaml
kubectl get svc dashboard
You'll want the 3xxxx
port.
The dashboard will then ask you which authentication you want to use.
We have three authentication options at this point:
token (associated with a role that has appropriate permissions)
kubeconfig (e.g. using the ~/.kube/config
file from node1
)
"skip" (use the dashboard "service account")
Let's use "skip": we're logged in!
Remember, we just added a backdoor to our Kubernetes cluster!
kubectl delete -f ~/container.training/k8s/dashboard-insecure.yaml
The steps that we just showed you are for educational purposes only!
If you do that on your production cluster, people can and will abuse it
For an in-depth discussion about securing the dashboard,
check this excellent post on Heptio's blog
dashboard-with-token.yaml
This is a less risky way to deploy the dashboard
It's not completely secure, either:
we're using a self-signed certificate
this is subject to eavesdropping attacks
Using kubectl port-forward
or kubectl proxy
is even better
The dashboard itself (but exposed with a NodePort
)
A ServiceAccount with cluster-admin
privileges
(named kubernetes-dashboard:cluster-admin
)
kubectl apply -f ~/container.training/k8s/dashboard-with-token.yaml
The manifest creates a ServiceAccount
Kubernetes will automatically generate a token for that ServiceAccount
kubectl --namespace=kubernetes-dashboard \ describe secret cluster-admin-token
The token should start with eyJ...
(it's a JSON Web Token).
Note that the secret name will actually be cluster-admin-token-xxxxx
.
(But kubectl
prefix matches are great!)
kubectl get svc --namespace=kubernetes-dashboard
You'll want the 3xxxx
port.
The dashboard will then ask you which authentication you want to use.
Select "token" authentication
Copy paste the token (starting with eyJ...
) obtained earlier
We're logged in!
read-only dashboard
optimized for "troubleshooting and incident response"
see vision and goals for details
Security implications of kubectl apply
(automatically generated title slide)
kubectl apply
When we do kubectl apply -f <URL>
, we create arbitrary resources
Resources can be evil; imagine a deployment
that ...
starts bitcoin miners on the whole cluster
hides in a non-default namespace
bind-mounts our nodes' filesystem
inserts SSH keys in the root account (on the node)
encrypts our data and ransoms it
☠️☠️☠️
kubectl apply
is the new curl | sh
curl | sh
is convenient
It's safe if you use HTTPS URLs from trusted sources
kubectl apply -f
is convenient
It's safe if you use HTTPS URLs from trusted sources
Example: the official setup instructions for most pod networks
It introduces new failure modes
(for instance, if you try to apply YAML from a link that's no longer valid)
:EN:- The Kubernetes dashboard :FR:- Le dashboard Kubernetes
k9s
(automatically generated title slide)
Somewhere in between CLI and GUI (or web UI), we can find the magic land of TUI
often using libraries like curses and its successors
Some folks love them, some folks hate them, some are indifferent ...
But it's nice to have different options!
Let's see one particular TUI for Kubernetes: k9s
If you are using a training cluster or the shpod image, k9s is pre-installed
Otherwise, it can be installed easily (e.g. with your package manager, or by fetching a binary release)
We don't need to set up or configure anything
(it will use the same configuration as kubectl
and other well-behaved clients)
Just run k9s
to fire it up!
Press :
to change the type of resource to view
Then type, for instance, ns
or namespace
or nam[TAB]
, then [ENTER]
Use the arrows to move down to e.g. kube-system
, and press [ENTER]
Or, type /kub
or /sys
to filter the output, and press [ENTER]
twice
(once to exit the filter, once to enter the namespace)
We now see the pods in kube-system
!
l
to view logs
d
to describe
s
to get a shell (won't work if sh
isn't available in the container image)
e
to edit
shift-f
to define port forwarding
ctrl-k
to kill
[ESC]
to get out or get back
On top of the screen, we should see shortcuts like this:
<0> all   <1> kube-system   <2> default
Pressing the corresponding number switches to that namespace
(or shows resources across all namespaces with 0
)
Locate a namespace with a copy of DockerCoins, and go there!
View Deployments (type :
deploy
[ENTER]
)
Select e.g. worker
Scale it with s
View its aggregated logs with l
Exit at any time with Ctrl-C
k9s will "remember" where you were
(and go back there next time you run it)
Very convenient to navigate through resources
(hopping from a deployment, to its pod, to another namespace, etc.)
Very convenient to quickly view logs of e.g. init containers
Very convenient to get a (quasi) realtime view of resources
(if we use watch kubectl get
a lot, we will probably like k9s)
Doesn't promote automation / scripting
(if you repeat the same things over and over, there is a scripting opportunity)
Not all features are available
(e.g. executing arbitrary commands in containers)
Try it out, and see if it makes you more productive!
:EN:- The k9s TUI :FR:- L'interface texte k9s
Tilt
(automatically generated title slide)
What does a development workflow look like?
make changes
test / see these changes
repeat!
What does it look like, with containers?
🤔
Preparation
Iteration
docker build
docker run
docker stop
Straightforward when we have a single container.
Preparation
docker build
+ docker run
Iteration
Note: only works with interpreted languages.
(Compiled languages require extra work.)
Preparation
docker-compose up
Iteration
docker-compose up
(as needed)
Simplifies complex scenarios (multiple containers).
Facilitates updating images.
Preparation
Iteration
Seems simple enough, right?
Preparation
Iteration
Ah, right ...
Remember "build, ship, and run"
Registries are involved in the "ship" phase
With Docker, we were building and running on the same node
We didn't need a registry!
With Kubernetes, though ...
If our Kubernetes has only one node ...
... We can build directly on that node ...
... We don't need to push images ...
... We don't need to run a registry!
Examples: Docker Desktop, Minikube ...
Which registry should we use?
(Docker Hub, Quay, cloud-based, self-hosted ...)
Should we use a single registry, or one per cluster or environment?
Which tags and credentials should we use?
(in particular when using a shared registry!)
How do we provision that registry and its users?
How do we adjust our Kubernetes YAML manifests?
(e.g. to inject image names and tags)
The whole cycle (build+push+update) is expensive
If we have many services, how do we update only the ones we need?
Can we take shortcuts?
(e.g. synchronized files without going through a whole build+push+update cycle)
Tilt is a tool to address all these questions
There are other similar tools (e.g. Skaffold)
We arbitrarily decided to focus on that one
The dockercoins
directory in our repository has a Tiltfile
That Tiltfile includes definitions for the DockerCoins app, including:
building the images for the app
Kubernetes manifests to deploy the app
a self-hosted registry to host the app image
Let's try it out!
These instructions are valid only if you run Tilt on your local machine.
If you are running Tilt on a remote machine or in a Pod, see next slide.
Start Tilt:
tilt up
Then press "space" or connect to http://localhost:10350/
If Tilt runs remotely, we can't access http://localhost:10350
Our Tiltfile includes an ngrok tunnel, let's use that
Start Tilt:
tilt up
The ngrok URL should appear in the Tilt output
(something like https://xxxx-aa-bb-cc-dd.ngrok.io/
)
Open that URL in your browser
Note: it's also possible to run tilt up --host=0.0.0.0
.
Tilt is designed to run in dev environments
It will try to figure out if we're really in a dev environment:
if Tilt thinks that we are on a local dev cluster, it will start
otherwise, it will give us a warning and it won't continue
In the latter case, we need to add one line to the Tiltfile
(to tell Tilt "it's okay, you can run safely in this environment!")
If this happens, add the line to the Tiltfile
(Tilt will tell you exactly what to add!)
We don't need to restart Tilt, it will detect the change immediately
Kubernetes manifests for a local registry
Kubernetes manifests for DockerCoins
Instructions indicating how to build DockerCoins' images
A tiny bit of sugar
(telling Tilt which registry to use)
Tilt keeps track of dependencies between files and resources
(a bit like a make
that would run continuously)
It automatically alters some resources
(for instance, it updates the images used in our Kubernetes manifests)
That's it!
(And of course, it provides a great web UI, lots of libraries, etc.)
Let's change e.g. worker/worker.py
Thanks to this line,
docker_build('dockercoins/worker', 'worker')
... Tilt watches the worker
directory and uses it to build dockercoins/worker
Thanks to this line,
default_registry('localhost:30555')
... Tilt actually renames dockercoins/worker
to localhost:30555/dockercoins_worker
Tilt will tag the image with something like tilt-xxxxxxxxxx
Thanks to this line,
k8s_yaml('../k8s/dockercoins.yaml')
... Tilt is aware of our Kubernetes resources
The worker
Deployment uses dockercoins/worker
, so it must be updated
dockercoins/worker
becomes localhost:30555/dockercoins_worker:tilt-xxx
The worker
Deployment gets updated on the Kubernetes cluster
All these operations (and their log output) are visible in the Tilt UI
The Tiltfile is written in Starlark
(essentially a subset of Python)
Tilt monitors the Tiltfile too
(so it reloads it immediately when we change it)
Dependency engine
(build or run only what's necessary)
Ability to watch resources
(execute actions immediately, without explicitly running a command)
Rich library of functions and helpers
(build container images, manipulate YAML manifests...)
Convenient UI (web; TUI also available)
(provides immediate feedback and logs)
Extensibility!
:EN:- Development workflow with Tilt :FR:- Développer avec Tilt
Exercise — Healthchecks
(automatically generated title slide)
We want to add healthchecks to the rng
service in dockercoins
The rng
service exhibits an interesting behavior under load:
its latency increases (which will cause probes to time out!)
We want to see:
what happens when the readiness probe fails
what happens when the liveness probe fails
how to set "appropriate" probes and probe parameters
exercises/healthchecks-details.md
First, deploy a new copy of dockercoins
(for instance, in a brand new namespace)
Pro tip #1: ping (e.g. with httping
) the rng
service at all times
it should initially show a few milliseconds latency
that will increase when we scale up
it will also let us detect when the service goes "boom"
Pro tip #2: also keep an eye on the web UI
exercises/healthchecks-details.md
Add a readiness probe to rng
this requires editing the pod template in the Deployment manifest
use a simple HTTP check on the /
route of the service
keep all other parameters (timeouts, thresholds...) at their default values
Check what happens when deploying an invalid image for rng
(e.g. alpine
)
(If the probe was set up correctly, the app will continue to work,
because Kubernetes won't switch over the traffic to the alpine
containers,
because they don't pass the readiness probe.)
exercises/healthchecks-details.md
Then roll back rng
to the original image
Check what happens when we scale up the worker
Deployment to 15+ workers
(get the latency above 1 second)
(We should now observe intermittent unavailability of the service, i.e. every 30 seconds it will be unreachable for a bit, then come back, then go away again, etc.)
exercises/healthchecks-details.md
Now replace the readiness probe with a liveness probe
What happens now?
(At first the behavior looks the same as with the readiness probe: service becomes unreachable, then reachable again, etc.; but there is a significant difference behind the scenes. What is it?)
exercises/healthchecks-details.md
Bonus questions!
What happens if we enable both probes at the same time?
What strategies can we use so that both probes are useful?
exercises/healthchecks-details.md
Exposing HTTP services with Ingress resources
(automatically generated title slide)
HTTP services are typically exposed on port 80
(and 443 for HTTPS)
NodePort
services are great, but they are not on port 80
(by default, they use port range 30000-32767)
How can we get many HTTP services on port 80? 🤔
Service with type: LoadBalancer
costs a little bit of money; not always available
Service with one (or multiple) ExternalIP
requires public nodes; limited by number of nodes
Service with hostPort
or hostNetwork
same limitations as ExternalIP
; even harder to manage
Ingress resources
addresses all these limitations, yay!
LoadBalancer vs Ingress: comparing a Service with type: LoadBalancer to an Ingress resource
Ingress
Kubernetes API resource (kubectl get ingress
/ingresses
/ing
)
Designed to expose HTTP services
Requires an ingress controller
(otherwise, resources can be created, but nothing happens)
Some ingress controllers are based on existing load balancers
(HAProxy, NGINX...)
Some are standalone, and sometimes designed for Kubernetes
(Contour, Traefik...)
Note: there is no "default" or "official" ingress controller!
Load balancing
SSL termination
Name-based virtual hosting
URI routing
(e.g. /api
→api-service
, /static
→assets-service
)
(Not always supported; supported through annotations, CRDs, etc.)
Routing with other headers or cookies
A/B testing
Canary deployment
etc.
Step 1: deploy an ingress controller
(one-time setup)
Step 2: create Ingress resources
maps a domain and/or path to a Kubernetes Service
the controller watches ingress resources and sets up a LB
Step 3: set up DNS
GKE has "GKE Ingress", a custom ingress controller
(enabled by default)
EKS has "AWS ALB Ingress Controller" as well
(not enabled by default, requires extra setup)
They leverage cloud-specific HTTP load balancers
(GCP HTTP LB, AWS ALB)
They typically incur a cost per ingress resource
Most ingress controllers will create a LoadBalancer Service
(and will receive all HTTP/HTTPS traffic through it)
We need to point our DNS entries to the IP address of that LB
Some rare ingress controllers will allocate one LB per ingress resource
(example: the GKE Ingress and ALB Ingress mentioned previously)
This leads to increased costs
Note that it's possible to have multiple "rules" per ingress resource
(this will reduce costs but may be less convenient to manage)
We will deploy the Traefik ingress controller
this is an arbitrary choice
maybe motivated by the fact that Traefik releases are named after cheeses
For DNS, we will use nip.io
*.1.2.3.4.nip.io
resolves to 1.2.3.4
We will create ingress resources for various HTTP services
We want our ingress load balancer to be available on port 80
The best way to do that would be with a LoadBalancer
service
... but it requires support from the underlying infrastructure
Instead, we are going to use the hostNetwork
mode on the Traefik pods
Let's see what this hostNetwork
mode is about ...
hostNetwork
Normally, each pod gets its own network namespace
(sometimes called sandbox or network sandbox)
An IP address is assigned to the pod
This IP address is routed/connected to the cluster network
All containers of that pod are sharing that network namespace
(and therefore using the same IP address)
hostNetwork: true
No network namespace gets created
The pod is using the network namespace of the host
It "sees" (and can use) the interfaces (and IP addresses) of the host
The pod can receive outside traffic directly, on any port
Downside: with most network plugins, network policies won't work for that pod
most network policies work at the IP address level
filtering that pod = filtering traffic from the node
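In a pod spec, this is a single field; a minimal sketch (the container shown is arbitrary):
spec:
  hostNetwork: true
  containers:
  - name: web
    image: nginx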
We could use pods specifying hostPort: 80
... but with most CNI plugins, this doesn't work or requires additional setup
We could use a NodePort
service
... but that requires changing the --service-node-port-range
flag in the API server
We could create a service with an external IP
... this would work, but would require a few extra steps
(figuring out the IP address and adding it to the service)
The Traefik documentation recommends using a Helm chart
For simplicity, we're going to use a custom YAML manifest
Our manifest will:
use a Daemon Set so that each node can accept connections
enable hostNetwork
add a toleration so that Traefik also runs on all nodes
We could do the same with the official Helm chart
A taint is an attribute added to a node
It prevents pods from running on the node
... Unless they have a matching toleration
When deploying with kubeadm
:
a taint is placed on the node dedicated to the control plane
the pods running the control plane have a matching toleration
kubectl get node node1 -o json | jq .spec
kubectl get node node2 -o json | jq .spec
We should see a result only for node1
(the one with the control plane):
"taints": [
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
The key
can be interpreted as:
a reservation for a special set of pods
(here, this means "this node is reserved for the control plane")
an error condition on the node
(for instance: "disk full," do not start new pods here!)
The effect
can be:
NoSchedule
(don't run new pods here)
PreferNoSchedule
(try not to run new pods here)
NoExecute
(don't run new pods and evict running pods)
kubectl -n kube-system get deployments coredns -o json | jq .spec.template.spec.tolerations
The result should include:
{ "effect": "NoSchedule", "key": "node-role.kubernetes.io/master" }
It means: "bypass the exact taint that we saw earlier on node1
."
Check the tolerations on kube-proxy:
kubectl -n kube-system get ds kube-proxy -o json | jq .spec.template.spec.tolerations
The result should include:
{ "operator": "Exists" }
This one is a special case that means "ignore all taints and run anyway."
We provide a YAML file (k8s/traefik.yaml
) which is essentially the sum of:
Traefik's Daemon Set resources (patched with hostNetwork
and tolerations)
Traefik's RBAC rules allowing it to watch necessary API objects
kubectl apply -f ~/container.training/k8s/traefik.yaml
curl localhost
We should get a 404 page not found
error.
This is normal: we haven't provided any ingress rule yet.
To make our lives easier, we will use nip.io
Check out http://red.A.B.C.D.nip.io
(replacing A.B.C.D with the IP address of node1
)
We should get the same 404 page not found
error
(meaning that our DNS is "set up properly", so to speak!)
Traefik provides a web dashboard
With the current install method, it's listening on port 8080
Connect to http://node1:8080 (replacing node1 with its IP address)
We are going to use the jpetazzo/color image
This image contains a simple static HTTP server on port 80
We will run 3 deployments (red
, green
, blue
)
We will create 3 services (one for each deployment)
Then we will create 3 ingress rules (one for each service)
We will route <color>.A.B.C.D.nip.io
to the corresponding deployment
Run all three deployments:
kubectl create deployment red --image=jpetazzo/color
kubectl create deployment green --image=jpetazzo/color
kubectl create deployment blue --image=jpetazzo/color
Create a service for each of them:
kubectl expose deployment red --port=80
kubectl expose deployment green --port=80
kubectl expose deployment blue --port=80
Before Kubernetes 1.19, we must use YAML manifests
(see example on next slide)
Since Kubernetes 1.19, we can use kubectl create ingress
kubectl create ingress red \
  --rule=red.A.B.C.D.nip.io/*=red:80
We can specify multiple rules per resource
kubectl create ingress rgb \
  --rule=red.A.B.C.D.nip.io/*=red:80 \
  --rule=green.A.B.C.D.nip.io/*=green:80 \
  --rule=blue.A.B.C.D.nip.io/*=blue:80
The * is important:
--rule=red.A.B.C.D.nip.io/*=red:80
It means "all URIs below that path"
Without the *
, it means "only that exact path"
(if we omit it, requests for e.g. red.A.B.C.D.nip.io/hello
will 404)
Here is a minimal host-based ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: red
spec:
  rules:
  - host: red.A.B.C.D.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: red
          servicePort: 80
(It is in k8s/ingress.yaml
.)
The YAML on the previous slide uses apiVersion: networking.k8s.io/v1beta1
Starting with Kubernetes 1.19, networking.k8s.io/v1
is available
However, with Kubernetes 1.19 (and later), we can use kubectl create ingress
We chose to keep an "old" (deprecated!) YAML example for folks still using older versions of Kubernetes
If we want to see "modern" YAML, we can use -o yaml --dry-run=client
:
kubectl create ingress red -o yaml --dry-run=client \
  --rule=red.A.B.C.D.nip.io/*=red:80
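For reference, the networking.k8s.io/v1 equivalent of the earlier manifest looks roughly like this (hostname to be adapted to your cluster):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: red
spec:
  rules:
  - host: red.A.B.C.D.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: red
            port:
              number: 80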
Create the ingress resources with kubectl create ingress
(or use the YAML manifests if using Kubernetes 1.18 or older)
Make sure to update the hostnames!
Check that you can connect to the exposed web apps
You can have multiple ingress controllers active simultaneously
(e.g. Traefik and NGINX)
You can even have multiple instances of the same controller
(e.g. one for internal, another for external traffic)
To indicate which ingress controller should be used by a given Ingress resource:
before Kubernetes 1.18, use the kubernetes.io/ingress.class
annotation
since Kubernetes 1.18, use the ingressClassName
field
(which should refer to an existing IngressClass
resource)
A lot of things have been left out of the Ingress v1 spec
(routing requests according to weight, cookies, across namespaces...)
Example: stripping path prefixes
Traefik v1: traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
Traefik v2: requires a CRD
The Gateway API SIG might be the future of Ingress
It proposes new resources:
GatewayClass, Gateway, HTTPRoute, TCPRoute...
It is still in alpha stage
Let's see how to implement canary releases
The example here will use Traefik v1
(which is obsolete)
It won't work on your Kubernetes cluster!
(unless you're running an oooooold version of Kubernetes)
(and an equally oooooooold version of Traefik)
We've left it here just as an example!
A canary release (or canary launch or canary deployment) is a release that will process only a small fraction of the workload
After deploying the canary, we compare its metrics to the normal release
If the metrics look good, the canary will progressively receive more traffic
(until it gets 100% and becomes the new normal release)
If the metrics aren't good, the canary is automatically removed
When we deploy a bad release, only a tiny fraction of traffic is affected
Example 1: canary for a microservice
Example 2: canary for a web app
Example 3: canary for shipping physical goods
We're going to implement example 1 (per-request routing)
We need to deploy the canary and expose it with a separate service
Then, in the Ingress resource, we need:
multiple paths
entries (one for each service, canary and normal)
an extra annotation indicating the weight of each service
If we want, we can send requests to more than 2 services
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rgb
  annotations:
    traefik.ingress.kubernetes.io/service-weights: |
      red: 50%
      green: 25%
      blue: 25%
spec:
  rules:
  - host: rgb.A.B.C.D.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: red
          servicePort: 80
      - path: /
        backend:
          serviceName: green
          servicePort: 80
      - path: /
        backend:
          serviceName: blue
          servicePort: 80
Just to illustrate how different things are ...
With the NGINX ingress controller:
define two ingress resources
(specifying rules with the same host+path)
add nginx.ingress.kubernetes.io/canary
annotations on each
With Linkerd2:
define two services
define an extra service for the weighted aggregate of the two
define a TrafficSplit (this is a CRD introduced by the SMI spec)
What we saw is just one of the multiple building blocks that we need to achieve a canary release.
We also need:
metrics (latency, performance ...) for our releases
automation to alter canary weights
(increase canary weight if metrics look good; decrease otherwise)
a mechanism to manage the lifecycle of the canary releases
(create them, promote them, delete them ...)
For inspiration, check flagger by Weave.
:EN:- The Ingress resource :FR:- La ressource ingress
Ingress and TLS certificates
(automatically generated title slide)
Most ingress controllers support TLS connections
(in a way that is standard across controllers)
The TLS key and certificate are stored in a Secret
The Secret is then referenced in the Ingress resource:
spec:
  tls:
  - secretName: XXX
    hosts:
    - YYY
  rules:
  - ZZZ
In the next section, we will need a TLS key and certificate
These usually come in PEM format:
-----BEGIN CERTIFICATE-----
MIIDATCCAemg......
-----END CERTIFICATE-----
We will see how to generate a self-signed certificate
(easy, fast, but won't be recognized by web browsers)
We will also see how to obtain a certificate from Let's Encrypt
(requires the cluster to be reachable through a domain name)
A very popular option is to use the cert-manager operator
It's a flexible, modular approach to automated certificate management
For simplicity, in this section, we will use certbot
The method shown here works well for one-time certs, but lacks:
automation
renewal
If you're doing this in a training:
the instructor will tell you what to use
If you're doing this on your own Kubernetes cluster:
you should use a domain that points to your cluster
More precisely:
you should use a domain that points to your ingress controller
If you don't have a domain name, you can use nip.io
(if your ingress controller is on 1.2.3.4, you can use whatever.1.2.3.4.nip.io
)
$DOMAIN
We will use $DOMAIN in the following section
Let's set it now
Set the DOMAIN environment variable:
export DOMAIN=...
We present 3 methods to obtain a certificate
We suggest using method 1 (self-signed certificate)
it's the simplest and fastest method
it doesn't rely on other components
You're welcome to try methods 2 and 3 (leveraging certbot)
they're great if you want to understand "how the sausage is made"
they require some hacks (make sure port 80 is available)
they won't be used in production (cert-manager is better)
With openssl, generating a self-signed cert is just one command away!
openssl req \
  -newkey rsa -nodes -keyout privkey.pem \
  -x509 -days 30 -subj /CN=$DOMAIN/ -out cert.pem
This will create two files, privkey.pem
and cert.pem
.
certbot
is an ACME client
(Automatic Certificate Management Environment)
We can use it to obtain certificates from Let's Encrypt
It needs to listen to port 80
(to complete the HTTP-01 challenge)
If port 80 is already taken by our ingress controller, see method 3
certbot
contacts Let's Encrypt, asking for a cert for $DOMAIN
Let's Encrypt gives a token to certbot
Let's Encrypt then tries to access the following URL:
http://$DOMAIN/.well-known/acme-challenge/<token>
That URL needs to be routed to certbot
Once Let's Encrypt gets the response from certbot
, it issues the certificate
There is a very convenient container image, certbot/certbot
Let's use a volume to get easy access to the generated key and certificate
EMAIL=your.address@example.com
docker run --rm -p 80:80 -v $PWD/letsencrypt:/etc/letsencrypt \
  certbot/certbot certonly \
  -m $EMAIL \
  --standalone --agree-tos -n \
  --domain $DOMAIN \
  --test-cert
This will get us a "staging" certificate.
Remove --test-cert
to obtain a real certificate.
If everything went fine:
the key and certificate files are in letsencrypt/live/$DOMAIN
they are owned by root
Grant ourselves permissions on these files:
sudo chown -R $USER letsencrypt
Copy the certificate and key to the current directory:
cp letsencrypt/live/$DOMAIN/{cert,privkey}.pem .
Sometimes, we can't simply listen to port 80:
But we can define an Ingress to route the HTTP-01 challenge to certbot!
Our Ingress needs to route all requests to /.well-known/acme-challenge to certbot
There are at least two ways to do that:
run certbot in a Pod (and extract the cert+key when it's done)
run certbot in a container on a node (and manually route traffic to it)
We're going to use the second option
(mostly because it will give us an excuse to tinker with Endpoints resources!)
We need the following resources:
an Endpoints¹ listing a hard-coded IP address and port
(where our certbot
container will be listening)
a Service corresponding to that Endpoints
an Ingress sending requests to /.well-known/acme-challenge/*
to that Service
(we don't even need to include a domain name in it)
Then we need to start certbot
so that it's listening on the right address+port
¹Endpoints is always plural, because even a single resource is a list of endpoints.
We prepared a YAML file to create the three resources
However, the Endpoints needs to be adapted to put the current node's address
Edit ~/container.training/k8s/certbot.yaml
(replace A.B.C.D with the current node's address)
Create the resources:
kubectl apply -f ~/container.training/k8s/certbot.yaml
Now we can run certbot
, listening on the port listed in the Endpoints
(i.e. 8000)
Run certbot:
EMAIL=your.address@example.com
docker run --rm -p 8000:80 -v $PWD/letsencrypt:/etc/letsencrypt \
  certbot/certbot certonly \
  -m $EMAIL \
  --standalone --agree-tos -n \
  --domain $DOMAIN \
  --test-cert
This is using the staging environment.
Remove --test-cert
to get a production certificate.
Just like in the previous method, the certificate is in letsencrypt/live/$DOMAIN
(and owned by root)
Grant ourselves permissions on these files:
sudo chown -R $USER letsencrypt
Copy the certificate and key to the current directory:
cp letsencrypt/live/$DOMAIN/{cert,privkey}.pem .
We now have two files:
privkey.pem
(the private key)
cert.pem
(the certificate)
We can create a Secret to hold them
kubectl create secret tls $DOMAIN --cert=cert.pem --key=privkey.pem
To enable TLS for an Ingress, we need to add a tls
section to the Ingress:
spec:
  tls:
  - secretName: DOMAIN
    hosts:
    - DOMAIN
  rules:
  ...
The list of hosts will be used by the ingress controller
(to know which certificate to use with SNI)
Of course, the name of the secret can be different
(here, for clarity and convenience, we set it to match the domain)
kubectl create ingress
We can also create an Ingress using TLS directly
To do it, add ,tls=secret-name
to an Ingress rule
Example:
kubectl create ingress hello \ --rule=hello.example.com/*=hello:80,tls=hello
The domain will automatically be inferred from the rule
Many ingress controllers can use different "stores" for keys and certificates
Our ingress controller needs to be configured to use secrets
(as opposed to, e.g., obtain certificates directly with Let's Encrypt)
Add the tls
section to an existing Ingress
If you need to see what the tls
section should look like, you can:
kubectl explain ingress.spec.tls
kubectl create ingress --dry-run=client -o yaml ...
check ~/container.training/k8s/ingress.yaml
for inspiration
read the docs
Check that the URL now works over https
(it might take a minute to be picked up by the ingress controller)
To repeat something mentioned earlier ...
The methods presented here are for educational purpose only
In most production scenarios, the certificates will be obtained automatically
A very popular option is to use the cert-manager operator
Since TLS certificates are stored in Secrets...
...It means that our Ingress controller must be able to read Secrets
A vulnerability in the Ingress controller can have dramatic consequences
See CVE-2021-25742 for an example
This can be mitigated by limiting which Secrets the controller can access
(RBAC rules can specify resource names)
Downside: each TLS secret must explicitly be listed in RBAC
(but that's better than a full cluster compromise, isn't it?)
:EN:- Ingress and TLS :FR:- Certificats TLS et ingress
Volumes
(automatically generated title slide)
Volumes are special directories that are mounted in containers
Volumes can have many different purposes:
share files and directories between containers running on the same machine
share files and directories between containers and their host
centralize configuration information in Kubernetes and expose it to containers
manage credentials and secrets and expose them securely to containers
store persistent data for stateful services
access storage systems (like Ceph, EBS, NFS, Portworx, and many others)
Kubernetes and Docker volumes are very similar
(the Kubernetes documentation says otherwise ...
but it refers to Docker 1.7, which was released in 2015!)
Docker volumes allow us to share data between containers running on the same host
Kubernetes volumes allow us to share data between containers in the same pod
Both Docker and Kubernetes volumes enable access to storage systems
Kubernetes volumes are also used to expose configuration and secrets
Docker has specific concepts for configuration and secrets
(but under the hood, the technical implementation is similar)
If you're not familiar with Docker volumes, you can safely ignore this slide!
Volumes and Persistent Volumes are related, but very different!
Volumes:
appear in Pod specifications (we'll see that in a few slides)
do not exist as API resources (cannot do kubectl get volumes)
Persistent Volumes:
are API resources (can do kubectl get persistentvolumes)
correspond to concrete volumes (e.g. on a SAN, EBS, etc.)
cannot be associated with a Pod directly; but through a Persistent Volume Claim
won't be discussed further in this section
We will start with the simplest Pod manifest we can find
We will add a volume to that Pod manifest
We will mount that volume in a container in the Pod
By default, this volume will be an emptyDir
(an empty directory)
It will "shadow" the directory where it's mounted
apiVersion: v1
kind: Pod
metadata:
  name: nginx-without-volume
spec:
  containers:
  - name: nginx
    image: nginx
This is an MVP! (Minimum Viable Pod 😉)
It runs a single NGINX container.
kubectl create -f ~/container.training/k8s/nginx-1-without-volume.yaml
Get its IP address:
IPADDR=$(kubectl get pod nginx-without-volume -o jsonpath={.status.podIP})
Send a request with curl:
curl $IPADDR
(We should see the "Welcome to NGINX" page.)
We need to add the volume in two places:
at the Pod level (to declare the volume)
at the container level (to mount the volume)
We will declare a volume named www
No type is specified, so it will default to emptyDir
(as the name implies, it will be initialized as an empty directory at pod creation)
In that pod, there is also a container named nginx
That container mounts the volume www to path /usr/share/nginx/html/
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-volume
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
kubectl create -f ~/container.training/k8s/nginx-2-with-volume.yaml
Get its IP address:
IPADDR=$(kubectl get pod nginx-with-volume -o jsonpath={.status.podIP})
Send a request with curl:
curl $IPADDR
(We should now see a "403 Forbidden" error page.)
Let's add another container to the Pod
Let's mount the volume in both containers
That container will populate the volume with static files
NGINX will then serve these static files
To populate the volume, we will clone the Spoon-Knife repository
this repository is https://github.com/octocat/Spoon-Knife
it's very popular (more than 100K stars!)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-git
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
  restartPolicy: OnFailure
We added another container to the pod
That container mounts the www volume on a different path (/www)
It uses the alpine image
When started, it installs git and clones the octocat/Spoon-Knife repository
(that repository contains a tiny HTML website)
As a result, NGINX now serves this website
This one will be time-sensitive!
We need to catch the Pod IP address as soon as it's created
Then send a request to it as fast as possible
kubectl get pods -o wide --watch
kubectl create -f ~/container.training/k8s/nginx-3-with-git.yaml
curl $IP
curl $IP
The first time, we should see "403 Forbidden".
The second time, we should see the HTML file from the Spoon-Knife repository.
Both containers are started at the same time
NGINX starts very quickly
(it can serve requests immediately)
But at this point, the volume is empty
(NGINX serves "403 Forbidden")
The other container installs git and clones the repository
(this takes a bit longer)
When the other container is done, the volume holds the repository
(NGINX serves the HTML file)
The default restartPolicy is Always
This would cause our git container to run again ... and again ... and again
(with an exponential back-off delay, as explained in the documentation)
That's why we specified restartPolicy: OnFailure
There is a short period of time during which the website is not available
(because the git container hasn't done its job yet)
With a bigger website, we could get inconsistent results
(where only a part of the content is ready)
In real applications, this could cause incorrect results
How can we avoid that?
We can define containers that should execute before the main ones
They will be executed in order
(instead of in parallel)
They must all succeed before the main containers are started
This is exactly what we need here!
Let's see one in action
See Init Containers documentation for all the details.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-init
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  initContainers:
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
Create the pod:
kubectl create -f ~/container.training/k8s/nginx-4-with-init.yaml
Try to send HTTP requests as soon as the pod comes up
This time, instead of "403 Forbidden" we get a "connection refused"
NGINX doesn't start until the git container has done its job
We never get inconsistent results
(a "half-ready" container)
Load content
Generate configuration (or certificates)
Database migrations
Waiting for other services to be up
(to avoid flurry of connection errors in main container)
etc.
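For instance, a minimal sketch of the "wait for other services" pattern (the db Service and its port are made up for illustration):
initContainers:
- name: wait-for-db
  image: alpine
  # loop until the "db" Service accepts connections on port 5432
  command: ["sh", "-c", "until nc -z db 5432; do echo waiting for db; sleep 2; done"]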
The lifecycle of a volume is linked to the pod's lifecycle
This means that a volume is created when the pod is created
This is mostly relevant for emptyDir volumes
(other volumes, like remote storage, are not "created" but rather "attached")
A volume survives across container restarts
A volume is destroyed (or, for remote storage, detached) when the pod is destroyed
:EN:- Sharing data between containers with volumes :EN:- When and how to use Init Containers
:FR:- Partager des données grâce aux volumes :FR:- Quand et comment utiliser un Init Container
Managing configuration
(automatically generated title slide)
Some applications need to be configured (obviously!)
There are many ways for our code to pick up configuration:
command-line arguments
environment variables
configuration files
configuration servers (getting configuration from a database, an API...)
... and more (because programmers can be very creative!)
How can we do these things with containers and Kubernetes?
There are many ways to pass configuration to code running in a container:
baking it into a custom image
command-line arguments
environment variables
injecting configuration files
exposing it over the Kubernetes API
configuration servers
Let's review these different strategies!
Put the configuration in the image
(it can be in a configuration file, but also ENV or CMD actions)
It's easy! It's simple!
Unfortunately, it also has downsides:
multiplication of images
different images for dev, staging, prod ...
minor reconfigurations require a whole build/push/pull cycle
Avoid doing it unless you don't have the time to figure out other options
Indicate what should run in the container
Pass command and/or args in the container options in a Pod's template
Both command and args are arrays
Example (source):
args:- "agent"- "-bootstrap-expect=3"- "-retry-join=provider=k8s label_selector=\"app=consul\" namespace=\"$(NS)\""- "-client=0.0.0.0"- "-data-dir=/consul/data"- "-server"- "-ui"
args or command?
Use command to override the ENTRYPOINT defined in the image
Use args to keep the ENTRYPOINT defined in the image
(the parameters specified in args are added to the ENTRYPOINT)
When in doubt, use command
It is also possible to use both command and args
(they will be strung together, just like ENTRYPOINT and CMD)
See the docs to see how they interact together
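For instance, a minimal sketch (not from the training repo) of combining both:
containers:
- name: web
  image: python:3.11-alpine
  command: ["python"]                  # replaces the image's ENTRYPOINT
  args: ["-m", "http.server", "8080"]  # appended after command
# The container effectively runs: python -m http.server 8080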
Works great when options are passed directly to the running program
(otherwise, a wrapper script can work around the issue)
Works great when there aren't too many parameters
(to avoid a 20-line args array)
Requires documentation and/or understanding of the underlying program
("which parameters and flags do I need, again?")
Well-suited for mandatory parameters (without default values)
Not ideal when we need to pass a real configuration file anyway
Pass options through the env map in the container specification
Example:
env:
- name: ADMIN_PORT
  value: "8080"
- name: ADMIN_AUTH
  value: Basic
- name: ADMIN_CRED
  value: "admin:0pensesame!"
value must be a string! Make sure that numbers and fancy strings are quoted.
🤔 Why this weird {name: xxx, value: yyy} scheme? It will be revealed soon!
In the previous example, environment variables have fixed values
We can also use a mechanism called the downward API
The downward API allows exposing pod or container information
either through special files (we won't show that for now)
or through environment variables
The value of these environment variables is computed when the container is started
Remember: environment variables won't (can't) change after container start
Let's see a few concrete examples!
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
Useful to generate FQDN of services
(in some contexts, a short name is not enough)
For instance, the two commands should be equivalent:
curl api-backend
curl api-backend.$MY_POD_NAMESPACE.svc.cluster.local
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
Useful if we need to know our IP address
(we could also read it from eth0, but this is more solid)
- name: MY_MEM_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: test-container
      resource: limits.memory
Useful for runtimes where memory is garbage collected
Example: the JVM
(the memory available to the JVM should be set with the -Xmx flag)
Best practice: set a memory limit, and pass it to the runtime
Note: recent versions of the JVM can do this automatically
(see JDK-8146115 and this blog post for detailed examples)
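For example, a hedged sketch (container name, image, and jar path are made up) combining a memory limit with the downward API:
containers:
- name: app
  image: eclipse-temurin:17
  resources:
    limits:
      memory: 512Mi
  env:
  - name: MEM_LIMIT_MB
    valueFrom:
      resourceFieldRef:
        containerName: app
        resource: limits.memory
        divisor: 1Mi            # expose the limit as a number of mebibytes
  command: ["java"]
  args: ["-Xmx$(MEM_LIMIT_MB)m", "-jar", "/app/app.jar"]   # $(VAR) is expanded by Kubernetes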
This documentation page tells more about these environment variables
And this one explains the other way to use the downward API
(through files that get created in the container filesystem)
That second link also includes a list of all the fields that can be used with the downward API
Works great when the running program expects these variables
Works great for optional parameters with reasonable defaults
(since the container image can provide these defaults)
Sort of auto-documented
(we can see which environment variables are defined in the image, and their values)
Can be (ab)used with longer values ...
... You can put an entire Tomcat configuration file in an environment variable ...
... But should you?
(Do it if you really need to, we're not judging! But we'll see better ways.)
Sometimes, there is no way around it: we need to inject a full config file
Kubernetes provides a mechanism for that purpose: configmaps
A configmap is a Kubernetes resource that exists in a namespace
Conceptually, it's a key/value map
(values are arbitrary strings)
We can think about them in (at least) two different ways:
as holding entire configuration file(s)
as holding individual configuration parameters
Note: to hold sensitive information, we can use "Secrets", which are another type of resource behaving very much like configmaps. We'll cover them just after!
In this case, each key/value pair corresponds to a configuration file
Key = name of the file
Value = content of the file
There can be one key/value pair, or as many as necessary
(for complex apps with multiple configuration files)
Examples:
# Create a configmap with a single key, "app.conf"
kubectl create configmap my-app-config --from-file=app.conf
# Create a configmap with a single key, "app.conf" but another file
kubectl create configmap my-app-config --from-file=app.conf=app-prod.conf
# Create a configmap with multiple keys (one per file in the config.d directory)
kubectl create configmap my-app-config --from-file=config.d/
In this case, each key/value pair corresponds to a parameter
Key = name of the parameter
Value = value of the parameter
Examples:
# Create a configmap with two keys
kubectl create cm my-app-config \
  --from-literal=foreground=red \
  --from-literal=background=blue
# Create a configmap from a file containing key=val pairs
kubectl create cm my-app-config \
  --from-env-file=app.conf
Configmaps can be exposed as plain files in the filesystem of a container
this is achieved by declaring a volume and mounting it in the container
this is particularly effective for configmaps containing whole files
Configmaps can be exposed as environment variables in the container
this is achieved with the downward API
this is particularly effective for configmaps containing individual parameters
Let's see how to do both!
We are going to deploy HAProxy, a popular load balancer
It expects to find its configuration in a specific place:
/usr/local/etc/haproxy/haproxy.cfg
We will create a ConfigMap holding the configuration file
Then we will mount that ConfigMap in a Pod running HAProxy
In this example, we will deploy two versions of our app:
the "blue" version in the blue
namespace
the "green" version in the green
namespace
In both namespaces, we will have a Deployment and a Service
(both named color)
We want to load balance traffic between both namespaces
(we can't do that with a simple service selector: these don't cross namespaces)
We're going to use the image jpetazzo/color
(it is a simple "HTTP echo" server showing which pod served the request)
We can create each Namespace, Deployment, and Service by hand, or...
kubectl apply -f ~/container.training/k8s/rainbow.yaml
Reminder: Service x in Namespace y is available through:
x.y, x.y.svc, x.y.svc.cluster.local
Since the cluster.local suffix can change, we'll use x.y.svc
kubectl run --rm -it --restart=Never --image=nixery.dev/curl my-test-pod \
  curl color.blue.svc
Here is the file that we will use, k8s/haproxy.cfg:
global
    daemon

defaults
    mode tcp
    timeout connect 5s
    timeout client 50s
    timeout server 50s

listen very-basic-load-balancer
    bind *:80
    server blue color.blue.svc:80
    server green color.green.svc:80
    # Note: the services above must exist,
    # otherwise HAproxy won't start.
Create a ConfigMap named haproxy and holding the configuration file:
kubectl create configmap haproxy --from-file=~/container.training/k8s/haproxy.cfg
Check what our configmap looks like:
kubectl get configmap haproxy -o yaml
Here is k8s/haproxy.yaml, a Pod manifest using that ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  name: haproxy
spec:
  volumes:
  - name: config
    configMap:
      name: haproxy
  containers:
  - name: haproxy
    image: haproxy:1
    volumeMounts:
    - name: config
      mountPath: /usr/local/etc/haproxy/
kubectl apply -f ~/container.training/k8s/haproxy.yaml
kubectl get pod haproxy -o wide
IP=$(kubectl get pod haproxy -o json | jq -r .status.podIP)
If everything went well, we should see a perfect round robin
(one request to blue, one request to green, one request to blue, etc.)
for i in $(seq 10); do
  curl $IP
done
We are going to run a Docker registry on a custom port
By default, the registry listens on port 5000
This can be changed by setting environment variable REGISTRY_HTTP_ADDR
We are going to store the port number in a configmap
Then we will expose that configmap as a container environment variable
Our configmap will have a single key, http.addr:
kubectl create configmap registry --from-literal=http.addr=0.0.0.0:80
Check our configmap:
kubectl get configmap registry -o yaml
We are going to use the following pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: registry
spec:
  containers:
  - name: registry
    image: registry
    env:
    - name: REGISTRY_HTTP_ADDR
      valueFrom:
        configMapKeyRef:
          name: registry
          key: http.addr
kubectl apply -f ~/container.training/k8s/registry.yaml
Check the IP address allocated to the pod:
kubectl get pod registry -o wide
IP=$(kubectl get pod registry -o json | jq -r .status.podIP)
Confirm that the registry is available on port 80:
curl $IP/v2/_catalog
:EN:- Managing application configuration :EN:- Exposing configuration with the downward API :EN:- Exposing configuration with Config Maps
:FR:- Gérer la configuration des applications :FR:- Configuration au travers de la downward API :FR:- Configurer les applications avec des Config Maps k8s/configuration.md
Managing secrets
(automatically generated title slide)
Sometimes our code needs sensitive information:
passwords
API tokens
TLS keys
...
Secrets can be used for that purpose
Secrets and ConfigMaps are very similar
ConfigMaps and Secrets are key-value maps
(a Secret can contain zero, one, or many key-value pairs)
They can both be exposed with the downward API or volumes
They can both be created with YAML or with a CLI command
(kubectl create configmap / kubectl create secret)
They can have different RBAC permissions
(e.g. the default view role can read ConfigMaps but not Secrets)
They indicate a different intent:
"You should use secrets for things which are actually secret like API keys, credentials, etc., and use config map for not-secret configuration data."
"In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."
(Source: the author of both features)
The type indicates which keys must exist in the secret, for instance:
kubernetes.io/tls requires tls.crt and tls.key
kubernetes.io/basic-auth requires username and password
kubernetes.io/ssh-auth requires ssh-privatekey
kubernetes.io/dockerconfigjson requires .dockerconfigjson
kubernetes.io/service-account-token requires token, namespace, ca.crt
(the whole list is in the documentation)
This is merely for our (human) convenience:
“Ah yes, this secret is a ...”
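For example, a typed Secret can be created directly from the CLI (the name and credentials below are made up):
kubectl create secret generic db-creds \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=admin \
  --from-literal=password=t0psecret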
Let's see how to access an image on private registry!
These images are protected by a username + password
(on some registries, it's token + password, but it's the same thing)
To access a private image, we need to:
create a secret
reference that secret in a Pod template
or reference that secret in a ServiceAccount used by a Pod
Let's try to access an image on a private registry!
Create a Deployment using that image:
kubectl create deployment priv \
  --image=docker-registry.enix.io/jpetazzo/private
Check that the Pod won't start:
kubectl get pods --selector=app=priv
kubectl create secret docker-registry enix \
  --docker-server=docker-registry.enix.io \
  --docker-username=reader \
  --docker-password=VmQvqdtXFwXfyy4Jb5DR
Why do we have to specify the registry address?
If we use multiple sets of credentials for different registries, it prevents leaking the credentials of one registry to another registry.
The first way to use a secret is to add it to imagePullSecrets
(in the spec section of a Pod template)
Patch the priv Deployment that we created earlier:
kubectl patch deploy priv --patch='
spec:
  template:
    spec:
      imagePullSecrets:
      - name: enix
'
kubectl get pods --selector=app=priv
We can add the secret to the ServiceAccount
This is convenient to automatically use credentials for all pods
(as long as they're using a specific ServiceAccount, of course)
kubectl patch serviceaccount default --patch='
imagePullSecrets:
- name: enix
'
When shown with e.g. kubectl get secrets -o yaml, secrets are base64-encoded
Likewise, when defining them with YAML, data values are base64-encoded
Example:
kind: Secret
apiVersion: v1
metadata:
  name: pin-codes
data:
  onetwothreefour: MTIzNA==
  zerozerozerozero: MDAwMA==
Keep in mind that this is just encoding, not encryption
It is very easy to automatically extract and decode secrets
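For instance, assuming the pin-codes Secret shown above, a single key can be extracted and decoded in one line:
kubectl get secret pin-codes -o jsonpath='{.data.onetwothreefour}' | base64 -d
# prints: 1234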
stringData
When creating a Secret, it is possible to bypass base64
Just use stringData instead of data:
kind: Secret
apiVersion: v1
metadata:
  name: pin-codes
stringData:
  onetwothreefour: "1234"
  zerozerozerozero: "0000"
It will show up as base64 if you kubectl get -o yaml
No type was specified, so it defaults to Opaque
It is possible to encrypt secrets at rest
This means that secrets will be safe if someone ...
steals our etcd servers
steals our backups
snoops on the (e.g. iSCSI) link between our etcd servers and the SAN
However, starting the API server will now require human intervention
(to provide the decryption keys)
This is only for extremely regulated environments (military, nation states...)
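For reference, this relies on the API server's --encryption-provider-config flag, pointing to a file along these lines (a sketch; generate your own key):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64 of a 32-byte key, e.g. head -c 32 /dev/urandom | base64>
  - identity: {}   # fallback provider, so existing unencrypted data can still be read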
Since Kubernetes 1.19, it is possible to mark a ConfigMap or Secret as immutable
kubectl patch configmap xyz --patch='{"immutable": true}'
This brings performance improvements when using lots of ConfigMaps and Secrets
(lots = tens of thousands)
Once a ConfigMap or Secret has been marked as immutable:
its content can no longer be changed
the immutable field can't be changed back either
:EN:- Handling passwords and tokens safely
:FR:- Manipulation de mots de passe, clés API etc.
Executing batch jobs
(automatically generated title slide)
Deployments are great for stateless web apps
(as well as workers that keep running forever)
Pods are great for one-off execution that we don't care about
(because they don't get automatically restarted if something goes wrong)
Jobs are great for "long" background work
("long" being at least minutes or hours)
CronJobs are great to schedule Jobs at regular intervals
(just like the classic UNIX cron daemon with its crontab files)
A Job will create a Pod
If the Pod fails, the Job will create another one
The Job will keep trying until:
either a Pod succeeds,
or we hit the backoff limit of the Job (default=6)
kubectl create job flipcoin --image=alpine -- sh -c 'exit $(($RANDOM%2))'
Our Job will create a Pod named flipcoin-xxxxx
If the Pod succeeds, the Job stops
If the Pod fails, the Job creates another Pod
kubectl get pods --selector=job-name=flipcoin
We can specify a number of "completions" (default=1)
This indicates how many times the Job must be executed
We can specify the "parallelism" (default=1)
This indicates how many Pods should be running in parallel
These options cannot be specified with kubectl create job
(we have to write our own YAML manifest to use them)
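For reference, a minimal sketch of such a manifest (the name and command are ours, not from the training repo):
apiVersion: batch/v1
kind: Job
metadata:
  name: manywork
spec:
  completions: 10   # the Job must succeed 10 times in total
  parallelism: 3    # run at most 3 Pods at the same time
  template:
    spec:
      containers:
      - name: worker
        image: alpine
        command: ["sh", "-c", "echo working hard && sleep 5"]
      restartPolicy: OnFailure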
A Cron Job is a Job that will be executed at specific intervals
(the name comes from the traditional cronjobs executed by the UNIX crond)
It requires a schedule, represented as five space-separated fields:
minute [0,59]
hour [0,23]
day of the month [1,31]
month of the year [1,12]
day of the week ([0,6] with 0=Sunday)
* means "all valid values"; /N means "every N"
Example: */3 * * * * means "every three minutes"
The website https://crontab.guru/ can help to create cron schedules!
Let's create a simple job to be executed every three minutes
Careful: make sure that the job terminates!
(The Cron Job will not hold if a previous job is still running)
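If overlapping runs are a concern, the CronJob spec also has a concurrencyPolicy field; a hedged sketch, reusing the example we're about to create:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: every3mins
spec:
  schedule: "*/3 * * * *"
  concurrencyPolicy: Forbid   # skip a run if the previous Job is still running
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: sleeper
            image: alpine
            command: ["sleep", "10"]
          restartPolicy: OnFailure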
Create the Cron Job:
kubectl create cronjob every3mins --schedule="*/3 * * * *" \
  --image=alpine -- sleep 10
Check the resource that was created:
kubectl get cronjobs
At the specified schedule, the Cron Job will create a Job
The Job will create a Pod
The Job will make sure that the Pod completes
(re-creating another one if it fails, for instance if its node fails)
kubectl get jobs
(It will take a few minutes before the first job is scheduled.)
It is possible to set a time limit (or deadline) for a job
This is done with the field spec.activeDeadlineSeconds
(by default, it is unlimited)
When the job is older than this time limit, all its pods are terminated
Note that there can also be a spec.activeDeadlineSeconds field in pods!
They can be set independently, and have different effects:
the deadline of the job will stop the entire job
the deadline of the pod will only stop an individual pod
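A minimal sketch of a Job using that field (the values and command are made up):
apiVersion: batch/v1
kind: Job
metadata:
  name: bounded-job
spec:
  activeDeadlineSeconds: 600   # terminate the whole Job (and its Pods) after 10 minutes
  template:
    spec:
      containers:
      - name: worker
        image: alpine
        command: ["sh", "-c", "sleep 3600"]   # stand-in for a long task
      restartPolicy: OnFailure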
:EN:- Running batch and cron jobs :FR:- Tâches périodiques (cron) et traitement par lots (batch)