Rolling Update in Kubernetes

Rolling Update
This feature gradually brings down the old ReplicaSet (or ReplicationController) and brings up the new one. The deployment is slower as a result, but there is no downtime: at all times during the process, a few old Pods and a few new Pods remain available.

A Deployment controller provides declarative updates for Pods and ReplicaSets. We describe the desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. We can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

The important bit in the description above is “at a controlled rate”: it means that a group of Pods can be updated one by one, two by two, or by removing them all at once and spinning up new ones; the choice is yours. The exact behavior is configured by a snippet similar to this one:
# cat update.yml
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
# metadata, selector and template are omitted for brevity

*type: Can be either Recreate or RollingUpdate. In the first case, Kubernetes terminates all the Pods and then starts the updated ones (a Recreate example is sketched after this list). This is fine for a development environment but doesn’t provide zero downtime. Alternatively, the value RollingUpdate configures Kubernetes to use the maxSurge and maxUnavailable parameters.

*maxSurge: Defines how many additional Pods can be started, compared to the desired number of replicas. Can be an absolute number or a percentage.

*maxUnavailable: Defines how many Pods can be unavailable out of the desired number of replicas. Can be an absolute number or a percentage.
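
For comparison, here is a minimal sketch of the Recreate strategy mentioned in the first point; the file name recreate.yml is hypothetical, and metadata, selector and template are again omitted for brevity.

# cat recreate.yml
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 3
  strategy:
    type: Recreate   # all old Pods are terminated before the new ones are started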
Let’s illustrate the process with some examples

Deploy by adding a Pod, then removing an old one
In the first configuration, we allow a single additional Pod above the desired number of 3 (maxSurge=1), and the number of available Pods cannot go below it (maxUnavailable=0). With this configuration, Kubernetes spins up an additional Pod, then stops an “old” one. If there is another Node available to host the new Pod, the system can handle the same workload during the deployment, at the cost of extra infrastructure. Otherwise, the Pod is deployed on an already used Node, and it cannibalizes resources from the other Pods hosted on that Node.
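
As a sketch, assuming the same three-replica Deployment as in update.yml, the strategy block for this first example would look like this (only the strategy section is shown):

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # one extra Pod may be created above the 3 desired replicas
      maxUnavailable: 0  # the number of available Pods never drops below 3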

Deploy by removing a Pod, then adding a new one
In the next configuration, we allow no additional Pod (maxSurge=0) while allowing a single Pod at a time to be unavailable (maxUnavailable=1). In that case, Kubernetes first stops a Pod before starting up a new one. This is exactly the behavior configured by the update.yml snippet shown earlier. The main benefit of this approach is that the infrastructure doesn’t need to scale up, keeping costs under control. On the downside, the maximum workload the application can handle during the rollout is lower.

Deploy by updating Pods as fast as possible
Finally, the last configuration allows one additional Pod (maxSurge=1) as well as one unavailable Pod (maxUnavailable=1) at any moment in time. This configuration drastically reduces the time needed to switch between application versions, but combines the downsides of both the previous approaches.
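
Again as a sketch under the same assumptions, the strategy block for this last example:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # one extra Pod may be created during the rollout
      maxUnavailable: 1  # one Pod may be unavailable at the same time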

