Canary Deployment with Istio

A canary deployment is an upgraded version of an existing deployment, containing all the required application code and dependencies. It is used to test new features and upgrades to see how they handle the production environment before they are rolled out to all users.

Setting up Canary Deployment on Kubernetes
Step 1: Pull Docker Image
*Download the image with:
# docker pull nginx
Status: Downloaded newer image for nginx:latest

*Verify the image is available by listing all local images:
# docker image ls
REPOSITORY        TAG      IMAGE ID     CREATED      SIZE
19_04_react       latest   3457c011a6   2 days ago   1.11GB
centos1_ansible   latest   20af1303e    5 days ago   1.05GB
nginx             latest   62d49f9b     7 days ago   133MB
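Note that the deployment manifests in the following steps reference the nginx:alpine image, so you may want to pull that tag as well:
# docker pull nginx:alpine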

Step 2: Create the Kubernetes Deployment
Create the deployment definition using a yaml file. Use a text editor of your choice and provide a name for the file. Add the following content to the file:
# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        version: "1.0"
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        resources:
          limits:
            memory: "128Mi"
            cpu: "50m"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: index-html
      volumes:
      - name: index-html
        hostPath:
          path: /Users/sofija/Documents/nginx/v1
*Save and exit the file.
*Create the deployment by running the command below:
# kubectl apply -f nginx-deployment.yaml

*Check whether the pods were deployed successfully with:
# kubectl get pods -o wide
The output should display three running Nginx pods.
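The deployment mounts a hostPath volume into the Nginx web root, so the directory on the host needs to contain the version 1 page that the service will display later. The exact markup is up to you; a minimal example:
# echo "<h1>Hello World - Version 1</h1>" > /Users/sofija/Documents/nginx/v1/index.html
Note that hostPath refers to the filesystem of the node running the pod, so this approach works best on a local, single-node cluster.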

Step 3: Create the Service
The next step is to create a service definition for the Kubernetes cluster. The service will route requests to the specified pods.
Create a new yaml file with the following content:
# cat nginx-deployment.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: "1.0"
  ports:
  - port: 8888
    targetPort: 80
*Save and exit the service file.
*Let's create the service:
# kubectl apply -f nginx-deployment.service.yaml

Step 4: Check First Version of Cluster
To verify the service is running, open a web browser, and navigate to the IP and port number defined in the service file.
To see the external IP address of the service, use the command:
# kubectl get service
If you are running Kubernetes locally, use localhost as the IP.
The browser should display a Hello World message from version 1.
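You can also test the endpoint from the command line. Assuming the service is reachable on localhost at the port defined above, a quick check looks like this:
# curl http://localhost:8888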

Step 5: Create a Canary Deployment
With version 1 of the application in place, we deploy version 2, the canary deployment.
*Start by creating the yaml file for the canary deployment.
# cat nginx-canary-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        version: "2.0"
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        resources:
          limits:
            memory: "128Mi"
            cpu: "50m"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: index-html
      volumes:
      - name: index-html
        hostPath:
          path: /Users/sofija/Documents/nginx/v2
*Save and exit the file.
*Create the canary deployment with the command below:
# kubectl apply -f nginx-canary-deployment.yaml

*Verify that the three additional pods were deployed successfully:
# kubectl get pods -o wide
The output should display the Nginx canary deployment pods, along with the original Nginx pods.
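As with version 1, the v2 hostPath directory needs its own index page (again, the content is just an example). The version label also makes it easy to tell the two generations of pods apart:
# echo "<h1>Hello World - Version 2</h1>" > /Users/sofija/Documents/nginx/v2/index.html
# kubectl get pods -L version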

Step 6: Run the Canary Deployment
To test the updated pods, we need to modify the service file so that part of the traffic is directed to version "2.0".
*To do so, open the service yaml file and remove the line version: "1.0" from the selector, so the service matches the pods of both versions:
# cat nginx-deployment.service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 8888
    targetPort: 80
*Save the changes and exit the file.
*Apply the updated service with the command below:
# kubectl apply -f nginx-deployment.service.yaml
The traffic is now split between version 1 and version 2 pods. If you refresh the web page a few times, you will see different results depending on which pod the service routes your request to.
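The split can also be observed from the command line. Assuming the service is reachable on localhost, a simple loop should return a mix of version 1 and version 2 responses:
# for i in $(seq 1 10); do curl -s http://localhost:8888; done
With three pods behind each version, roughly half of the requests should hit each version.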

Alternatively, since we are going to deploy the service in an Istio-enabled cluster, all we need to do is set a routing rule to control the traffic distribution. For example, if we want to send 10% of the traffic to the canary, we could use kubectl to apply a routing rule like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-service
spec:
  hosts:
  - nginx-service
  http:
  - route:
    - destination:
        host: nginx-service
        subset: v1
      weight: 90
    - destination:
        host: nginx-service
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginx-service
spec:
  host: nginx-service
  subsets:
  - name: v1
    labels:
      version: "1.0"
  - name: v2
    labels:
      version: "2.0"
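Assuming the two rules above are saved together in a file, say nginx-canary-routing.yaml (the file name is arbitrary), they are applied like any other Kubernetes resource:
# kubectl apply -f nginx-canary-routing.yaml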
Istio routing rules can also route traffic based on specific criteria, allowing more sophisticated canary deployment scenarios. Say, for example, instead of exposing the canary to an arbitrary percentage of users, we want to try it out on internal users, maybe even just a percentage of them. The following rule sends 50% of the traffic from users at some-company-name.com to the canary version, leaving all other users unaffected:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-service
spec:
  hosts:
  - nginx-service
  http:
  - match:
    - headers:
        cookie:
          regex: "^(.*?;)?(email=[^;]*@some-company-name.com)(;.*)?$"
    route:
    - destination:
        host: nginx-service
        subset: v1
      weight: 50
    - destination:
        host: nginx-service
        subset: v2
      weight: 50
  - route:
    - destination:
        host: nginx-service
        subset: v1
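To verify the match rule, send a request that carries the expected cookie. Note that the VirtualService only applies to traffic entering through an Istio gateway or originating inside the mesh; assuming that is the case, a hypothetical check could look like this:
# curl -s --cookie "email=jane@some-company-name.com" http://localhost:8888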
Any autoscalers bound to the two versioned Deployments will scale the replicas automatically, but that will not affect the traffic distribution: Istio enforces the configured weights regardless of how many replicas serve each version.
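For example, an autoscaler could be bound to each deployment with kubectl (a sketch; the CPU threshold and replica bounds are arbitrary):
# kubectl autoscale deployment nginx --cpu-percent=50 --min=3 --max=10
# kubectl autoscale deployment nginx-canary-deployment --cpu-percent=50 --min=3 --max=10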




