Blue/Green Deployment with Istio on Kubernetes
Istio is a service mesh designed to make communication among microservices reliable, transparent, and secure. Istio intercepts the external and internal traffic targeting the services deployed in container platforms such as Kubernetes.
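For Istio to intercept and manage this traffic, the application pods need the Envoy sidecar injected. A minimal way to enable this, assuming the workloads run in the default namespace and automatic sidecar injection is available in the Istio installation:
# kubectl label namespace default istio-injection=enabled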
The Blue Deployment
A Kubernetes Deployment specifies a group of instances of an application. Behind the scenes, it creates a ReplicaSet that is responsible for keeping the specified number of instances up and running.
We can create our "blue" deployment by saving the following YAML to a file named blue.yaml.
# cat blue.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1.10
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
      version: "1.10"
  template:
    metadata:
      labels:
        run: nginx
        version: "1.10"
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - name: http
          containerPort: 80
We can then create the deployment using kubectl:
# kubectl apply -f blue.yaml
deployment "nginx-1.10" created
Once we have a deployment, we can provide a way to access its instances by creating a Service. Services are decoupled from deployments, which means you don't explicitly point a service at a deployment. Instead, you specify a label selector that is used to list the pods that make up the service. When using deployments, this is typically set up so that it matches the pods of a deployment. In this case, we have two labels, run=nginx and version=1.10, and we will use them as the label selector for the service below. Save this to service.yaml.
# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: nginx
    version: "1.10"
Creating the service exposes it outside the cluster on a port of every node (a NodePort service).
# kubectl apply -f service.yaml
service "nginx" created
You can test that the service is accessible and get the version.
# NODE_IP=$(minikube ip)
# NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
# curl -s http://$NODE_IP:$NODE_PORT/version | grep nginx
# kubectl get services
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
kubernetes             ClusterIP      10.96.0.1        <none>        443/TCP                        22h
my-nginx               ClusterIP      10.100.110.46    <none>        80/TCP                         18h
my-nginx-web-service   NodePort       10.97.35.15      <none>        80:32683/TCP                   22h
nginx                  NodePort       10.105.211.91    <none>        8080:32715/TCP,443:32580/TCP   5m9s
nginxservice           LoadBalancer   10.105.239.232   10.9.54.100   82:31691/TCP                   21h
Create the Green Deployment
The "green" deployment is created by deploying the next version of the application as an entirely new Deployment with a different version label. Note that this label does not match the Service's selector yet, so no requests will be sent to the pods of this Deployment.
# cat green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1.11
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
      version: "1.11"
  template:
    metadata:
      labels:
        run: nginx
        version: "1.11"
    spec:
      containers:
      - name: nginx
        image: nginx:1.11
        ports:
        - name: http
          containerPort: 80
We can then create the new deployment:
# kubectl apply -f green.yaml
deployment "nginx-1.11" created
To cut over to the "green" deployment, we will update the selector for the service. Edit service.yaml and change the selector version to "1.11"; this makes the service match the pods of the "green" deployment.
# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: nginx
    version: "1.11"
This apply will update the existing nginx service in place:
# kubectl apply -f service.yaml
service "nginx" configured
Configuring Blue/Green Deployments with Istio
Flipping the Service selector is an all-or-nothing switch. With Istio we can instead route traffic between the two versions by weight, using a Gateway, a DestinationRule, and a VirtualService.
Gateway
An Istio Gateway describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, etc. In the below definition, we are pointing the gateway to the default Ingress Gateway created by Istio during the installation.
Let’s create the gateway as a Kubernetes object
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
Destination Rule
An Istio DestinationRule defines policies that apply to traffic intended for a service after routing has occurred. Notice how each subset is selected by the version label set on the pods of the corresponding Deployment.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
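For these subsets to resolve, the pods behind the myapp service must carry matching version labels. A hypothetical pod-template fragment for the "blue" myapp Deployment is sketched below (the myapp/app names are illustrative; the "green" Deployment would carry version: v2 instead):
  template:
    metadata:
      labels:
        app: myapp
        version: v1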
Virtual Service
A VirtualService defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for traffic of a specific protocol; if the traffic matches, it is sent to a named destination, in this case a subset of a service. In the definition below, we declare a weight of 50 for both v1 and v2, so traffic will be evenly distributed between them.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 50
    - destination:
        host: myapp
        subset: v2
      weight: 50
We can define all of the above in one YAML file and apply it with kubectl.
# kubectl apply -f app-gateway.yaml
Now, let’s go ahead and access the service. Since we are using Minikube with NodePort, we need to get the exact port on which the Ingress Gateway is running.
Run the below commands to access the Ingress Host (Minikube) and Ingress port.
# export INGRESS_HOST=$(minikube ip)
# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
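With those variables set, the application can be reached through the Istio ingress gateway:
# curl -s http://$INGRESS_HOST:$INGRESS_PORT/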
If we access the URI from the browser, we will see the traffic getting routed evenly between blue and green pages.
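To complete the blue/green cutover with Istio, the same VirtualService can be re-applied with the weights shifted entirely to the new version; only the http route section changes, for example:
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 0
    - destination:
        host: myapp
        subset: v2
      weight: 100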