Architecture of Kubernetes

The central component of Kubernetes is the cluster. A cluster is made up of many virtual or physical machines that each serve a specialized function, either as a master or as a node. Each node hosts groups of one or more containers (which contain your applications), and the master communicates with nodes about when to create or destroy containers. At the same time, it tells nodes how to re-route traffic based on new container placements.


The Kubernetes master

The Kubernetes master is the access point (or the control plane) from which administrators and other users interact with the cluster to manage the scheduling and deployment of containers. A cluster will always have at least one master but may have more depending on the cluster’s replication pattern.


etcd:

The master stores the state and configuration data for the entire cluster in etcd, a persistent and distributed key-value data store. Nodes consult this state, by way of the API server, to learn how to maintain the configurations of the containers they’re running. You can run etcd on the Kubernetes master or in a standalone configuration.


Kube-apiserver:

Masters communicate with the rest of the cluster through the Kube-apiserver, the main access point to the control plane. For example, the Kube-apiserver ensures that the configurations stored in etcd match the configurations of the containers deployed in the cluster.


Kube-controller-manager:

The Kube-controller-manager handles the control loops that manage the state of the cluster via the Kubernetes API server. The controllers for deployments, replicas, and nodes, among others, run inside this service. For example, the node controller is responsible for registering a node and monitoring its health throughout its lifecycle.
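
As a sketch of what one of these control loops maintains, the Deployment below (the names and image are hypothetical) declares that three replicas of a pod should exist; the deployment and replica set controllers run by the Kube-controller-manager then work continuously to keep three matching pods running:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-deployment          # hypothetical name
    spec:
      replicas: 3                   # desired state the controllers reconcile toward
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25       # example image
            ports:
            - containerPort: 80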


Kube-scheduler:

Workloads are assigned to nodes in the cluster by the Kube-scheduler. This service keeps track of the capacity and resources of each node and places work on nodes whose available resources can accommodate it.
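
For example, a pod can declare the resources it needs, and the Kube-scheduler will only place it on a node with enough unreserved capacity. A minimal sketch, with an illustrative name, image, and values:

    apiVersion: v1
    kind: Pod
    metadata:
      name: batch-worker            # hypothetical name
    spec:
      containers:
      - name: worker
        image: busybox:1.36         # example image
        command: ["sleep", "3600"]
        resources:
          requests:                 # the scheduler considers these when choosing a node
            cpu: "500m"
            memory: "256Mi"
          limits:
            cpu: "1"
            memory: "512Mi"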


Cloud-controller-manager:

The cloud-controller-manager is a service running in Kubernetes that helps keep it “cloud-agnostic.” The cloud-controller-manager serves as an abstraction layer between the APIs and tools of a cloud provider (for example, storage volumes or load balancers) and their representational counterparts in Kubernetes.
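
For instance, when you create a Service of type LoadBalancer, the cloud-controller-manager is the component that translates that Kubernetes object into calls to the provider's load-balancer API. A sketch with hypothetical names:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-lb                  # hypothetical name
    spec:
      type: LoadBalancer            # provisioned through the cloud provider's API
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 80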


Node:

Nodes are the worker machines, virtual or physical, that run your workloads. All nodes in a Kubernetes cluster must be configured with a container runtime, which is typically Docker. The container runtime starts and manages the containers as they’re deployed to nodes in the cluster by Kubernetes. Your applications (web servers, databases, API servers, etc.) run inside the containers.


Kubelet:

Each Kubernetes node runs an agent process called a kubelet that is responsible for managing the state of the node: starting, stopping, and maintaining application containers based on instructions from the control plane. The kubelet collects performance and health information from the node, pods, and containers it runs and shares that information with the control plane to help it make scheduling decisions.
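
One place this shows up in practice is health probes: the checks declared on a container are run by the kubelet, which restarts the container or updates its readiness based on the results. A minimal sketch (the name, image, path, and port are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: probed-app              # hypothetical name
    spec:
      containers:
      - name: app
        image: nginx:1.25           # example image
        livenessProbe:              # the kubelet restarts the container if this check fails
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:             # the kubelet reports readiness back to the control plane
          httpGet:
            path: /
            port: 80
          periodSeconds: 5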


Kube-proxy:

The Kube-proxy is a network proxy that runs on each node in the cluster. It routes traffic addressed to a service toward the pods that back it, acting as a simple load balancer for services on the node.
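
For example, with a NodePort service like the sketch below (labels and ports are illustrative), the Kube-proxy on every node forwards traffic arriving on that node port to one of the pods matching the selector:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-nodeport            # hypothetical name
    spec:
      type: NodePort
      selector:
        app: web
      ports:
      - port: 80                    # cluster-internal service port
        targetPort: 80              # container port on the backing pods
        nodePort: 30080             # kube-proxy forwards this port on every node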


Pod:

The basic scheduling unit is a pod, which consists of one or more containers guaranteed to be co-located on the host machine and that can share resources. Each pod is assigned a unique IP address within the cluster, allowing the application to use ports without conflict.


Pod Spec:

You describe the desired state of the containers in a pod through a YAML or JSON object called a Pod Spec. These objects are passed to the kubelet through the API server.
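
A minimal Pod Spec in YAML might look like the following sketch (the name, label, and image are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-pod               # hypothetical name
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25           # example image
        ports:
        - containerPort: 80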


Volumes:

A pod can define one or more volumes, such as a local disk or network disk, and expose them to the containers in the pod, which allows different containers to share storage space. For example, volumes can be used when one container downloads content and another container uploads that content somewhere else.
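
A sketch of that download/upload pattern (images, commands, and paths are hypothetical): both containers mount the same emptyDir volume, so files written by one are visible to the other:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-volume-pod       # hypothetical name
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}                # scratch space that lives as long as the pod
      containers:
      - name: downloader
        image: busybox:1.36         # example image
        command: ["sh", "-c", "wget -O /data/file http://example.com && sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /data
      - name: uploader
        image: busybox:1.36         # example image
        command: ["sh", "-c", "sleep 3600"]   # would read /data and upload it elsewhere
        volumeMounts:
        - name: shared-data
          mountPath: /data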


Service:

Since containers inside pods are often ephemeral, Kubernetes offers a type of load balancer, called a service, to simplify sending requests to a group of pods. A service targets a logical set of pods selected based on labels. By default, services can be accessed only from within the cluster, but you can enable public access to them as well if you want them to receive requests from outside the cluster.
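
A minimal sketch of a Service (the name, label, and ports are illustrative): it selects every pod labeled app: hello and load-balances requests across them. Changing the type field (for example, to NodePort or LoadBalancer) is what exposes it outside the cluster:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-service           # hypothetical name
    spec:
      type: ClusterIP               # reachable only from inside the cluster
      selector:
        app: hello                  # matches pods carrying this label
      ports:
      - port: 80                    # port the service listens on
        targetPort: 80              # port on the selected pods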
