DaemonSet
A DaemonSet is another controller that manages pods, like Deployments, ReplicaSets, and StatefulSets. It was created for one particular purpose: ensuring that the pods it manages run on all (or a selected set of) the cluster nodes. As soon as a node joins the cluster, the DaemonSet ensures that the necessary pods are running on it. When the node leaves the cluster, those pods are garbage collected.
DaemonSets are used in Kubernetes when you need to run one or more pods on all (or a subset of) the nodes in a cluster. The typical use case for a DaemonSet is logging and monitoring for the hosts. For example, each node needs a service (daemon) that collects health or log data and pushes it to a central system or database (such as the ELK stack). DaemonSets can be deployed to specific nodes either by the nodes' user-defined labels or by using values provided by Kubernetes, such as the node hostname.
Now that we understand DaemonSets, here are some examples of why and how to use them:
- To run a cluster storage daemon on each node, such as glusterd or ceph
- To run a log collection daemon on each node, such as fluentd or logstash
- To run a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, or the Datadog agent
- As your use case gets more complex, you can deploy multiple DaemonSets for one kind of daemon, using different flags or different memory and CPU requests for different hardware types (see the sketch after this list)
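As a sketch of that last point, the pod template of such a DaemonSet could pin itself to a hardware class with a user-defined node label and request resources sized for it. The label, image, and resource values below are purely illustrative:
spec:
  template:
    spec:
      nodeSelector:
        hardware-type: high-memory   # example user-defined node label
      containers:
      - name: log-agent              # example daemon container
        image: fluentd
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"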
Creating our first DaemonSet
# cat daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prometheus-daemonset
spec:
  selector:
    matchLabels:
      tier: monitoring
      name: prometheus-exporter
  template:
    metadata:
      labels:
        tier: monitoring
        name: prometheus-exporter
    spec:
      containers:
      - name: prometheus
        image: prom/node-exporter
        ports:
        - containerPort: 80
Save and exit the YAML file, then run the kubectl apply command:
# kubectl apply -f daemonset.yml
daemonset.apps/prometheus-daemonset created
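Optionally, you can watch the rollout until all the pods are ready (output not shown here):
# kubectl rollout status daemonset/prometheus-daemonset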
Getting the basic details about the DaemonSet:
# kubectl get daemonsets/prometheus-daemonset
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
prometheus-daemonset 2 2 2 2 2
# kubectl describe daemonset/prometheus-daemonset
Name: prometheus-daemonset
Selector: name=prometheus-exporter,tier=monitoring
Node-Selector:
Labels:
Annotations: deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Scheduled with Up-to-date Pods: 2
Number of Nodes Scheduled with Available Pods: 2
Number of Nodes Misscheduled: 0
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: name=prometheus-exporter
tier=monitoring
Containers:
prometheus:
Image: prom/node-exporter
Port: 80/TCP
Host Port: 0/TCP
Environment:
Mounts:
Volumes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m35s daemonset-controller Created pod: prometheus-daemonset-jz2fr
Normal SuccessfulCreate 2m35s daemonset-controller Created pod: prometheus-daemonset-mq4lj
Getting the pods in the DaemonSet:
# kubectl get pods -lname=prometheus-exporter
NAME READY STATUS RESTARTS AGE
prometheus-daemonset-jz2fr 1/1 Running 0 3m38s
prometheus-daemonset-mq4lj 1/1 Running 0 3m37s
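Because a DaemonSet runs one pod per eligible node, adding -o wide shows which node each pod is running on (output omitted here):
# kubectl get pods -lname=prometheus-exporter -o wide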
Deleting a DaemonSet
# kubectl delete -f daemonset.yml
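The same DaemonSet can also be deleted by name:
# kubectl delete daemonset prometheus-daemonset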
Restrict DaemonSets To Run On Specific Nodes
By default, a DaemonSet schedules its pods on all the cluster nodes. But sometimes we may need to run specific processes only on specific nodes. For example, nodes that host database pods need different monitoring or logging rules. DaemonSets allow you to select which nodes you want to run the pods on. We can do this by using nodeSelector. With nodeSelector, we can select nodes by their labels the same way we do with pods. However, Kubernetes also allows you to select nodes based on some already-defined node properties. For example, kubernetes.io/hostname matches the node name. Our example cluster has two worker nodes, so we can modify the DaemonSet definition to run only on the first one. Let's first get the node names:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ciskubemaster.zippyops.com Ready control-plane,master 24d v1.20.4
ciskubenode1.zippyops.com Ready
ciskubenode2.zippyops.com Ready
We need to add the below entry under the pod template's spec in the above YAML file, using the first worker node's name as the value:
nodeSelector:
  kubernetes.io/hostname: ciskubenode1.zippyops.com
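For clarity, here is a sketch of how the relevant part of the manifest would look after this change; the nodeSelector field sits under spec.template.spec, and the hostname value comes from the node list above:
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: ciskubenode1.zippyops.com
      containers:
      - name: prometheus
        image: prom/node-exporter
        ports:
        - containerPort: 80
Alternatively, you could target nodes by a user-defined label, for example by labeling the node first (the label key and value here are only illustrative):
# kubectl label nodes ciskubenode1.zippyops.com monitoring=enabled
and then using monitoring: enabled in the nodeSelector instead of the hostname.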
How To Reach a DaemonSet Pod
There are several design patterns for communicating with DaemonSet pods in the cluster:
- The Push pattern: pods do not receive traffic. Instead, they push data to other services like ElasticSearch, for example.
- NodeIP and known port pattern: in this design, pods use the hostPort to acquire the node’s IP address. Clients can use the node IP and the known port (for example, port 80 if the DaemonSet has a web server) to connect to the pod.
- DNS pattern: create a Headless Service that selects the DaemonSet pods. Use Endpoints to discover the DaemonSet pods (see the sketch after this list).
- Service pattern: create a traditional service that selects the DaemonSet pods. Use NodePort to expose the pods using a random port. The drawback of this approach is that there is no way to choose a specific pod.
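As an illustration of the DNS pattern, a minimal headless Service that selects the DaemonSet pods above might look like the following sketch; the Service name is an assumption, and the selector simply reuses the pod labels from our DaemonSet:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-exporter        # hypothetical Service name
spec:
  clusterIP: None                  # headless: DNS resolves to the individual pod IPs
  selector:
    tier: monitoring
    name: prometheus-exporter
  ports:
  - port: 80
    targetPort: 80
Clients can then resolve the Service DNS name (or read its Endpoints) to discover every DaemonSet pod individually.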