A Kubernetes cluster consists of master and worker nodes, alongside which etcd manages the cluster state.

There can be one or more master and worker nodes, but only one master node is the leader: it performs all the tasks while the other masters follow it.

etcd is a distributed key-value store to which all the master nodes connect. It can be set up as its own cluster or installed alongside the master nodes.
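The key-value model described above can be sketched in a few lines. This is purely illustrative: real etcd is distributed, replicated via Raft, and accessed over gRPC, while this toy store is just an in-memory dict. The registry-style key path is a hypothetical example of how state might be keyed.

```python
# Toy sketch of a key-value store holding cluster state (NOT real etcd:
# no replication, no consensus, no watch API -- just the data model).
class ClusterStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # Persist the desired state of an object under its key.
        self._data[key] = value

    def get(self, key):
        # Return the stored state, or None if the key is absent.
        return self._data.get(key)

store = ClusterStore()
# The API server would save object state under registry-style keys like this.
store.put("/registry/pods/default/nginx", {"phase": "Pending"})
print(store.get("/registry/pods/default/nginx"))  # → {'phase': 'Pending'}
```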

Master Node

The master node is responsible for managing the Kubernetes cluster, and it is the entry point for all the administrative commands we send through the API or the kubectl CLI. A master node consists of four main components: the API server, the scheduler, the controller manager, and etcd (described above).

The API server is responsible for receiving and acting on administrative commands; kubectl sends its commands directly to the API server. The API server validates and executes each command and saves the resulting state in etcd.

The scheduler assigns work to the worker nodes in the form of pods. It takes into account the resource consumption of pods and any constraints the admin/operator may have set, such as scheduling a pod only on nodes that have the label disk=ssd.
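The label-and-resource filtering step can be sketched as follows. This is a simplified illustration, not the real kube-scheduler algorithm (which runs many filtering and scoring plugins); the node/pod dictionary shapes and the millicore field are hypothetical.

```python
# Sketch of scheduler-style node filtering (illustrative only).
def feasible_nodes(pod, nodes):
    """Return the names of nodes that satisfy the pod's labels and CPU request."""
    matches = []
    for node in nodes:
        # Constraint check: every required label must be present, e.g. disk=ssd.
        if any(node["labels"].get(k) != v for k, v in pod["node_selector"].items()):
            continue
        # Resource check: the node must have enough free CPU (millicores).
        if node["free_cpu_m"] < pod["cpu_m"]:
            continue
        matches.append(node["name"])
    return matches

nodes = [
    {"name": "node-a", "labels": {"disk": "ssd"}, "free_cpu_m": 500},
    {"name": "node-b", "labels": {"disk": "hdd"}, "free_cpu_m": 2000},
]
pod = {"node_selector": {"disk": "ssd"}, "cpu_m": 250}
print(feasible_nodes(pod, nodes))  # → ['node-a']
```

A pod with no selector would pass the label check on every node, so only the resource check would narrow the list.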

The controller manager runs infinite control loops that constantly monitor, through the API server, the objects they are in charge of, making sure those objects are in their desired state. If the current state of an object does not match its desired state, the control loop takes corrective steps until the two states are the same.
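The desired-versus-current reconciliation idea can be sketched in a few lines. This is a conceptual illustration only; real controllers react to watch events through the API server and use work queues rather than a simple loop like this, and the replica naming is made up.

```python
# Sketch of one pass of a reconciliation loop (illustrative only).
def reconcile(desired, current):
    """Make the list of replicas match the desired count."""
    while len(current) < desired:
        # Corrective step: current state is short, so create a replica.
        current.append(f"replica-{len(current)}")
    while len(current) > desired:
        # Corrective step: too many replicas, so delete one.
        current.pop()
    return current

print(reconcile(3, ["replica-0"]))  # → ['replica-0', 'replica-1', 'replica-2']
```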

Worker Node

A worker node is a machine that runs applications inside pods. Pods are the smallest deployable units of work that can be scheduled. Each pod is a logical unit consisting of one or more containers that are scheduled atomically, at the same time.

A worker node has the following components:

The container runtime is in charge of running and managing containers. There are a few options to choose from: containerd (which Docker is based on), rkt, and LXD.

The kubelet is an agent running on each worker node. It communicates with the master through the API server, receives pod definitions, and deploys the containers they describe. It also checks the availability and readiness of the containers on its node.

The kubelet uses the Container Runtime Interface (CRI) to communicate with the container runtime. CRI makes it possible to plug in any container runtime that implements it.
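The pluggability that CRI provides can be sketched as programming against an interface rather than a concrete runtime. Note the real CRI is a gRPC API with image and runtime services, not a Python class; the class and method names below are invented for illustration.

```python
from abc import ABC, abstractmethod

# Sketch of the pluggable-runtime idea behind CRI (illustrative only).
class ContainerRuntime(ABC):
    """Any runtime implementing this interface can be plugged into the kubelet."""
    @abstractmethod
    def run(self, image: str) -> str: ...

class FakeContainerd(ContainerRuntime):
    # A stand-in runtime; a real one would actually start a container.
    def run(self, image: str) -> str:
        return f"containerd started {image}"

def kubelet_start_pod(runtime: ContainerRuntime, images):
    # The kubelet only speaks the interface, never a specific runtime.
    return [runtime.run(img) for img in images]

print(kubelet_start_pod(FakeContainerd(), ["nginx:1.25"]))
```

Swapping in a different runtime means writing another subclass; the kubelet-side code does not change.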

kube-proxy constantly watches the API server for newly created services or changes to existing ones, and reflects them by updating the iptables rules of the node it runs on.
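The watch-and-update behavior can be sketched as folding a stream of watch events into a local rule table. This is a conceptual sketch only: real kube-proxy programs iptables (or IPVS) rules, not a Python dict, and the event shapes below are simplified stand-ins for API watch events.

```python
# Sketch of kube-proxy folding service watch events into a rule table
# (illustrative only; real rules live in iptables/IPVS on the node).
def apply_service_events(rules, events):
    """Update a service -> endpoints table from a stream of watch events."""
    for event in events:
        if event["type"] in ("ADDED", "MODIFIED"):
            # New or changed service: (re)write its forwarding entries.
            rules[event["service"]] = event["endpoints"]
        elif event["type"] == "DELETED":
            # Service removed: drop its entries, if any.
            rules.pop(event["service"], None)
    return rules

events = [
    {"type": "ADDED", "service": "web", "endpoints": ["10.0.0.4:8080"]},
    {"type": "MODIFIED", "service": "web",
     "endpoints": ["10.0.0.4:8080", "10.0.0.7:8080"]},
]
print(apply_service_events({}, events))
```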