Kubeadm is the tool we are going to use in this tutorial to build our cluster from scratch following best practices. It provides the kubeadm init and kubeadm join commands to configure a master node and to join worker nodes to the cluster, respectively. Kubeadm only helps us with bootstrapping a cluster, not with provisioning the machines. It also does not set up a networking solution for us, so at the end of this tutorial we are going to install a CNI-compliant networking solution ourselves.
Before we begin:
- We need two machines running a deb/rpm-compatible OS to serve as the master and worker nodes.
- The master node should have at least 2 CPU cores and 2 GB of RAM.
- The worker node should have at least 1 CPU core and 1 GB of RAM.
- Make sure the hostname, MAC address, and product_uuid are unique for each node:
  - You can check the MAC address with the ip link command on Linux.
  - To check the product_uuid, use: cat /sys/class/dmi/id/product_uuid
- Full network connectivity between the machines (public or private networks are both fine).
- You MUST disable swap in order for the kubelet to work properly. To do so (see the sketch after this list):
  - Identify configured swap devices and files with cat /proc/swaps.
  - Turn off all swap devices and files with swapoff -a.
  - Remove any matching reference found in /etc/fstab.
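On a typical Ubuntu machine, the whole swap step can look like this (a sketch; the sed pattern assumes the swap entries in /etc/fstab contain the word "swap"):
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab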
Installing Docker
You need to install Docker on each of your machines. Kubernetes does not always support the latest Docker releases; version 17.03 is the recommended Docker version to work with Kubernetes. To install Docker from Ubuntu's repositories:
apt-get update
apt-get install -y docker.io
Alternatively, to install a pinned 17.03 build from Docker's own repositories on Ubuntu or Debian:
apt-get update
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
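You can verify the installation and see which version actually got installed with:
docker version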
Installing kubeadm, kubelet and kubectl
kubeadm will not install or manage the kubelet or kubectl for us, so we need to install them separately:
- kubeadm: the command that bootstraps our cluster.
- kubelet: the component that runs on every machine in our cluster and is responsible for running pods and containers on that node.
- kubectl: the command-line tool we use to talk to our cluster.
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
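Optionally, hold the packages at their current version so an automatic upgrade doesn't unexpectedly move your cluster to a newer release:
apt-mark hold kubelet kubeadm kubectl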
Configure cgroup driver used by kubelet on Master Node
On the master node we need to make sure Docker and the kubelet are using the same cgroup driver.
To check Docker's cgroup driver:
docker info | grep -i cgroup
In my case the output is:
Cgroup Driver: cgroupfs
To check the kubelet's configuration, use:
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
As you can see, in my case the cgroup driver is not configured. We need to add a new environment line:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
and append $KUBELET_CGROUP_ARGS to the ExecStart line so the flag is actually passed to the kubelet.
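After the edit, the end of 10-kubeadm.conf should look like this (matching the cgroupfs driver Docker reported above):
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS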
And then restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet
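You can check that the kubelet picked up the change with:
systemctl status kubelet
Until kubeadm init runs, it is normal for the kubelet to keep restarting; it is waiting in a crash loop for kubeadm to tell it what to do.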
Now we are ready to initialize our cluster:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
Since we are going to use Calico as our pod network plugin to set up the overlay network between our nodes, we add the required --pod-network-cidr option to the init subcommand; 192.168.0.0/16 is the pool Calico's manifest uses by default.
kubeadm first performs some preflight checks and reports the results as warnings and errors; in the case of errors, it exits. Record the line of the output that contains the join token. We will use it to connect our worker nodes to the cluster.
kubeadm join 159.69.4.157:6443 --token hkra9g.gq030nb08c5t03ui --discovery-token-ca-cert-hash sha256:03b62432bebfc268191c1c398c7076eaed8ae46774295d23e23e3b8aa17ff3fe
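If you lose this line, you can print a fresh join command on the master at any time:
kubeadm token create --print-join-command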
Check out the output for more information. Many of the control-plane components are deployed as static pods by the kubelet we installed earlier.
Copy the config file generated by kubeadm into your home directory so kubectl can use it:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
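To confirm kubectl can reach the new cluster:
kubectl cluster-info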
Let's look at the pods already running in our cluster:
kubectl get pods --all-namespaces
The output should be something like this:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-cloud-kube-master 1/1 Running 0 2m
kube-system kube-apiserver-cloud-kube-master 1/1 Running 0 2m
kube-system kube-controller-manager-cloud-kube-master 1/1 Running 0 2m
kube-system kube-dns-86f4d74b45-5sqbb 0/3 Pending 0 3m
kube-system kube-proxy-bkzdt 1/1 Running 0 3m
kube-system kube-proxy-tpnbf 1/1 Running 0 2m
kube-system kube-scheduler-cloud-kube-master 1/1 Running 0 3m
Some pods are not in the Ready state yet; that's because we still haven't installed the pod network plugin. According to Calico's documentation, we can set up our overlay network with:
kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
Run watch kubectl get pods --all-namespaces
and wait for all the pods to become Ready. watch is a Unix command that runs the given command periodically and prints its output to the terminal.
The final output should be as follows:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-etcd-qtp6g 1/1 Running 0 5m
kube-system calico-kube-controllers-685755779f-fvlxv 1/1 Running 0 5m
kube-system calico-node-4qml2 2/2 Running 0 5m
kube-system etcd-cloud-kube-master 1/1 Running 0 3m
kube-system kube-apiserver-cloud-kube-master 1/1 Running 0 3m
kube-system kube-controller-manager-cloud-kube-master 1/1 Running 0 3m
kube-system kube-dns-86f4d74b45-v7dfq 3/3 Running 0 1h
kube-system kube-proxy-s2cp7 1/1 Running 0 1h
kube-system kube-scheduler-cloud-kube-master 1/1 Running 0 3m
Now we can use the join token we previously recorded to connect the worker node to the cluster. Run this on the worker node, with the token and hash from your own kubeadm init output:
sudo kubeadm join 159.69.4.157:6443 --token hkra9g.gq030nb08c5t03ui --discovery-token-ca-cert-hash sha256:03b62432bebfc268191c1c398c7076eaed8ae46774295d23e23e3b8aa17ff3fe
Then, on the master, run kubectl get nodes to see the nodes in our cluster:
NAME STATUS ROLES AGE VERSION
cloud-kube-master Ready master 2h v1.10.3
cloud-kube-worker Ready <none> 33s v1.10.3
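As an optional sanity check, we can schedule a simple workload and confirm it lands on the worker node (a minimal sketch; on this Kubernetes version kubectl run creates a Deployment named nginx):
kubectl run nginx --image=nginx
kubectl get pods -o wide
The NODE column in the output of the second command should show the pod running on cloud-kube-worker, since kubeadm taints the master so that regular pods are not scheduled there.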