Kubernetes Cluster Installation

Shardul | Apr 2, 2025

Kubernetes (K8s) is an open-source platform designed to automate deploying, scaling, and managing containerized applications. Instead of running apps directly on servers or VMs, Kubernetes lets you define how apps should run, and it manages them for you, like a smart app babysitter that keeps everything running and balanced.

Prerequisites

This guide assumes a freshly installed CentOS 8 system on either physical hardware or a virtual machine.

  • A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, as well as for distributions without a package manager.
  • 2 GB or more of RAM per machine (any less will leave little room for your apps).
  • 2 CPUs or more for control plane machines.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node (see the verification snippet after this list).
  • The required ports are open on your machines.
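
A quick way to check the uniqueness and port requirements above. The nc utility comes from the nmap-ncat package on CentOS 8, and 6443 (the default API server port) is used here as an example:

hostname
ip link                                # compare MAC addresses across nodes
cat /sys/class/dmi/id/product_uuid     # must be unique per node
nc -zv -w 2 127.0.0.1 6443             # probe a required port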

Operating System Requirements

In order to reliably run Kubernetes, a few changes are needed to the base CentOS 8 install. The following prerequisite steps must be applied to all nodes in your cluster.

Disable SELinux

First, you will need to disable SELinux as this generally conflicts with Kubernetes:

setenforce 0 && \
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Disable swap

Swap must be disabled for Kubernetes to run effectively. Swap is typically enabled in a default CentOS 8 installation where automatic partitioning has been selected. To disable swap:

swapoff -a && \
sed -e '/swap/s/^/#/g' -i /etc/fstab

Disable firewalld

In order to properly communicate with other devices within the cluster, firewalld must be disabled:

systemctl disable --now firewalld

Disable root login over SSH

Optionally, disable root login over SSH.

Important: Although this step is optional, it is highly recommended for security reasons.

sed -i --follow-symlinks 's/#PermitRootLogin yes/PermitRootLogin no/g' /etc/ssh/sshd_config
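
After editing the config, restart sshd and confirm the effective setting (sshd -T prints the live configuration):

systemctl restart sshd && \
sshd -T | grep -i permitrootlogin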

Use iptables for Bridged Network Traffic

Note: This step is only necessary for EL7 and EL8 hosts.

Ensure that bridged network traffic goes through iptables.

cat <<EOF > /etc/sysctl.d/iptables-bridge.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
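
Once the br_netfilter module is loaded (that happens in the containerd section below), you can confirm the values:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables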

Enable routing

cat <<EOF > /etc/sysctl.d/ip-forward.conf
net.ipv4.ip_forward = 1
EOF
sysctl --system
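
Verify that forwarding is now enabled:

sysctl net.ipv4.ip_forward    # should print: net.ipv4.ip_forward = 1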

Installing Containerd

Complete the following steps to install and configure containerd for your cluster.

Load Kernel Modules

Specify and load the following kernel module dependencies:

cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe overlay && \
modprobe br_netfilter
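
Confirm that both modules are loaded:

lsmod | grep -e overlay -e br_netfilter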

Add Yum Repo

Install the yum-config-manager tool if not already present:

yum install yum-utils -y

Add the stable Docker Community Edition repository to yum:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Containerd

Install the latest version of containerd:

yum install containerd.io -y

Configure cgroups

Configure the systemd cgroup driver:

CONTAINERD_CONFIG_PATH=/etc/containerd/config.toml && \
rm -f "${CONTAINERD_CONFIG_PATH}" && \
containerd config default > "${CONTAINERD_CONFIG_PATH}" && \
sed -i "s/SystemdCgroup = false/SystemdCgroup = true/g" "${CONTAINERD_CONFIG_PATH}"

Finally, enable containerd and apply the changes:

systemctl enable --now containerd && \
systemctl restart containerd
systemctl status containerd

Installing Kubernetes

Note: This guide installs Kubernetes v1.32 as its container orchestration system. In this section we’ll install the base Kubernetes software components.

The Kubernetes repository can be added to the node in the usual way:

export KUBE_VERSION=1.32 && \
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v${KUBE_VERSION}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v${KUBE_VERSION}/rpm/repodata/repomd.xml.key
EOF

The Kubernetes install includes a few different pieces: kubeadm, kubectl, and kubelet.

kubeadm is a tool used to bootstrap Kubernetes clusters. kubectl is the command-line tool used to interact with and control the cluster. kubelet is the system daemon that allows the Kubernetes API to control the cluster nodes.

Install and enable these components:

yum install -y kubeadm kubectl kubelet
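
You can confirm the installed versions before proceeding:

kubeadm version && \
kubectl version --client && \
kubelet --version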

Finally, enable kubelet:

systemctl enable --now kubelet

At this point the kubelet will be crash-looping as it has no configuration. That is okay for now.
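
If you want to watch the crash-loop (and, later, the recovery), follow the kubelet logs:

journalctl -u kubelet -f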

Master Node

The first node you will add to your cluster will function as the master node.

  1. All cluster topologies include a master node.
  2. To configure a master node, you must first complete the steps above, from Operating System Requirements through Installing Kubernetes.

Initialize the Kubernetes cluster with Kubeadm

We want to initialize our cluster with the pod network CIDR explicitly set to 192.168.0.0/16, as this is the default range used by the Calico network plugin. If needed, it is possible to set a different RFC 1918 range during kubeadm init and configure Calico to use that range. Instructions for configuring Calico for a different IP range are noted below in Pod Network.

kubeadm init --pod-network-cidr=192.168.0.0/16
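
If you do choose a different RFC 1918 range, pass it here instead. The 10.10.0.0/16 value below is purely illustrative and must be mirrored in the Calico configuration later:

kubeadm init --pod-network-cidr=10.10.0.0/16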

KubeConfig

If you want to permanently enable kubectl access for the root account, you will need to copy the Kubernetes admin configuration to your home directory as shown below.

mkdir -p $HOME/.kube && \
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config

To instead set KUBECONFIG for a single session simply run:

export KUBECONFIG=/etc/kubernetes/admin.conf
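
Either way, verify that kubectl can reach the cluster:

kubectl cluster-info && \
kubectl get nodes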

Allowing pods to run on the Control Plane

Note: This step is optional for multi-node installations of Kubernetes and required for single-node installations.

If you are running a single-node cluster, you’ll want to remove the NoSchedule taint from the Kubernetes Control Plane. This allows general workloads to run alongside the Control Plane processes. In larger clusters it may instead be desirable to keep “user” workloads off the Control Plane, especially on very busy clusters where the K8s API is servicing a large number of requests. If you are running a large, multi-node cluster, you may want to skip this step.

To remove the Control Plane taint:

kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-

You might want to adjust the above command based on the role your Control Plane node holds. You can find this out by running:

kubectl get nodes

This shows the role(s) your Control Plane node holds; adjust the taint key in the removal command above accordingly.
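
To see the exact taints, describe the node (the node name below is a placeholder). On clusters created before Kubernetes v1.25 the taint key was master rather than control-plane:

kubectl describe node your-node.your-domain.edu | grep -i taints
kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-    # older taint key, if present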

Pod Network

In order to enable Pods to communicate with the rest of the cluster, you will need to install a networking plugin. There are a large number of networking plugins available for Kubernetes; this guide uses Calico, although other options should work as well.

To install Calico, you will simply need to apply the appropriate Kubernetes manifests, beginning with the Tigera operator:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml

If you haven’t changed the default IP range, create the boilerplate custom resources manifest:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml

If you have changed the IP range to anything other than 192.168.0.0/16 in the kubeadm init command above, you will need to first download the boilerplate custom-resources.yaml file from Project Calico on GitHub, then update the cidr field under spec.calicoNetwork.ipPools to match. Finally, create the custom resources manifest:

kubectl create -f /path/to/custom-resources.yaml
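
A minimal sketch of that workflow, reusing the illustrative 10.10.0.0/16 range from the kubeadm init example above:

curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
# Swap the default pool CIDR for the range passed to kubeadm init:
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.10.0.0/16#' custom-resources.yaml
kubectl create -f custom-resources.yaml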

After approximately five minutes, your master node should be ready. You can check with kubectl get nodes:

[root@your-node ~]# kubectl get nodes
NAME                        STATUS   ROLES           AGE     VERSION
your-node.your-domain.edu   Ready    control-plane   2m50s   v1.32.0
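
You can also confirm that the Calico components came up; the operator creates them in the calico-system namespace:

kubectl get pods -n calico-system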

Worker Node

To distribute the work assigned to a cluster, additional worker nodes can be joined to the master node.

To configure a worker node, you must first complete the steps above, from Operating System Requirements through Installing Kubernetes (excluding the Master Node section).

Joining the Cluster

On your master node, run the following command to get a full join command for the master’s cluster:

kubeadm token create --print-join-command

Run this generated join command on the worker node to join it to the cluster.
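
The generated command has the following shape; the endpoint, token, and hash below are placeholders:

kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>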
