
1.1 Cluster Setup and Management

Minimal Kubernetes Cluster Setup on Ubuntu (1 Master/Worker Node + 1 Worker-Only Node)

Official Kubernetes documentation

In this lab, we will set up a basic Kubernetes cluster consisting of two Ubuntu nodes (don’t worry if you’re new to Kubernetes – we’ll walk you through everything step by step!):

  • Node 1: Acts as the control plane (master) and a worker
  • Node 2: Acts as a worker only

We will use kubeadm for cluster initialization and assume Ubuntu 22.04 on both nodes; kubeadm greatly simplifies bootstrapping a cluster by hand. This lab will help you understand the steps required to bootstrap a functional cluster without external tooling (such as Rancher or Minikube) and gives you a practical base for multi-node Kubernetes setups.


1. Preparing the Hosts

Make sure you have two Ubuntu machines ready (physical, VMs, or cloud VMs). Both nodes must be able to reach each other via private IP.

Assign roles:

Node 1: controlplane-node (Master)
Node 2: worker-node

Connect to both nodes via SSH. Opening two terminal tabs (one per node) makes it easier to manage both machines during setup:

ssh azuser@NODE

The password is Train@Thinkport. The hosts are named clusterX-<workshopID>-admin-0 and clusterX-<workshopID>-admin-1.

On both nodes:

Update system and install dependencies:

sudo apt update && sudo apt upgrade -y
sudo apt install -y apt-transport-https curl containerd

Configure containerd:

sudo mkdir -p /etc/containerd
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
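
As a quick optional sanity check, you can confirm that containerd is running and that the systemd cgroup driver was enabled by the config change above:

# should print "active"
sudo systemctl is-active containerd

# should show "SystemdCgroup = true"
grep SystemdCgroup /etc/containerd/config.toml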

Disable swap:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
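
Optionally verify that swap is now off; the following command should print nothing:

swapon --show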

Enable kernel modules:

sudo modprobe overlay
sudo modprobe br_netfilter

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
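
You can verify that the settings were applied; each of these keys should report a value of 1:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables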

Add Kubernetes APT repo:

sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | gpg --dearmor | sudo tee /etc/apt/keyrings/kubernetes-apt-keyring.gpg > /dev/null

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update
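
If you want to confirm that the new repository is being picked up, you can list the candidate versions for kubeadm (the exact versions shown depend on the repo contents):

apt-cache policy kubeadm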

Install Kubernetes components:

VERSION=1.32.0-1.1
sudo apt install -y kubelet=$VERSION kubeadm=$VERSION kubectl=$VERSION
sudo apt-mark hold kubelet kubeadm kubectl
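
A quick optional check that the expected versions are installed and pinned:

kubeadm version -o short
kubectl version --client
apt-mark showhold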

2. Initialize the Cluster

Official Documentation

Now initialize the cluster on the control-plane node (Node 1).

We’ll use Calico CNI and CIDR 192.168.0.0/16:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

Once done, configure kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
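
At this point kubectl can already talk to the cluster, but the node will typically report NotReady until a CNI plugin is installed in the next step:

kubectl get nodes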

Install Calico CNI plugin:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/calico.yaml

Optional

  • Use another Pod network plugin such as Flannel or Cilium if preferred.
  • Calico supports NetworkPolicies (see the sketch below).
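
If you want to experiment with NetworkPolicies later, here is a minimal sketch of a default-deny ingress policy; the name default-deny-ingress and the default namespace are just examples. Note that applying it to the default namespace would also block the nginx test in step 6, so use a scratch namespace or delete the policy again afterwards:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress    # example name
  namespace: default
spec:
  podSelector: {}               # empty selector = all Pods in the namespace
  policyTypes:
    - Ingress                   # no ingress rules listed, so all inbound traffic is denied
EOF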

3. Join the Worker Node

On the worker node (Node 2), copy, paste, and run the kubeadm join command that kubeadm init printed on the control-plane node:

sudo kubeadm join CONTROLPLANE-IP:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

HINT: If you lost the command, recreate it on the control-plane node:

kubeadm token create --print-join-command

Verify the cluster status on the control-plane node:

kubectl get nodes

You should see both nodes in the Ready state.
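
It can take a minute or two after the Calico install for both nodes to become Ready. Optionally, you can wait for them:

kubectl wait --for=condition=Ready nodes --all --timeout=300s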


4. Allow Scheduling on the Control-Plane Node

To let the control-plane node also act as a worker, we need to remove its NoSchedule taint so that regular Pods can be scheduled on it:

kubectl taint nodes NODE-NAME node-role.kubernetes.io/control-plane:NoSchedule-
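
To confirm the taint is gone, check the node description; the Taints line should no longer list node-role.kubernetes.io/control-plane:NoSchedule:

kubectl describe node NODE-NAME | grep Taints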

5. Optional: Label the Nodes as Workers

kubectl label node NODE1-NAME node-role.kubernetes.io/worker=worker
kubectl label node NODE2-NAME node-role.kubernetes.io/worker=worker

6. Test Deployment

Deploy nginx to test:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

Find the port:

kubectl get svc nginx

Then open it in a browser (the NodePort is reachable on either node’s IP):

http://SERVER-IP:<NodePort>

You should see the nginx welcome page.
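
If you prefer the command line, a small sketch: fetch the NodePort via jsonpath and test with curl from any machine that can reach the node’s IP (SERVER-IP is the same placeholder as above):

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://SERVER-IP:$NODE_PORT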


Recap

You have:

  • Provisioned two Ubuntu hosts
  • Installed Kubernetes components
  • Initialized the cluster with kubeadm
  • Joined a second node
  • Installed Calico as CNI
  • Deployed and exposed an app

End of Lab