Build a multi-node Kubernetes cluster on Google Cloud VMs using kubeadm, from the ground up!


This is a post I originally wrote last year on my other profile. These are the tried-and-tested steps you can use to run a three, five, seven (or more) node cluster on GCP using Linux VMs.

Choose your Linux flavor; Ubuntu 22.04 (Jammy) is recommended. (These steps work on local dev boxes as well as on Cloud Compute VMs running Jammy.)

In the Google Cloud web console, pick a project that has billing enabled, set up the Google Cloud CLI, and create a VPC, a subnet, and firewall rules to allow traffic.

Google Cloud Platform (replace the resource names in square brackets, without the brackets):
Create a Virtual Private Cloud Network
gcloud compute networks create [vpc name] --subnet-mode custom
Create a Subnet with a specific range (10.0.96.0/24)
gcloud compute networks subnets create [subnet name] --network [vpc name] --range 10.0.96.0/24
Create a firewall rule that allows internal communication across all protocols (10.0.96.0/24, 10.0.92.0/22)
gcloud compute firewall-rules create [internal firewall rule name] --allow tcp,udp,icmp --network [vpc name] --source-ranges 10.0.96.0/24,10.0.92.0/22
Create a firewall rule that allows external SSH, ICMP, and HTTPS to the API server (port 6443):
gcloud compute firewall-rules create [external firewall rule name] --allow tcp:22,tcp:6443,icmp --network [vpc name] --source-ranges 0.0.0.0/0
List the firewall rules in the VPC network:
gcloud compute firewall-rules list --filter="network:[vpc name]"
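As a quick sanity check, you can also confirm the subnet was created with the expected range, using the same filter style as above (assuming the [vpc name] placeholder):

gcloud compute networks subnets list --filter="network:[vpc name]"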

Provision Nodes
Create 3, 5, or 7 compute instances to host the Kubernetes proxy, control plane, and worker nodes respectively (a proxy node is recommended if you are creating 5 nodes or more):

Proxy Node (optional):
gcloud compute instances create proxynode --async --boot-disk-size 50GB --can-ip-forward --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 --private-network-ip 10.0.96.10 --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring --subnet [subnet name] --tags kubevms-node,proxy

Master Control Plane Node:
gcloud compute instances create masternode --async --boot-disk-size 200GB --can-ip-forward --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 --private-network-ip 10.0.96.11 --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring --subnet [subnet name] --tags kubevms-node,controller

Worker Nodes (use 10.0.96.21+ for the other worker nodes; see the loop sketch below if you are creating several):
gcloud compute instances create workernode1 --async --boot-disk-size 100GB --can-ip-forward --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 --private-network-ip 10.0.96.20 --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring --subnet [subnet name] --tags kubevms-node,worker
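If you want more than one worker, a small shell loop saves repetition. A minimal sketch, assuming three workers on 10.0.96.20 through 10.0.96.22 and the same [subnet name] placeholder as above:

# Create workernode1..workernode3 with sequential private IPs
for i in 1 2 3; do
  gcloud compute instances create workernode${i} --async --boot-disk-size 100GB --can-ip-forward \
    --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 \
    --private-network-ip 10.0.96.$((19 + i)) \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet [subnet name] --tags kubevms-node,worker
done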

Print the internal IP address and pod CIDR range for each worker node (the metadata value is only present if you set a pod-cidr metadata entry when creating the instance; since Calico manages the pod CIDR later, the internal IP is what matters here):

gcloud compute instances describe workernode1 --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'

List the compute instances in your default compute zone:

gcloud compute instances list --filter="tags.items=kubevms-node"

Test SSH Into Google Cloud VM Instance

gcloud compute ssh [compute instance name]
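To confirm all nodes are reachable in one go, a quick loop over the instance names works (a sketch, assuming the node names used above):

for node in proxynode masternode workernode1; do
  gcloud compute ssh ${node} --command "hostname && uptime"
done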

4. RUN THESE INSTALLATIONS ON ALL NODES

a. sudo -i
b. apt-get update && apt-get upgrade -y
c. apt install curl apt-transport-https vim git wget gnupg2 software-properties-common ca-certificates uidmap lsb-release -y
d. swapoff -a
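Note that swapoff -a only disables swap until the next reboot, and the kubelet requires swap to stay off. To make it permanent, comment out any swap entry in /etc/fstab as well:

# Disable swap now and comment out swap entries so it stays off after reboot
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab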

5. INSTALL AND CONFIGURE CONTAINER RUNTIME PREREQUISITES ON ALL NODES
Load the overlay and br_netfilter kernel modules (you can verify br_netfilter is loaded with lsmod | grep br_netfilter).
In order for a Linux node's iptables to correctly see bridged traffic, net.bridge.bridge-nf-call-iptables must be set to 1 in your sysctl configuration:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
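You can verify the modules and sysctl values took effect before moving on:

# Confirm the kernel modules are loaded
lsmod | grep -E 'overlay|br_netfilter'
# Confirm the sysctl values are applied (all three should print 1)
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward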

6. INSTALL CONTAINER RUNTIME ON ALL NODES
a. mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
b. echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
c. apt-get update
d. apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

7. CONFIGURE THE CGROUP DRIVER FOR CONTAINERD ON ALL NODES (we will use the systemd cgroup driver, since Ubuntu 22.04 runs systemd with cgroup v2)
a. stat -fc %T /sys/fs/cgroup/ (check that you are using the supported cgroup v2; the output should be cgroup2fs)
b. sudo containerd config default | sudo tee /etc/containerd/config.toml (generate config.toml with the defaults)
c. Set SystemdCgroup = true under the runc options in config.toml to use the systemd cgroup driver:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
d. sudo systemctl restart containerd (restart containerd)
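If you prefer not to edit config.toml by hand, a one-line sed does the same flip, and you can confirm the change and that containerd is healthy afterwards:

# Flip SystemdCgroup from false to true in the generated default config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
# Verify the setting and the service status
grep SystemdCgroup /etc/containerd/config.toml
sudo systemctl is-active containerd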

8. INSTALL KUBEADM, KUBELET AND KUBECTL ON ALL NODES

Download the public signing key for the Kubernetes package repositories (the legacy packages.cloud.google.com / apt.kubernetes.io repository has been shut down, so use the community-owned pkgs.k8s.io repository, here pinned to v1.28 to match the cluster version below):
a. curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the Kubernetes apt repository:
b. echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
c. sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
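Quick check that the tooling landed and is pinned against unattended upgrades:

kubeadm version
kubelet --version
kubectl version --client
apt-mark showhold    # should list kubeadm, kubectl and kubelet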

9. CONFIGURE THE CGROUP DRIVER FOR THE MASTER NODE (save the following two documents as kubeadm-config.yaml; the KubeletConfiguration block sets the systemd cgroup driver on a supported OS)

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
controlPlaneEndpoint: "masternode:6443"
networking:
  podSubnet: 10.200.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

10. CONFIGURE THE HOSTNAME FOR THE MASTER NODE
Open the file: nano /etc/hosts
- Add the master node's static IP and preferred hostname (10.0.96.11 masternode); do the same on the worker nodes if they cannot already resolve masternode
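Verify the name resolves on the node before initializing:

getent hosts masternode
ping -c 1 masternode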

11. INITIALIZE KUBEADM (ON THE MASTER ONLY)
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out

Log out from root if you are still root.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
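At this point kubectl should talk to the cluster; the master will show as NotReady until a pod network is installed in the next step:

kubectl get nodes
kubectl get pods -n kube-system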

12. INSTALL A POD NETWORK (CNI) ON THE MASTER
Download and install the Tigera Calico operator and custom resource definitions:
- kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml

Install Calico by creating the necessary custom resources. The operator-based install does not use the CALICO_IPV4POOL_CIDR environment variable; instead, download custom-resources.yaml, change the cidr field under spec.calicoNetwork.ipPools to the pod subnet (10.200.0.0/16), and then create it:
- curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml
- kubectl create -f custom-resources.yaml
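Watch the Calico pods come up and the master move to Ready (the calico-system namespace is created by the operator):

watch kubectl get pods -n calico-system
kubectl get nodes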

13. JOIN WORKER NODES TO THE CONTROL PLANE
On each worker node, run the kubeadm join command printed at the end of the kubeadm init output, for example:
kubeadm join masternode:6443 --token n0smf1.ixdasx8uy109cuf8 --discovery-token-ca-cert-hash sha256:f6bce2764268ece50e6f9ecb7b933258eac95b525217b8debb647ef41d49a898
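If you no longer have the join command from the init output, you can regenerate it on the master, then confirm the workers registered:

# On the master: print a fresh join command (tokens expire after 24 hours by default)
kubeadm token create --print-join-command
# On the master: confirm all workers joined and eventually report Ready
kubectl get nodes -o wide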