Today, I explore how Cilium networking works by integrating it into AWS EKS, which has been quite intriguing. Cilium acts like a network superhero for the clusters we create and manage, improving network connectivity, and we can still leverage the add-on modules provided by EKS.
In this blog post, we will explore and test how to integrate Cilium directly with an EKS cluster's networking add-ons. In the next post, we will dive into the newer EKS cluster creation flexibility, which promises to simplify this integration.
Every EKS cluster comes with default networking add-ons, including the AWS VPC CNI, CoreDNS, and kube-proxy, which enable pod and service operations in the cluster. In our cluster deployment, we will follow the Cilium documentation and taint the nodes so that application pods are only scheduled once Cilium reports ready on that node. As of this writing, this setup is limited to IPv4.
Here is an example eksctl cluster configuration:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: henry-eks
  region: us-east-1
managedNodeGroups:
  - name: ng-1
    desiredCapacity: 2
    privateNetworking: true
    # taint nodes so that application pods are
    # not scheduled/executed until Cilium is deployed.
    # Alternatively, see the note below.
    taints:
      - key: "node.cilium.io/agent-not-ready"
        value: "true"
        effect: "NoExecute"
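Assuming the configuration above is saved as cluster.yaml (the filename is just an example), the cluster can then be created with eksctl:
eksctl create cluster -f cluster.yaml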
Once the cluster is created, the AWS VPC CNI plugin from the networking add-ons is responsible for setting up the virtual network devices as well as for IP address management via ENIs. Once the Cilium CNI plugin is set up, it attaches eBPF programs to the network devices set up by the AWS VPC CNI plugin in order to enforce network policies, perform load balancing, and handle encryption.
To confirm which version of the AWS VPC CNI plugin you are running (v1.16.0 here) and make sure it is compatible with Cilium, run:
kubectl -n kube-system get ds/aws-node -o json | jq -r '.spec.template.spec.containers[0].image'
602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.16.0-eksbuild.1
Before we go deeper into installing Cilium in the EKS cluster, I'll discuss another building block: the AWS ENI (Elastic Network Interface). An ENI is a virtual network interface that can be attached to any node in our cluster, and it is what the IP addresses our pods need are allocated from, by communicating with the EC2 API. Once Cilium is set up, a ciliumnodes.cilium.io custom resource matching the node name is created for each node, and its ENI parameters are filled in by communicating with the EC2 metadata API to retrieve the instance ID and VPC information.
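As a quick aside, once the installation further below is complete, these custom resources can be inspected directly to see the ENI parameters Cilium discovered (the node name here is a placeholder):
kubectl get ciliumnodes.cilium.io
kubectl get ciliumnodes.cilium.io <node-name> -o yaml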
Now Cilium will manage the AWS ENIs instead of the VPC CNI, so the aws-node DaemonSet must be patched to prevent conflicting behavior:
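The patch below follows the approach from the Cilium EKS installation guide: it adds a node selector that no node satisfies, so aws-node pods are no longer scheduled:
kubectl -n kube-system patch daemonset aws-node --type='strategic' -p='{"spec":{"template":{"spec":{"nodeSelector":{"io.cilium/aws-node-enabled":"true"}}}}}'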
Once it has been patched, we can install Cilium on the EKS cluster using Helm. After adding the Cilium Helm repository and updating it on the local machine, we run helm install with the ENI-specific values:
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set eni.enabled=true \
  --set ipam.mode=eni \
  --set egressMasqueradeInterfaces=eth0 \
  --set routingMode=native
If you created your cluster and did not taint the nodes with node.cilium.io/agent-not-ready, the unmanaged pods need to be restarted manually to ensure Cilium starts managing them. To do this:
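One way to do this, borrowed from the restart-unmanaged-pods step in the Cilium documentation, is to delete every pod that is not running in the host network namespace so it gets recreated and picked up by Cilium:
kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod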
Next, let's validate the installation.
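If you happen to have the Cilium CLI installed (a separate tool, not required for the steps above), a quick sanity check is:
cilium status --wait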
Cilium represents every pod as an endpoint in the Cilium agent. We can get the list of endpoints by running this command against the agent pod on each node:
kubectl -n kube-system exec cilium-cxvdh -- cilium-dbg endpoint list
# the second node
kubectl -n kube-system exec cilium-lsxht -- cilium-dbg endpoint list
This will list all the endpoints for each node.
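The cilium-cxvdh and cilium-lsxht pod names used above come from listing the Cilium agent pods; the k8s-app=cilium label is the default applied by the Helm chart:
kubectl -n kube-system get pods -l k8s-app=cilium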
NAME READY STATUS RESTARTS AGE
cilium-cxvdh 1/1 Running 0 119m
cilium-lsxht 1/1 Running 0 119m
$ kubectl -n kube-system exec cilium-lsxht -- cilium-dbg endpoint list
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
143 Disabled Disabled 54154 k8s:app.kubernetes.io/name=tiefighter 192.168.92.83 ready
k8s:class=tiefighter
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
147 Disabled Disabled 4849 k8s:app.kubernetes.io/name=xwing 192.168.85.248 ready
k8s:class=xwing
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=alliance
735 Disabled Disabled 4 reserved:health 192.168.75.160 ready
1832 Disabled Disabled 1 k8s:alpha.eksctl.io/cluster-name=henry-eks-app ready
k8s:alpha.eksctl.io/nodegroup-name=henry-eks-1
k8s:eks.amazonaws.com/capacityType=ON_DEMAND
k8s:eks.amazonaws.com/nodegroup-image=ami-057ddb600f3bba07e
k8s:eks.amazonaws.com/nodegroup=henry-eks-1
k8s:eks.amazonaws.com/sourceLaunchTemplateId=lt-07908f6a5332b214e
k8s:eks.amazonaws.com/sourceLaunchTemplateVersion=1
k8s:node.kubernetes.io/instance-type=t2.medium
k8s:topology.kubernetes.io/region=us-east-1
k8s:topology.kubernetes.io/zone=us-east-1b
reserved:host
2579 Disabled Disabled 6983 k8s:app.kubernetes.io/name=deathstar 192.168.79.49 ready
k8s:class=deathstar
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
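The tiefighter, xwing, and deathstar endpoints shown above come from Cilium's Star Wars demo application. A minimal way to deploy it, assuming the manifest path from the Cilium getting-started guide (adjust the branch to match your Cilium version), is:
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/main/examples/minikube/http-sw-app.yaml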
To confirm the pods can reach the deathstar service, both the xwing (org=alliance) and tiefighter (org=empire) pods are allowed to connect and request landing, since no network policy is attached to them yet:
$ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
Thank you.
Till next time!