Kubernetes Ingress in a nutshell Part01

There are several open-source Kubernetes ingress controllers; a popular example is the NGINX Ingress Controller (ingress-nginx).

What you actually deploy for your services are Ingress resources, but an ingress controller is required for those Ingress resources to come to life.
So please keep in mind that an Ingress resource is different from an ingress controller.
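
For illustration, a minimal Ingress resource might look like the sketch below (the name, host, and backend Service here are placeholders, not values from this guide):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress           # hypothetical name
spec:
  ingressClassName: nginx         # served by the NGINX Ingress Controller installed below
  rules:
    - host: app.example.com       # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # placeholder backend Service
                port:
                  number: 80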

Install Ingress Controller
First, deploy the NGINX Ingress Controller using its Helm chart:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb"

— the chart's default controller Service type is LoadBalancer: https://github.com/kubernetes/ingress-nginx/blob/3b1908e20693c57a97b55d8a563da284a5dbf36c/charts/ingress-nginx/values.yaml#L482

— and to specify that the created load balancer should be an NLB, it is defined as an annotation on the ingress-nginx controller Service:

controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

— To set up SSL/TLS termination on the AWS load balancer:
A simple way of abstracting TLS handling is to terminate it on the load balancer and use plain HTTP inside the cluster by default.
Request a public ACM certificate for your custom domain, and don't forget to create the certificate's CNAME validation record in your public hosted zone to validate it.
Then use the ACM certificate ARN in the controller Service annotations and define the SSL port as "https":

controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:55xxxxxxx:certificate/5k0c5513-a947-6cc5-a506-b3yxxx
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
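
Because TLS now terminates at the NLB, the decrypted traffic arriving on the load balancer's HTTPS listener should be forwarded to the controller's plain HTTP port. With the ingress-nginx chart this is commonly done via controller.service.targetPorts; the mapping below is a sketch of that pattern, so verify the keys against the chart version you use:

controller:
  service:
    targetPorts:
      http: http
      https: http   # TLS is already terminated at the NLB, so route the HTTPS listener to the HTTP port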

— Choosing public accessibility
This configures the AWS load balancer for public (internet-facing) access:

controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"

— An NLB in front of the NGINX Ingress Controller may overwrite the client IP; to retain the actual client IP:
You need to enable proxy protocol on your NLB and add the matching configuration in ingress-nginx.

controller:
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    use-forwarded-headers: "true"
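
To check that the real client IP is coming through, send a request through the load balancer and then tail the controller's access logs; the deployment name below assumes the chart's default naming:

# inspect the most recent access-log entries of the controller
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=20
# the first field of each access-log line should now show the real client IP
# instead of a private NLB address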

— So finally, this may be all you need:

controller:
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    use-forwarded-headers: "true"
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:55xxxxxxx:certificate/5k0c5513-a947-6cc5-a506-b3yxxx
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
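
Assuming you save the values above to a file (values.yaml is just an illustrative name), you can apply them with the same helm upgrade command instead of repeating --set-string flags:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  -f values.yaml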

— what the chart deploys (you can verify these with the commands after this list):
-ingress-nginx namespace
-ingress-nginx-controller-7ed7998c-j2er5 pod
-ingress-nginx-controller Service of type LoadBalancer
-ingress-nginx-controller-admission Service of type ClusterIP (a validating admission webhook that helps prevent outages caused by incorrect Ingress configuration)
-EXTERNAL-IP -> points to the DNS name of the AWS load balancer, which gets created when the ingress controller is installed because the chart creates a Service of type LoadBalancer
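
A quick way to verify these objects (resource names may differ slightly depending on the chart version):

kubectl get pods,svc -n ingress-nginx
# the EXTERNAL-IP column of the ingress-nginx-controller Service shows the
# DNS name of the AWS load balancer; you can also read it directly:
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'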

— In detail:
The controller deploys, configures, and manages Pods that contain instances of nginx, which is a popular open-source HTTP and reverse proxy server. These Pods are exposed via the controller’s Service resource, which receives all the traffic intended for the relevant applications represented by the Ingress and backend Services resources. The controller translates Ingress and Services’ configurations, in combination with additional parameters provided to it statically, into a standard nginx configuration. It then injects the configuration into the nginx Pods, which route the traffic to the application’s Pods.
The Ingress-Nginx Controller Service is exposed for external traffic via a load balancer. That same Service can be consumed internally via the usual ingress-nginx-controller.ingress-nginx.svc.cluster.local cluster DNS name.
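
For example, any other Pod in the cluster can reach the controller through that internal DNS name; the throwaway curl Pod below is just for illustration (until an Ingress rule matches the request, the controller answers with its default backend, typically a 404):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -sS -o /dev/null -w '%{http_code}\n' \
  http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/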

Create a Deployment and expose it as a Service

# create a deployment
kubectl create deployment demo --image=nginx --port=80
# expose the deployment as a service
kubectl expose deployment demo
# create an Ingress resource to route requests to the demo service
kubectl create ingress demo --class=nginx \
  --rule="your-public-domain/*=demo:80"
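
Once DNS for your public domain points at the NLB, you can test the end-to-end path through the load balancer (the domain is the placeholder from the rule above; TLS terminates at the load balancer):

# wait until the Ingress reports the load balancer address
kubectl get ingress demo
# then request the demo service through the NLB
curl https://your-public-domain/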

References:

https://kubernetes.github.io/ingress-nginx/
https://aws.amazon.com/blogs/containers/exposing-kubernetes-applications-part-3-nginx-ingress-controller/
https://repost.aws/questions/QUw4SGJL79RO2SMT-LbpDRoQ/nlb-with-nginx-ingress-controller-is-overwriting-client-ip-how-to-retain-actual-client-ip
https://dev.to/zenika/kubernetes-nginx-ingress-controller-10-complementary-configurations-for-web-applications-ken