Deep Dive: Ingress in Kubernetes

Harshal Shah
I recently talked about Ingress in Kubernetes at the Pune Kubernetes meetup. This post goes over that topic along with steps to demonstrate ingress capabilities.

The British built Martello towers all over their coast, inspired by the tower at Mortella Point, Corsica, which in 1794 withstood an attack by British warships that could do it little damage. The tower's design was unique and robust enough to hold off a strong enemy assault. Load balancers, in my opinion, are the same: they are the firefighters of your applications and hold the fort in case of an attack, intended or otherwise.

First, the warnings!

There are multiple ways to expose applications running in Kubernetes pods outside the cluster. So first things first, let's talk about how *not* to expose your application. **Port forwarding** (e.g., `kubectl port-forward <pod-name> <local-port>:<pod-port>`) is fine for quick POCs but cannot be used for any serious application, as it simply forwards requests from a port on the client machine directly to a pod. Among its many other limitations (which we will cover below), it allows communication only between the client and that individual pod.

**HostPort** (or host network) is a mechanism where ports on a worker node are mapped to ports on pods. This means those ports on the host have to be opened up for outside communication, and there cannot be more than one pod using a given host port per node. Moreover, such pods can only scale to as many replicas as there are nodes. Also note that with newer versions of Kubernetes, some CNI plugins completely ignore the hostPort directive. Read here for more information.
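To make this concrete, here is a minimal sketch of a pod using hostPort (the pod name, image, and port numbers are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-hostport
spec:
  containers:
  - name: demo
    image: nginx:alpine
    ports:
    - containerPort: 80
      hostPort: 8080   # traffic to <node-ip>:8080 reaches this pod directly
```

Only one pod on a given node can claim host port 8080, which is exactly the scheduling constraint described above.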

Exposing Kubernetes Applications the right way

Services are the right way to expose an application within or outside a cluster. For a public-facing application, there are the following options to expose it:

NodePort exposes the service on a port from the range 30000-32767; the port value can be provided explicitly in the service spec, otherwise Kubernetes chooses one automatically. This allows configuring physical load balancers over a combination of nodes and ports. If the nodes have public IPs and access to the node port is allowed, the application can be accessed directly at NodeIP:NodePort. This means that for each service exposed via NodePort, the node port itself needs to be opened on all nodes. This is not a good thing, as we would be exposing ports of our worker nodes to the world.
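A minimal sketch of a NodePort service (the service name, selector, and port numbers are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 8080  # container port on the pods
    nodePort: 30080   # must be in 30000-32767; omit to let Kubernetes choose
```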

Service type LoadBalancer is a wrapper over NodePort: on cloud providers that support creation of an external load balancer, it simply creates one in the provider's infrastructure and adds all nodes as members of that load balancer. This is a far better solution, but as the number of public-facing services grows, each service needs its own load balancer. That decision won't go down very well with your finance department. Moreover, even if a public-facing service is not accessed frequently, its load balancer still needs to be available (and paid for) all the time.
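Switching the same hypothetical service to a cloud load balancer is essentially a one-line change to the spec:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer   # the cloud provider provisions an external LB (e.g., an ELB on AWS)
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```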

This is where an Ingress comes into the picture. For more details on the above methods, refer to this excellent post by Mark Betz.

Ingress Components

When we talk about ingress, we refer to the following parts:

**Ingress Resource** is a Kubernetes resource which defines rules that the ingress controller uses to route incoming traffic.

**Ingress Controller** is an application which captures incoming requests and routes them according to the rules defined in ingress resources.

**Default Backend** is another small application that simply catches traffic for which no relevant rules are defined via ingress resources and serves a 404 page.
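For context, with the NGINX ingress controller the default backend is typically wired in via a controller flag; a sketch (the namespace, names, and image tag are illustrative):

```yaml
# excerpt from an NGINX ingress controller deployment spec
containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
  args:
  - /nginx-ingress-controller
  - --default-backend-service=ingress-nginx/default-http-backend  # namespace/name
```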

How Ingress controllers route a request

Now let's see how a request gets routed to an internal service via an ingress controller. Consider a Kubernetes cluster with all its worker nodes in a private subnet. We have an application running in a deployment, exposed via a service of type ClusterIP, which cannot be accessed from outside the cluster. In order to expose this service outside the cluster, we add an ingress controller, which means we add a deployment of the ingress controller and a service in front of it. (There are other resources such as config maps, roles, role bindings, etc., but we skip them for brevity; you can refer to the demo git repo for more details on each deployed resource.)
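The internal service might look like this (a sketch: the service name demo-go-app-svc comes from the demo, while the selector and port numbers are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-go-app-svc
spec:
  type: ClusterIP      # reachable only from inside the cluster
  selector:
    app: demo-go-app
  ports:
  - port: 80
    targetPort: 8080
```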

Now we create an Ingress resource in our cluster which says: *for any packet with destination demo.infracloud.space, route it to the service demo-go-app-svc* (the blue circle in our diagram). The ingress controller keeps watching for the creation, update, or removal of ingress resources, so it immediately notices the new resource, reads the rule it defines, and updates its configuration. Now the cluster is ready to serve requests.
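A sketch of that Ingress resource (the hostname and service name come from the demo; the networking.k8s.io/v1 API shown here is the current one, older clusters used extensions/v1beta1, and the path and port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.infracloud.space   # route requests for this hostname...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-go-app-svc # ...to the internal ClusterIP service
            port:
              number: 80
```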

We update Route53 or a similar DNS service to map the name "demo.infracloud.space" to the load balancer in front of the ingress service. (We did this manually, but it can be automated with Terraform or a similar utility; more on this a bit later in the article.)

Now a request arrives with its destination set to "demo.infracloud.space". The load balancer forwards it to the ingress service, and the ingress service forwards the request to one of the ingress controller pods. The pod which receives the request consults its routing rules and routes the request to the internal service. If a request arrives on the load balancer for which no rules are defined, it gets routed to the default backend, which serves a 404 page. There can be multiple internal services, with routes created via different ingress resources or via multiple rules in a single ingress resource.

We have a demo which works on AWS, published in the ingress-demo repo. The demo requires a Kubernetes cluster on AWS (preferably created via kops), and then installs an ingress controller along with a demo app which can be accessed via different ingress resources. This should serve as good hands-on practice for a first ingress implementation.

(Figure: ingress demo)

DNS update

Remember that we had to manually add a Route53 entry for the subdomain, pointing it to the load balancer associated with the ingress controller? There are two distinct cases here.

Path-Based Routing

If your domain is going to be fixed (demo.infracloud.space) and you are going to add additional services based on path (e.g., demo.infracloud.space/billing, demo.infracloud.space/auth, and so on), then pointing the domain to the load balancer is a one-time addition; the rest of the updates are taken care of by the ingress controller. You can choose to automate the DNS update depending on the number of environments you have, but otherwise the above setup should work just fine.
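A sketch of such a path-based Ingress (the /billing and /auth paths come from the example above; the service names and ports are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  rules:
  - host: demo.infracloud.space
    http:
      paths:
      - path: /billing          # demo.infracloud.space/billing
        pathType: Prefix
        backend:
          service:
            name: billing-svc   # hypothetical internal service
            port:
              number: 80
      - path: /auth             # demo.infracloud.space/auth
        pathType: Prefix
        backend:
          service:
            name: auth-svc      # hypothetical internal service
            port:
              number: 80
```

Adding a new service is then just a matter of appending another path entry; no DNS change is needed.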

Subdomain-Based Routing

If you are going to add subdomains for new services (for example, you might have multiple tenants and each tenant wants a unique URL, such as sherry.infracloud.space, john.infracloud.space, and so on), then every time you have a new tenant, you will have to add a new entry to Route53 pointing to the same load balancer for the ingress controller. You can automate this as part of your workflow using Terraform, Ansible, or similar tooling, but we felt this is a problem better solved at the Kubernetes level. So we started thinking about an Ingress-Route53-Controller (as of this writing we just have a blank repo; more updates soon).
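The corresponding ingress rules would route on hostname instead of path; a sketch using the tenant hostnames from the example (service names and ports are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant-ingress
spec:
  rules:
  - host: sherry.infracloud.space
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sherry-svc    # hypothetical per-tenant service
            port:
              number: 80
  - host: john.infracloud.space
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: john-svc      # hypothetical per-tenant service
            port:
              number: 80
```

The ingress rules themselves are easy to extend; it is the matching DNS record for each new subdomain that still has to be created outside Kubernetes, which is the gap the controller idea above aims to close.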

What about Scaling?

So in the request path between the end user and the service, there are three things: the ELB/ALB, the ingress controller, and the pod that serves the request. The ELB scales well up to about 1M requests per second (with some pre-warming arranged with AWS in case of sudden bursts). The application pods can be scaled with multiple replicas and an HPA. So how do we make sure the ingress controller pods are scaled well? We can give the ingress controller deployment enough replicas, with an HPA, so that there are always enough pods serving the ingress routing requests. Another strategy is to run the ingress controller as a DaemonSet so that a pod on every node is handling the load.
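A sketch of an HPA for the ingress controller (the deployment name and thresholds are hypothetical, and the autoscaling/v2 API assumes a reasonably recent cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingress-controller-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller   # hypothetical controller deployment
  minReplicas: 2                     # keep at least two pods for availability
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70       # scale out when average CPU crosses 70%
```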

Secondly, if your traffic is more than what a single ELB can serve (more than roughly 1M requests per second), you can have more than one load balancer, each with its own associated ingress controller. In this configuration, you can split traffic based on load or spread it evenly across the load balancers.

ELB/ALB

For now, we have used a classic ELB in the demo, but it is possible to use an ALB as well. We will update the demo soon to cover this, so keep watching the GitHub repo.

Conclusion

To summarize, an ingress-controller-based setup can act as a load balancer for multiple services and save the management overhead and cost of running multiple load balancers. This setup is more secure and also more scalable. There are also some interesting projects in this space, such as the CoreOS ALB ingress controller and the Traefik ingress controller.
