Ingress: fully-fledged web service exposure.
I have mentioned this before and it's high time I elaborated on it. We're now ready to discuss a very specific kind of service exposure in Kubernetes - one targeted at web traffic, whose implementation relies on most of what we've covered so far.
Ingress resources matured recently and are now served by a dedicated, stable API (networking.k8s.io/v1). An Ingress object - first and foremost - describes a set of rules that define the routing of incoming requests towards backend services. (It can also define TLS termination.) It is meant to be used in conjunction with HTTP traffic, although most "ingress controller" implementations can also pass traffic through and therefore cater for other use-cases as well.
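To make the shape of such an object concrete, here is a minimal sketch of an Ingress with one host rule and TLS termination. All names - the host, the Secret, the backend Service - are hypothetical placeholders, not anything prescribed by the API:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical name
spec:
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-example-tls   # Secret holding the certificate and key
  rules:
  - host: shop.example.com         # requests for this host...
    http:
      paths:
      - path: /                    # ...and this path prefix...
        pathType: Prefix
        backend:
          service:
            name: shop-frontend    # ...go to this backend Service
            port:
              number: 80
```

Note that the object itself only *declares* routing and termination - it does nothing until a controller picks it up, as discussed below.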
An ingress controller is an instance of software (yes: a container in a pod in a namespace) performing a reverse proxy function, acting as a point of entry by implementing the rules defined by Ingress objects. It is therefore also a client of the cluster API: it processes Ingress resources. Much like network policies without a pod network implementation supporting them, Ingress resources do nothing without an ingress controller. This situation resembles CRDs and their custom controllers extending a cluster - except the API group representing Ingress resources is in the box already. There is, however, no Ingress Controller alongside it. That is a deliberate design choice.
The Ingress API provides a stable interface, but implementations do vary. Just as with CNI plugins, a number of ingress controllers are available, and they all perform the above, albeit differently: they have different software products at their core, and the integration tooling - the glue - will also have been written in different languages and ways. They all share a basic set of features, but some can do more than that. Some of them are commercial software products, but most are FOSS.
Simply put: ingress controllers process and handle incoming requests by analyzing and matching them against Ingress rules and directing them to backend services that will, in turn, serve them. This involves stateful inspection of HTTP traffic with optional header modification as well as prior termination of TLS. (Pass-through - aka "TCP mode" - will do neither, of course, leaving processing of the verbatim stream to the backend. We don't do this to web traffic, but it can still prove handy.)
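How pass-through is requested is controller-specific, not part of the Ingress API. As one illustration, the widely used ingress-nginx controller supports it via an annotation (and the controller itself must be started with the --enable-ssl-passthrough flag); the host and Service names here are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: broker-passthrough
  annotations:
    # ingress-nginx specific: forward the TLS stream untouched, routing by SNI;
    # requires the controller to run with --enable-ssl-passthrough
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: broker.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mqtt-broker   # backend terminates TLS itself
            port:
              number: 8883
```

With pass-through in effect, header inspection and modification are unavailable - the controller only sees the SNI field it routes on.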
This will only ever happen, though, if the requests actually do reach the ingress controller. To give you an example: in a typical scenario this technically means that the ingress controller is externally accessible, while the backends are not. They will all be services - duly represented by Service objects - only different ones. The ingress controller may well be exposed via a LoadBalancer-type service listening on ports 443/tcp and 80/tcp, bound to a public IP address that domain names matching any certificates configured for its Ingresses resolve to. The web services themselves sit behind ClusterIP-type services that the ingress controller can reach - with a targeted network policy explicitly allowing that traffic through the default denial protecting a cluster. Web services acting as Ingress backends reside in private networks, are accessible only by an ingress controller terminating encrypted connections on their behalf, and implement no encryption themselves.
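The scenario above can be sketched as three objects - an externally reachable Service for the controller, an internal one for the backend, and a policy admitting only the controller's namespace. Namespaces, labels, and ports are assumptions for illustration, not fixed conventions:

```yaml
# LoadBalancer-type Service exposing the ingress controller to the outside world
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
  namespace: ingress        # hypothetical namespace
spec:
  type: LoadBalancer
  selector:
    app: ingress-controller # must match the controller pods' labels
  ports:
  - name: https
    port: 443
    targetPort: 443
  - name: http
    port: 80
    targetPort: 80
---
# ClusterIP-type Service in front of the actual web application - internal only
apiVersion: v1
kind: Service
metadata:
  name: shop-frontend
  namespace: shop
spec:
  type: ClusterIP
  selector:
    app: shop-frontend
  ports:
  - port: 80
    targetPort: 8080        # the pods' unencrypted listener
---
# NetworkPolicy punching a hole through a default-deny posture:
# only pods in the controller's namespace may reach the backend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: shop-frontend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress
    ports:
    - protocol: TCP
      port: 8080
```

The NetworkPolicy, of course, only takes effect if the pod network implementation supports policies - exactly the caveat raised earlier.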
Clusters may also have multiple ingress controllers, in which case Ingress resources need to be bound to specific ingress controllers - either via the legacy kubernetes.io/ingress.class annotation or, preferably, by assigning them to ingress classes - and the controllers need to be configured accordingly. (Unless we want all ingress controllers to implement all Ingresses - however, if the ingress controllers in question are different software products, conflicting status updates may cause problems.) A default ingress class - if one is designated - is applied to Ingress objects that carry no ingress class designation of their own. (The same abstraction concept may be observed in the context of dynamic storage provisioning.)
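The binding mechanism can be sketched as an IngressClass object paired with the ingressClassName field of an Ingress; the class and Service names are hypothetical, while the is-default-class annotation is the standard way to designate a default:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    # newly created Ingresses with no ingressClassName get this class
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx   # which controller implements this class
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress
spec:
  ingressClassName: nginx   # explicit binding; could be omitted given the default above
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-frontend
            port:
              number: 80
```

A controller then only processes Ingresses whose class points at it, which is what prevents the conflicting-status-update problem mentioned above.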
What kind of traffic does this all apply to? In short: all kinds. There may be various reasons for making all incoming traffic traverse an ingress controller instead of exposing services directly, security considerations and cost implications in public clouds among them. The real power of stateful analysis - and the value it adds - only shows with applications speaking HTTP, whatever they are.
You don't have to have an ingress controller in your cluster if you don't use Ingress objects to route requests, and you probably understand by now that you can expose a web service directly - so you don't strictly have to use them. Hopefully you also see that they exist and are widely employed for good reasons. (Most real-world scenarios involve them.)
This concludes the above topic, but not the series - please stand by.