I've been mentioning connections, networks and policies here and there in the previous articles, and it's high time I told you about the role they play and how it all works.
Allow me a brief retrospective so we can see where this ties in: we've discussed applications comprising containers that run in pods on cluster nodes, and the inevitable need to establish network connections to them so they can be consumed. With modern applications designed around a microservice architecture - or even multi-tier monoliths - these pods also need to be aware of and connect to each other.
We understand there are Services defined to proxy requests to pods. We remember Kubernetes runs containers in pods, which get IP addresses assigned by the cluster. Containers in the same pod are by definition colocated - run on the same host - as pods can't span nodes. It's only logical that containers running on the same host can communicate with one another... but what if we have multiple hosts? Were nodes to remain network boundaries, forcing colocation would be extremely unwieldy as well as a tedious chore. Fortunately, this hindrance was recognized and addressed appropriately.
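To make the colocation point tangible, here's a minimal sketch of such a pod (the names and images are mine, purely for illustration): the two containers share the pod's network namespace, so the second can reach the first over localhost without any Service involved.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: colocation-demo          # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25          # listens on port 80 inside the pod
      ports:
        - containerPort: 80
    - name: sidecar
      image: curlimages/curl:8.5.0
      # Shares the pod's network namespace with "web",
      # so localhost is enough - no Service, no pod IP lookup.
      command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 10; done"]
```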
This problem is generally solved in container orchestrators by creating a virtual network all containers are connected to, usually via overlay networking. There are various means to this end, though, and Kubernetes follows its usual modular approach, employing plugins adhering to the CNI - Container Network Interface - specification. Such a plugin - of which quite a few exist, each with its own quirks and perks - will be found deployed in the cluster much like a storage provisioner, as discussed earlier. (Those relate to the CSI, though.) Unlike storage provisioners, however, a single CNI solution has to be chosen per cluster. Whichever one the cluster has, it is responsible for the actual implementation of what is generally referred to as the pod network: a network that provides connectivity between pods regardless of the nodes that run them. Kubernetes imposes certain requirements on the implementation in order to maintain its network model (most notably that every pod can reach every other pod without NAT) - while these are useful to know, we needn't pay truly close attention unless we're about to develop a plugin of our own.
One must give serious thought to this choice, depending on requirements like encryption (in-flight security in the encapsulation layer), multi-homed containers (seldom needed, but indispensable in certain scenarios) and - last, but not least - network policy support. Because solving one problem usually introduces another, and this one is no exception: cluster-wide interconnectivity between all pods raises a valid security concern. For developers as end users, though, these are environmental characteristics of a cluster that may or may not suit their purposes - decisions already made elsewhere.
Kubernetes implements network segmentation via so-called network policies to regulate the flow of traffic in the pod network - provided the CNI plugin in use supports them; otherwise (like an Ingress resource sans an ingress controller) they won't do a thing. (I haven't said this for a while: more on that - why that is so - later.) Simply put: NetworkPolicy objects define directional (ingress and egress) sets of rules determining which pods may be reached by or may reach which other pods, namespaces or IP ranges.
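To give you an idea (the name, labels and namespace below are made up for illustration), here's a sketch of a policy that only lets pods labelled app: frontend reach pods labelled app: backend on TCP port 8080 - assuming, of course, a CNI plugin that actually enforces it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend   # hypothetical name
  namespace: demo                # hypothetical namespace
spec:
  podSelector:                   # the pods this policy applies to
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:           # ...only traffic from these pods is let through
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```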
Sometimes authn/authz and/or RBAC isn't applicable or feasible, and even if it were, it would prove less effective than, say, outright denial of connectivity to the endpoint in question. It's generally advisable to follow a whitelisting approach regardless of the application's own security capabilities, though, because you can't really rule out malicious intent with a clear conscience, and "where there's a will there's a way" is no joke either. It's therefore considered prudent to apply a default-deny policy and a tight-fisted distribution of exceptions in order to allow only what is necessary, accepting the price of some administrative overhead in exchange for a reasonably justified peace of mind.
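As a sketch of that approach (again with a made-up namespace), an "empty" policy that selects every pod and lists no rules amounts to a default deny in both directions; allowances like the one above are then layered on top, since policies are additive:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all         # hypothetical name
  namespace: demo                # policies are namespaced - one of these per namespace
spec:
  podSelector: {}                # empty selector matches every pod in the namespace
  policyTypes:                   # declaring both directions...
    - Ingress
    - Egress
  # ...while specifying no ingress or egress rules means nothing is allowed.
```

Bear in mind that denying egress by default also cuts off DNS, so an exception for the cluster's DNS service is usually among the first you'll have to hand out - a taste of the administrative overhead just mentioned.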
I'll be back with more - stay tuned! (I will probably also delve deeper into cloud-native security topics from a Kubernetes perspective in dedicated articles later. Until then we'll be fine observing the mere tip of the iceberg.)