Koncepts

Now that we know how to speak to Kubernetes, let's start looking at the terms it will understand.

I've made the initial assumption that you already know containers and I'm not backing out of that. (In an ideal world you'll also have heard about namespaces, control groups and capabilities, but I'll generally avoid deep dives throughout this series, so lacking that background is no hindrance.) An application container is still an instance of an image downloaded from a registry, providing a set of files that populate a virtual filesystem in which a process based on an executable therein is run. Kubernetes changes none of that.

The shocking truth is that unlike you, Kubernetes doesn't even know - let alone do - containers. What it deals in instead bears the name "pod", and that is the lowest level of granularity in its world. (You may think of it as an abstraction layer, introduced to provide a unified, implementation-agnostic interface and scope.) Pods, in turn, primarily comprise containers, and Kubernetes knows how to negotiate with the container runtimes it supports to have those managed on its behalf.
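For a first taste of what such an object looks like, here's a minimal Pod manifest - a sketch with illustrative names, using the public nginx image merely for convenience:

```yaml
# A minimal pod wrapping a single container; the names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
    - name: hello
      image: nginx:1.25   # the image a node's container runtime will pull and run
```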

Kubernetes also doesn't change the fact that containerized applications - containers for the sake of brevity - are provided network access by the host running them. Pods are assigned IP addresses by the cluster so that their containers are reachable over the network. These addresses are not static: they are assigned dynamically from host-specific subnets that are all part of the same cluster-wide network. They're internal to the cluster and inaccessible from outside.

You can't run anything outside pods in Kubernetes, and a pod makes no sense without containers - it's therefore safe to state that a pod will have at least one container. Regardless of the number of containers inside, a pod is to be considered a single instance - in certain contexts a replica - of your application. All containers within the same pod share the same IP address and therefore need to bind distinct ports to listen on the network. Pods also contain the volumes their containers require - such a volume may be mounted by multiple containers in the pod. (Internally, containers in the same pod might communicate over the pod's loopback interface or via sockets placed on shared volumes.) Pods cannot span nodes, which means all of a pod's containers are co-located, i.e. running on the same host. Hopefully the now-dissolving mist reveals that in order to achieve fault tolerance one needs to run multiple pods - replicas - of the same application on separate hosts. Loss of a host running a pod implicitly means losing all containers within that pod - were that our single instance, we'd have a service outage.
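The sharing rules above can be sketched as a two-container pod. All names here are illustrative, and the log-shipping sidecar is merely a stand-in for any helper container:

```yaml
# Two containers sharing the pod's IP (hence distinct ports) and a volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}            # scratch volume living as long as the pod does
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80   # must not collide with ports of other containers
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-shipper
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs   # same volume, mounted by a second container
          mountPath: /logs
```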

Speaking of services and outages: I mentioned earlier that objects have statuses, and pods are no exception. Status is determined by executing probes against containers to figure out whether the functionality they implement is up and operable. Probes can execute commands in containers, test open ports or issue HTTP requests. They are optional, but omitting them makes the cluster assume pods are always ready - which, to say the least, simply isn't true. With crucial internal mechanisms pertaining to the availability of your applications relying on this information, it's in your best interest to implement probes.
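As a sketch of what probes look like - the path and port are assumptions about a hypothetical application - a container might declare an HTTP liveness probe alongside a TCP readiness probe:

```yaml
# Illustrative probe configuration; image, path and port are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # hypothetical image
      livenessProbe:               # failure gets the container restarted
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:              # traffic is withheld until this succeeds
        tcpSocket:
          port: 8080
        periodSeconds: 5
```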

There sure is a Kubernetes equivalent of simply running a container - it involves defining a Pod object and the container within. Ultimately a computer in the cluster with a container runtime installed shall have to download the container's image and start the container. In this case we'll end up having a pod run by one of our nodes: a running instance of an application. Should a container in a pod started in this manner fail, it can be restarted automatically. The pod itself, however, shall remain bound to the node it was scheduled to (i.e. started on) - strictly speaking such an instance isn't considered a replica, as it's stand-alone with nothing to take further care of it. A pod is a low-level building block that doesn't implement the logic required for the feature marketed as "self-healing". Pods are therefore usually defined by higher-level constructs able to facilitate scheduling, and it's those you employ in pods' stead.
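The best-known of those higher-level constructs is the Deployment: it embeds a pod template and keeps the desired number of replicas scheduled across the cluster's nodes, recreating pods that are lost. A sketch, again with illustrative names:

```yaml
# A Deployment maintaining three replicas of the pod defined in its template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello        # which pods this Deployment considers its own
  template:             # the pod definition is embedded here
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25
          ports:
            - containerPort: 80
```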

Pods are ephemeral. They come and go, and as a rule of thumb you can't rely on their presence or the IP address you've last seen them have. You can, of course, query their current address from the system, but that address is likely subject to change. The application distributed in the image you've had one of your cluster nodes download and start a container from is running somewhere in your cluster. You shouldn't want to know where exactly it runs - which node it's on, what address it has - unless there's a problem; and while developers might have ways to discern that, actual end users definitely lack both the means and the inclination. They only want to consume an application, with no access to the underlying cluster itself - probably not even aware of the latter. This is addressed by the concept of "exposure" in Kubernetes, which involves creating a Service object and facilitates service discovery by assigning a static address, port and name that abstract the pods. Requests from consumers should target services by name and will ultimately be responded to by pods serving as endpoints backing the service in question. The list of endpoints for a service is maintained by the cluster and comprises the IP addresses of its pods known to be ready to service requests. A service with no endpoints can service no requests and is therefore to be considered down for all intents and purposes.
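Such an exposure can be sketched as a Service selecting pods by label - the name, label and ports below are illustrative:

```yaml
# A Service whose endpoints are the ready pods labelled app=hello.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello        # pods matching this label back the service
  ports:
    - port: 80        # stable port on the service's stable cluster IP
      targetPort: 80  # the port the pods' containers listen on
```

Consumers inside the cluster can then address the application by the service's name - here simply as hello - regardless of which pods currently back it.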

Hang on, there's more. I'll be back!

