Konfined

Most hardware ends up being part of an infrastructure serving a shared pool of resources. Dedicated hardware is mostly limited to personal-use devices, and while those are usually not shared, even they are connected. Networking and multitasking foster efficient distribution of resources as well as simultaneous use of the same system by multiple operators. These users will have varying degrees of privilege, different resource allocations and even different uses for the system, so isolation is crucial. The concept of multi-tenancy in computing environments is nothing new, and the general availability of so-called "clouds" has made that clear. Like most shared systems, a Kubernetes cluster usually has multiple tenants, serving developers working on different products and running a plethora of loosely-coupled applications. The feature implementing this separation is called a namespace.
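
For the record, a namespace is itself just another API object, and a minimal manifest is all it takes to create one. (The name "team-a" here is made up for illustration.)

```yaml
# A minimal Namespace manifest; "team-a" is a hypothetical name.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    team: a   # labels come in handy for selectors in policies and quotas later
```

Feed it to kubectl apply and you're in business.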

First off, it's important to understand that from here on the term is used specifically in its Kubernetes sense - as important as it is for containerization, we're not referring to the namespacing feature of the Linux kernel. It's often said that Kubernetes namespaces should be thought of as virtual clusters an actual cluster is split up into. I don't really endorse this wording, but I can see how it might help one understand what this is actually about, so that's that.

Second, a namespace is not a hard boundary - at least not by default. It provides a facility to which various controls can be applied, but it does not technically separate instances of workloads, meaning they'll all end up on the same network, albeit with DNS names in different subdomains. One can, however, reference namespaces when defining network policies to regulate the flow of traffic between them, and resource quotas can be applied to namespaces to govern the kind and amount of resources they can accommodate. RBAC is normally employed to enforce authorization (control of privileges) for namespaces.
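
To make that a bit more tangible, here's a sketch - names and numbers are made up - of a NetworkPolicy that only admits traffic from pods within the same namespace, plus a ResourceQuota capping what that namespace may consume:

```yaml
# Sketch: restrict ingress into "team-a" to pods of the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: team-a
spec:
  podSelector: {}           # selects every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # pods of team-a itself
        # a namespaceSelector here could admit other namespaces by label
---
# Sketch: cap the total resources "team-a" may request; limits are arbitrary.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```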

Third, while most object definitions require a namespace, not all resources are (i.e. can be) namespaced - first and foremost, this includes namespaces themselves. (Some systems built around Kubernetes ("the platform for building platforms") have introduced a new abstraction via the concept of projects, which are logical containers - organizational units - for namespaces and implement an additional layer of control as well as grouping. There was also some SIG work done on nested namespaces that hasn't been incorporated yet and hasn't seen widespread use.) Nodes - which are also represented by resources - belong in this "non-namespaced" category too, as do ClusterRoles, but let's not get ahead of ourselves. (Read: more on that later. Yes, again. Bear with me.)
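
Don't take my word for it, though - kubectl can list which resource kinds are namespaced and which aren't:

```sh
# Resource kinds that live outside namespaces (Namespace, Node, ClusterRole...)
kubectl api-resources --namespaced=false

# And the namespaced ones, for contrast
kubectl api-resources --namespaced=true
```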

There's always a namespace called "default", which is considered selected whenever an explicit selection is omitted. It's considered good practice not to use this namespace in multi-tenant environments. There's usually another named "kube-system", where the control plane components and their related applications run; this is the cluster administrator's domain and generally off-limits for everyone else. Most application deployments will involve the prior creation and population of their own dedicated namespaces in order to maintain operational security and a sustainable maintenance model. Service accounts are typically application-specific and constrained to resources in their respective namespaces. Individual users are normally grouped and given access exclusively to what they need to work with. Cluster administrators remain the necessary, timeless evil.
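
Here's a sketch of that dedicated-namespace pattern (all names hypothetical): a service account plus a Role and RoleBinding that confine it to its own namespace. Note that a Role - unlike a ClusterRole - is itself a namespaced resource:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-a-app
  namespace: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role            # namespaced, unlike a ClusterRole
metadata:
  name: app-operator
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-operator-binding
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: team-a-app
    namespace: team-a
roleRef:
  kind: Role
  name: app-operator
  apiGroup: rbac.authorization.k8s.io
```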

There are multiple strategies for namespacing depending on use cases and usage patterns as well as system implementation. A single cluster may host multiple development stages but otherwise remain dedicated to a single organizational unit (e.g. a development team) or even a single application, or it may represent one stage of development for multiple applications. (Ephemeral clusters might also be employed in CI environments, rendering most of these concepts meaningless by existing only for the duration of a targeted build job.)
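
Two of those layouts, sketched with made-up names:

```sh
# Layout A: cluster dedicated to one application, one namespace per stage
kubectl create namespace payments-dev
kubectl create namespace payments-staging
kubectl create namespace payments-prod

# Layout B: cluster dedicated to one stage, one namespace per application
kubectl create namespace payments
kubectl create namespace checkout
kubectl create namespace inventory
```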

Since it's easy to end up in situations where one needs to use multiple clusters, it's good to know about contexts. The configuration of your cluster clients defines contexts, each binding a user to a cluster and the current namespace therein. You'll need at least one context configured in order to work at all, which implies specifying a cluster (with its API endpoint URL) and at least one user as well. A context, therefore, tells the tooling which cluster we're working on, as which user, and in which namespace. Switching between these is a frequent chore one should be familiar with - fortunately, great tools exist that can lend us a hand. (I'm using kubectx + kubens and kube-ps1.)
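
For reference, the plain kubectl equivalents of those chores (the context name here is made up):

```sh
# List the contexts defined in your kubeconfig; the current one is starred
kubectl config get-contexts

# Switch to another cluster/user pair by switching contexts
kubectl config use-context staging-admin

# Change the namespace recorded in the current context
kubectl config set-context --current --namespace=team-a
```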

That's all for now, but there's more to say and I'll be back to say it. Stay tuned!

