It is a container orchestrator: a cloud-native application platform with a high level of automation - but for any of this to make sense, a few other terms and phrases need clarifying first... I'm afraid I can't avoid getting slightly long-winded. Bear with me!
What does "cloud-native" mean? Typically, modern software that is packaged as an application container image with a wide range of configuration options, providing observability out-of-the-box. Such applications usually follow the microservice design pattern i.e. they do only a few things - probably even just one - but they do those well, fostering re-use wherever similar capabilities are required. (Why this is beneficial from the SDLC perspective is out of scope in our context.) Multiple instances of the same piece of software are executed simultaneously all across such an environment, serving as versatile cogs of various machinery - probably at a fairly large scale. And they're not alone - such applications usually comprise a number of tiers. This lends itself to the purpose of being used in a similarly modern environment i.e. a so-called "cloud", where pretty much everything is software-defined, scalable, observable and API-driven.
In general terms, Kubernetes is an application cluster - it doesn't do a thing unless made to run actual, containerized applications, henceforth referred to as workload - but it excels at that. It's often said to abstract an entire data center (i.e. lots of interconnected computers) behind an API (actually a set of APIs, but that doesn't make much of a difference) into a homogeneous surface powered by all the processor cores, memory and storage therein. Developers and operators deploy applications by talking to this API alone, without needing to care about implementation or maintenance: Kubernetes takes care of scheduling, connecting, monitoring, scaling and operating this workload, as long as the resources fit into the constraints of the cluster - hardware or otherwise. This entails composing a declarative definition of how our application needs to be run and handing that to the cluster via its API - but not much else from that point onward, as the cluster takes over. It ensures your application is available for consumption as best it can, enduring all sorts of transient environmental and internal faults in a suitably implemented environment. (Developers with a traditional, specialized skill set are encouraged to consume Kubernetes itself as a managed service, so that deployment and operation of the cluster itself is of no concern and one can delve right into the API reference and the end-user documentation of popular clients.) You need to learn how to talk to this API and what to tell it, but with that under your belt, you'll have made a very powerful friend. It's not rocket science, but you do need to get your bearings. (I personally advise a bottom-up approach.)
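To make "declarative definition" a bit more concrete, here is a minimal sketch of a Deployment manifest - the kind of document you hand to the API. The name hello-web and the nginx image are illustrative placeholders of my own choosing, nothing Kubernetes prescribes:

```yaml
# A minimal Deployment: "run three replicas of this container image
# and keep them running". Names and image are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical application name
spec:
  replicas: 3                # desired state: three instances
  selector:
    matchLabels:
      app: hello-web
  template:                  # pod template the cluster stamps out
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any container image would do here
        ports:
        - containerPort: 80
```

Handed to the cluster with `kubectl apply -f hello-web.yaml`, this describes the desired state; from then on the cluster schedules, monitors and, upon failure, replaces the three instances without further instruction from you. That's the whole contract.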
Kubernetes as a system is highly modular and immensely robust - it is, however, no panacea, nor should it turn everything into a nail just because it's a first-class hammer. It's been designed to cover a number of scenarios and application-operation use cases, all of which have a few things in common; of these, scale is pre-eminent. You don't need Kubernetes for low-volume scheduling - it can do that, but there are other tools in the box with a more suitable learning curve and price tag. However, while you may not need Kubernetes to run your workload, you certainly can have a comfortably tiny matchbox of a Kubernetes cluster to spin up and play with in order to take it for a test drive. I'm emphasizing this because not needing it may well be the case - not knowing it, on the other hand, is no less definite a handicap for IT personnel these days.
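As for that test drive: a throwaway cluster on your own machine is a matter of minutes. Assuming you pick kind (https://kind.sigs.k8s.io/) - one option among several, minikube and k3d serve just as well - a single-node cluster is described by a configuration file this small:

```yaml
# kind-config.yaml (hypothetical file name): a single-node,
# disposable cluster for local experiments.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
```

Running `kind create cluster --config kind-config.yaml` brings it up inside a local container, and `kind delete cluster` throws it away once you're done playing.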
Stay tuned - I'll be back to take a closer look at what we're dealing with here in a series of related articles.