Kelsey Hightower from Google was kind enough to participate in a fireside chat with me and my colleague Nicolaj Græsholt during The DEVOPS Conference 2022. Below are some thoughts from Kelsey and me based on that conversation. Be sure to catch the whole discussion and a lot of great material we could not fit in here by watching the full video.

From container orchestration to a control plane

Many of us think of Kubernetes as a container orchestration platform. You say I need three copies, this much memory, and some CPU. And then you are off to the races with some shiny new containers.

But Kubernetes has evolved with the web to become something much more. Think about how the web has changed: not everyone hosting a web server needs a website. We can still use the underlying HTTP protocol to transfer information around, for example through a REST API.

When you think about it, what separates Kubernetes from previous tools such as Puppet, Chef, or Ansible is the concept of state management. This builds on promise theory: you articulate what you want to happen and publish it as data. This is different from scripting, where you write something in Ruby, Python, or Bash and have it executed on the machine.
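The contrast can be sketched in a few lines. This is an illustration only: the imperative "steps" and the declarative `Database` kind below are hypothetical, not a real Kubernetes API or a real provisioning tool.

```python
# Hypothetical illustration of the same intent expressed two ways.

# Imperative scripting (Puppet/Chef/Ansible era): a sequence of steps
# that runs once on a machine and then stops.
def provision_imperatively():
    steps = [
        "install postgres",
        "create database 'orders'",
        "open port 5432",
    ]
    return [f"ran: {s}" for s in steps]

# Declarative state (Kubernetes era): you publish *data* describing
# what you want; a controller is responsible for making it true.
desired_state = {
    "kind": "Database",            # hypothetical kind, for illustration
    "metadata": {"name": "orders"},
    "spec": {"engine": "postgres", "replicas": 3, "memory": "4Gi"},
}
```

The script encodes *how*; the data encodes *what*, leaving the *how* to whatever controller understands the data.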

So you describe a database and hand that description to the API server, whose job is to route it to a controller that watches for and understands the data, and that controller gives you a database. The control loop and control plane that were once used to check and reconcile state for container orchestration are now being applied much more broadly, and this has become the superpower of Kubernetes.

This expands the use cases of Kubernetes to a slew of new and exciting things. Tools and solutions like Crossplane leverage the control-plane functionality to the fullest; at that point, containers are almost an afterthought.

Signs of maturity: catching the Distro Fever

Along with this transformation, Kubernetes is maturing, just like Linux did back in the good ol’ days. 

If you still install Kubernetes from scratch the way you used to roll your own OS distros, we have some news: there is a better way. Just as there are many prepackaged flavors of Linux, such as Red Hat, CentOS, Debian, and SUSE, there are now prepackaged Kubernetes offerings such as GKE from Google Cloud, EKS from AWS, and AKS from Azure. Or there is OpenShift if you want to remain a bit more independent.

There is no need to spend time and effort to build your distro from scratch anymore: with the ready-made "distros," you can get to business and develop your platform much faster.

Taming silos with an API

Let's face it: the world has always been siloed. It has always been like, "give me your code, and we'll run a bunch of scripts and logic, and somehow it'll end up in the box." And if something broke, you gathered together to troubleshoot it, and you'd do that over and over for the longest time.

You need to break the silos, right? Well, yes, but when you think about it, silos are bad only when there is no way they can communicate with each other. It does not feel so isolated when there is an API, even if you are in a silo.

A lot of stuff still works the same way it worked 15 years ago. You write some code, run some tests, create some artifacts, and then let people describe what zones they need. What Kubernetes really changes is the last mile. This means you don't need to teach all your developers Docker, Vagrant, Puppet, or Ansible. Those tools are just a means to an end.

The idea is to allow someone to declare what they need from you and take that information and converge it under the covers. Kubernetes makes that easy. This way, the silos do not matter nearly that much.

This is like having a contract for requesting services and systems, one that implicitly defines how to interact and what to expect. With this definition in place, it feels like there is a promise or a contract with an SLA, and life becomes much more manageable.

Contrasting this approach with something like Terraform may make it easier to grasp. With Terraform (we are simplifying a bit, we know), you'd write some Terraform code in a file on your laptop, and it would only run when you executed it. When your laptop or the central server on which you invoked Terraform is shut down, nothing happens. Terraform can’t do anything until it is invoked again.

Kubernetes takes a different approach. There is always a controller, 24/7, looking at your desired state and trying to converge it with reality. The system feels self-healing as it understands that there is no point where you do not want the declared state to be true.
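The "always converging" behavior can be illustrated with a toy reconciliation loop. Everything here is a sketch: real controllers watch the API server for events rather than polling an in-memory dict, and the replica-counting logic below is hypothetical.

```python
# A toy reconciliation loop over a pretend in-memory "cluster".
desired = {"replicas": 3}
cluster = {"replicas": 3}

def reconcile(desired, actual):
    """Nudge actual state toward desired state, one step at a time."""
    if actual["replicas"] < desired["replicas"]:
        actual["replicas"] += 1   # start a missing replica
    elif actual["replicas"] > desired["replicas"]:
        actual["replicas"] -= 1   # stop a surplus replica
    return actual

# Simulate a node failure taking out two replicas...
cluster["replicas"] = 1

# ...and the controller running its loop until reality matches intent.
while cluster != desired:
    cluster = reconcile(desired, cluster)

print(cluster)  # {'replicas': 3}
```

Because the loop never stops, a failure at 3 a.m. is handled the same way as a deliberate change: the declared state stays true without anyone re-running a script.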

Kubernetes hearts pipelines (and vice versa)

One of the great things about Kubernetes is that you can go from Infrastructure as Code (IaC) to Infrastructure as Data (IaD). 

While the difference may not sound huge, you get some significant benefits, such as the ability to build a pipeline.

With code, it's hard to build pipelines because one tool can't parse the syntax of the scripting language another tool uses. But when you turn infrastructure into data, you can do really nice things, such as adding policies into the mix, which is a significant improvement in managing resources effectively and securely.
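Here is a minimal sketch of why data makes this possible. It mimics the idea of a policy engine such as OPA without using its actual Rego language: because the manifest is just a dict, a policy is just a function over it. The field names and rules are invented for illustration.

```python
# Infrastructure as Data: a manifest is plain data any tool can inspect.
manifest = {
    "kind": "Deployment",
    "metadata": {"name": "web", "labels": {}},
    "spec": {"replicas": 1},
}

# Hypothetical policies: return True when satisfied, else a violation.
def require_team_label(m):
    return "team" in m["metadata"]["labels"] or "missing 'team' label"

def require_min_replicas(m):
    return m["spec"]["replicas"] >= 2 or "fewer than 2 replicas"

def evaluate(m, policies):
    """Collect every violation before the manifest is ever applied."""
    return [v for p in policies if (v := p(m)) is not True]

violations = evaluate(manifest, [require_team_label, require_min_replicas])
print(violations)  # ["missing 'team' label", 'fewer than 2 replicas']
```

A pipeline can run a gate like this on every commit; the same check would be nearly impossible if the infrastructure lived only in imperative scripts.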

Stateful or not stateful – is it a question?

Some people would like to wrap everything inside Kubernetes. And, to be sure, even stateful services can run inside Kubernetes, but Kubernetes can only meet you halfway.

Most SQL databases, such as Postgres and MySQL, like stability; they are not designed to tolerate shifting IPs and random machine failures. If you want to run those types of databases in Kubernetes, you’ll need some help from an operator such as Vitess, a database clustering system for horizontal scaling of MySQL.

The other option (when available) is to use managed services for databases and configure your applications running in Kubernetes to use them. Consuming databases as a managed service can also lower the overhead of running several, which encourages keeping your blast radius small and minimizes the fallout should things go wrong.

Remember: Kubernetes is only as good as the infrastructure it’s running on

A question we hear sometimes is whether Kubernetes can be run on-prem. The answer is yes, sort of. The real question is: should you?

It really depends on your needs and your infrastructure: Kubernetes is only as good as the infrastructure it's running on, and on-prem Kubernetes can only be as good as the underlying compute platform it's installed on. If you want to run Kubernetes on-prem, ask yourself whether you are in a position to provide services on the same level as the cloud providers. Kubernetes assumes an IaaS layer below it that provides machine automation, including access to storage and networking. So, don't overlook the boring stuff.

Adding on to Kubernetes 

Many people would like to see additions and improvements to Kubernetes, which is a bit like wanting to have your cake and eat it too.

One common request is a more versatile networking stack. Kubernetes is opinionated in this respect; for example, it likes one dedicated IP per pod. We only recently got to the point where you can have dual IP stacks for IPv4 and IPv6.

Kubernetes is also not meant to be a platform but a way to build platforms. Pods and RBAC are core concepts that are pretty solid and mature. The additional parts, like admission controllers, advanced network controllers, etc., are the parts that are still evolving as we learn.

The incredible CNCF landscape buffet menu

While on the topic of adding on to Kubernetes, you should not forget the CNCF (Cloud Native Computing Foundation) landscape. It is a very comprehensive resource (that just might be the understatement of the year), and there is a lot of great stuff there.

But you must remember that it is just a colossal buffet menu, not a list of recommended or 'should be included' items. Kubernetes was a set of decisions on how to do things. CNCF has multiple options for similar things, many of them still being developed and maturing.

Kubernetes allows people to define new workload types, such as highly available Redis clusters, using operators. Unfortunately, people tend to build operators with a hard dependency on Kubernetes. Instead, operators should be designed independently of Kubernetes: the logic of, say, creating a Redis cluster should be portable and exposed through an API, which can then be integrated with Kubernetes using CRDs. This way, things stay more flexible.
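The separation can be sketched as a pure core function plus a thin adapter. Every name and field here is hypothetical, for illustration only, and is not how any real Redis operator works:

```python
# Portable core: no Kubernetes types anywhere in the domain logic.
def plan_redis_cluster(name, replicas):
    """Decide which nodes a cluster of the given size needs."""
    primary = f"{name}-0"
    followers = [f"{name}-{i}" for i in range(1, replicas)]
    return {"primary": primary, "followers": followers}

# Thin shim a CRD controller could call. Any other platform could ship
# its own adapter over the same core function.
class KubernetesAdapter:
    def handle_custom_resource(self, resource):
        return plan_redis_cluster(
            resource["metadata"]["name"],
            resource["spec"]["replicas"],
        )

plan = KubernetesAdapter().handle_custom_resource(
    {"metadata": {"name": "cache"}, "spec": {"replicas": 3}}
)
print(plan)  # {'primary': 'cache-0', 'followers': ['cache-1', 'cache-2']}
```

Only the adapter knows about custom resources; the Redis logic itself could just as easily sit behind an HTTP API or a CLI.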

So, what is dope at CNCF?

At the end of the chat, we wanted to put Kelsey’s “memefied” dope rating system of dope, pretty dope, extra dope, and super dope through its paces and asked Kelsey to rate some interesting CNCF projects. Here are some of his picks (for more good dope, watch the video…):

Vault (secrets management): super dope

ArgoCD (declarative GitOps): super dope

KEDA (event-driven autoscaling with a plethora of plugins): extra dope

The super-dopest project of all (other than Kubernetes itself) is Open Policy Agent (OPA). It doesn't get enough credit for what it does. Along with Infrastructure as Data, OPA allows us to apply policy before things are checked in and applied.

Quick recap

We discussed many topics around Kubernetes and where it is in 2022 during the fireside chat, but if we had to summarize everything into three key points, they could be something like this:

  1. Don’t think of Kubernetes as just a container orchestration platform anymore. It has evolved to become a control plane that you can use in a much broader manner – and this has become the superpower of Kubernetes.
  2. Kubernetes helps you transition from Infrastructure as Code to Infrastructure as Data and truly unleash the power of pipelines.
  3. While you can run even stateful services inside Kubernetes, think twice whether that is wise. Decreasing overhead with managed services can help keep things simple.

Overall, we are stoked about how far Kubernetes has come since its inception in 2014. And we are excited about how far it can go in the years to come!


Published: May 24, 2022
