
Kubernetes on the Edge

In this episode, Darren Richardson and Pinja Kujala are joined by Sofus Albertsen, a DevOps Advocate and Kubernetes expert, to discuss Kubernetes in Edge computing, covering topics from planet-scale orchestration to GitOps.

[Sofus] (0:02 - 0:10)

All of a sudden, all of the same technologies and the same tools are now available right out at the forefront of your business.

[Darren] (0:14 - 0:22)

Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.

[Pinja] (0:22 - 0:32)

Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.

[Darren] (0:37 - 0:47)

Welcome back to the DevOps Sauna. Today, we're joined by Sofus Albertsen from our Copenhagen office, our DevOps advocate and Kubernetes expert. Hi, Sofus.

[Sofus] (0:47 - 0:49)

Hey there. Good to meet you. 

[Darren] (0:49 - 0:51)

And of course, we're also joined by Pinja.

[Pinja] (0:51 - 0:54)

Hello and welcome, Sofus. I hope you're doing well today.

[Darren] (0:54 - 1:07)

Yeah, the sun is shining, so you can't be anything other than happy. Okay, Sofus, we hear you've been working on something interesting over in Copenhagen.

Why don't you tell us about your work bringing Kubernetes to Edge computing?

[Sofus] (1:08 - 1:59)

Yeah, so to bring the story back just a bit: we have quite a lot of competencies within Kubernetes in the Denmark organization, and we can see that the adoption of Kubernetes across multiple different customers is very good and impressive. But where we are lacking common standards, things get hard, right?

And Edge is one of the places where things really do get complicated and hard. So we sat down a year ago to think about how we could take something that is built for planet scale and bring it to tiny, tiny Edge devices and locations. And is it actually a good idea?

Just because you can doesn't mean that you really should, right? But we've tried it, and with great success, I think.

[Pinja] (1:59 - 2:22)

So let's go down to the basics of Edge computing. Somebody like myself has heard the term, and I am aware of Kubernetes, and at the same time we talk a lot about the cloud native movement. So if we really dig deep into the definition of Edge computing, what is it, Sofus?

What does it mean?

[Sofus] (2:22 - 3:26)

Because Edge computing has not been defined by academia, there are a lot of different definitions out there. The one I like best is to look at your IT landscape as going from the cloud, where you have infinite resources, infinite bandwidth, and all that, where as long as you have money, all things can happen, all the way to your Philips Hue light bulb at the other end, which is an IoT sensor.

Then, in between those two, you have this Edge layer, where you have limited compute. It doesn't need to be small compute, but it has finite resources. And it needs to be rather close to the source, to wherever it gets consumed.

So Edge can mean anything from your Philips Hue Bridge, which connects your light bulbs; there, I don't think you will run Kubernetes anytime soon. But from an industry perspective, it's more like having one, two, three, maybe a couple of servers running at a place closer to where the data gets generated.

And that's where we focus on our Edge journey here.

[Darren] (3:26 - 3:43)

And I think typically it's mostly, I wouldn't say isolated, but it mostly focuses on industry, correct? So it's getting closer to things that physically interact with environments, with systems, with control systems for manufacturing, that kind of thing? Sure, yes.

[Sofus] (3:43 - 4:22)

I think the reason why I'm a bit hesitant around the definition is that AWS, for example, has an Edge location in Copenhagen. And yes, it's not a fully fledged data center like Stockholm or any of the others, but it's still massively over-provisioned in terms of how I look at what an Edge location is. So Darren, I take your definition.

And yes, definitely. It needs to be close to where things are happening. And usually you don't have a server room.

You don't have all these luxuries of a fiber connection and double backups and all these different things. And that's where things get tricky.

[Darren] (4:22 - 4:46)

Yes, it kind of feels like that Edge location would be more like part of a CDN, so content delivery. To me, Edge is also, if we're talking about something like smart cities, sitting in the apartments with the people. That's where you have that infrastructure.

It's not a small data center instead of a large data center, which I think we could both argue is the wrong label.

[Sofus] (4:46 - 5:49)

So for me, if you look at Edge from a technical perspective, you can see that Edge has three different scenarios. It can just be remote, but still have a full internet connection, still have all the bells and whistles. For me, it's irritating to place a switch 3,000 kilometers away from you, but if the internet connection is there, then it's easy enough.

Then you have another scenario where you have a flaky internet connection, either because it's dropping due to quality, or simply because you are moving. The Edge location can be a moving target, like a container ship or a wind farm, where you don't necessarily have a stable internet connection.

And last but not least, Edge scenarios can also be totally air-gapped environments, where you do not have an internet connection, or you can't rely on one, and you need to manually reach the air-gapped environment every time you need to get data out of or into the system.

[Darren] (5:50 - 6:04)

That's interesting. I think the definitions get clearer as they go along, with the first one still blurring the lines a little. So let's talk about the Edge native meetup that just happened in Denmark.

Yeah.

[Sofus] (6:04 - 7:19)

So again, looking at all of this and at how companies originally have put software out on these Edge devices, it seems to me, at least from a cloud native perspective, that we have reinvented the wheel over and over again. We've created custom solutions, either built by individual companies or bought as specialized software, to deploy things on Edge. But it still means that we are lacking standards, we are lacking the right way of doing things, and we don't have these operational patterns ingrained in how we do things.

I could see an emerging market for putting Kubernetes on the Edge, slowly starting up one or two years ago. And now we're trying, in Copenhagen and in Denmark in general, to bring together the companies that are interested in these solutions, trying to figure out a different way of deploying and maintaining software, and to have a discussion. That's also the reason why we're calling it a meetup or an experience group: we want to share knowledge from the get-go.

It doesn't necessarily only need to be people that have succeeded in this before. We also want to have the stories from people that are still researching this.

[Darren] (7:19 - 8:21)

There is actually, I would say, an interesting shift, because we did hear a lot about the move to cloud, and now, with the move to Edge, in some cases it seems a little bit like pulling it back. So we may have this seesaw effect.

But the idea of putting software on the Edge is not new. So can you talk about why Kubernetes? I'm not an expert like yourself, I've only used it on occasion, but Kubernetes, if I'm correct, was based on Google's Borg platform, which Google used to run everything. And that's the scale we're talking about for Kubernetes, right? It's designed to run software at that scale, and you're now talking about bringing it down to extremely small, limited hardware.

So it begs the question: why?

[Sofus] (8:22 - 10:34)

Yeah, so again, just because you can doesn't mean that you should. So you're definitely right. I mean, the tagline of Kubernetes is Planet Scale, right?

That's how it was engineered and architected. But when you really think about what you gain from a planet-scale orchestration system, many of the same things are true on a very small Edge device as well. Like, you want a way to get high availability.

If you are on a ship, you want to make sure that just because one of your hardware pieces dies, the others will take over flawlessly, right? That's the same at planet scale as at micro scale or Edge scale. The other part is that you also want this reconciliation of the software, saying, okay, if something goes wrong, we need to pull ourselves back into a state that is working, one way or another.

And I think the third one, for me, is all the operational know-how that is built into Kubernetes, along with all the tools that are beautifully integrated with one another; many of them we can still use at the Edge as well. Now, one of the things that has been problematic, at least from my point of view, is that the people who are operating things physically out there on the Edge don't necessarily have a PhD or even a Master's degree in computer science, right? They might be sailors, they might be oil rig workers, they might be people with a totally different set of skills.

So, having them go into a Linux terminal and do an apt-get upgrade on a node would be insane, right? That won't fly. So, we need a way to do easy maintenance once it's out on the Edge.

We can do whatever we want as long as we're having it in our headquarters. But when we ship it out, we either need to have remote access to it or an easy way to maintain it afterwards. And that's, to me, where some of the new interesting operating systems that are built solely for running Kubernetes get into play.

[Darren] (10:35 - 11:42)

Okay, so there's one question that, to be truthful, did occur to me when I was working with Edge-based systems, in exactly the way you describe, on vessels. The first question I asked was: why Kubernetes? Why not use just an image registry and plain Docker, because they're simpler?

And I think one thing we need to add here is scalability. The Edge is restricted and limited, but we do need it to be scalable as well. So if you're thinking, why not just use a Docker image, like I was when starting this, you run into the situation where it's great to orchestrate on a single node, but when you start orchestrating across multiple nodes, and it might just be four nodes located in different parts of a ship for different sensors, that's where you start needing Kubernetes, even though it does add complications.

The simplicity of something like Docker just doesn't hold up when scaling it, in my experience, to multiple systems.

[Sofus] (11:42 - 13:24)

Yeah, Docker is awesome for local development, and Docker images are awesome for packaging, but it's not awesome for orchestration. And as you said, when dealing with resilience, you can either buy premium hardware that has redundancy built in, so you have hardware RAID, you have multiple power supplies, you have multiple backup power systems, etc. You can go that route, and we have done that for multiple decades, right?

Going in, making sure that we have the best hardware possible. But if you look at the angle we're taking in data centers, we're actually moving away from this premium-built hardware and saying, okay, we want the resiliency, we want fault tolerance, we want all of these things, but we want it at a cheaper level, and therefore to be able to scale more. And that's, I think, where Kubernetes comes in, because much of this has been ingrained in software instead of in hardware.

So by deploying things with Kubernetes, you get resiliency built into the software instead. There might be regulatory rules that state that you need specific hardware; that's super fine.

But if you don't have that, you can actually just use commodity hardware, like Intel NUCs, running things on the Edge, and they are so much cheaper than what you normally would deploy out there. And if one of them dies, well, you just need an operational way of plugging in a new one. It gets auto-provisioned, and everything just works.

That's the beauty about Kubernetes and the Edge scenario.
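The software-level resiliency Sofus describes can be sketched as a plain Kubernetes Deployment. This is an illustrative fragment, not from the episode; all names and images are invented:

```yaml
# Hypothetical edge workload. With two replicas spread across nodes,
# Kubernetes reschedules the workload when one cheap box dies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-aggregator        # illustrative name
spec:
  replicas: 2                    # survives a single-node failure
  selector:
    matchLabels:
      app: sensor-aggregator
  template:
    metadata:
      labels:
        app: sensor-aggregator
    spec:
      affinity:
        podAntiAffinity:         # keep the replicas on different nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: sensor-aggregator
              topologyKey: kubernetes.io/hostname
      containers:
        - name: aggregator
          image: registry.example.com/aggregator:1.0   # illustrative
```

If a node fails, the scheduler recreates the lost pod on a surviving node; plugging in a replacement machine simply restores capacity.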

[Pinja] (13:24 - 13:42)

If we think about the topicality of this and of Edge computing, I've come to think about immutable operating systems and the maintenance of them. And we know that Kubernetes is over 10 years old. So why has this become a thing now?

What is the reason why it's so topical at the moment in 2025?

[Sofus] (13:43 - 15:56)

Yeah, so not to keep coming back to the cloud, but we can take a lot of learnings from what we did in the cloud and apply them to the Edge, right? The reason why we have managed Kubernetes on every cloud provider out there is that it's hard to stand up and troublesome to keep maintained, right? So it's a clear win for people to just say, I want an AKS or an EKS or whatever it is.

Two clicks of a button, and you get a cluster. You don't even need to know all the intricacies of setting it up. On the Edge, we don't have that luxury.

So we need to take all these different things into consideration. And when running Kubernetes, we're actually building a platform to run things on top of. So come to think of it, we don't need an entire operating system underneath that can do all the same things.

We don't need SSH. We don't need all these different libraries and all of that. And that's where immutable operating systems come in, because they have a super minimalistic way of going about it.

So one of the flavors that I personally love is Talos. It has 12 binaries, and that's it, right? The one keeping Kubernetes afloat is the 12th.

And you only have two different packages that you can upgrade: you can either upgrade the operating system, or you can upgrade Kubernetes.

That's it. Compare that with a regular Ubuntu, which, again, is a fantastic operating system, nothing wrong with that, but it's 12,000 packages or something like that, right?

So you go from 12,000 packages that you need to maintain, where you have to make sure everything works, because if you upgrade one of them and your computer breaks, then you're stuck, down to only two, where testing the combinations is just so much easier. With immutable operating systems, you also get this dual-bank setup, where you don't upgrade the operating system that you're running.

You actually write a new version of the operating system to a different partition on your computer. What that means is that if that one is faulty, for whatever reason, your computer just restarts and boots back into the old one, so that at any given point in time you always have a ready partition with an operating system that works.
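In Talos terms, the two upgradable pieces map to a declarative machine configuration. The sketch below is a hedged fragment; the endpoint and version numbers are illustrative, and the `talosctl` commands named in the comments are the usual upgrade path:

```yaml
# Fragment of a Talos machine configuration. The OS itself is just an
# image: `talosctl upgrade` writes the new image to the inactive
# partition and reboots into it (the dual-bank setup), while
# `talosctl upgrade-k8s` handles the second "package", Kubernetes.
machine:
  install:
    image: ghcr.io/siderolabs/installer:v1.7.0        # illustrative version
cluster:
  controlPlane:
    endpoint: https://edge-cluster.example.com:6443   # illustrative
```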

[Darren] (15:56 - 16:34)

And these are great things from a compliance standpoint, from an availability standpoint. But as a security professional, the idea of an operating system with only 12 packages makes me happy. Like, the massively reduced attack surface.

Kubernetes was already, for a lot of people, a bit of a mystery. So having a backbone which is hardened down to the absolute essentials, that sounds like a real win. Not even just for Edge, but for Kubernetes in general, to have things running on Talos instead of on Ubuntu.

As you say, a great operating system, massively expanded attack surface.

[Pinja] (16:34 - 17:22)

And I'm thinking about this from the perspective of a whole development organization: the cost of development and, let's say, the business case. Why should somebody in business, for example, be interested in moving into this? So again, you're saying it's only 12 binaries, so maintainability is perhaps one thing. But think of the hardware that we need to be able to manage with this, for example, if we need to deploy on relatively small hardware. You mentioned the light bulbs previously; we were talking about switches as well.

So what is the relationship between having a micro-level installation with a Kubernetes setup and, let's say, the business case around it? Does it allow us to do even smaller things?

[Sofus] (17:22 - 19:16)

So I don't necessarily think that we're going into a race to get Kubernetes running on a microcontroller, because Kubernetes, one way or another, is a large platform, right? So I think you need to separate those two things and say, okay, do you have sensors? You have microcontrollers controlling doors, pumps, valves, sensors, whatever you have, right?

Those are hard to program, hard to deploy, hard to update. And Kubernetes won't necessarily help you with that part because they are embedded in some kind of device, right? But they are all emitting data.

And the more we get into the Matter protocol and all the other things where everything is connected, the more we need an intermediate step for aggregating data and doing local compute, either because it takes too long to go to a real server, or because we can't, or any of the other scenarios that we talked about before, right? So we need something there. One of the talks that we had at the meetup was from a company called JYSK.

Many of us probably know them, or sleep on their products on a regular basis, right? And hearing them say: before, we had different means of deploying things, but it was always troublesome to deploy something new at a store, right? Because a store is also an Edge location.

It also has internet, right? It also has sensors and all of that. But if you look at that and then all of a sudden put Kubernetes on top, then the developers can develop for the Edge in exactly the same way as for the cloud.

So all of a sudden, from an operational point of view and from a software development lifecycle point of view, whether you're deploying to Google or to your Edge device, the deployment process is the same.

[Darren] (19:17 - 19:34)

You raise an interesting point there about the business case. At the start, I was thinking you basically have two cases: industrial, and then smart homes. But the idea of stores as Edge locations massively expands the reach of something like this.

[Sofus] (19:35 - 20:27)

I mean, if you look at how many cameras there are in a typical supermarket, having all of them carry their video feed directly to a centralized server is feasible in some places, but in other places you don't have that. So you want a centralized place, and then you can have a specialized appliance for video. But what if you want to do real-time analytics around behavior, around travel paths, around all these different things?

Then you don't need to upload all your video to a centralized server. You can just deploy that analytics software on-prem, on the Edge, have it crunch the numbers, and send the results back to your centralized place, vastly limiting the need for a high-bandwidth connection that is probably both costly and maybe also hard to maintain.

[Darren] (20:28 - 20:52)

It raises one of our favorite topics on this podcast, too: the idea of more locally running AI. While it's probably not possible to train models in Edge situations, they can certainly be trained in the cloud and deployed on the Edge. I'm guessing those are the kinds of analytical models you'd be running, like real-time analytics using AI. It opens that up as a potential use case as well.

[Sofus] (20:53 - 21:30)

Since you have Kubernetes as the underlying platform (I don't want to call it an operating system, because it is not one; it's a platform you deploy your applications onto), you have a standardized way of doing things.

Then you can easily deploy and test something out for, let's just say, 10% of all your stores, or 10% of your fleet of whatever it is. You can do A/B testing, you can do canary deployments, all these different things that we know and love from the cloud and the web. All of a sudden, all of the same technologies and the same tools are available right out at the forefront of your business.
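The "10% of your stores" rollout Sofus mentions is often expressed with an Argo CD ApplicationSet that targets only the clusters carrying a canary label. A hedged sketch, with all repo URLs, labels, and names invented for illustration:

```yaml
# Hypothetical canary rollout: only clusters registered in Argo CD with
# the label rollout-group=canary (say, 10% of stores) get version v2.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: store-analytics
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            rollout-group: canary      # the 10% cohort
  template:
    metadata:
      name: 'analytics-{{name}}'       # one Application per cluster
    spec:
      project: default
      source:
        repoURL: https://git.example.com/store-apps.git  # illustrative
        targetRevision: v2.0.0         # the candidate release
        path: analytics
      destination:
        server: '{{server}}'
        namespace: analytics
      syncPolicy:
        automated: {}                  # pull-based reconciliation
```

Promoting the release to the rest of the fleet is then just a matter of widening the label selector.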

[Darren] (21:31 - 21:57)

Let's talk about something that I think goes hand-in-hand with this. It's been a bit of a buzzword over the last few years, but GitOps is a thing that comes up quite regularly. Now, my experience is that GitOps is pretty much mandatory for Edge-based deployments, but Sofus, you've been working with these stores that are more sensibly connected.

So, shall we talk for a moment about GitOps? Sure, yeah, let's do that.

[Sofus] (21:57 - 24:03)

So basically, yeah, GitOps has hopefully gone over the hype curve now and into real adoption. I think from the start it was a brilliant way of managing fleets of units, wherever you have them, with a centralized place where you can see, okay, what is it that we want to deploy? So again, if we go back to these scenarios of fully connected, flaky internet, and air-gapped, then fully connected and flaky internet both have internet at some point, right?

So there, something like Argo CD, which is a GitOps tool, is perfect. It might be that you need to tweak some things. For example, one of the companies that we have talked with is installing this on shipping vessels, and when they are in harbor, they have a fantastic internet connection, but as soon as they are out at sea, they might be going down to 2 kilobytes per second, right?

And imagine downloading anything other than a Hello World application through that kind of satellite connection; it's just not feasible. So they need to make sure that every time they upgrade something, they have everything they need on-prem, on the Edge, in order to do the upgrade. So they need to separate downloading a new version of their entire tool stack from deploying it.

And there, you can still utilize tools like Argo CD, together with other tools that are out there, like an image puller, where you say, okay, the first job is to pull down the images. You are not going to start anything yet; you need to pull down all the images before you even start. And when you have pulled down all the images, you are very welcome to start them, even though you are on the high seas, because you know you have everything you need to deploy this package.

So Argo CD, or GitOps in general, is, I would say, perfect, almost mandatory, when you are doing Kubernetes on the Edge. Except when you are air-gapped, because an air-gapped environment does not have a pull connection like that. That is a different scenario that we can go into if you like.
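The "download everything first, start afterwards" separation Sofus describes is commonly sketched as an image pre-puller: a DaemonSet whose only job is to get the new release's images cached on every node while the ship still has bandwidth. All image names here are illustrative:

```yaml
# Hypothetical pre-puller. Each init container forces the node to pull
# one image of the upcoming release and exits; the pause container keeps
# the pod alive so the images stay in the local cache. Deploying the
# release itself can then happen later, even at sea.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepull-v2
spec:
  selector:
    matchLabels:
      app: prepull-v2
  template:
    metadata:
      labels:
        app: prepull-v2
    spec:
      initContainers:
        - name: pull-analytics
          image: registry.example.com/analytics:v2.0.0  # illustrative
          command: ["/bin/true"]   # assumes the image ships /bin/true
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9   # tiny, does nothing
```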

[Pinja] (24:03 - 24:18)

But in other situations, a pull-based way of working would work better than push. So I guess, as you say, when you are somewhere out at sea, in, let's say, hard-to-reach areas, this would work perfectly.

[Sofus] (24:19 - 25:52)

Yeah, and again, GitOps only does things when you have an internet connection, right? It keeps trying and trying. And if you do not have an internet connection, then nothing changes.

Even though you said in your Git repository that you want the newest version, well, if nobody asks the repository for changes, then changes are not going to be made. When we come to the air-gapped scenarios, it is a little bit harder, because we do not have that direct connection.

But luckily, people have still tried to deploy and work with Kubernetes in these air-gapped environments. And it is not a company but an organization called Defense Unicorns that has actually done this. Basically, the US has quite a few submarines, and apparently some of them are also running Kubernetes.

So they needed a way to deploy new things while being, well, maybe water-gapped rather than air-gapped, but basically without any internet connection. And they have created tools that work beautifully with OCI images, which are the foundation of how Kubernetes runs containers, and beautifully with Kubernetes itself.

They have made sure to decouple the responsibilities of the DevOps engineer or the release engineer from the sailor who needs to do the upgrade on the submarine. And once you have fixed it for a submarine, you have actually fixed it everywhere you have an air-gapped solution. And since it was a university collaboration, all of the tools are available as open source.
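One of those open-source Defense Unicorns tools is Zarf, which bundles container images and manifests into a single archive you can carry across the air gap on physical media. A hedged sketch of a package definition, with all names and paths invented for illustration:

```yaml
# Hypothetical zarf.yaml. `zarf package create` builds a self-contained
# tarball that includes the container images; on the disconnected side,
# `zarf package deploy` pushes them into the cluster without internet.
kind: ZarfPackageConfig
metadata:
  name: edge-stack
components:
  - name: analytics
    required: true
    images:
      - registry.example.com/analytics:v2.0.0   # bundled into the package
    manifests:
      - name: analytics-manifests
        files:
          - manifests/deployment.yaml           # illustrative path
```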

[Darren] (25:52 - 26:09)

It is beautiful. I do not think it gets more extreme Edge than a submarine. Perhaps other planets; maybe if they tried to do GitOps on Mars, that might be a little more extreme.

But yeah, once you have solved it for submarines, at least everything else on this planet should be possible.

[Sofus] (26:09 - 26:14)

Yes, I think that could be a good tagline. Kubernetes works on this planet.

[Darren] (26:15 - 26:23)

Okay, I think that is all we have time for today. Thank you for joining us, Sofus. It has been a delight to talk to you.

Thank you so much for having me. And thank you, Pinja.

[Pinja] (26:23 - 26:26)

Thank you. And thank you, Sofus, so much for joining us here today.

[Darren] (26:26 - 26:29)

And we hope you join us next time on the DevOps Sauna. Goodbye.

[Pinja] (26:34 - 26:39)

We'll now give our guest a chance to introduce himself and tell you a little bit about who we are.

[Sofus] (26:39 - 26:47)

Hello, my name is Sofus Albertsen. I am a DevOps Advocate and a Kubernetes Expert. And I live in Copenhagen, Denmark.

[Darren] (26:47 - 26:50)

I'm Darren Richardson, Security Consultant at Eficode.

[Pinja] (26:50 - 26:55)

I'm Pinja Kujala. I specialize in Agile and portfolio management topics at Eficode.

[Darren] (26:55 - 26:57)

Thanks for tuning in. We'll catch you next time.

[Pinja] (26:58 - 27:06)

And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.
