The Cilium project, best known as a networking plugin for Kubernetes, has been around for a few years now, but it has just released service mesh functionality.
In this podcast episode, we've invited Liz Rice who is the Chief Open Source Officer at Isovalent, creators of the Cilium cloud native networking, security, and observability project, for a technical conversation around Cilium Service Mesh.
The order in which things happen as you bring up a pod can get messed up if you have sidecars. And maybe you haven't just got your service mesh sidecar: you might also have logging and security tooling. There's a whole series of different tools that have traditionally been implemented using sidecars, and they can get in each other's way.
Welcome back to DevOps Sauna. We've got Marc and Andy here today.
We're just back from summer holiday in Finland. It was a glorious summer. And just before we left, we recorded a podcast with Liz Rice talking about Cilium. What's been happening with Cilium, Andy?
We wanted to get this podcast out now because Cilium has just released 1.12, which includes the service mesh, which is what we were speaking with Liz about.
This one made me nervous. I'm used to talking about people and cultures. And I can talk about a lot of technical things, but this went all the way into the kernel. It was a little bit outside my comfort zone, but I think we had an exciting conversation. How did it make you feel, Andy?
Generally, I'm more comfortable with the technical things than you are and dealing at a slightly lower level is my jam. But I do have to admit that talking with Liz brought a little bit of trepidation because she's just such a powerhouse.
All right, without any further ado, let's tune into the podcast.
Hi, it's Marc, and welcome to DevOps Sauna. We have a really interesting guest today. We have Liz Rice, the Chief Open Source Officer at Isovalent. Hi, Liz.
Hi. I may have come in a bit early there.
Completely cool. Cilium has recently expanded its network overlay provider to become a full-fledged service mesh running without a sidecar. And we're really excited to have you on the program here today. Andy has been really excited, but to me Cilium has been this flower-child tree hugger, gluten-free thing that we put into our baking, but I don't think that's why Andy is quite so excited. I can tell, Liz, you're really excited as well. So please tell us about Cilium.
Yeah, the Cilium project has been around for a few years now and it's probably best known as a networking plugin for Kubernetes. It provides networking connectivity, and security, and observability for connecting your pods. And it's based on a kernel technology called eBPF. Essentially, we can run parts of Cilium within the Linux kernel and that enables us to do a lot of networking things very efficiently and to be able to observe network packets incredibly efficiently and from that build this identity-aware picture of how different Kubernetes components are connected, how your different application pods are connected, how different services are communicating together. And more recently, we've extended that to support a lot of service mesh features. So around the end of 2021, we released a beta program for Cilium Service Mesh. And that's been really exciting to see the reaction to that.
And so my introduction to Cilium was that I was trying to deploy a billing system for a telecom provider inside of Kubernetes, with all the things that come along with that. And one of the requirements was that all the networking traffic had to be encrypted. So we were using Istio for our ingress and we just said, "Okay, let's make this mTLS enabled." And everything went fine, except we had sidecars everywhere, and the sidecars had to start up before the network traffic would open, and the applications didn't always like that. And they wouldn't come up on time. And we had a lot of issues with resource constraints due to the sidecars everywhere and the timing of things coming up. And it just didn't go as we had hoped. So I started searching around for what we could do, and I found Cilium. And I heard, "Okay, hey, I can get rid of the sidecar and still have an encrypted network. Fantastic." Put it in, and it worked as advertised. Everything was great. And then I noticed that, wait a second, they also have this thing called Hubble, which comes with Cilium. What do I get? Oh, look at this! With Hubble I was able to see that this application is sending this packet to that application, which is sending this packet to that endpoint, and I was able to visualize everything that was happening in the network, and it brought so much value. I used it, I loved it, it was fantastic. And then I heard about this beta. And I started playing around with that a bit. And I'm really excited for this to come out of beta.
This is so great to hear. Thanks, Andy.
Have you had many other reactions from people using the Service Mesh beta?
Yeah, a lot of the reaction we've had very much echoes what Andy said there. So people find sidecars resource hungry. If you have an instance of the network proxy in every single pod, then every one of those proxies has its own copy of the routing information. Every single one has its own executables. It all adds up. And if the routing information is substantial, if you've got a large set of endpoints to connect to, then having that routing information duplicated in every single pod can be a significant amount of memory. But I think I was even more surprised at the number of people who were more concerned about the complexity of sidecars, and exactly what you were referring to there, Andy, about how the order in which things happen as you bring up a pod can get messed up if you have sidecars. Maybe you haven't just got your service mesh sidecar: you might also have logging and security tooling. There's a whole series of different tools that have traditionally been implemented using sidecars, and they can get in each other's way.
Exactly. You can't just say, "Let's put the sidecar first in the list of containers so it starts up first, because it needs to run all the time." So you have to somehow figure out how to make your application wait for the other containers to be ready before its container starts. And it's a headache.
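As a concrete illustration of the workaround Andy describes (not something shown in the episode), Istio exposes a per-pod annotation that asks the injected sidecar to hold application containers until the proxy is ready. The pod name and image below are placeholders, and the annotation behavior should be verified against your Istio version:

```yaml
# Hypothetical pod spec: the proxy.istio.io/config annotation tells
# Istio's injected sidecar to block the app containers from starting
# until the proxy has come up, avoiding the startup-ordering race.
apiVersion: v1
kind: Pod
metadata:
  name: billing-app                  # placeholder name
  annotations:
    proxy.istio.io/config: |
      holdApplicationUntilProxyStarts: true
spec:
  containers:
    - name: app
      image: example/billing:1.0     # placeholder image
```

This treats the symptom rather than the cause, which is part of why a sidecar-free data plane is attractive.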
Absolutely, yes. The sidecar container might need the networking connectivity in place before it can do what it needs to do to get started. So it can get quite complex quite quickly. And there are the issues that people run into operationally, even just making sure that they've got those sidecars injected. It just creates more headache for people operationally. And so I was really surprised at the level to which people just said, "I just want to get rid of sidecars because they're just too operationally complex." So having one instance of your proxy per node seems to appeal to a lot of people from an administrative, operational perspective.
And you mentioned that this is built on eBPF. Can you open up a little bit what that is?
Sure, yeah, eBPF stands for extended Berkeley Packet Filter. And to be honest, that acronym is completely useless; it doesn't really tell us anything about what eBPF does. eBPF allows us to run custom programs in the kernel. We can load programs into the kernel, attach them to different events, and then they run dynamically whenever those events occur. And those events could be things like a network packet arriving, so we can inspect or maybe even manipulate that network packet from within an eBPF program. Or we can actually attach to any function call across the whole kernel, but we often see attaching to system calls, so you can see when user space is interacting with the kernel. There's a huge array of places that we can hook into. Some of them have been used extensively for metrics. There's a whole series of tools. Brendan Gregg at Netflix did a whole load of pioneering work to show how you can observe what's happening across the whole system using these eBPF tools. And now we're seeing that observability extended more into security tooling as well. We've been working on another sub-project in Cilium called Tetragon that extends that observability into, "Okay, let's look at potentially malicious events from a security perspective." There are lots of exciting things that we can do with eBPF: networking, metrics, security, observability in general. Andy, you mentioned Hubble and the fact that we can show you exactly what's happening with every network packet. And that's possible because of eBPF.
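To make the idea concrete, here is a minimal sketch of an eBPF program in the libbpf C style: a small function attached to the execve system call tracepoint that runs every time that event fires. It is illustrative only; compiling and loading it requires clang, libbpf, and a recent kernel, and the section and function names are this sketch's own choices:

```c
// Sketch of an eBPF program attached to a kernel event, as described
// above. The SEC() macro tells the loader which tracepoint to attach
// to; the function body runs in the kernel each time a process calls
// execve.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_execve")
int observe_exec(void *ctx)
{
    // Write a line to the kernel trace pipe for each execve.
    bpf_printk("execve observed\n");
    return 0;
}

// eBPF programs must declare a license; GPL unlocks most helpers.
char LICENSE[] SEC("license") = "GPL";
```

The point is not this particular program but the model: small, verified programs loaded at runtime, attached to events, with no kernel rebuild or reboot.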
Yeah, I was explaining this to a client and trying to get them to let me share the value of Cilium with them. "Let me install this, please. I promise you'll like it." And I was trying to explain to them a bit how the eBPF modules work. And they said, "Well, this sounds like something so new and fancy. I don't know if I trust this loading stuff into the kernel." "Well, have you ever used tcpdump?" "Yes, we use it all the time." "Exactly. That's exactly what we're doing. We're just building on that exact idea and expanding it a bit to give us these other visibility hooks." "Oh, I get it now." That was the thing for them.
I think it's a reasonable question for people to ask. It's a pretty new technology to a lot of people. And the idea that you can change the behavior of your kernel, I can see why that raises questions. People should be asking, "Well, does that have consequences?" I think a big part of the answer to that is the eBPF verifier. When you load an eBPF program into the kernel, it runs through this verification process that ensures that the program is going to be safe to run, because you do not want to crash the kernel; that would be a very bad day. So the verifier ensures that the program is going to run to completion, that it's not going to dereference any null pointers or anything like that, and that it's accessing memory that it's supposed to access. And that verification process makes it much safer in general, or at least much more likely to be safe, than most kernel modules. No disrespect to kernel modules, but kernel modules don't go through anything like this kind of verification process. And that's one of the reasons why I think eBPF has really taken off: because it's so much safer to run.
Yeah, too often we look at something and think, "Well, I need to load a kernel module. Well, I know what kernel modules are, so yes, this is fine." And then you come to eBPF, and this is injecting something into the kernel, and, "Well, I don't feel comfortable with that." But if you look at what it's really doing, it's even better than a kernel module.
Absolutely. And the other beauty of it is that you can do it dynamically. So if we load an eBPF program, it instantly gets visibility into whatever the event is that it's attached to. So it can observe pre-existing processes. You don't have to restart any of your applications, because the kernel was already aware of those processes and your eBPF program is hooked into that. So this dynamic ability to just start measuring things, or start affecting things, or even mitigate security issues, is something we can do with eBPF. The fact that we can load programs dynamically is a really huge bonus. It's also one of the interesting things about the sidecar model, or moving away from the sidecar model. If you want to instrument something using a sidecar, you're going to have to restart that pod so that you can inject the tooling as a sidecar container. Whereas if we have something running using eBPF, we just have to start the eBPF tooling. And we may need to point it at things, like, "Yeah, we want you to look at all the pre-existing processes." But it can access everything that's happening on that machine.
Right. Because it's running in the kernel of the host that by definition has visibility of everything. And we just load that, "Please look at these specific bits and tell me what's happening."
Could you describe for me a use case for that?
So I think observability tooling would probably be the most straightforward example of that. Say, for example, there's a project called Parca that is using eBPF to generate flame graphs showing CPU utilization, and I believe they also do memory resource utilization. And you can start it and it can instantly instrument pre-existing processes. You don't need to modify your applications and you don't even need to restart them. And you can start getting these resource usage graphs, which I think is a very visual way of seeing the power of eBPF as a technology. Other examples exist across Cilium. We're able to run Hubble and pick up the networking traffic. We use Hubble in conjunction with Cilium, but you can start Hubble after your network connections are already in place, and Hubble will start showing you those network flows. You didn't need to restart anything for those flows to be picked up by Hubble, much like tcpdump: you can use that on a pre-existing network connection. Well, I guess a pre-existing network interface.
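For a sense of what observing those flows looks like in practice, here is a hedged sketch of the Hubble CLI against a live cluster (flag names are from the hubble CLI but should be verified with `hubble observe --help`; the namespace and pod names are placeholders):

```shell
# Show recent HTTP flows in a namespace, with no application restarts.
hubble observe --namespace default --protocol http

# Filter flows between two specific pods (namespace/name format).
hubble observe --from-pod default/frontend --to-pod default/backend
```

Because the eBPF hooks are already in the kernel, these queries pick up connections that were established long before the command was run.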
So monitoring, observability, dependencies, things like this, as needed?
Not just running all the time?
Yeah. And you can add and remove that observability. If you want to get more detailed information, you could run an eBPF-based tool. You don't have to necessarily run all the tools all the time; you can turn on the things you need. This speaks, I think, a lot to the work that the BCC project did, and bpftrace, where this is a lot of Brendan Gregg's work. You can run those tools as needed on your existing production machines to debug issues that you're seeing, to find out the causes of whatever issues, latency issues, or something that you're seeing.
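In that spirit, bpftrace one-liners of the kind popularized by Brendan Gregg can be attached to a running system on demand and stopped when you're done. These are illustrative (they require root and bpftrace installed, and tracepoint names vary slightly across kernel versions):

```shell
# Count system calls by process name until interrupted with Ctrl-C.
bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'

# Print each file being opened, live, with the opening process name.
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'
```

Nothing is left running afterwards: stopping bpftrace unloads the eBPF programs, which is the "turn on the things you need" model described above.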
If we kind of zoom out now from eBPF back to Cilium. So Cilium is using exactly this eBPF technology and giving us the network observability, the network security, network policies, etc. At KubeCon, I attended your talk about why Cilium is expanding to become a full service mesh, and it was really good, by the way. Thank you for that. And you started with this section about why Cilium was basically 80% of a service mesh already. It's not fair to ask you to re-give your talk in audio format, but can you walk us through that a little bit?
Yeah, sure. So I joined Isovalent coming up on about 18 months ago now. And in one of the conversations that I had with Thomas Graf, the CTO, quite early on, I remember him saying, "Well, yeah, we're already an 80% service mesh." Okay, let's talk through what that means. If we think about what a service mesh is, well, one strong requirement that certain people have of a service mesh is observability. We have Cilium, we have Hubble, and we have very good visibility into all the network flows and the connectivity between services. Well, that's networking. When we think about how services connect to each other in Kubernetes, you don't need a service mesh to have one service talk to another service. They can find each other and they can communicate with each other. It becomes much more about things like ingress capabilities and routing, often at layer seven, the application layer. So routing ingress traffic to different back ends based on what path has been requested, or what protocol or what headers are involved in those requests. All that path routing is eventually going to boil down to load balancing. We're going to have to decide which back end to send traffic to.
Cilium already had load balancing in the form of its kube-proxy replacement, a load balancer that basically says, "Here's a request destined for a service; which back-end pod should I send it to?" And that's very similar to what's happening with a service mesh: we're taking requests and we're sending them to different back-end pods. Security, the encryption: that's another thing that we already had in Cilium. At the network layer, we have WireGuard and IPsec for encryption. And there are some things that we didn't necessarily have already. The things we didn't have were mutual TLS and the layer seven traffic management capabilities, like retries or canary rollouts. But those things are handled in a lot of service meshes by what? They're handled by a proxy, and for many service meshes, it's Envoy. And we were already using Envoy for layer seven visibility so that we could do things like layer seven network policies. So we already had that proxy, capable of doing all of these layer seven traffic management features. We just weren't configuring it to do those things yet. So if we put all those things together, and we say, "Well, we already do load balancing, we already do resilient connectivity, we already understand Kubernetes identities. And we can do network policy based on not just layer three and four, but also layer seven. We have all this observability built in. What else do we actually need to do?" And an awful lot of it was really just about, "Okay, how can we configure the proxy to set up the data plane that we need?"
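The layer seven network policies mentioned here can be illustrated with a CiliumNetworkPolicy that combines ordinary L3/L4 matching with an HTTP rule, which Cilium enforces via its embedded Envoy proxy. This is a sketch with placeholder labels, paths, and ports, not a policy from the conversation:

```yaml
# Illustrative CiliumNetworkPolicy: only the frontend may reach the
# billing pods on port 8080, and only with GET requests under /api/.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-billing          # placeholder name
spec:
  endpointSelector:
    matchLabels:
      app: billing                 # placeholder label
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend          # placeholder label
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:                  # the layer 7 part, via Envoy
              - method: GET
                path: "/api/.*"
```

The `rules.http` section is what pulls Envoy into the data path; without it, the policy would be enforced entirely in eBPF at L3/L4.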
So in the service mesh beta, we introduced the Cilium Ingress. You can create a Kubernetes Ingress that's marked as a Cilium-type ingress, and it will automatically set up the underlying load balancer, which ingresses typically do anyway. And it will set up what we call a Cilium Envoy config, which is the programming of Envoy to handle that ingress traffic, and that might involve things like, based on the path that's been requested, or the protocol type, where do I want to route that traffic to? So those are probably the biggest things that we introduced with the service mesh: the ingress and the Cilium Envoy configuration. And we already had the Envoy proxy built into the Cilium agent that runs per node. So it was a natural choice for us to say, "We can run one proxy per node, then. We're already using it for getting this layer seven visibility; why can't we just extend that?" We had to figure out how we were going to program different listeners for different traffic: if you have multiple ingresses or multiple Envoy configurations, that's going to configure different listeners within Envoy. And so far so good. I think it's a model that people are responding to very positively, like you, Andy.
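A Cilium-type ingress of the kind described here is a standard Kubernetes Ingress with the Cilium ingress class; Cilium then provisions the load balancer and the Envoy configuration behind it. The host, service name, and port below are placeholders:

```yaml
# Illustrative Ingress marked for Cilium: path-based routing to a
# back-end service, handled by the per-node Envoy rather than sidecars.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: billing-ingress            # placeholder name
spec:
  ingressClassName: cilium         # hand this Ingress to Cilium
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: billing      # placeholder service
                port:
                  number: 8080
```

Applying a manifest like this is what triggers the CiliumEnvoyConfig programming Liz describes.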
Hi, it's Andy. Earlier this year, I was able to go to Valencia and see KubeCon Live. And I saw a talk from Liz about this very topic. And it was a good one, which prompted this podcast. Here is the link to that talk if you're interested or you can contact us at Eficode, and we'd be happy to tell you more.
And one thing I want to dig into or clarify a little bit: you say that one thing that's missing is the mTLS. Correct me if I'm wrong, but what you mean by that is really the application pod to application endpoint full chain, basically pod-to-pod, container-type mTLS. Because the Cilium overlay network is already encrypted from node to node. So we already have an encrypted network, but you're just talking about ratcheting it up a little bit higher, to the pod level.
Exactly. So network layer encryption already existed. This is more about if you have one application talking to another application, setting up an HTTPS connection, and needing the identity, typically a certificate, to set that up. And that's something that service meshes will often allow you to automate. So in Cilium Service Mesh today, we don't have that mTLS automation, but we do have the network layer encryption. And we have a very exciting vision of how this is going to evolve, and I think this will be something that we see coming out over the next few months, hopefully in the next Cilium release. The idea is to separate the authentication part, the bit where we say I recognize the identity of each end, one end recognizes the other end and vice versa, from the encryption part that happens in TLS. If we can continue to use the network layer encryption, but use the certificates or the identities that have been exchanged, and insert those certificates into the kernel layer to do the network layer encryption, I think it gives the equivalent functionality of mTLS. It's not strictly mTLS, because we're not using TLS for the encryption part; we'll be doing that in the network layer. But it fulfills those two halves, cryptographic authentication and the subsequent encryption of traffic, in a really neat way.
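The network layer encryption that already exists is enabled through Cilium's configuration; as a hedged sketch, the Helm values look roughly like this (option names are from the Cilium Helm chart and should be verified against your chart version):

```yaml
# Illustrative Helm values for Cilium's transparent node-to-node
# encryption discussed above. Pods need no changes or restarts of
# their TLS configuration; encryption happens at the network layer.
encryption:
  enabled: true
  type: wireguard     # or "ipsec", the other supported mode
```

The mTLS vision Liz describes would layer certificate-based authentication on top of this same kernel-level encryption rather than replacing it with userspace TLS.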
The other nice thing about it is how we see that being configured. It could be a certificate manager that configures those certificates; it could be integrated with SPIFFE for managing those identities. We can be agnostic to that control plane for identities, which is similar to the service mesh approach that we're taking. We're programming, essentially, a service mesh data plane. And we have ingress as an example of a control plane that says, "Okay, how do we want to program that data plane? How should traffic be routed?" But ingress is not the be-all and end-all of how people want to configure a service mesh. And the next step is to integrate with other control plane interfaces, which could be Service Mesh Interface, SMI; it could be Istio; it could be Linkerd configurations. All these things are possible. It's really a case of where the community takes us. But the idea is to make it easier for people to use their existing control plane, with Cilium Service Mesh as the data plane, fulfilling what the control plane wants to program.
Right. I'm maybe a little bit stuck on this security and encryption bit that we were talking about earlier. But I love the way you explain that: implementing it in a slightly different way but achieving the same objective.
Yeah, and I don't think it's the first time this has been done. I'm struggling to remember the name of it, but there have been approaches for separating out the authentication part and the encryption part. So it's a proven technique.
It is, yeah. I remember when I was growing up, we had this huge, huge TV, and it took up so much space in the living room. And I was always like, "Why can't this be smaller?" But physics: you have to have the CRT display, you just simply can't make it smaller. And then suddenly, we come up with LCD screens. And, well, physics changed, because it's a completely different technology. Instead of having a CRT, we have an LCD, and it's really slim and small. So instead of saying, "Well, let's change how physics works," we just change how it's done. And I love this idea that mTLS means a certain specific thing, but what it's trying to accomplish is secure communications based on authenticated identities. So instead of doing it the same way it's been done before, you go back to those fundamental principles of what we need to do, and find a better way of accomplishing the same thing, which gives you so many other benefits.
Absolutely. And one of the reasons I think this will be beneficial is because we're pushing more of this into the kernel; the network layer encryption can happen within the kernel, which will be really efficient. And this is part of, I think, the really exciting evolution that eBPF enables. If we go back, I don't know how many decades, but a long time ago, people used to write their own TCP stack, or import a TCP library into their application so they could do TCP connectivity. And nobody expects to do that now. Of course, we expect that to be handled by the kernel. And why shouldn't that direction of travel continue? Why shouldn't we see more of this networking functionality handled by the kernel as the norm? And eBPF means that we can do that in an almost piecemeal fashion. We don't have to have the kernel support these as upstream features, because we can add these capabilities using eBPF, and people can opt into having that functionality in the kernel. And I think that's really exciting. And we can see that evolution happening more quickly because eBPF enables it.
And of course, I know what I remember seeing, but what performance benefits do you see from moving things into the kernel instead of sidecars?
Yeah, we see significant improvements there. There are performance benefits to be had at the networking layer even if we don't worry about service mesh. And the reason why using eBPF for networking in a Kubernetes environment improves performance is because we shorten the network path that packets have to take. Imagine a packet arriving at the physical interface of a machine, destined for one of the pods on that machine. In a traditional environment, it would need to be routed by the host's networking stack based on the IP address of the destination pod. And that routing path would lead it to a virtual Ethernet connection between the host and the pod. The pod has its own networking namespace, and that namespace is connected to the host using this virtual Ethernet connection.
You can picture it like there's a physical Ethernet connection between those two things. And then inside the pod, there's another networking stack that the packet has to traverse to get to the application. In each of those networking stacks, there's a whole load of things going on and a whole load of possible routing decisions that might be taken in a generic, all-purpose networking stack. But with eBPF, in practice in Kubernetes, we can look at that address and we know that it's destined for a pod, because Cilium has set up that IP address to be associated with a network identity. We know where to send it. And we can take it directly from the network interface, pretty much as soon as it arrives on that machine, and send it directly into the networking namespace of the pod. We don't have to traverse the whole networking stack on the host. So you massively shortcut the networking path that that packet has to take. So we already see some pretty significant improvements in performance just with straightforward Kubernetes networking. Then if we think about service mesh, the traditional path that a packet would have to take with a sidecar gets even more convoluted, because when it arrives in the network namespace for the pod, it has to go through the proxy. So it goes through the networking stack all the way up to user space, where the proxy handles it, and back down into the network stack. The proxy is going to send that packet on to the application, so it has to go all the way through the networking stack in the pod's network namespace to reach the application.
Incidentally, during that period of time, it's all on one machine. But if you're using mTLS between two different pods, it's encrypted between the proxies; it's not encrypted between the actual endpoints. So your traffic, while it's traversing the network namespace within the pod, is unencrypted. So this is another benefit of the approach that we'll be taking with this network encryption: as soon as the packet arrives from the application into the network namespace, it immediately gets encrypted. That's another improvement that we're going to see. Not just efficiency, but also that the packets will be encrypted for a greater amount of their lifetime. When we need to go to the proxy for layer seven termination, we still have to go to user space. And that's true in Cilium Service Mesh or any service mesh: it's going to have to go to that proxy. But if we have two proxies in two different sidecars, it necessarily has to make that transition twice, whereas if we only have one proxy per node, we potentially only have to make that transition one time, and that can be a pretty significant saving.
So we're shortening the route, improving how the encryption is done, and of course, running it on the host kernel instead of a container or a virtualized kernel and whatnot. So we're getting all kinds of benefits there.
I might want to stop you there, rewind you for a second. One of the interesting things about containers is that they don't have their own kernel. They have a network namespace in the kernel, but it is still the host kernel. It's the host saying, "I am assigning you this network namespace, pod, and I'm going to handle packets on your behalf within this namespace."
So we've talked a lot about benefits, is there some risk mitigation as well? Are there risks to this approach? We talked about a few things that people might think are risks, kernel modules versus eBPF and things like that. But are there some other risks that are either being mitigated or that people might think that perhaps are not really risks from this?
Yeah, so one question that people have asked about the per-node proxy model is whether that makes that single proxy a single point of compromise from a security perspective. If all of your traffic is passing through that proxy, does that increase the security risk? And I think there is some legitimacy to that argument, because you have got one component. But there is one mitigation and one kind of philosophical point I would make about that. The mitigation is: we are not required to have only one proxy per node. There is no reason why you couldn't run additional proxies. We've not had a strong requirement for this yet, but having a proxy per namespace might be an interesting compromise that allows you to achieve a lot of the efficiencies, but keep some separation between different applications if you're worried about that point of failure. Philosophically, when people think about these isolation layers, they forget that there is one kernel. There is only one kernel on that machine, on that host. And all of your traffic is going through that kernel. And we trust the kernel. And, of course, occasionally there are security vulnerabilities in the kernel, and that is obviously a problem. But by and large, we expect the kernel to be able to safely handle traffic from all of our components.
Now, the proxy is a complex piece of software. So I don't think anybody could responsibly say there's never going to be any security issues in it. Of course there will be: it's software. Software always has bugs; it's inevitable. But it's a very well-used and well-hardened piece of code, and it's increasingly being used and hardened. And why should we philosophically think it's okay to have one point of failure in the kernel, but not okay to have one point of failure in user space? And the answer might be, "Because I want to really, really minimize my risk." Okay, well, have multiple proxies; that can be a balance here. And that single piece of software does have a mechanism, this listener mechanism, for isolating different traffic. We already see proxies used in ingresses that may be handling traffic for a number of different components. So there are plenty of places where we already use proxies to handle traffic that is destined for, or comes from, different applications. I don't think we're dramatically changing the landscape by saying, "Well, we could use one proxy within the network, as well as at the edge of the network."
Cool. We are starting to get towards the end of our time. Is there anything else you'd like to tell us or is there anything coming? At the time of recording, we are in the middle of June. This will probably be coming out a bit later, but is there anything you'd like to tell us before we sign out?
Is this a good time to talk about the release coming out?
All right. As we're recording, we are in the closing phases of putting together Cilium release 1.12. There have been a couple of release candidates already. So we're pretty close to getting the stable GA version of that release out of the door. So expect by the time this is published, it will be ready and available. And that includes the stable version of Cilium Service Mesh Ingress, which is, I think, one of the most important components. So very excited to see what happens when people start putting that into production environments. It'll be really great to hear how people get on with that.
All right. Would you like to give us a summary of what you learned, Andy?
So what we talked about was starting at the low level with eBPF: how eBPF lets us inject things into the kernel and get observability at the kernel level. Cilium reuses that idea and technology to give us visibility into and control of the network, and allows us to get better performance from the network as well as better control and visibility, as just mentioned. Now, the service mesh functionality which is coming out basically polishes that a little bit, adding the ingress capability and adding the control plane mechanisms to run this as a full-fledged service mesh. We get all of our eggs in one basket, for good or bad, but we have one place to run all these things, get all the benefits, and take care of all the service mesh needs. This has been really interesting and really good to go through.
It has. It's been wonderful. I've learned a lot. I have two final questions, Liz, something that we've started asking all of our podcast guests.
Thinking back to when you were a child, what is the first thing you remember wanting to be when you grew up?
So my earliest memory of this is people asking me whether I wanted to be a doctor like my mother. And I knew I did not want to be. No, definitely not. So I had a much clearer idea of what I didn't want to be. And then as soon as I started getting exposed to computing, we actually had a ZX80, a super early computer, and as soon as I started on that, I thought this is where my life will be. <laughs> Obviously, I occasionally thought, "No, but I'll go and be an astronaut." But in reality, I think I pretty much knew computing was what I wanted to do.
All right, you've almost answered the second question as well, but I'll give it a try anyway; it could go either way. Was there a point in your life where you realized that you needed to take a different path? Or was there a different point in your life where it crystallized that you were on the right one?
I spent the first, I don't know, at least decade of my career after graduation working on essentially network protocols and all this low-level computing. And there was a point where I thought, I just want to understand more. I want to step away from all this detail that nobody in the real world knows about, and I want to look at more consumer-facing things. So I worked for a while for Skype, and I worked at a music recommendation company called Last.fm. And after that, I worked in some startups around recommendations, like TV recommendations. And I did at some point realize, you know what, I'm more interested in getting back to the lower-level technology. I learned so much looking at things from that consumer perspective. And there were a few years where I wasn't writing code myself. And I learned a lot, but I definitely realized what I'd been missing when I came back. It was containers that brought me back to proper technology again.
Wonderful. That's a great story. Thanks again, Liz. This is Marc and Andy and Liz Rice, from the Eficode DevOps Sauna podcast. Thank you, and see you next episode.
Thanks for having me.
Thank you for listening. If you would like to continue the conversation with Liz Rice, you can find her social media profile in the show notes. If you haven't already, please subscribe to our podcast and give us a rating on your platform; it means the world to us.
Check out our other episodes for other interesting and exciting talks. Finally, before we sign off, let's give our honored guests a moment to introduce themselves.
I'll say now: take care of yourselves, and feel free to reach out to Andy and me for any topics or people you would like to hear more about.
Hi, my name is Liz Rice. I am Chief Open Source Officer at Isovalent, which is the company that originally created the Cilium project.
My name is Marc Dillon. I've been in Finland for somewhere more than 17 winters now. I've been building products my whole life. And I believe that if you build the people first, then you have a strong culture that builds the best products.
Hi, my name is Andy Allred. I've been in Finland for over 20 years already. I started my career in the US Navy on nuclear-powered fast attack submarines, doing all kinds of cool tech stuff and learning that the tech is there to serve a mission that people have. And then I've spent my career in IT, mostly telecoms, figuring out how tech can serve the mission of people and support the processes and the people in their jobs.