This rock-solid intro to containerization and orchestration, and why neither is going anywhere anytime soon, serves up practical tips and food for thought. Thanks for joining the pod, Severi. Severi is one of Eficode's Senior Consultants, currently based at a telecom service provider.

Heidi (00:00):

Hi, listeners. I hope you're having a lovely summer and you've found your favorite local water feature to weather this heat wave we're having in Europe. I'm Heidi Aho, the content writer at Eficode and host of our DevOps Sauna podcast. Thanks for tuning in.

Heidi (00:20):

Hardcore as we are, we like saunas in the summer as much as we do in the winter, as there's nothing better than a dip in the lake following a sauna sesh. Similarly, even though things have wound down a little in the office as schools have broken up for summer, DevOps doesn't hibernate in July. In fact, the summer break is a great time to read and listen for pleasure, and that's very much what we're serving you today.

Heidi (00:45):

Today, we're joined by Severi Haverila, a senior consultant at Eficode and all-round good guy. A little birdie from our Dutch office told me Severi was the guy to talk to about containers and orchestration. I've invited Severi up to the sauna lounge with me and Dila this morning to spill the tea. So, let's get cooking. Tell us about yourself, Severi. What do you do at Eficode?

Severi (01:08):

Hi, Heidi, and thanks for having me here. It's always a pleasure to visit the sauna.

Heidi (01:13):

Likewise.

Severi (01:15):

Yeah. My name is Severi and I started at Eficode in 2015. And back then, I was just coming back from my exchange year in Vienna and a couple of my friends were working at Eficode. And then somehow, I ended up here as well.

Heidi (01:32):

Brilliant.

Severi (01:33):

And originally, I actually started in the software department, but somehow, pretty randomly, I ended up working more in the DevOps field. So, my first project was more on the software side, but after that, I've been more involved in DevOps work. And for the past two years, more or less, I've been working basically full-time with containers and container orchestration technologies.

Heidi (01:59):

Nice.

Severi (01:59):

And besides that, I've been speaking at a few meet-ups and conferences as well, and also giving trainings in the field of containers.

Heidi (02:11):

Brilliant. So, let's dive right into all this container goodness. Could you explain containers in 30 seconds?

Severi (02:20):

I can try. Don't put the timer on though.

Heidi (02:24):

I was reaching for my phone. This is a loose 30 seconds.

Severi (02:24):

All right, perfect.

Heidi (02:25):

Like a subjective 30 seconds.

Severi (02:28):

Okay. So, containers are basically a uniform way to package your software so that you can easily deploy it to different environments. And what that basically means is that you can easily be sure that if something works in your development environment, if you're running it in a container, it should work in production as well, and also in the test environments. So, that's containers.

Heidi (02:55):

Fantastic. And I think that was 30 seconds. That was 30 seconds. Now, let's move on to orchestration because the two are related.

Severi (03:04):

Yes.

Heidi (03:04):

Could you do that in 30 seconds? Orchestration, go.

Severi (03:07):

Let's see. Well, containers are great, and if you're running on your local laptop or running a few services, you can do it really easily with just Docker, for example, and Docker Compose. But what about when you have to run something in production? When you have multiple servers that you're running your workloads on, that's when container orchestration comes into play.

Severi (03:30):

Basically, it abstracts all the infrastructure from you, so you don't have to think about which particular machine your containers are going to be run on. All of that is handled by the container orchestration. And also, all other things, for example, rolling back to a different version of your application or doing updates to your application, that's also handled by the container orchestration system.

Heidi (03:53):

Fantastic. Okay, let's move on to the benefits of containerization. Why would companies do this?

Severi (04:00):

The thing that you can be sure of is that if something works on your own machine, it's going to work in the development environment and in the production environment as well. All the dependencies that you have running, they are the same. There are fewer things that can go wrong or be different depending on the environment.

Severi (04:21):

But also, I think one really important part is how you can onboard new developers to your development team. And if you're running containers, basically the only thing people have to do is install Docker, for example, and then run Docker Compose up. And then they're good to go. The whole microservice architecture is up and running.

Severi (04:44):

So, you don't have to think which version of Java should I have running on my machine, which version of Golang should I be running, and stuff like that. So, it's really easy to document your whole application and automate it at the same time.

Heidi (05:01):

Cool. That sounds quite silver bullety to me, like this will solve all your woes, but are there cases where containers aren't the right solution for a company?

Severi (05:13):

Well, yeah, it's true. Containers are pretty awesome, I have to agree with that. But of course, if you're working with some legacy monolithic applications, those might be difficult to run in containers. For example, when you think about containers versus VMs, applications that are running in virtual machines are more stable, let's say.

Severi (05:38):

Containers are more ephemeral: they come and they go. If one container goes down, you just create a new one. And if your application cannot handle this, then you might be better off running it in, for example, a virtual machine, just to give you an example.

Heidi (05:55):

Okay. So, it's not all rosy on the containerization front. Are there any other pitfalls that you've seen in your work with containers?

Severi (06:05):

Well, of course, when developing software to be run on containers, you have to actually realize that the software is running in containers. So, you cannot still live in the world where you have this one server where your application is going to be run. When working with containers and microservices and stuff like that, you have to build your applications to be pretty fault-tolerant.

Severi (06:31):

So, for example, if your application consists of multiple containers, let's say, three containers and these containers need to talk to each other, what happens if one of the containers goes down? Does it crash the whole system or can the rest of the system still function? And stuff like that. So, you have to always think that, what happens if something fails?

Heidi (06:59):

And could you provide some tips, or what are the ways that in-house teams can create these fault-tolerant systems?

Severi (07:08):

Yeah, of course. Well, the first thing is what I already mentioned: you always have to keep in mind which environment your software is running in. It's running in containers. And you have to build your software in a way that, if one of the services goes down for a couple of seconds, for example, the rest of the system should be able to handle that.

Severi (07:31):

But also, one way you can do it is, for example, if you're making requests to other services, you should have some timeouts in place for the requests, and you should do some retries. So, maybe the first request could be a quick one. And if it fails, then you could have a longer timeout for the second one and try again, and maybe try a third time, and so on.
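
As a rough illustration of the timeout-and-retry pattern Severi describes, here's a minimal Python sketch. The function name, the callable it wraps, and the timeout values are all hypothetical, not from any specific library:

```python
def call_with_retries(request_fn, timeouts=(0.5, 2.0, 5.0)):
    """Try a request with an escalating timeout per attempt.

    request_fn is a hypothetical callable that takes a timeout in
    seconds and either returns a response or raises TimeoutError.
    """
    last_error = None
    for timeout in timeouts:
        try:
            # First attempt is quick; later attempts wait longer.
            return request_fn(timeout)
        except TimeoutError as err:
            last_error = err  # the service may recover, so try again
    raise last_error  # every attempt failed; let the caller decide
```

In a real system you would typically also add some jitter between attempts, so that many clients retrying at once don't hammer a recovering service.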

Severi (07:53):

And also, maybe one of the services at some point really starts misbehaving. So, you could implement circuit breakers, which basically means that the other services don't even send any more requests to the broken service, because they have realized that it doesn't work. And after a while, they start trying it again. And if it starts working, then it comes back into play.
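
A circuit breaker like the one Severi mentions can be sketched in a few lines of Python. This is a minimal illustration with made-up defaults, not a production implementation:

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures, stop calling the
    service for `cooldown` seconds, then let one call through to
    probe whether it has recovered. Minimal sketch only."""

    def __init__(self, threshold=3, cooldown=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooldown:
                # Known-broken service: fail fast without calling it.
                raise RuntimeError("circuit open")
            self.opened_at = None  # cooldown over: probe the service again

        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Production-grade implementations add more states and metrics, but the core idea is just this: stop sending traffic to a service you've seen fail, and periodically probe it.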

Severi (08:18):

And what you can also do is think about which services are the critical ones and which are not. So, for example, if you're running a web shop... Let's say your shopping cart service is broken, but that doesn't mean the whole website should go down, right? Quite often, users just want to scroll through the products and see what kind of discounts you're having and so on, and they don't necessarily put anything in the shopping cart. So, it's totally fine if the shopping cart doesn't work for some time, as long as everything else works.
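
The web-shop example boils down to degrading gracefully: treat the product listing as critical and the cart as optional. A toy Python sketch, with hypothetical callables standing in for the two services, might look like this:

```python
def render_shop_page(get_products, get_cart):
    """Build the page even if the cart service is down.

    get_products and get_cart are hypothetical callables standing
    in for calls to the product and shopping cart services.
    """
    # The product listing is critical: if this fails, the shop is down.
    products = get_products()
    try:
        cart = get_cart()
    except Exception:
        # Non-critical service failed: degrade instead of crashing.
        cart = "cart temporarily unavailable"
    return {"products": products, "cart": cart}
```

The design choice is simply where you put the try/except: around the calls whose failure you can tolerate, and not around the ones you can't.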

Heidi (08:55):

Okay.

Severi (08:55):

But then again, of course, you can have stuff that's more critical. Like, if the whole product inventory is gone, you don't have any products in your web shop, so it's useless at that point. So, you always have to balance the pros and cons of each service and how critical it is. And if something doesn't work, then how can you mitigate the risks?

Heidi (09:18):

But that sounds a little scary if your customers aren't able to buy from you for hours at a time. Is that a real risk you're talking about? Are containers that risky?

Severi (09:28):

It's not about the containers. It's about the software that you're running in the containers.

Heidi (09:33):

Okay.

Severi (09:34):

If your software has a flaw, then it doesn't work. It doesn't matter if it's running in a container or not. But at least your customers can still browse your site and look at the products that you're offering. The other option would be that customers couldn't open your web shop at all.

Heidi (09:52):

Cool. I mean, uncool, but cool to know that. Okay. Let's dig deeper into orchestration. I'm curious, is orchestration the same as automation here?

Severi (10:08):

I would say it's not. I mean, it kind of is, but it kind of isn't. Let's say automation can mean many things. It can mean automating, for example, your testing and your deployments and stuff like that. And container orchestration has many parts that involve automation, for example, doing updates to your software. You don't have to manually change the running containers in the system; the orchestrator will handle that for you.

Severi (10:44):

But then again, you can also build automation using container orchestration. So, you can create CI/CD pipelines and such using the container orchestration system you have. So, I'd say automation is a really big thing and container orchestration helps you automate some things.

Heidi (11:03):

Wow! Thanks for that. Automation is the backbone of DevOps. So, super interesting to hear how orchestration and automation relate to each other. Now, let's move on to new tech. And what kind of new tech are you looking forward to in the container and orchestration field?

Severi (11:21):

Yeah, there's a lot of stuff happening and new things keep coming up all the time, but I think one of the really interesting things is that people are able to extend the Kubernetes API, for example. So, they can create custom resource definitions. And one good example of this would be if you wish to run databases in Kubernetes. Previously, this could have been pretty difficult. For example, running a clustered database might have needed a lot of work, or it might not even have been feasible.

Severi (12:03):

But using these custom resource definitions, there's, for example, a project called KubeDB, which basically automates the database creation for you. So, you don't have to think about all the complex logic that's involved. You just have to basically tell Kubernetes: I wish to have one database running, it should have three instances, and it should be clustered. And then I would like to have backups every now and then, and stuff like that. So, you don't have to implement the logic yourself. Somebody who knows more about that stuff has already done it for you, and you just have to define what you want, basically.

Heidi (12:41):

And have you test-driven that already?

Severi (12:43):

I've tried it. I haven't used it anywhere in production or anything like that, but I've tried it. And it's not ready yet, but it looks really promising.

Heidi (12:52):

Fantastic.

Severi (12:54):

And another thing that comes to mind is, instead of running containers, running functions. You can actually run serverless functions in Kubernetes as well; at least there's a project for that. I haven't test-driven it myself, but it might be an interesting thing to use your existing Kubernetes clusters to run functions, so not running containers per se.

Heidi (13:21):

Great. So, I know what you're doing during your summer break now. You mentioned serverless.

Severi (13:27):

Yes.

Heidi (13:27):

That's definitely getting more popular. Now, the question I have is, are containers last season or should companies still be investing in containers?

Severi (13:40):

Yeah, it's a good question. Again, there's no silver bullet. And actually, when cloud providers offer you serverless possibilities, they are using containers in the background. So, containers are not disappearing anywhere.

Heidi (13:55):

Okay. So, serverless isn't replacing...

Severi (13:57):

Yeah, it's not replacing them. I mean, of course, for some things serverless might be the right solution. Let's say you have an API that gets pretty infrequent requests. It might be a bit costly to keep a container running all the time just for serving those requests. But then again, if you have an API that gets requests constantly and gets a lot of load, so to say, then it actually might cost way more to run your service serverless than in a container.

Severi (14:33):

But again, it really depends. And one drawback that serverless has is that it might take some time... If your service hasn't been used for a while, there's a thing called a cold start, which basically means that in the background, the system has to spin up a container that has your code running in it. So, it might take more time to respond to a request made to the service. Whereas with containers, the container is already running there, so there's no need to wait.

Heidi (15:03):

Nice. We are having a whale of a time and we're nearing the end of our interview. Thank you so much. So, next, I want to talk about Docker. Are they still going to be the tool of choice for containerization?

Severi (15:19):

Yeah. That's a difficult question to answer. I'd say if you are starting with containers now, you should go with Docker. I mean, I'm sure that's a good tool of choice. But there has been a lot of standardization in the field. And for example, in Kubernetes, you don't have to run Docker anymore. There are many other options for the container runtime available.

Heidi (15:46):

So, could you name some of those alternatives?

Severi (15:49):

Yeah, sure. So, for example, CRI-O and containerd are some. And then there's rkt, for example, from CoreOS.

Heidi (15:58):

Nice.

Severi (15:59):

So, there's plenty of options available, but I think at this point, Docker is still the one to go with. And probably in the future it's going to be more like... You might have some specific needs for your runtime. Maybe you want the containers to start faster or something like that, some different criteria for your runtime.

Severi (16:23):

And maybe then you could end up choosing a different runtime than Docker, for example. But changing these should be pretty easy because there's already standardization available. So, it basically only requires you to install a different runtime. You don't have to recreate all your containers, let's say.

Heidi (16:41):

Okay. So, you're not locked into one provider there.

Severi (16:45):

No, no. And the same thing goes for creating the containers. Now, at least, I've actually done everything with Docker, but there's, for example, a project called Buildah, which should give you more flexibility when creating containers and managing the layering in the containers, for example. But again, I think for starting out, Docker is a good choice. And you never know, if you have some specific needs later on, then you might have to change something. But I don't think you can go that wrong with Docker either.

Heidi (17:21):

Yeah. It's not like you can't change later down the line.

Severi (17:24):

Yeah, exactly.

Heidi (17:26):

Fantabulous! So, we have reached our final question. Thank you so much for joining us again.

Severi (17:33):

Thank you.

Heidi (17:33):

If someone would like to start learning more about containers and orchestration, where should they start?

Severi (17:42):

I think Docker at least has really good tutorials on their website. And Kubernetes, I don't know about tutorials, but at least the documentation is pretty good. So, I think the best way to learn about this is just to get your hands dirty and start doing. Getting Docker and Kubernetes up and running on your local laptop, for example, is pretty easy nowadays on Mac or Windows. You just need to install Docker, and with the click of a button, you can have a Kubernetes cluster running on your own laptop as well. So, that's, I think, the best way to get started.

Severi (18:18):

If you don't even want to install anything, there's even websites where you can train Docker or Kubernetes in a web browser. So, that's possible as well. And if you want to get more advanced, you can maybe start using one of the cloud providers as well. And it doesn't even cost you anything in the beginning because you get free credits for trying things out.

Severi (18:42):

So, if you're afraid that it's going to cost you a lot, if you haven't used Google Cloud before, then just create an account and try it out. I think you can run a Kubernetes cluster for free for at least half a year or a year or so. It shouldn't be that costly either.

Heidi (18:57):

So, the world is your containerization oyster.

Severi (19:01):

Exactly.

Heidi (19:03):

Get out there and play around. Are there any books you'd recommend?

Severi (19:05):

Yeah, actually, there are. There are books by an author called Nigel Poulton, who has written books about both Docker and Kubernetes. One is called Docker Deep Dive, and the other is The Kubernetes Book. I think both of them are great. I actually read them myself about a year ago, and I learned something reading them. It was a good memory refresher as well, if you can say so.

Heidi (19:34):

Nice. We are book-ending this podcast with some chat about books. Severi, thank you so much for joining us.

Severi (19:43):

Thank you very much.

Heidi (19:45):

And we hope to have you on the podcast again.

Severi (19:48):

Hopefully.

Heidi (19:49):

Thank you so much for joining us. Do follow Eficode on social media. If you're curious about what we're doing and what we're up to, Dila is my partner in crime in that she edits the podcast and does the recording. And she also launched a wonderful campaign earlier this year called Humans of Eficode. They are these visual stories about the lives of Eficodians, and we dive deep into what makes them tick. So, not only what they do at work, but what their passions are. So, do check that out at www.humansofeficode.com, and we'll see you soon. Bye.