
Blameless Postmortems and Incident Response: How DevOps teams learn without blame

DevOps Sauna podcast

In this episode of DevOps Sauna, Pinja and Stefan unpack what a good incident response actually looks like, from the moment an issue is discovered to recovery, communication, and learning afterward.

They dive into real-world incident management practices: defining incident severity, assigning the right roles, running mock incidents, validating backups, and communicating clearly without panic. The conversation also tackles one of the hardest topics in engineering culture: blameless postmortems and psychological safety.

If you care about DevOps, security, platform engineering, or building resilient teams that learn from failure rather than hide it, this episode is for you.

[Pinja] (0:03 - 0:21)

Are we really able to have blameless postmortems, to actually make sure this doesn't happen again and learn from it?

 

Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.

 

[Stefan] (0:22 - 0:31)

Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.

 

[Pinja] (0:37 - 0:45)

Hello and welcome back to the DevOps Sauna. I am, as per usual, joined by my co-host Stefan. How are you doing?

 

[Stefan] (0:46 - 0:55)

All good, just sitting here enjoying the world, nothing going on, no critical incidents running or anything. Could it get any better?

 

[Pinja] (0:55 - 0:57)

Okay, don't jinx it.

 

[Stefan] (0:57 - 0:58)

It'll be fine.

 

[Pinja] (0:59 - 1:04)

Yeah, you know the meme where it's the dog that is sitting in the middle of a fire and he says, it's fine.

 

[Stefan] (1:04 - 1:05)

It's perfect.

 

[Pinja] (1:05 - 1:08)

Yeah, it's fine. Everything's going okay. So let's talk about those incidents.

 

[Stefan] (1:09 - 1:14)

We've already had, what was it, Azure, AWS, and Cloudflare going down? We're still waiting for Google to join the game here.

 

[Pinja] (1:15 - 1:44)

Yeah, let's not jinx it, at least for now. But let's see what happens between Christmas and New Year's. That's a very fun time to recover from an incident because not that many people are working.

 

You might need some procedures to work on those. Between Christmas and New Year's last year, there was a data cable cut in the Baltic Sea. The response time was extremely quick, even though it happened at that really awkward time.

 

So what are the odds of that incident happening during the downtime between Christmas and New Year's?

 

[Stefan] (1:44 - 2:00)

Quite high. It's always between Christmas and New Year. There is a hacker conference where all of the major players in the field have people standing in the back of all of the rooms with their phones ready to call back home and say, oh, we've just heard about this vulnerability.

 

You need to fix it now. And it always happens between Christmas and New Year.

 

[Pinja] (2:01 - 2:16)

Somebody's on call, waiting anxiously for that call. But what is the procedure to follow? So now you have a critical incident on your hands.

 

What do you do? Go. Do you A, pretend that you didn't see it?

 

Always a nice choice. Yes. Just get out.

 

[Stefan] (2:16 - 2:16)

Get out.

 

[Pinja] (2:16 - 2:22)

Yep. Just close it. You didn't see it.

 

B, do you blame Carl or Annie? Always a good choice.

 

[Stefan] (2:22 - 2:23)

Always Carl.

 

[Pinja] (2:24 - 2:24)

It's Carl.

 

[Stefan] (2:25 - 2:31)

It's always Carl's fault. It's him pulling out the cable or whatever he does. Such a bad employee.

 

[Pinja] (2:32 - 2:39)

Yep. Do you make a big announcement and cause panic? I've seen that happen as well.

 

Yeah. Or all of the above or something else.

 

[Stefan] (2:39 - 3:02)

It's good. We recently had, was it one of the data centers that backs Danish banks, go down for three or four hours one evening. There was quite a lot of openness about it, but everybody was raising the question of critical infrastructure.

How can this happen across multiple banks? I guess that's a bit above a critical incident in a single company. A national incident, perhaps?

 

[Pinja] (3:02 - 3:15)

Yeah. Something like that. But it's always fun to see how a company responds to this kind of news: thank you for finding this out.

We take this security incident, or whatever incident it is, very seriously, and we're on it.

 

[Stefan] (3:15 - 3:15)

Yeah.

 

[Pinja] (3:16 - 3:20)

But let's first talk about what an incident is, because it can be many things.

 

[Stefan] (3:20 - 4:14)

It really depends. I've been in companies where you grade your incidents in P1 to P4, where P1 is the highest and most critical one, and you put expected response times to them. But in reality, people always try to downgrade it because then you don't need to include as many people.

 

It really varies. If you get a critical vulnerability in a piece of software, well, first of all, you need to figure out, is it exploitable? Because we hear critical vulnerabilities all the time.

 

But if you look at the scoring system for that, like the CVSS, you might see it has a high score, but is it something that could be exploited in our company? Is it something critical where it is? Is it something non-critical where it is?

 

Can we just power it off until Monday and figure it out? How do we go about this? It could also be your whole data center going up in flames.

That would be a major critical incident. That's an oops if you're only hosted in that data center.

 

Yeah.
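As an aside, here is a minimal sketch of the triage idea Stefan describes: combine the published CVSS score with your own assessment of exploitability and blast radius to pick an internal P1 to P4 priority, rather than reacting to the raw score alone. The thresholds, field names, and labels are illustrative assumptions, not anything from the episode.

```python
# A minimal sketch (not from the episode's tooling) of mapping a vulnerability
# finding to an internal P1-P4 priority, combining the advisory's CVSS score
# with an in-house exploitability assessment. Thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss_score: float            # 0.0 - 10.0, from the advisory
    reachable: bool              # can an attacker actually reach the component?
    affects_critical_system: bool

def triage(finding: Finding) -> str:
    """Map a finding to an internal P1-P4 priority."""
    if not finding.reachable:
        # High score but no exposure: the "power it off until Monday" case.
        return "P3" if finding.affects_critical_system else "P4"
    if finding.cvss_score >= 9.0 and finding.affects_critical_system:
        return "P1"   # highest severity, tightest expected response time
    if finding.cvss_score >= 7.0:
        return "P2"
    return "P3"

print(triage(Finding("example-lib RCE", 9.8, reachable=False,
                     affects_critical_system=True)))   # -> P3
```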

 

[Pinja] (4:16 - 4:35)

When you see, quote-unquote, just a vulnerability, or a threat that can turn into a vulnerability quite fast, and you ignore that threat for a moment, as you say, if we just categorize it as P3 or P4 and don't touch it, it might be something really big really soon.

 

[Stefan] (4:35 - 5:42)

The good old Log4Shell that happened years ago. People were like, yes, all right, we don't have Log4Shell in our outer layer. We're all good.

 

The perimeter is safe. And then 30 minutes later, people figure out how to tunnel through your firewalls and exploit it behind the big fence you have. You cannot look at it that easily.

 

You need to have people who know application security. You need to have network engineers, firewall engineers, or whatever grade you have. You need to put the right players into an incident.

 

You need to be so good that you actually run mock incidents as well, because you don't know how to run an incident unless you've tried it before. I had no idea what I was doing in my first incident. Luckily for me, it was a training incident I was going through.

 

I had some knowledge from my military time. So, for lack of better words: calm down, sit down, relax, get an overview, figure out what the details are here, and respond. Don't panic.

 

I've seen people panicking, starting to shut down servers or shutting down their firewalls. Calm down. You don't know how bad this is.

 

[Pinja] (5:42 - 5:55)

So no matter how good your instructions are or how well prepared you are, you always need to know that it actually works in practice. So having mock incidents is kind of like a fire safety drill, in a way, right?

 

[Stefan] (5:56 - 7:00)

Yeah, exactly. You need to go through the steps, like who do you include in which stage and so on. It's even okay to announce that you're going to be running a mock incident, because otherwise people might panic and do something really weird, and they might even talk about it in public all of a sudden.

 

You can just see the media knocking on your door if you're important. But knowing who to talk to, how to coordinate things, who has which roles, just the roles of an incident is so super important. Even in fire drills, there are certain roles that need to be fulfilled.

 

Yes, you might be the CEO of the company. The fire incident manager doesn't care whether you're the CEO or whatever you are. If you don't know what you're doing, if you don't announce that you're a scene commander or incident commander, he's just going to wave you off like, please get to the side.

 

We're trying to do work here. If you can announce and say like, hi, fire department incident commander, I'm running as the incident commander for my company side. Here's the list of the people who are employed.

 

I've already crossed out the people who are not working today. These are the people we know where they are. We are lacking X, Y, and Z.

 

You hand him that paper, he'll be like, good. I know who you are. I know you're doing your job.

 

Move along.

 

[Pinja] (7:01 - 7:05)

An incident commander is not the technician working on the incident, right?

 

[Stefan] (7:05 - 7:05)

No.

 

[Pinja] (7:06 - 7:08)

That's not the person to be assigned for that role.

 

[Stefan] (7:09 - 8:01)

It's okay if somebody spots an incident and starts raising it; they might automatically be the incident commander, but you should have a plan very fast. And I'm like, all right, are you going to be working on the incident? All right.

 

If you are going to do that, if you're trying to solve this, find someone else, have your team lead, your engineering manager, whoever is prepped on this, make sure he actually knows that he's going to have this role. Because I've seen people being pulled into the role like, congratulations, you're an incident commander. All right.

 

I'll just lean back and see what happens. No. As an incident commander, you do have a big responsibility.

 

You need to make sure you have check-ins. You need to make sure that the rest of the people know what they're doing. You need to coordinate.

 

All right. So who's working on this incident? Only you?

 

All right. I'll call someone else because you should not be sitting alone with this. Nobody should be sitting alone in an incident.

 

Should I bring you cake? Are you hungry? Are you thirsty?

Can I bring you water? Things as simple as that.

 

[Pinja] (8:02 - 8:33)

Yeah. And then we come into the responders and just have that contingency plan. How many responders do you actually need?

 

Have you planned that they have the correct rights? Do you know what they actually need to do? So it's all about that.

 

And it might be that before you go into a practice, a mock incident session, you don't even know that, oh, Carl and Annie are actually missing these rights, or that Kim, who should be doing this and that, doesn't even have the correct procedures or the correct documentation to do it.

 

[Stefan] (8:33 - 8:59)

It's always good to spot the bus factor of one because all of a sudden you figure out Kim is the only one with access to production and he's on vacation this week in some far away country with no reachability. So you can't even get hold of him all of a sudden. So nobody has access to production.

 

How do we fix it? We don't know. He was actually running our access management.

 

So tough luck. We're just screwed. Let's wait until Kim comes back from vacation in three weeks.

 

[Pinja] (9:00 - 9:24)

Correct. And let's talk about communication. So in the beginning, I asked the question: what do you do when you find an incident, whether it's a critical vulnerability or something being down? What is the lead time from discovery to fixing the incident itself and to actually communicating it further?

 

I would like to also tie this into the discussion around psychological safety. Do I have the courage to raise my hand?

 

[Stefan] (9:24 - 11:06)

Yeah, it's the most important thing. Many companies run with very strict structures where you're only allowed to include X, Y, and Z, and it has to be X, Y, and Z who do the talking, and so on, but that doesn't work in practice. If you set it too strict, nobody will be talking to each other and they'll be hiding in their own corner.

 

No, it doesn't affect us. It's the other guys. We don't want to interfere.

 

To some degree, that's good because you don't want too many chefs in the kitchen either. You want to have the correct amount of people. You want to make sure that communication is continuous.

 

If you go into an incident and you don't say anything for half an hour, people will start guessing, and they will guess all of the weird things. You can see the internet always blowing up. It's like, it's probably DNS again.

Well, if you're responding like, we know it's our databases, we know we have an issue here, they're not responding as fast as they should, you're giving some grains of truth. Don't try to give the full explanation, because that belongs in root cause analysis and postmortems, but you need to give some indication.

 

You're working on this, you're getting wiser through this incident. You make sure you have internal and external communication in place. At a place I worked, we always had our chief success officer.

He would always be the external communicator on all incidents. He would only be asking me questions, when I was the CTO, because you don't want your external communicator to start talking to all of your engineers and disturbing them like, how far are we? What's going on?

 

No, no. You need some funnels and sometimes you need to filter the truth because you don't want to expose too much info in an incident. There are things you have to expose, but don't give up too much.

 

It's a good old, don't show me your underwear principle. I don't need to see that.

 

[Pinja] (11:07 - 11:13)

No, there need to be clear instructions. Who's leading? Who's in charge of what?

 

What is the channel that you report the incident in?

 

[Stefan] (11:13 - 11:15)

Everybody needs to know that.

 

[Pinja] (11:15 - 11:22)

Yes. Annie and Carl find it. Do they actually just close the lid of the laptop and just go about their day and leave for the weekend or something?

 

[Stefan] (11:22 - 11:23)

Have a nice weekend.

 

[Pinja] (11:24 - 11:39)

Have a nice weekend. Oh, by the way, on Monday, with a cup of coffee in their hands, they might say, oh yeah, there was this funny thing on Friday. So they need to know where to report it. The psychological safety needs to be at such a high level that they don't feel they're going to be blamed.

 

[Stefan] (11:40 - 11:43)

Rule number one, don't ever blame anyone for reporting an incident.

 

[Pinja] (11:44 - 11:46)

Correct. Because otherwise you're not going to find them.

 

[Stefan] (11:46 - 12:04)

Exactly. Like I've been in a company where somebody double-clicked a file from some arbitrary random email. It started encrypting files.

He just shut down his laptop and went home for the weekend. Then the next week, somebody pokes me like, when did we start encrypting files on our network drives? Like, we did what?

 

[Pinja] (12:04 - 12:05)

Oh no.

 

[Stefan] (12:06 - 12:19)

And then we just spun up, trying to fix that. And somebody said, oh yeah, this guy actually asked us for some help on Friday because his laptop was having some issues. So all of a sudden, we had five people who had just kept quiet about the incident.

 

So that wasn't fun.

 

[Pinja] (12:20 - 13:06)

Yep. That's one part of the organizational psychological safety that you need to cover. Number one, you need to, of course, understand that you need to file a report.

 

You need to understand where to file it. You need to have the courage to do it and the organization to support that. And a mock incident can actually reveal many of these things if you don't have them in place. But also, if you don't follow the process all the way through, you might get to the point in this fire safety drill of yours where you say, oh, we just fixed it.

That's it. We don't have to go further in this drill and this mock. But the advice we're giving here is that when you're doing mock incidents, which you should, please make sure you follow the process to the last period of your instructions and guidance.

 

[Stefan] (13:06 - 14:02)

Yes. No names mentioned, but I've been at places where you ran your full process right up until restoring your backups, because that would just be too intense to do. So we don't recover our backups, because it might cost us a fortune.

 

Okay. So when do we ever test our backups? If you don't test your backups, you don't have a backup.

 

We had a guy in a company I worked for who was sent to a customer. They had had, I think it was a server crash or something like that, so they wanted their systems recovered. Five minutes later, the guy comes back from the customer.

Well, that's down the drain. We can't do anything for them. Weren't you supposed to recover their stuff?

 

Well, the backup file was zero bytes. And yes, I did open it in a binary viewer just to be sure that it wasn't a mangled file header or something like that. It was zero bytes.

 

It was a small company, so they had hired the neighbor's son to fix the backup and he had no idea what he was doing. So yeah, you need to validate your backups always.
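To make the "if you don't test your backups, you don't have a backup" point concrete, here is a minimal sketch of an automated sanity check: it refuses a missing or zero-byte archive and does a test restore into a scratch directory. The path and the tar.gz format are assumptions for illustration; a real check would restore into a proper scratch environment and verify the application-level data.

```python
# A minimal sketch of validating a backup: don't just check that a file exists,
# verify it is non-empty and actually restores. Paths and the tar.gz format are
# assumptions for illustration.

import tarfile
import tempfile
from pathlib import Path

def verify_backup(archive: Path) -> bool:
    if not archive.exists() or archive.stat().st_size == 0:
        print(f"{archive}: missing or zero bytes - not a backup")
        return False
    try:
        with tarfile.open(archive, "r:gz") as tar, \
             tempfile.TemporaryDirectory() as scratch:
            tar.extractall(scratch)                       # test restore
            restored = list(Path(scratch).rglob("*"))
            print(f"{archive}: restored {len(restored)} entries")
            return len(restored) > 0
    except tarfile.TarError as err:
        print(f"{archive}: restore failed - {err}")       # e.g. mangled header
        return False

verify_backup(Path("/backups/nightly.tar.gz"))  # hypothetical path
```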

 

[Pinja] (14:03 - 14:15)

And keep in mind, also include the external communication; it might be a recovery package that you send out to your customers and to the other users. And maybe a question: root cause analysis. Should we bother?

 

[Stefan] (14:15 - 15:13)

Oh yes, you should. Especially these days, where you have GDPR, you have NIS2 and all these things, you can get a humongous fine if you don't do your root cause analysis. I've been in a place where we had an incident, and we actually spent three to five days just figuring out who was logged into the system at these different points in time, who could see what, which customer could see other customers' data, and so on.

Luckily, it was business to business, so we weren't really in the field of fines. But still, being able to tell it to them straight and have a list of people where we can say, all right, X, Y, and Z have seen this. We have seen three people from your organization logged in.

They didn't see anything they shouldn't. It's sort of like giving some comfort to our clients that we are in control of this. We know exactly what you guys saw.

 

We know exactly who saw what. And we can actually go back and say like, all right guys, we know some other clients saw your data. We will do whatever we can.

 

Do you have any further questions? We will happily collaborate with you to go into this process and help with your internal post-mortem if needed.

 

[Pinja] (15:13 - 15:37)

So speaking of fines and NIS2 and GDPR, there's the business continuity side of figuring out how to recover from an incident, from the moment of discovery to the point where you send out recovery files and communicate externally. From what we've seen, business continuity is unfortunately not always part of the overall plan.

But if you don't keep it in mind, it can actually be catastrophic for the business.

 

[Stefan] (15:38 - 17:24)

Imagine you have everybody working on-prem. Nobody has a laptop. Your building burns down.

 

What do you do? Do you have your backup externally? All right, you do.

 

It's on like good old tape backups. Do you have a spare tape machine so you can actually recover? Or was the only one hiding in your server room?

 

Can you actually go and fix hardware for everyone? Let's say you have a hundred employees and you need to figure out how do we start working on Monday since the building burned during the Christmas dinner on Friday? How do we continue?

 

Do we even have a business on Monday? Because you cannot just go out and say like, hi, I would like to buy a hundred machines and we need to set them up. Well, yes, you can buy a hundred machines fairly easily.

 

But when you boot them up, what are they connecting to? If you run everything on-prem, how do you fix that? Like, yes, you might be able to recover your backup.

 

All good. But then you need to restore access management. You need to restore your identity setup if you're on Windows as well.

So you need to restore your Active Directory, everything. It could easily be cloud resources you needed to give access to again, if they're tied into things. So this whole plan of continuity is so important.

 

And in some cases these days, most people run around with laptops, and it's easy to connect to cloud resources, but you still need to have a plan. Let's say everything goes south and the cable to the US is broken, what do you do?

 

If all your resources are living on servers in the US. It's a good old story from COVID where we all figured out silicon chips or chips based on silicon were impossible to get. So we were sort of like living in a small black hole for a while, waiting for them to arrive.

 

Could our business actually sustain that? And as you said, yes, fines might be in play. With NIS2, it's 7 or 10 million euros or 1.4 to 2% of your revenue, whichever is higher.

 

[Pinja] (17:24 - 17:26)

And isn't GDPR's even higher?

 

[Stefan] (17:26 - 17:27)

Yeah, that's even bigger.

 

[Pinja] (17:28 - 17:29)

GDPR’s even higher. Yeah.

 

[Stefan] (17:29 - 18:02)

Yeah. Up to 4% of your gross revenue. Yeah.

 

It might be revenue or profit or whatever. I can't remember, but it's big, big numbers. I think it's revenues because it's actually bigger than your profit.

 

So imagine being hit by a 4% revenue fine. How do you fix that? These days, with everything looking as it is in the whole economic landscape at the moment, everybody's under pressure to make money.

 

And if you get this fine on top, what on earth are you going to do? How do you sustain that? Are you just going to shut down the laptop and, well, that was it, guys.

 

Sorry, we're a thousand employees. Sorry. You need to find a new job on Monday.
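For a sense of scale, here is the back-of-the-envelope arithmetic behind the ceilings mentioned above: NIS2 uses a fixed amount or a percentage of turnover, whichever is higher, and GDPR's upper tier goes up to 4% (with a 20 million euro floor in the regulation text). The revenue figure is made up; this is a sketch, not legal advice.

```python
# A rough back-of-the-envelope sketch of the fine ceilings discussed above.
# NIS2: fixed amount or a percentage of turnover, whichever is higher.
# GDPR upper tier: 20 million EUR or 4% of turnover, whichever is higher.
# The revenue figure below is hypothetical.

def nis2_ceiling(revenue_eur: float, essential: bool) -> float:
    fixed, pct = (10_000_000, 0.02) if essential else (7_000_000, 0.014)
    return max(fixed, pct * revenue_eur)

def gdpr_ceiling(revenue_eur: float) -> float:
    return max(20_000_000, 0.04 * revenue_eur)

revenue = 500_000_000  # hypothetical annual turnover
print(f"NIS2 (essential entity): up to {nis2_ceiling(revenue, True):,.0f} EUR")
print(f"GDPR (upper tier):       up to {gdpr_ceiling(revenue):,.0f} EUR")
```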

 

[Pinja] (18:02 - 18:40)

Bye. So in order to avoid all of this, of course, you need to have good security practices. This episode was not about talking in detail about good security practices.

 

You need to have your continuity plan. What do you do in an incident? What do you do when you have a critical vulnerability?

 

How do you communicate about it? But also after all this is said and done, how do you recover as an organization? And let's say a few words about postmortems.

 

We talked about root cause analysis and psychological safety. So let's combine those two things. Are we really able to have blameless postmortems, to actually make sure this doesn't happen again and learn from it?

 

[Stefan] (18:40 - 20:08)

Well, that depends on whether we have psychological safety in our company and whether we have a good, open culture. If you don't, the first thing you're going to hear in the postmortem meeting is, why did you do X, Y, and Z that caused this issue? That's not how you go into a blameless postmortem.

 

It's so important. It says blameless postmortem, and it should be blameless. If somebody starts asking or saying like, all right, at three o'clock, I deployed this that made everything go down, like, all right, rewind.

 

At three o'clock, we deployed this, because it might be one guy who pressed the button to deploy, but before him there were reviewers, there were committers; there are multiple people who are part of every change in software these days. If there aren't, you're a fairly small company, or you might want to rethink your structures. Just having a second opinion on everything.

 

There's a reason all of the highly regulated industries have a four-eye principle. There needs to be four eyes on everything that goes out. It cannot be, well, you can circumvent it in case of an incident where you need to fix stuff, but then it should be tracked that you actually skipped it.

 

I've been with a client who had full visibility into who approved what, and they actually followed up on it. So why is it only this team that has all of these deployments where people are approving their own changes? Why do you do that?

 

Well, our change process is super complicated and it's annoying and it never works. All right, let's have a discussion about what doesn't work. So you need to make sure you follow these things.

Even though you hate compliance, it's still there for a good reason.
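As one way to picture the follow-up Stefan describes, here is a minimal sketch that scans a hypothetical export of change records and flags deployments where someone approved their own change or where review was skipped entirely. The CSV file and its column names (change_id, team, author, approvers) are assumptions for illustration.

```python
# A minimal sketch of a four-eye follow-up: given a hypothetical CSV export of
# change records, flag changes where the author approved their own change or
# where no approval was recorded at all. Column names are assumptions.

import csv

def flag_self_approvals(path: str) -> list[dict]:
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            approvers = [a for a in row["approvers"].split(";") if a]
            if not approvers:
                row["reason"] = "review skipped"
                flagged.append(row)
            elif row["author"] in approvers:
                row["reason"] = "author approved own change"
                flagged.append(row)
    return flagged

for change in flag_self_approvals("changes.csv"):   # hypothetical export
    print(change["change_id"], change["team"], "-", change["reason"])
```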

 

[Pinja] (20:09 - 20:59)

It is really important, again, from the psychological safety perspective, to understand who is so-called allowed to bring this stuff up. I often like to reference the Westrum organizational model, where you have the pathological organizations: when somebody is telling you about an incident or something, how do you treat the messenger? In pathological cultures, figuratively speaking, hopefully, you basically shoot the messenger.

How dare you say this? Then there's bureaucratic, which is better than pathological, but there the messenger might get neglected. But then there's the best alternative, which is a generative culture, where you try to train the messenger.

 

For example, in this incident, Carl, Annie, or Kim might discover something. How do we train people to understand, identify, and communicate it effectively? That's always what I'm looking into.

 

[Stefan] (20:59 - 22:02)

And have system support for raising your incidents. We have an incident command in our Slack. It's easy to raise an incident.

 

It will go directly to security, and they will start assessing everything and start grabbing the right people for this. I've been in other companies where we had a Slack command as well. It would ask you a few questions, and most of them you could actually skip, but it needed a name, a very short description, which channels should be included, and whether it should be publicly available in the company.

It might be a security incident with PII data and so on, so you want to keep it a bit secure and only include security and the people you assign. Having that process in place where you can set it up easily.

 

You might go into an incident like, so let's have a short brief meeting here. What's going on? What are you seeing?

 

What's happening? All right. Well, we're seeing this trace so-and-so and all right.

 

We've seen this pattern before. It's not an incident. It's just random scrapers hitting us or whatever.

 

Being able to defuse and de-escalate things needs to be in place as well. Sometimes we forget that things can de-escalate; we always think they escalate.
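As a sketch of the "/incident" command idea (not Eficode's or any specific company's implementation), here is a tiny web handler for a Slack slash command: Slack posts the command as form fields such as text and user_name, and the handler records a new incident and replies to the reporter right away. The Flask app, the in-memory list, and the wording are assumptions; a real setup would verify Slack's signing secret and actually notify the security team.

```python
# A minimal sketch of an incident-intake slash command. Slack sends slash
# commands as form-encoded POSTs with fields like "text" and "user_name".
# The endpoint, storage, and reply wording here are illustrative assumptions.

from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)
incidents = []  # stand-in for a real incident tracker

@app.post("/slack/incident")
def raise_incident():
    description = request.form.get("text", "").strip() or "no description yet"
    incident = {
        "id": len(incidents) + 1,
        "reported_by": request.form.get("user_name", "unknown"),
        "description": description,
        "opened_at": datetime.now(timezone.utc).isoformat(),
    }
    incidents.append(incident)
    # Ephemeral reply so the reporter gets immediate confirmation; a real
    # handler would also page or message the security team here.
    return jsonify({
        "response_type": "ephemeral",
        "text": f"Incident #{incident['id']} raised: {description}. "
                "Security will pick it up from here.",
    })

if __name__ == "__main__":
    app.run(port=3000)
```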

 

[Pinja] (22:02 - 22:51)

Yeah, but it's also important to understand that sometimes it might be the real deal. So it's better to be safe than sorry in this case. And I've also seen, unfortunately, organizations where the bureaucracy, the hierarchy of the organization, has been preventing critical information from spreading in that organization.

For example, in one case there was a manager who was managing a team lead, who then had a team under them. And the team members, the so-called lowest-level employees, were not allowed, or the big manager was not encouraging them, to reach out to this person directly, but always via the team lead who was there in the middle. So slowing down the chain of communication, because you don't want to deal with the so-called low-level employees, is also a risk.

 

[Stefan] (22:52 - 22:57)

That sounds a bit like the military. There's always a chain of command, and you cannot skip the chain of command. You're not allowed to.

 

[Pinja] (22:57 - 22:59)

No, there is no skipping the line of command.

 

[Stefan] (22:59 - 23:41)

That would make every incident horrible. When you're in an incident, speed is key as well. But you also need to go slow.

 

If you're always in panic mode, like you can see some companies, as you said earlier, they can fix the incident. Yes, you've applied a fix. Now what?

 

Have you solved the underlying problem? Have you just closed the incident because you don't really want incidents to happen? We've had customers that had a long history of X, Y, and Z amount of incidents, and they wanted to reduce them.

 

But if you ask about sharing that number of incidents inside the organization, they'll be like, we're not allowed to say how many incidents we have a year. Why not? If we all know how many incidents we have, we're all going to be pitching in to reduce that number.

 

Nobody likes to have incidents.

 

[Pinja] (23:41 - 24:02)

Nobody likes to have them. But maybe as a final note today, if there's something you take away from this episode and this discussion between Stefan and me, it's this: don't walk away from an incident, whatever it is, without a good learning experience in your back pocket. You might avoid it in the future.

 

You might be faster in responding. It might be something else, but don't miss this opportunity to learn something.

 

[Stefan] (24:02 - 25:04)

To do your postmortems well: a lot of people go into a postmortem and just list what happened, and then they write two or three actions to follow up, and that's it. No, you want to look at the different stages. What caused the incident?

 

When did that happen? When did we figure it out? All right, what's the time between those?

 

What can we do to reduce the time from the action that caused it to us detecting it, and from detection to fix? How can we reduce that time? So focus on how we can shrink the span of time between the different stages of an incident.

 

We could probably do three or four episodes on postmortems, but there's a lot of opinions out there, and some people think going into a postmortem is just like bringing X, Y, and Z people, and we talk it through, and everybody's happy. No, you need to have learnings from the postmortem, and you need to go out and apply them. Many people go into postmortems.

 

We need to have a postmortem. They write a document, put it in a drawer, forget about it. You need to bring those learnings back, or else you're going to be in the same situation at another point in time, and if it affects your customers, you are going to look stupid.
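To make the stage-by-stage view concrete, here is a minimal sketch that takes three timestamps from a postmortem timeline (when the triggering change happened, when it was detected, when it was fixed) and turns them into the gaps worth shrinking. The timestamps are made-up examples.

```python
# A minimal sketch of the postmortem timeline view: compute time-to-detect and
# time-to-fix from the incident's key timestamps instead of only listing events.
# The timestamps below are hypothetical.

from datetime import datetime

def stage_durations(caused_at: str, detected_at: str, fixed_at: str) -> dict:
    caused = datetime.fromisoformat(caused_at)
    detected = datetime.fromisoformat(detected_at)
    fixed = datetime.fromisoformat(fixed_at)
    return {
        "time_to_detect": detected - caused,   # the gap to try to shrink first
        "time_to_fix": fixed - detected,
        "total_impact": fixed - caused,
    }

print(stage_durations("2025-06-13T15:00:00",   # deploy that caused it
                      "2025-06-13T15:42:00",   # first alert / report
                      "2025-06-13T17:05:00"))  # fix rolled out
```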

 

[Pinja] (25:04 - 25:20)

That is true, and on that note, I think that's all the time we have for this discussion today. We want to apologize, of course, to Carl, Annie, and Kim for using you as examples. We have nothing against Carls, Annies, and Kims, but somebody needed to take the blame.

 

We didn't do blameless in this episode.

 

[Stefan] (25:20 - 25:24)

Yeah, it shouldn't always be Alice and Bob. It needs to be other people sometimes.

 

[Pinja] (25:24 - 25:27)

It needs to be somebody else. Stefan, thank you so much for joining me today.

 

[Stefan] (25:27 - 25:28)

Thank you.

 

[Pinja] (25:28 - 25:39)

All right. Thank you, everybody, for joining us, and we'll see you next time in the DevOps Sauna. We'll now tell you a little bit about who we are.

 

[Stefan] (25:39 - 25:44)

I'm Stefan Poulsen. I work as a solution architect with focus on DevOps, platform engineering, and AI.

 

[Pinja] (25:44 - 25:49)

I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode.

 

[Stefan] (25:49 - 25:51)

Thanks for tuning in. We'll catch you next time.

 

[Pinja] (25:52 - 26:00)

And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.

 
