From copilots to autonomous agents: The agentic shift in DevOps
AI copilots are everywhere in software development today. But the real transformation is only beginning.
In this episode of DevOps Sauna, Pinja and Stefan explore the Agentic Shift — the move from AI assistants that respond to prompts toward autonomous agents that can pursue goals, make decisions, and act independently inside complex systems.
They discuss what it really takes for organizations to move from AI copilots to AI coworkers, including data readiness, governance, security, and organizational maturity. Because deploying autonomous agents is not just a technical challenge — it’s also about responsibility, guardrails, and understanding the risks of letting AI operate with increasing autonomy.
You’ll also hear how concepts like domain-driven design, bounded contexts, and incident response automation may shape the first real use cases for autonomous AI agents in DevOps environments.
If your organization is experimenting with AI today, this conversation will help you understand what the next phase of AI adoption actually looks like.
[Pinja] (0:03 - 0:21)
Unfortunately, still in 2026, we see that not everybody has an AI strategy. Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.
[Stefan] (0:22 - 0:31)
Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.
[Pinja] (0:38 - 0:45)
Hello and welcome back to the DevOps Sauna. I am joined by my co-host, Stefan. How are you doing?
[Stefan] (0:46 - 0:48)
All good, Pinja. It's almost springtime.
[Pinja] (0:48 - 0:53)
We're counting for days, for spring to start. The sun is out, but it's still cold.
[Stefan] (0:54 - 1:01)
To have sun in it would be gloomy and gray and wintry in Finland. Like, I still have massive piles of snow here.
[Pinja] (1:01 - 1:05)
Yes, it is wintry, it's cold, and we have a lot of snow, but today it was sunny.
[Stefan] (1:06 - 1:19)
Ah, nice. Well, according to the weather report, all of the snow is going to be disappearing over the next few days. But a lot of happy people since snow is going away, and it's getting hotter than muck because we don't know how to do winter anymore, apparently.
[Pinja] (1:19 - 2:04)
No, it's difficult to do winter. Hey, to our topic today. So, in the previous episodes, we have talked about the trends for the software development lifecycle for 2026.
A couple episodes ago, we covered six trends that Stefan actually wrote a blog post about. And in the previous episodes, we have now done deep dives into each of these areas. And this marks the last one of those.
So today, we'll be talking about what to do if you want to go from co-pilots to autonomous agents. Because we claim that you won't get very far if you just focus on the technical capability, but you need something like organizational maturity and maybe a plan on how to do it as well.
[Stefan] (2:04 - 2:08)
Well, just give that task to AI. They'll figure it out. Like, it'll be fine.
[Pinja] (2:08 - 2:17)
And to be honest, we did a test. We did ask. We asked you and I on how to do it.
And the answer was quite decent. But if you wish, that could be a start.
[Stefan] (2:17 - 2:36)
Like data readiness, governance, change management, different steps to implement, and then sort of like a very high-level abstract operating model, like, yes, but what do we do in reality? So we got like the steps right up until the agents would be coming into play, which was okay because I like it asking us questions.
[Pinja] (2:37 - 3:01)
Yeah, it kind of works. But the thing is, having AI-assisted work has become a commodity now. It's done, okay?
So we claim we should go beyond that. So in the past couple of years, we've been using LLMs. AI systems have become just business as usual in IDEs. So what does it mean now if we look at the dawn of modern AI?
And is it a more of a passive approach?
[Stefan] (3:01 - 3:34)
Yeah, I think like the whole like, if you've got AI-assisted development, like having something like GitHub Copilot, you ask it to do some things, it does stuff. And well, we're there, it's an IDE. Then they pushed the agent mode that was sort of like starting to branch further out in your code base, could handle multiple files, get some more like the context of what was going on.
It really made sense. But it's still us giving orders, asking questions, like there's no autonomy. It doesn't really do magic for us.
And like, we are humans. We want magic. We want massive amounts of magic.
[Pinja] (3:36 - 4:04)
And to be fair, the tooling is already supporting this. If your organization is taking this next step, and you are ready, and you have the prerequisites to do that in a good way. But like, for example, what do we want?
We would like to have the agents actually progress a given task. And we want to have that actually reach the defined intention and goal without our intervention. So in reality, that might be actually what we call moving from co-pilot to co-worker.
[Stefan] (4:05 - 5:34)
And exactly as you say, like giving a good defined goal or intentions, you figure out the rest. Good luck. See you in a week or an hour or whenever it's done.
Like when we look at it from a co-pilot to co-worker perspective, like the co-pilot is us like pushing stuff to it, where the co-worker would actually like look at it, pull in whatever is popping up. Like we could pull man's term. We could have an event queue that just pops things into thin air.
And whenever an agent sees something that it finds within a scope, it would just like pull it in, do stuff, and magic happens. Because it's sort of like listening in on what we're doing, a bit like people have been using Siri and Gemini. Or, oh yeah, I think it's called Gemini with Google now.
They decided to kill high Google. But taking those steps where it actually reacts to what is going on instead of us like pushing a task to it. Then instead of going in like linear mode, where we sort of have the storyline, we're building up the context to be bigger and bigger.
It sort of loops, looks like the U.S. The Air Force has this pattern of like it's called OODA, like Observe, Orient, Decide, Act. Like it looks at what it has done. Is it there yet?
It decides. If it decides that it's not good enough, it acts on it. It's like using that pattern to iteratively improve the outcome for us.
And then like if we use the co-pilot, we create a chat or the session, all good. When we move into a new chat, everything is lost. We have no persistent memory.
If we have like an autonomous agent, it starts to get a feeling of how we are or just...
[Pinja] (5:34 - 6:08)
Yeah, and I've used Gemini, I've created a Gem. Like I think they're called Gems, right? Where you can basically teach that.
It still is not autonomous yet when I go into the chat mode, but I've given the context for like one specific project. I would always like to assume that I, when I say this, I mean that, without actually writing the whole prompt every single time. But at the same time, it is only within that gem or that agent, basically remembering that context, but it's not building upon different chats so much yet.
[Stefan] (6:08 - 6:28)
Yeah, but it would need another step even, because when you use this gem, it needs to remember what you were building in, like on top of that gem and like, all right, there's something interesting here. I'll save that for later. And like, because you're getting a new gem every time, you're not like, it doesn't really learn.
But it would be something like that, but just an extra step on top of that, which would be super nice to have.
[Pinja] (6:29 - 6:40)
And like we still need, with a more passive approach and prompting, we still need the human review. But when we move more to the autonomous sphere, it's more about the target coverage, right?
[Stefan] (6:41 - 7:27)
Yeah, it sure is. Like, again, give it the intention, give it the goal, give it like the necessary, what do we call it? Like sub goals, targets, whatever we call them.
As soon as you cover this, you're done. Instead of us like reviewing the output of the work, it actually did. So like giving it some freedom and flexibility to figure out what to happen.
And honestly, if it's really well, we wouldn't really care about how it did it. As long as it covers our targets, then we're happy. And when we talk about that, it makes sense when we talk about task level or system level, it's building a system for us instead of handing it a specific task.
So it's reactive, it's taking decisions, it remembers what it's doing. It's trying to achieve our goal on a system level. That would be the co-worker instead of a co-pilot.
It would be fantastic.
[Pinja] (7:28 - 7:51)
It is already a possibility, but of course, we need a supervised, some kind of supervised setup for this. We would like to have little human intervention. The human needs to be somewhere in the loop, but how much is the question?
How can we optimize that? So is the organization ready if autonomous agents are coming? So because many of the organizations already have AI somewhere in the organization.
[Stefan] (7:51 - 7:57)
Yeah, of course they do. Everybody has AI. Like every product even has AI features.
We all know that.
[Pinja] (7:57 - 7:58)
It's a must.
[Stefan] (7:58 - 8:01)
That's what the world wants.
[Pinja] (8:01 - 8:02)
It is.
[Stefan] (8:02 - 8:45)
But actually it shows up like when people do research, it's like somewhere around 80% says they have AI. But what does it mean to have AI? It's like, is it somebody doing experiments with Claude Code?
Is it a widely adopted co-pilot? Is it using AI for personalized content for your customers? What does it actually mean when you say you have AI?
Is it having bought the full Microsoft suite and you have a co-pilot button in your Excel spreadsheets? Is that using AI? Well, AI is there, but are you using it?
We need to go a bit further than that. And like we could ask AI to do it, but then again, AI would probably give us yet another lie because it also wants us to be happy.
[Pinja] (8:46 - 9:24)
That's true. And like, if we really want to enjoy and get the gains from the AI super speed, I don't think many actually really know enough of what are the prerequisites for this. And we wanted to remind you of an episode that we recorded earlier.
It was in the autumn of 2025 when we talked to Lofred Madzou and we talked about AI governance and what are the stepping stones for an organization. But we, unfortunately still in 2026, we see that not everybody has an AI strategy. AI strategy does not mean that you say that we need AI now and we're going to get AI.
It's not one AI, please. To say just that.
[Stefan] (9:25 - 9:29)
Oh, that would be so easy. Just pop down to the shop, buy one AI, you're happy.
[Pinja] (9:29 - 9:31)
Yeah, because I think a couple of years ago, it was the same.
[Stefan] (9:32 - 9:49)
Yeah. The question is still like, what do you want to achieve with AI? And that is not saying we want AI.
It's like, what do you want to achieve? It's just like, honestly, it's the same thing we did when we started doing this like agile transformation. Like you need to ask, what do you actually want to achieve?
[Pinja] (9:49 - 10:08)
So previously it was, at first it was like, hey, can we have one agile, please? And then asking for, hey, some DevOps here to our organization without defining what it is. But now it's the one AI.
I can have some AI here, please. And so there is more than tech here yet again to develop on.
[Stefan] (10:08 - 10:48)
But part of it is tech, like quality, data quality. That is a big, big topic when you want to do AI. And I talked to a lot of people in different arenas and one of them kept going on like, yes, you want AI, you want to do machine learning or whatever.
You will not get anything that is better than the quality of the data you're training it with or supplying it with. Like you don't have to train your own models. Like please don't.
Unless you have something very specific and very well set context, then it might make sense. But if you train a model on subpar data, you will get subpar results. If you guide an LLM with subpar data, you get subpar results.
[Pinja] (10:48 - 11:02)
And this has been, I think the analogy has been there already with agile since the beginning. I don't want to use any swear words, but if you put poor quality in, you get poor quality out. But everybody knows what, saying that I'm referring to and the same with DevOps, right?
[Stefan] (11:02 - 11:03)
Oh, yes.
[Pinja] (11:03 - 11:11)
And the same with the data quality here. So it's an unskippable step that organizations have to go through.
[Stefan] (11:11 - 11:40)
Can you use AI to improve your data quality before you go all crazy? Maybe. Like I've seen it work in some cases where you have some completely garbage data and you actually manage to structure it, give it a better quality by running AI on top.
So you might need a pre-project for making sure the data quality is good so you can actually run like full-scale AI on your business afterwards. But again, it's very context dependent. As good consultants, we always say it depends.
[Pinja] (11:41 - 11:43)
It is the first line in the book.
[Stefan] (11:43 - 11:44)
Oh, yes.
[Pinja] (11:44 - 12:16)
The guide to how to become a consultant. When we move on to autonomous agents and letting them roam so-called free, even more important than before with your not so autonomous, your co-pilots is the governance. And in the previous episode, we talked to our colleague Nora Fosse.
She's a GRC expert here at Eficode. And we would talk about is the organization really ready for this step? Because moving to the autonomous sphere requires much more than when you work with co-pilots.
[Stefan] (12:17 - 13:28)
Yeah. Like who's responsible for stuff that is being created? Like do you have to make a mark on it?
Can you just like, oh, it looks good. Let's do it. Can it reach out and read all of the PII data you might have?
Like can it write to datasets? Can it read from datasets? Is this like a boundary it can't cross?
Do we need human intervention here? Are we like a highly regulated industry? Do we need to take other things into account?
But we see all of the fun results, like the memes that are roaming, like deleted data, production data, or generating bogus analytics. Honestly, I don't know if they're true, but it's still like a fun meme to have. I saw one the other day where a company had made decisions on analytics data for three months, and they figured out that their AI engine had generated this bogus data.
Well, it's easy to convince people not to trust AI, but in reality, I would rather have somebody raise their hand and say like, hey, that's me. I did this project, and this is the war story. Then when you can sort of build a governance model around it, make sure like, all right, so this is how we verify and qualify that the output from AI was good enough or within the guardrails that we have actually set.
[Pinja] (13:28 - 14:06)
And we're not going to get too much into the trust part right now, because we already talked about that with Nora in more detail in a previous episode. But it is important to understand that with autonomous agents, you also need to change the authority structures. Like, for example, who's designing the guardrails, and who's approving if we need an agent's rollback, as a couple of examples.
And we need to figure out, do we use new kind of roles to handle these? For example, your architects, your SREs, does it change the role that they're doing right now when we move into the autonomous sphere?
[Stefan] (14:06 - 14:52)
It's definitely not that AI is taking all of our jobs. It's more like, how can we superspeed these different roles with AI? Because the whole discussion of AI taking over and getting us all fired, that story is long gone, I think.
It is still roaming, like all bad stories will keep roaming, but that's not what happened in reality. We have one big question for the governance, who has the final responsibility, and who is accountable for the outcome? When we talk to Nora, as she said, it can actually go up to a board level where the board is accountable for this, and you can get personally fined or even imprisoned in some cases if it's really bad.
So you need to remember, you have a responsibility and you are accountable for whatever's going on inside of your organization, no matter if it's security or AI or whatever it is.
[Pinja] (14:53 - 15:20)
Some people compare autonomous agents to junior developers, for example. And so much so, we're not trying to put blame on the junior developers or anything, but in the same way, an organization has the responsibility and the accountability for the work that the employees are doing. So if you don't have the governance structure in place, if you don't do the security training that everybody loves so much, you're going to be in trouble.
[Stefan] (15:20 - 15:25)
So we're going to run security training for autonomous agents now, or is that a new business area?
[Pinja] (15:26 - 15:30)
Let's look at that. Yeah, maybe in 2027. I don't know about that.
Let's look at the trends.
[Stefan] (15:31 - 15:32)
That's going to be a new thing.
[Pinja] (15:32 - 15:33)
For next year.
[Stefan] (15:33 - 15:35)
Oh, that's trends 2027.
[Pinja] (15:36 - 15:51)
Exactly. This is how fast we're going right now. But let's take a brief look into domain-driven design as well, because there is that cross, that's really the relation to domain-driven design as well.
So looking at the boundaries, right?
[Stefan] (15:51 - 17:21)
Yeah. Like if you want to build autonomous agents, you need to figure out the boundaries for them. And as you say, if we look into domain-driven design, you have something called a bounded context, which is sort of like whatever leaves this in a good transactional state.
And outsiders can only read through an agreed interface, like contracts, APIs, whatever we want to put up. If we think about letting an agent roam, we need to make sure it roams inside our bounded context so it doesn't destroy other people's stuff. It only destroys our own stuff because we have a responsibility for everything being in a good state within our bounded context.
And I think that's a good setting for an agent. Like you can do whatever you like, but you can only do it within these walls. Then between the different bounded contexts, you might have an autonomous agent that can reach out and talk to these interfaces, but it cannot go in and do stuff inside the bounded context.
So I think I haven't seen a lot of articles around this like how you want your agents to roam and how you put off all of these guidelines and guardrails and whatever we need. But I think there is definitely something that we should think about regarding these bounded contexts and saying like, all right, so what is the context we allow our agent to be autonomous inside? Can it be an agent outside?
Maybe, maybe it's a different agent if it's just like the interface to our context. So I think that's an interesting thing to see if that's actually going to be pivoted in. It's probably going to be called something completely different because we're not really that good at inheriting concepts between different aspects of software engineering and infrastructure and so on.
We want our own terms.
[Pinja] (17:21 - 18:01)
Yeah, just for clarity's sake, if nothing else. But another one that might have a different meaning or connotation in different areas of classification. So again, we talk about the responsibility, the accountability, and now like having the boundaries set with the bounded context.
But say we have the autonomous agent generating code and again, also tying this into GRC and our discussion with Nora. So we need to accept to some degree that they'll all be autonomously generated code in production if we are using autonomous agents. So that's, again, going into what kind of criticality allows it.
[Stefan] (18:02 - 18:41)
Yeah, like do you dare put an autonomous agent generating code into your accounting system? Maybe not, because you can actually get in trouble if your accounting ain't right. Can it handle personal recommendations or whatever?
Let it auto-generate some stuff and go forward and have fun. That might be OK. If we're in some given businesses, we might be running into legislation that doesn't allow us to do these things, like we could be hit by NIST 2, which is sort of like the critical infrastructure.
Do I dare let an autonomous agent push code in a critical infrastructure setting where I need to respond and make sure my country runs well? Maybe not.
[Pinja] (18:42 - 18:43)
Maybe.
[Stefan] (18:44 - 18:51)
It's like, who's going to respond to this at 2 a.m.? And what will you do? Because you have no idea what it did.
[Pinja] (18:52 - 19:16)
And speaking of vulnerabilities, we could take a whole day talking about the vulnerabilities and autonomous agents. So to keep things on a little higher level today, I don't want to say that we don't want to talk about vulnerabilities because they would be so last season, but they would take on the whole conversation. But it's not like they necessarily create new vulnerabilities if you use autonomous agents, but they might amplify the existing ones.
[Stefan] (19:16 - 20:29)
All of the concepts are sort of the same. Yes, it might have a new name because it's a prompt injection, but a prompt injection is to AI what cross-site scripting is to a web application. Like when you look at your security team, they already know like replay attacks and so on because they sort of abstracted the terms up to be something they understand, which is super good because yes, you need specialized knowledge into what can happen with an AI, like prompt injection is a bit different than cross-site scripting.
The same goal, the same technique sort of because you inject something, you make the system act differently and you get an output from it. Sort of the same thing. So your security professionals are not going to be out of work.
They're just like, they need to relearn some terminology. They need to know, like some of them will need to have a speciality and like, all right, so how do you actually cope with this in an AI setting? But the principles, thought patterns, everything is sort of the same.
It's still the same when you think about the risks and what can happen, blast radius and so on. So it's actually nice that we're seeing people trying to define lists of what can go wrong and they map it into these security terms. So everything is not lost.
[Pinja] (20:29 - 21:00)
No, it's not lost. But at the same time, we need to keep in mind that security is still very much needed and looking at vulnerabilities, trying to like, as I said, they might be amplified because you, we have the autonomous agents roaming around. And again, we're not saying don't use autonomous agents, but get the stuff together and in a good position before you do that, not to get into the place where somebody calls you at 2am and says, oh, we have a big thing going on here.
[Stefan] (21:00 - 21:10)
Yeah, there's nothing fun about being woken up at 2am by somebody saying we have an issue. I've tried that too many times and it's never fun, no matter how much you're paid for being on call.
[Pinja] (21:11 - 21:30)
Exactly. One thing to talk about is that we hear so much that somebody has now a fully fledged agent orchestration set up in place, but we really talk about the practice here. Only a few of them actually are open and honest about the complexity and success of those stories, right?
[Stefan] (21:30 - 23:36)
We see a lot of orchestration frameworks showing up, but it's hardly like a good case study around it. It's usually, well, here you can build all of your agents and we'll orchestrate them. There's several vendors out there that do this.
I think that still the fun one is Steve Yegge, which most people actually know for something completely different. He's gone into this like full whack AI world together with Gene Kim, but Steve Yegge has focused on something he calls Gastown, that is like the big orchestration agent buildup. He wrote a fantastic article around it where he sort of explains everything.
The terminology is a bit odd because it's related to Mad Max, so you need to have a mental mapping of Mad Max characters to what you would sort of imagine in the setup. But at some point he even realizes he needed an agent to check up if the other agents were stalling and then he just needed to have an agent poking them and say like, hey, get back to work. Or sometimes it needed to restart it or he would have an agent to like span everything and see if all of the other agents were healthy.
Like there are so many complexities in this, which sort of goes back and smells a bit like Kubernetes where you have an orchestrator and you have liveness probes and health probes and everything. So it's actually a fun read, but as he starts the article with like, you shouldn't do this unless you have plenty of money because he needed to create an extra Claude account to actually make sure it had enough tokens to run. And like he was pretty honest about everything in it.
The downside is he published all of the code, he used AI to generate all of this code, but I read a few other reviews where they couldn't make this code work, even though it was full of documentation, everything, they just couldn't get it to spin up. And then it comes back to this discussion of like, if agents are generating code, can we actually recover in case of an incident? Like, do we have a disaster recovery plan here?
Is this so bespoke that it doesn't really fit into anything? So, yeah, it's fun. And I've talked to a lot of people where we talk about agent orchestration and then the discussion goes into how big is an agent, which is sort of like going into how big is a service in software development?
So the same patterns again.
[Pinja] (23:36 - 24:23)
The same patterns. And it might be something that we will see get easier over time. But right now it's because we're talking about software development life cycle, which is a really complex system as it is.
So I'm really thinking about the companies that we see and everybody, most likely everybody, has some kind of legacy components. So if you're like trying to add autonomous agents, again, I'm not trying to discourage anybody or doing this, but just we need to be careful in getting the stuff right before we go in. And legacy stuff, legacy components is one of those things, because in some cases you will see and AI agents will, autonomous agents will reveal these bottlenecks, which I think is going to be a good thing in the long run.
[Stefan] (24:24 - 25:31)
Yeah, like you can have a service that has been running for, let's say, 10, 14, 15 years. It's running fine. It's doing its work.
It's making your money. All of a sudden you add AI and something super speeds on the side of it. All of a sudden it just caps to the side and, oh, it just couldn't handle this load.
What do we do now? Well, it was written like 10, 15 years ago. Nobody ever expected that the load would be like this.
So you need to react to this or even better, your AI might actually be able to figure this out before it goes into production or you have like an agent that sort of mitigates this and puts a cure in front of it, making sure it only has this like throughput. Yes, throttling things is never nice because we want to be efficient. But if we have something we haven't touched for many, many years, it's scary.
It's super scary to go into an old code base and start editing it, especially sometimes like it might have been written by a guy who left the company 10 years ago. How much do you know about that? Like we know, everything is well-documented.
Everything is shining, unicorns all over the place. In reality, there's so many things running that nobody knows anything about.
[Pinja] (25:31 - 26:25)
Yeah, and we know that not many organizations, if we really think about it like an adoption curve, not everybody is in the maturity right now with autonomous agents yet, but the great maturity will come. So beware with your organizations as well. But at the end of the day, like we do need some kind of holistic view on when we include AI, just like any kind of modernization initiative or project that you're doing, anybody, any organization is doing.
And as mentioned so many times already in this episode, we're applying many of the same concepts and terminology and principles as we have with any kind of technological implementation, for example, security and vulnerabilities. But for some reason, we still see that these principles are being forgotten when applying this to autonomous agents.
[Stefan] (26:26 - 26:58)
Yeah, and we're in like the weirdest industry ever because we find a new field and we sort of skip all of our knowledge and we just move on because something is new and fancy. I think I've heard a few people saying AI is good for software engineers, bad for software developers. And then you go into this discussion between like what's the developer, what's an engineer, where the developer would be him, like his average job is just like creating websites while the engineer is like constructing the engine to create websites.
And so it just shifts like where we are in this whole setup. But that's a discussion for a completely different day on what we mean with things.
[Pinja] (26:58 - 27:12)
Yeah, but like I'm now starting to think, if we keep on forgetting the context, does it mean that we're now the co-pilots and the non-autonomous agents as human beings? We skip the context altogether. We forget the history, right?
[Stefan] (27:13 - 27:19)
Maybe we are. Are we the captain or are we the guy running on the deck of the boat? Time will tell.
[Pinja] (27:19 - 27:43)
That would be a totally different kind of conversation who's running the show here. But hey, all in all, if we look into organizational maturity, because that is a key thing to look at, are you ready for the next step? Because that's where we're headed.
And we really think that it's going to help and speed up development organizations and SDLC anyway. But do remember that we're working with complex systems.
[Stefan] (27:43 - 27:53)
Yeah, and not everybody needs to be a front runner. Like you might not be going out of business for not accepting autonomous agents all over the place. It might be OK.
You don't have to do the same as everybody else.
[Pinja] (27:53 - 27:54)
And at the same pace.
[Stefan] (27:55 - 28:36)
It's totally OK. Like take it at your own speed at your own maturity level, your own journey. Like everybody hates when we say journey, and it's about the journey.
Yeah, well, you need to take it in the appropriate steps where you can actually follow the whole execution of everything. Like we still see people to other people, like really aged systems, but it works. It runs.
It makes money. It's easy to maintain for them. Well, why not?
Well, just keep an eye on modernization and do it in small steps on the side, because at some point, things will go away and you need to modernize. So it's like it's the old discussion of tech debt, like always paying a bit off on your debt before you go bankrupt all of a sudden.
[Pinja] (28:36 - 28:53)
If we look at the first areas of SDLC, if somebody is thinking, where can I start from? So the first areas of SDLC where we have seen autonomous agents in play have actually been incident response and cyber reliability, but also security. So things that we've mentioned already here in this discussion today.
[Stefan] (28:54 - 29:51)
I love the security bit because nobody will ever question security for shutting anything down. They might complain about it. You might be inefficient, but usually there's so much leverage between security saying, we closed this because we got insights into something that was going on.
So of course, we get autonomous agents and security as one of the first things because they shut stuff down. Everything is OK. But incident response and cyber reliability, it's more about instead of running playbooks, it can actually pull on best practices for, let's say, issues with a part in Kubernetes.
It will try to restart it. It will maybe redeploy it. It will do different things.
It might seem like, oh, it has an external dependency. That external dependency is not responding. All right.
So now we know why all of these more like autonomous agents can roam and see best practices. It's all good. We might even have documents where you can look up what we usually do.
Then we might actually get some value out of the runbooks people have been writing for ages because people tend to not use the runbooks in practice.
[Pinja] (29:53 - 29:53)
Exactly.
[Stefan] (29:54 - 29:54)
Yeah.
[Pinja] (29:54 - 30:00)
On that note, I think that's all the time we have for this topic today. Thank you for joining me, Stefan, once again.
[Stefan] (30:00 - 30:02)
Thank you, Pinja. It was a pleasure as always.
[Pinja] (30:03 - 30:12)
And thank you, everybody, for tuning in. We'll see you next time in the sauna. We'll now tell you a little bit about who we are.
[Stefan] (30:13 - 30:18)
I'm Stefan Poulsen. I work as a solution architect with focus on DevOps, platform engineering, and AI.
[Pinja] (30:18 - 30:23)
I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode.
[Stefan] (30:23 - 30:25)
Thanks for tuning in. We'll catch you next time.
[Pinja] (30:25 - 30:33)
And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.
Published: