The six trends in SDLC: from copilots to autonomous agents and more in 2026
In this episode of DevOps Sauna, Pinja and Stefan break down the six key trends shaping the software development lifecycle in 2026.
From the shift from AI copilots to autonomous agents, to governance, regulation, cloud sovereignty, platform engineering, and AI cost optimization, this episode offers a practical look at what these shifts mean for modern organizations.
A must-listen for CTOs, C-level leaders, and anyone working across DevOps and platform engineering.
[Stefan] (0:03 - 0:08)
The tricky bit is, do we actually have the data to support the autonomous agent?
[Pinja] (0:12 - 0:21)
Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.
[Stefan] (0:22 - 0:31)
Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.
[Pinja] (0:37 - 0:45)
Hello, and welcome back to the DevOps Sauna. I am joined by my co-host, Stefan. Hi, Stefan.
[Stefan] (0:45 - 0:49)
Hello, Pinja, and welcome back. It's a new year. It's a new us.
[Pinja] (0:49 - 0:54)
It's a new year. New us, everything is new. No, it's not so much.
[Stefan] (0:55 - 0:59)
Well, something that was actually new in Denmark in January, we actually got snow this year.
[Pinja] (0:59 - 1:00)
Oh, wow.
[Stefan] (1:00 - 1:17)
Unfortunately, everything turned soggy, wet, and rainy. But one day the weather forecast had a fun flaw. It was like minus two to four degrees, and for one island it said minus 50.
Somebody actually made a typo with an extra zero in it. Like, please remember your gloves and hats.
[Pinja] (1:19 - 1:22)
Okay, that would be something new for Denmark.
[Stefan] (1:22 - 1:29)
Oh, yes. I think I've tried minus 18 at the max, but 50? I think the record is minus 32 or something like that.
[Pinja] (1:29 - 1:41)
That's pretty cool as well. And by the way, we're talking Celsius, for anybody who might be confused by this. Yeah, we've had a lot of snow in the Helsinki area where I'm situated.
It's been a couple of cold days. I think the most we had was minus 20 degrees Celsius.
[Stefan] (1:42 - 1:44)
Which is summer in Finland, right?
[Pinja] (1:44 - 1:50)
Pretty much, yeah. It's midsummer. Anyway, nice snow.
Again, like midsummer. In that sense, nothing new here.
[Stefan] (1:50 - 1:51)
Back to your shorts again.
[Pinja] (1:52 - 1:57)
I sat next to a guy on the bus yesterday who was wearing shorts, and it was minus 10 degrees.
[Stefan] (1:58 - 1:58)
Oh, dear.
[Pinja] (1:58 - 2:38)
But enough of the weather. It's a new year, and at Eficode, we strive to gather trends on what's happening in the software development lifecycle and in the world of IT and DevOps every single year. And this year is not unlike the others, so there's a blog out.
If you want to go check it out, it's at eficode.com, and you can find it under blogs. It is called The Six Trends for CTOs in 2026: a shift towards autonomous SDLC, so software development lifecycle.
And if you go and check this out, you might see a familiar face who was the author. Stefan, that was you, right?
[Stefan] (2:39 - 2:43)
Do I know the author? Not sure. Maybe it's me.
[Pinja] (2:43 - 2:44)
Maybe it's you, yes.
[Stefan] (2:44 - 2:46)
Yeah, with some good support internally.
[Pinja] (2:46 - 3:12)
Not just Stefan, but Stefan gave his face and his stamp of approval for this. Some of these six trends are not so surprising, but we wanted to open them up a little bit more and go through what's in it for the business. We call it The Six Trends for CTOs.
Other C-level people, please listen in. This is very important for y'all as well, and basically anybody who is working in the realms of software development.
[Stefan] (3:13 - 3:21)
No matter if you're a director, VP, whatever you are, this still holds up. It hits multiple levels.
[Pinja] (3:21 - 3:22)
It does.
[Stefan] (3:22 - 3:31)
Even individual contributors in operations and platform engineering, please check up on it, because some of this stuff will actually be moving inside of your organization this year.
[Pinja] (3:31 - 4:00)
And with that, we kick off with our number one: global AI governance, mitigating shadow AI risks. We know that AI integration is now basically the industry standard across all stages of the software development lifecycle. So it's not just creating code with AI, but with this and the speed that everything is moving at, we have a very high need to set up the governance structure the right way from the get-go.
[Stefan] (4:00 - 5:06)
Everybody wanted to be in on the rush with AI, so we bought all of the tools in the world, and now we don't really know where everything is. Well, many people do. If you're a bigger corporation, you know that governance and compliance are important, but we see everything from 10 different tools running in different departments.
Maybe you want to slim that down, get a good offer on whatever you're buying. And if you do this wrong and you set too strict a policy, you will have shadow AI, just like you had shadow IT with people bringing in all sorts of weird software that you shouldn't really be using. But you need to hit that sweet spot in the middle where everything is governed, but with a kind of flexibility for people.
Like internally, we just got an update on our AI policies. Now we have white, gray, and black lists, and some things are sort of on the gray list. But since we go out to multiple customers, who might need something else, we can get a stamp of approval for things on the gray list.
It's fairly straightforward. It's not a three-week process or something like that. So you need that level of flexibility in your policy, for sure.
[Pinja] (5:06 - 5:27)
And if we talk about that, we said the right way, but if we talk about the wrong way of doing things, there are a couple of risks here. We don't want to see any IPR leaks happen with the setup of AI tools. And my big yikes would be to see very uncontrolled access to PII, personal data that you're not supposed to let your AI tools see.
[Stefan] (5:27 - 6:03)
No. Imagine you spin something AI up in a corner, you have no control of the governance model around it. All of a sudden, it's reading PII data.
It responds to a website or a web service you're running. Oh, dear God. Cleaning that up and figuring out what data went where.
If your observability stack is not good, if your tracing is not good, then you pretty much don't know what went where. And oh, dear God, getting a visit from the authorities and having to pay that fine, that's not going to be a fun day. We see a rise in all of these incidents.
Some of them are with sensitive data, some are not. We're definitely seeing a rise in AI incidents this time.
[Pinja] (6:04 - 6:25)
We're not going to go into the IPR leaks or PII data accesses right now, but this is something to keep an eye on. We do think that there is going to be a higher focus this year on having the right tools and, of course, the policies to support this. And it's maybe not just the tools, but the tool and the model being used in conjunction with each other.
[Stefan] (6:25 - 7:09)
It is the combination. We could see it in some discussions around the updated policy we had internally, where people were like, is it a tool or a model we're getting on a list here? Or is it the combination?
Should we have a matrix for assigning this? You need to have pretty open discussions on how this actually fits your corporation, because there is no magical model that will fix everything. Speaking of models, you need to take into account which models you allow.
If you allow a tool, how can you restrict access to a model you don't want in that tool as well? It's no surprise that some European corporations don't like the DeepSeek model because it comes from China.
It could be the other way around, too, that we don't trust the American models. You need to be able to restrict, or at least have a good insight into, what's running where and why.
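As a rough illustration of that kind of tool-and-model matrix, a minimal white/grey/black-list check might look like the sketch below. The policy entries, tool names, and model names are invented for illustration, not anyone's actual policy:

```python
# Hypothetical white/grey/black-list check for AI tool + model combinations.
# All entries are illustrative stand-ins, not a real policy.

POLICY = {
    ("copilot", "gpt-4o"): "white",        # approved for general use
    ("copilot", "deepseek-r1"): "black",   # blocked outright
    ("cursor", "claude-sonnet"): "grey",   # allowed with case-by-case approval
}

def check_usage(tool: str, model: str, has_approval: bool = False) -> bool:
    """Return True if this tool/model combination may be used."""
    status = POLICY.get((tool, model), "black")  # default-deny unknown combos
    if status == "white":
        return True
    if status == "grey":
        return has_approval  # grey-listed combos need a stamp of approval
    return False

print(check_usage("cursor", "claude-sonnet"))                     # False
print(check_usage("cursor", "claude-sonnet", has_approval=True))  # True
```

Note the default-deny on unknown combinations: that is the part that keeps shadow AI from slipping through the gaps in the matrix.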
[Pinja] (7:10 - 7:37)
And moving on with the list. Trend number two is moving from copilots to autonomous agents, so the shift to agent-driven operations. And just to be clear here, when we say autonomous agents, we're talking about AI tools that perceive their environment, make independent decisions, and are able to take actions to achieve complex goals, so that there is not a constant human oversight present.
That is what we mean when we say autonomous agents here.
[Stefan] (7:38 - 7:39)
That's the end goal of it.
[Pinja] (7:39 - 7:40)
Exactly.
[Stefan] (7:40 - 8:28)
And then we need to look at the maturity steps and the level of trust to get there. That is the big step. Like you said, the title is moving from copilots to autonomous agents.
We've seen everybody using these different gen AI tools, sort of prompting, getting feedback, but we need to shift further ahead because the world is moving. And yes, it's a nice assistant to have on the side. If you're writing an article, it might be able to give you a brief overview of it.
It might be able to do some research. It might be able to act as an editor or reviewer for you. But the next step is incorporating it into the full SDLC.
Like, can it actually just update your software without you having to intervene? Or how can you do this? There are so many things that are possible all of a sudden.
Yeah.
[Pinja] (8:28 - 9:12)
And maturity has come a long way. In the past four years now, since ChatGPT was launched, we've really moved from just having fun with ChatGPT prompts to actually seeing how the industries take on AI tooling and AI agents, and looking at how we can make this more autonomous, while also seeing it as critical that we have the human in the loop. So Martin Woodward was not wrong when he said at The DEVOPS Conference in 2023, I think, so three years ago, that we're going to see more change in the next five years than in the past 40. And it was only four years ago that we got ChatGPT.
[Stefan] (9:12 - 9:26)
And now every day there's just a new something-something-AI that spins up and runs things. The best thing is, in the beginning, people just put an AI label on everything, and it's like, what's AI in this? And now we actually start to see what the real AI is.
[Pinja] (9:26 - 9:56)
Yeah. Of course, we're still talking about LLMs. We're still not close to getting AGI, I would say, but the real investment in AI is now, and actually taking advantage of it in the SDLC.
So going from just small experiments to actually being systematic in leveraging it where it fits, so it's not just a nice gimmicky thing that somebody's throwing out into the organization.
[Stefan] (9:56 - 10:56)
Like if we just look at the partners we have, we see more and more of them coming out with a good AI story, like how it can actually support you in your SDLC, whether it's during your coding session, during builds, deployments, or operations, trying to fit it all together in places where you can actually hook an agent in. If you've reached that maturity and trust level in AI, then you can opt in and say, all right, let me just run an agent here in, let's call it learning mode or a view-only mode, and then see what it would actually suggest us doing.
And then at some point, when you feel comfortable with it, you can trigger it and say, all right, run full speed ahead, do everything without me intervening. Then we start going into the autonomous agents, and that's where we're going to see an insane level of effectiveness, because all of a sudden you don't need the human in the loop. We've had the human in the loop for so long, and whenever it reaches a human, we know it's not operating at full speed anymore.
Like we are so slow compared to machines.
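A minimal sketch of that opt-in pattern, an agent that starts in a view-only learning mode and is only later switched to act on its own. All class, mode, and issue names here are hypothetical illustrations, not any particular product's API:

```python
from enum import Enum

class AgentMode(Enum):
    LEARNING = "learning"      # view-only: suggest, never act
    AUTONOMOUS = "autonomous"  # act without a human in the loop

class RemediationAgent:
    def __init__(self, mode: AgentMode = AgentMode.LEARNING):
        self.mode = mode

    def handle(self, issue: str) -> str:
        plan = f"patch and redeploy the service affected by {issue}"
        if self.mode is AgentMode.LEARNING:
            # Learning mode: log the suggestion for a human to review.
            return f"SUGGESTION (no action taken): {plan}"
        # Autonomous mode: execute the plan directly.
        return f"EXECUTED: {plan}"

agent = RemediationAgent()             # start in learning mode
print(agent.handle("CVE-2026-1234"))
agent.mode = AgentMode.AUTONOMOUS      # opt in once trust is built
print(agent.handle("CVE-2026-1234"))
```

The point of the explicit mode enum is that the escalation from suggesting to acting is a deliberate, auditable decision rather than a default.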
[Pinja] (10:57 - 11:26)
The agents are getting better and better. And if we think of the agent mode right now and how that's in the center here, they're just getting better at seeking out and remediating, let's say, production security issues. We're slow.
The human eye makes a lot of mistakes. So we're beginning to move towards a point where we're not just looking at AI hallucinating things, but where we, with human oversight, catch a lot more than we did before with human eyes alone.
[Stefan] (11:26 - 11:59)
The tricky bit is, do we actually have the data to support the autonomous agent? Like when we talk about remediating security issues, where do we have the information about the security issues? Is it external feeds that show vulnerabilities?
Are we actually able to detect them in a system where the AI can act on them? If we don't have that data, well, you can buy an agent that will remediate this, but it doesn't know how to do it. So we come back to it: you need to have the data to be able to make the agents run well.
No data, no AI, more or less.
[Pinja] (12:00 - 12:03)
How about no trust and maturity in the organization? No agents.
[Stefan] (12:04 - 13:03)
Who needs trust in AI? It's magical. It can do everything for you.
If you're a dinosaur like me, you've been sitting back like, nah, this AI thing is not good enough yet. But over the last year, year and a half, I've leaned more and more heavily into AI. It solves many things to a good enough level for me.
If it's something I have very specific knowledge about, I usually tend to ask AI to see where we're at. Sometimes it fails, sometimes it's good. But for some really specialist knowledge, where I know the full context of the code I'm looking at and how to do this, it takes me longer to write the prompt than to actually fix the issue, because I need to prompt it two or three times before it actually does what I want it to do.
Might be me not being fully capable of doing AI, and I have a lot of discussions internally where people say that's just because you need to learn how to prompt things. But why do I need to learn yet another language when I could be programming this in a shorter time than learning how to prompt it? Yeah, that's the level of trust I'm at.
So you can hear the old dinosaur here.
[Pinja] (13:03 - 13:19)
No, it's the maturity of the organization. And of course, you need the guardrails. You need the big red button that says stop with the exclamation mark to be able to have that trust in your agents.
Of course, the data has to be there for the agents to be able to operate.
[Stefan] (13:19 - 13:26)
Just like with applications, you need that feature toggle somewhere. You need to be able to say, all right, disable AI here. You should always have feature toggles.
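As a rough sketch of that idea, the "big red button" can be as simple as a feature toggle checked before every AI code path. The flag name and the environment-variable mechanism here are hypothetical stand-ins for whatever flag service you actually use:

```python
import os

def ai_enabled() -> bool:
    # In practice this would query your feature-flag service;
    # an environment variable stands in for it here.
    return os.environ.get("FEATURE_AI_AGENT", "off") == "on"

def remediate(issue: str) -> str:
    if not ai_enabled():
        # Kill switch engaged: fall back to the human path.
        return f"AI disabled: routing {issue} to the on-call human"
    return f"AI agent handling {issue}"

print(remediate("failing health check"))  # human path unless the flag is on
```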
[Pinja] (13:26 - 14:05)
And speaking of AI, we're still keeping with the topic, but we're moving on to trend number three, which is regulatory readiness. There are two major things we want to discuss here. The first one is the EU AI Act.
And the other one is an oldie but goldie, the Cyber Resilience Act, which complements it. It is a fact that regulatory implementation and the drafting of regulations has maybe not been able to keep up with the speed at which AI is being developed. I'm trying to be very careful and nice here.
But the EU AI Act, it's been almost two years now that it's been in effect.
[Stefan] (14:05 - 14:08)
Yes, you usually get a window of two years.
[Pinja] (14:08 - 14:11)
Yeah, but there's something specific coming up in August this year.
[Stefan] (14:11 - 15:11)
Yeah, it's going to have its general application date, where it actually goes into effect now. And then your high-risk AI systems will have a lot of obligations all of a sudden. We see a few signs.
I've seen, when I'm sitting bored at night scrolling Facebook, I'm starting to see these small labels: generated with AI, or made with AI. So it's getting more and more into the view of the end consumer. Things are going on.
Here is AI. Because every now and then I see a video where I'm 99% sure it's AI, but there is this slight chance it's not. Right until you see a car swiping through a man, and you're like, all right, definitely AI.
So with the EU AI Act, you need to label your things. And depending on the different levels, you're allowed to use AI or not. Our TRC people are super good at explaining this.
I'm not going to try and take their job. There are multiple levels to where you land in the whole profiling of the Act. And the fines can be big as well.
[Pinja] (15:12 - 15:30)
That's true. Because we're now building on top of DORA and NIS2. So to be able to be in control of your SDLC, this is now the time to actually check your compliance with these.
And we're going to talk about this in a few moments, but the need for platform engineering, even in this context, is also crucial.
[Stefan] (15:30 - 15:54)
And you need to make sure you have the right people who know these things, or good partners, or however you work, to sort out the Act. And if you do regular software that's only for internal use and so on, there are no super strict rules. As soon as you go into high-risk things, there are going to be more rules and stuff to follow, just like with every other legislation.
That's how it is. And then we have the old acquaintance as well.
[Pinja] (15:54 - 16:07)
Yeah, we have the Cyber Resilience Act, CRA between old friends and family members. It's been lingering in the shadows for the past couple of years. But by summer this year, 2026, there should be conformity.
[Stefan] (16:07 - 17:42)
Yeah, it's always fun. It's just like when the GDPR came in. Nobody did anything for two years, and then: oh, it's going to be in effect next month, now we're busy.
I hope people started out earlier on the CRA, because it actually requires a lot more than just writing a policy. You need to track vulnerabilities, severe incidents, and everything.
And you actually need to report those by autumn, in September. It's mostly made to cover physical and non-physical products, so more at the product level.
It wants to create some transparency and make sure we actually have a good cybersecurity posture in the EU. And usually it's the EU and then some, because we have our old friend called the United Kingdom. Not really the EU, but very closely related.
Usually they lean up against these legislations from the EU. So it's going to be EU-plus at some point. But if you're building hardware, you need the CE approval on it.
If you ship hardware with a login, don't ship it with admin/admin as the username and password combination. All of these requirements are coming up, but you also need a higher transparency level. You need to be able to show you have no known vulnerabilities.
You need to make sure you can actually patch your vulnerabilities as well. Imagine shipping a physical device to somebody and needing to be able to patch it for X amount of years. When we talk to people in the OT sector, they're like, oh, you and your fancy new hardware.
New for us means 10-plus years. Just imagine having to support an end-user device for 10-plus years. You can't just deprecate it.
That's going to be a fun ride for all of these companies creating the hardware.
[Pinja] (17:43 - 18:03)
And we were talking about Europe. So the next trend, number four, is the pivot towards the European sovereign cloud, cloud sovereignty. One of my favorite words to pronounce as a non-native English speaker.
This is something we talked about with Stefan not that long ago on this podcast. I think we had a separate episode that came out. Was it December 19th on the sovereign cloud?
[Stefan] (18:04 - 18:06)
It was last year's Christmas present.
[Pinja] (18:06 - 18:19)
It was a Christmas present from us to all of you. And we still think that software should not really be political. But in reality, we cannot deny that there are elements here when it comes to where your data is located.
[Stefan] (18:19 - 19:23)
As soon as you have policies and legislations, then no matter what software you create, it touches some degree of policy somewhere and turns political by that. So sometimes you just need to make sure you're 100% in control of your data. If you go far into the terms and conditions with the big cloud providers, we all know them: Google, AWS, and Microsoft, some would even say Oracle, the big four.
When you read the terms, if you go far enough into them, they can actually pull your data to the US due to support cases or something like that. If you don't want to allow that, you need to make sure you can run your software somewhere in Europe, which makes things a bit more complicated, as we covered in the full episode on it. When we look at the alternative options here, they come in various degrees of how far along they are.
Some are more like: here's a network you need to build, here's the hardware you can get. But they don't really supply managed services to the same degree as we see with the big cloud providers.
So it's a bit of a tricky ride.
[Pinja] (19:23 - 20:08)
It is right now. If we think of what is coming up in the EU, there is the upcoming EU cloud services cybersecurity certification scheme, EUCS, a long word combination. So it is to be expected that we now see more organizations actually moving towards EU-based operations here.
We will see this, and there might be some more Europe-native clouds coming up in the next year, if not two. But as I said, we covered a lot of this in a previous episode. It's kind of like the AI question.
Can we compete with the big cloud providers with what we have here in the EU and Europe, and, I guess, with the models that we run in the cloud? So big questions up in the air, I would say.
[Stefan] (20:08 - 21:08)
It's always interesting. Like when you want to run AI on your own, you will never get the same performance as when you run with a big cloud provider. The setups they have, it's just insane amounts of memory.
It's an insane amount of graphics cards to run all of this and power everything. You won't be able to afford this on your own. And most of the European cloud providers don't really provide AI as a service.
It's more like: yes, you can get GPU-enabled clusters if you're running Kubernetes, or you can get GPU-enabled machines that you can opt into. So there's this balance of, if you want AI, how good of an AI will you get? I saw last week somebody posted that you can run large AI models at the edge on small hardware.
Yes, you can, but it runs super slowly. So it wouldn't be anything real-time. It'd be batch processing or something like that.
If you want to run these big models, I'm not really sure you should try to do it at the edge, because it doesn't really make sense all of a sudden.
[Pinja] (21:09 - 21:46)
No, but this is something that we have anticipated seeing with the sovereign cloud coming up. Next up, number five: platform engineering and its strategy. Do we see more platforms treated with a product model in organizations?
Because it has now, luckily, become a default in more organizations than before. But understanding who the stakeholders are, and treating the platform as something you offer as an internal product for developers and others, is something we saw a peak of last year. And I'm happy to see that this is coming to see more light this year as well.
[Stefan] (21:46 - 22:34)
This is sort of the good intersection of both of our focus areas. You love the product side of things, and I love the platform engineering side of things. But I really want platform engineering to be run as a product, because if you don't really know your customer and you don't treat it as a product, what are you actually building?
I can give you a hint: most likely you rebranded your ops team to be platform engineers, which means they do as they used to. So it's not really a product.
You don't know what your customers are asking for. You need to step into having a good product organization. You need to move away from this old culture where ops was just a cost center.
If your platform engineering team or area is just a cost center, then you haven't really achieved anything, to be honest. Then I'm pretty sure a good friend, Dan, could sit down with me and do a big calculation of what you're not gaining.
[Pinja] (22:35 - 23:14)
Yeah. And I'm thinking of the whole developer experience discussion, because it's not just that. We've had some good talks at the previous conferences we have organized.
Dan and also our friend Emma have been on stage to talk about this, and they've actually been on the podcast to talk about it as well. But to see this transition more: how do we actually build a roadmap for the platform?
How do we actually develop it with the needs in mind? What are the different personas? So, applying product management practices to building this as an internal product.
[Stefan] (23:15 - 24:10)
You need to know the needs, because if you don't know the needs, what do you build? If you don't know what to build, then you start building stuff you think is fun, which most likely will never fit what the user needs. Of course, you can be innovative and come up with great ideas, but trust me, not everyone in this world is coming up with new innovative ideas.
But you need to look at the different users of your platform. And I hope that we're going to see more expansion of the personas that are going to be using the platforms as well. Usually when we see portals, it's mostly portals built for developers or infrastructure engineers or platform engineers.
Why not invite security engineers in so they can get a better overview? We already have the data that shows how many vulnerabilities are in this repository. Why not do aggregations and overviews they can leverage, or even have a good portal that can do it on its own?
That would be an option as well. I usually say that modern portals should be more than what you're building on your own.
We know a few who do that.
[Pinja] (24:10 - 24:38)
We do. And invite your C-suite to look at the data as well, what's available in your platforms. Let's say, what would a CIO think of and use that data for?
If they go in and see: this is how much money we're spending, we get this amount of money back, and this is basically our ROI from the tools.
So it's not just for the developer experience but, as I said, build out the personas and think about who can utilize that data going forward.
[Stefan] (24:38 - 25:26)
As you said, we need to look at the ROI, which means we also should push the responsibility for profit and loss down to areas, departments, maybe even teams. And let it be a product decision to spend, let's say, $100,000 to get 1 million in revenue. That might be fine.
You might be able to do it by only burning 50,000, but you're still making a pretty good profit. If those numbers start to reach the same level, maybe it's not a good product all of a sudden. Maybe you should scrap it.
Why not tell that to the product organization? Invite them in and let them see the details here. Don't hide the details of what it costs to run things.
Make sure you know what the profit and loss is. If you're burning money but making a lot of money, who cares? You're making a profit.
Maybe not everybody likes that you're running 500 servers, but if it still makes you a profit, it's not the platform engineer's responsibility to decide on this, for sure.
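To make those numbers concrete, here is the arithmetic as a tiny sketch. The figures are Stefan's illustrative ones, not real data:

```python
def roi(revenue: float, cost: float) -> float:
    """Return ROI as a fraction: (revenue - cost) / cost."""
    return (revenue - cost) / cost

print(roi(1_000_000, 100_000))  # 9.0  -> 900% ROI, clearly worth the spend
print(roi(1_000_000, 50_000))   # 19.0 -> same revenue at half the burn
print(roi(1_000_000, 950_000))  # ~0.05 -> cost approaching revenue; rethink the product
```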
[Pinja] (25:26 - 26:09)
No, and don't let these be separate silos, for sure. There has to be some shared work, treating this as a shared goal. And number six, we said six trends, so this is number six.
AI FinOps: optimizing GPU costs with Kubernetes DRA, dynamic resource allocation. This one is a little bit more technical. We already talked a little bit about things on the edge, so Kubernetes comes into play here. But we know that AI is here to stay, so we need to think about how we can actually utilize and optimize how we use GPUs, because we need to support these huge AI workloads going forward.
[Stefan] (26:10 - 27:27)
Yeah, it's fun. Whenever I see presentations from NVIDIA, they state that the utilization of GPUs is only 5%, or only 10%. If I do a broad search on it, they say utilization is 15%, which is still insanely low.
Just think about the amount of money people spend on GPUs, and it's only running at 15% utilization. That is a lot of money wasted. Maybe you could have settled for, let's say, two GPUs instead of 200 or something like that.
People might have been shopping in the beginning to get the GPUs, because it seemed like there weren't enough. So people just went hype shopping, buying all of them, to make sure we have enough. And then they've been sitting in a corner, and four new versions have come out since.
Oh, you wasted all of your money. But some of those cases came back to: in the early stages, we were thinking we should build our own models. We needed a lot of GPUs for training, and we weren't talking much about inference back then. At the end of the day, it turned out to be a very low return on investment, if it was even possible.
In many cases, people have just burned lots and lots of money on AI, especially when buying hardware. And that comes back to: do we want to host our own things in the EU, or how do we want to do this?
[Pinja] (27:28 - 27:48)
It's a big puzzle of many things here. Looking into alternative ways of leveraging those GPUs, and maybe Kubernetes, maybe that would be one way. But I think what we could perhaps insinuate here is that platform engineering might be one way to look at this again.
[Stefan] (27:49 - 29:13)
We have a customer who asked us: could you build us a platform to make sure we could actually leverage our GPUs way better than we do today? Because it's complex, it's a very high cognitive load to make sure people can get in and use this. And we said, yes, that's what we do. We build platforms for you, and we can easily help you with this.
Not always easily; sometimes we have to mingle with whatever you have, bring in legacy stuff, and twist the arm of the computer to make sure it acts like we want it to. But having a good platform set up makes sure you can move more easily into dynamic resource allocation in Kubernetes, if you want to. Yes, you need to update to a certain version, and as soon as you have that, you can sort of scrap all of the older, alternative ways of doing this.
There have been several ways of scheduling GPU workloads, but now we have a standard, and we have a good set of supporters like NVIDIA and Google. A lot of the big corporations are behind it because they need it as well. It's no secret that the big cloud providers need all of this too, because it's costing them a ton of money. And people like NVIDIA need to figure out how to actually make sure all of this runs securely when we start to run multiple workloads on the same GPU or share memory blocks. How do we do it?
How do we isolate things in memory? Security is just moving further down the stack, and it's turning into their headache, luckily for us.
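For the technically curious, here is roughly what DRA looks like, sketched as Python dicts that mirror the Kubernetes manifests. This follows the v1beta1 shape of the resource.k8s.io API; field layouts differ between Kubernetes versions, and the gpu.nvidia.com device class assumes NVIDIA's DRA driver, so treat this as a sketch rather than a drop-in manifest:

```python
# A ResourceClaimTemplate asking the DRA driver for one GPU.
# Layout follows resource.k8s.io/v1beta1; newer clusters may expose a
# different version with a slightly different request structure.
claim_template = {
    "apiVersion": "resource.k8s.io/v1beta1",
    "kind": "ResourceClaimTemplate",
    "metadata": {"name": "single-gpu"},
    "spec": {
        "spec": {
            "devices": {
                "requests": [
                    # "gpu.nvidia.com" assumes NVIDIA's DRA driver is installed.
                    {"name": "gpu", "deviceClassName": "gpu.nvidia.com"}
                ]
            }
        }
    },
}

# A pod that references the claim, instead of the older
# resources.limits["nvidia.com/gpu"] style of GPU scheduling.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "inference"},
    "spec": {
        "containers": [
            {
                "name": "model",
                "image": "registry.example.com/inference:latest",  # placeholder image
                "resources": {"claims": [{"name": "gpu"}]},
            }
        ],
        "resourceClaims": [
            {"name": "gpu", "resourceClaimTemplateName": "single-gpu"}
        ],
    },
}
```

The claim is a first-class object the scheduler can allocate, share, and reclaim, which is what makes smarter GPU bin-packing possible compared with the old fixed device-plugin limits.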
[Pinja] (29:14 - 29:37)
That's true. But those were the six trends we wanted to highlight here. And we already mentioned that there are some risks here, not with the trends themselves, obviously, because why else would we promote them here?
But something to watch out for, perhaps, when you're out there looking into adopting these trends. One thing to highlight, I would say, is that don't treat these trends as the main destinations.
[Stefan] (29:37 - 30:17)
You always need to look at the context you're in. The good old saying: there is no silver bullet. These six trends are not a silver bullet either.
You cannot just apply them and everything is good. It's a bigger task. And if we take the whole AI space, no matter if it's agents or FinOps with shared scheduling, what kind of AI leaks are we going to see in '26?
Is it going to be the year where we see a lot of PII leaking out due to AI? I'm still waiting around to see that first big case. I know there have been cases with sensitive information.
I think there was a single case with some PII last year, but we need the big one that sets the example for everyone. Maybe it will come this year. Let's see what happens.
[Pinja] (30:18 - 30:26)
It's not like we're betting on that happening, and we're not excitedly waiting for it to happen. Please don't read it this way. But there are risks, just to be clear.
[Stefan] (30:26 - 30:30)
I'm not putting up the odds for that. I'm going to lose my money if I do that.
[Pinja] (30:31 - 31:10)
No, we would not do that. Another thing, still talking about AI and the agents perhaps, is that there might be some worries about losing transparency and explainability when you're working with multi-agent workflows. And another one I would like to mention here: is the automation going to just amplify the chaos if there is no clarity behind it?
So again, it's going back to having a proper policy and proper guardrails, to ensure that you're not just putting any data in and that you do not lose, for example, traceability and explainability when you use multi-agent workflows.
[Stefan] (31:10 - 32:14)
And having good tooling support as well. There's a lot of talk about how you orchestrate all of these agents. Maybe you need more than just simple agent registries.
You need to be able to go in and say: all right, which agent talked to which agent about what? What was the confidentiality level of the data they actually shared between each other? Or did they even share something between them?
We need to be able to have an audit trail of the agents all of a sudden, because if we get an incident and the authorities come around and we can't explain it, then it's probably going to be worse for us. I attended a small, what do you call it, seminar around the data authorities in Denmark. As they said, the first thing you do is lock the door when they show up, and then you find somebody who will actually walk right next to them, take notes of everything, and make sure that you are actually answering what you know.
Don't let them find things without explanations. We need to be able to explain how our agents are acting as well. This sort of runs counter to AI, because AI should be magical and connect unstructured things together.
But we need to be able to respond to this, especially if we're in a highly regulated space.
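A sketch of what one such agent-to-agent audit record could capture, assuming a simple roll-your-own append-only log rather than any particular orchestration framework. All field and agent names are hypothetical:

```python
import json
import time
import uuid

def audit_agent_message(sender: str, receiver: str, topic: str,
                        confidentiality: str) -> dict:
    """Record which agent talked to which agent, about what, and how sensitive it was."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "sender_agent": sender,
        "receiver_agent": receiver,
        "topic": topic,
        "confidentiality": confidentiality,  # e.g. public / internal / pii
    }
    # Append-only file, so the trail survives for auditors and incident response.
    with open("agent_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

audit_agent_message("triage-agent", "remediation-agent",
                    "open CVE in the payment service", "internal")
```

In a real system the log would go to tamper-evident, centralized storage, but the principle is the same: every inter-agent exchange leaves a record you can replay when the authorities come around.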
[Pinja] (32:15 - 32:44)
And in that context, having a transformation that is tool-driven can be a risk, instead of going with a systems-thinking approach here. So look into, again, the policies. What does your organization need?
What is the context that you're operating from? But I think the thing to end on here is that this is a list for CTOs. So a couple of things.
What's in it for the C-suite here? Why should they care about these trends?
[Stefan] (32:45 - 33:56)
I think they should care about the trends to make sure they put the money where it matters. It's an old saying, but if you go in all of the other directions, well, you might have success. But this is actually what the industry is asking for, especially when it comes to cloud sovereignty and to treating your platform as a product.
Without that, it might still work, but having a good platform as a product might help you expand your business in the future, because you can easily innovate on your platform. You can easily plug new things into it. Of course, it depends on how you build it.
But if you treat it as a product, you will have an organization that is aware: you're adding capabilities, you're not rebuilding the platform every single time you do something. I've talked to customers who built their third platform before they actually figured out how to do it. It's not an easy thing, but if you treat it like a product from the beginning, you will start thinking: all right, so this is actually what I offer my end user, and this is what I need in order to offer it.
You might gather some extra data along the way, and you can start showing it: is this interesting for you to take a peek into?
Being able to have these, what do you call them, enlightened discussions with your clients.
[Pinja] (33:56 - 34:29)
Yeah, and what I would like to add here is that the risks have never been isolated, but now they're even less isolated than before because of AI, and there are risks emerging from inside the delivery pipeline, not just in production. So with the speed of delivery getting higher and the cost structure of the software development lifecycle having changed, we need to make sure that we have proper procedures in place, because keeping up manual controls is no longer enough.
[Stefan] (34:30 - 34:54)
The whole governance, compliance, and controls space: it used to be the highly regulated industries that were deep into that, but we're seeing more and more, not to put them further down the ladder, generic software organizations that need to be aware of governance and compliance as well, especially when you start seeing these EU acts coming in, because then you will have to prove that you're actually doing this in practice.
[Pinja] (34:54 - 35:18)
All right, but those are the trends that we at Eficode wanted to highlight for this year in the software development lifecycle. We're more than happy to have a conversation about this. If you disagree, or if you agree, Stefan and I can both be found on LinkedIn.
So if you want to challenge this view, let us know. We're more than happy, of course, to support organizations with this, but I'm very eagerly looking into this year and what's ahead of us.
[Stefan] (35:18 - 35:47)
It's going to be an interesting year, especially when we see people starting to move into, maybe not fully autonomous agents, but at least some level of autonomous agents, to see how it actually works in practice when it hits production data or tries to fix production issues. That's going to be interesting. There'll be some uptime graphs to look at to see how things go.
Looking forward to this. Maybe that should be the recap of 2026: uptimes of the year.
[Pinja] (35:48 - 35:53)
Yes. How did we do that? Hey, on that note, Stefan, thank you so much for joining me.
[Stefan] (35:53 - 35:53)
Thank you.
[Pinja] (35:54 - 36:04)
And thank you, everybody else for tuning in. And we'll see you next time in the sauna. We'll now tell you a little bit about who we are.
[Stefan] (36:05 - 36:10)
I'm Stefan Poulsen. I work as a solution architect with focus on DevOps, platform engineering, and AI.
[Pinja] (36:10 - 36:14)
I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode.
[Stefan] (36:15 - 36:17)
Thanks for tuning in. We'll catch you next time.
[Pinja] (36:17 - 36:25)
And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.