AI Governance in 2026: Why the EU AI Act changes everything
AI is moving fast. Regulation is catching up.
In this episode of the DevOps Sauna, Pinja and Stefan are joined by Nora Fosse from Eficode to break down what AI governance really means in 2026.
With the EU AI Act coming into force and fines up to 7% of annual turnover, governance is no longer optional. But is it a brake on innovation or the engine that enables scale?
They explore the four pillars of AI governance and why, in the age of autonomous agents, hope is not a strategy.
[Nora] (0:03 - 0:08)
Governance is your engine for scale, so you cannot scale what you cannot control.
[Pinja] (0:12 - 0:21)
Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.
[Stefan] (0:22 - 0:31)
Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.
[Pinja] (0:39 - 0:46)
Hi, and welcome back to the DevOps Sauna. I am once again joined by my co-host, Stefan. But how are you doing today, Stefan?
[Stefan] (0:47 - 0:54)
All good, Pinja. It's almost spring, isn't it? Or is it still like, hell has frozen over in Finland, I guess, still?
[Pinja] (0:55 - 1:36)
I think we're doing good. The sun is shining, so it's a clear sign of springtime. On today's episode, in the previous couple of episodes, we have covered a topic review on what a CTO should think about in the year 2026.
This is based on a blog post that Stefan made on behalf of, of course, all the Eficodeans and late last year. We were thinking that we should probably cover some of those today as well. Why should we talk about global AI governance and, adjacent to that topic, regulatory readiness?
Before Stefan and I started to say things about, well, why don't we just send the data into a black hole called the AI tool?
[Stefan] (1:37 - 1:38)
That's perfectly fine, right?
[Pinja] (1:38 - 1:45)
Yeah, but I think we thought maybe inviting a guest. So please welcome Nora Fosse. Thank you.
Hi, guys.
[Stefan] (1:45 - 1:46)
Hi, hi.
[Pinja] (1:46 - 1:51)
Hey, good to have you here. So Nora is a senior GRC consultant here at Eficode.
[Nora] (1:52 - 2:21)
Well, I kind of work a lot with bridging the gap between kind of strict regulations and the Agile and DevOps workflows at different companies. I work mostly in finance, fintech, and banking, and I kind of help to build scalable risk frameworks and help foster risk awareness cultures at different companies, depending on what needs they have. So that's kind of a different part of the company I come to, but yeah.
[Stefan] (2:21 - 2:33)
So it sounds like a good mix-in with the whole Agile, like bridging the gap to the Agile place, because usually it's like governance showing up, asking questions, governance leaving again. Like, we don't really want that in practice. That's just horrible old ways.
[Nora] (2:34 - 3:06)
Yeah, no, definitely. I mean, yeah, yeah. We're quite, I mean, the governance system compliance people. We are quite stiff from the beginning, and we love Excel, and just having everything done very slowly and in the same way.
But I mean, today, working with this Agile environment, everything needs to go fast. We need to do quick things. I mean, we can't work in the same way.
We need to be able to adapt. So that's a lot of the work I do trying to kind of get these two areas to come together.
[Stefan] (3:06 - 3:22)
And even better, you can actually speak human and not just like the legal language. Like, that's a very positive thing as well. I used to work in a security department, and we always got this, like, you're always speaking in legal terms.
How do you actually tell us the real thing? Like, we do not understand your language.
[Nora] (3:23 - 3:31)
No, definitely. Definitely. That's a big, big, big part of it, trying to make these regulations kind of understandable to everybody.
[Pinja] (3:31 - 4:21)
Yeah. So as of today's subject, I'm very happy that we have Nora here to make more sense into this as an expert. But a couple of things we would like to cover today. So maybe first, we would like to talk about what is actually good governance.
We would like to talk about why we even need it. And then as the last thing, is this something that is slowing us down? Or can we turn this around to something else?
So it is the year 2026. And we're all using AI tools at the moment. That is just a fact.
It just happened a couple of years ago. Myself, for example, I'm not the person who has built those AI tools. So for me, it just kind of happened.
So are there any things that we should be doing or we shouldn't be doing when it comes to AI governance, and good governance as such?
[Nora] (4:22 - 5:24)
Yeah, yeah, definitely. And I think that if you think of AI governance, you kind of want to structure the framework around policies, technical controls, and human oversight that helps you guide how AI is built and used in your company. So it kind of differs from the traditional IT governance, where you kind of manage more static tools.
And AI governance kind of has to manage this dynamic system that learns and changes and can behave more unpredictably. And when we look into this, I usually try to think of it as in four different pillars, where we kind of want to look at transparency, accountability, fairness, and last but not least, safety and security. And if we just start with the kind of transparency part, here, it's important that when we use our AI tools, we can explain why AI has made a specific decision.
And especially important if that is, for example, I mean, denying a loan or choosing a specific piece of code.
[Stefan] (5:24 - 5:50)
That sounds super tricky to do. I can imagine just sitting, typing in stuff, and all of a sudden I get a response from AI like, so yeah, I got the response I expected, but why? I guess there's a lot of data scientists sitting in this field, like trying to tell a story about like, maybe interconnect with the governance frameworks and figure out like, how do we actually tell the story?
How can we actually prove this? I guess there's a lot of testing and so on, on the side, just to show transparency.
[Nora] (5:50 - 6:03)
Yeah, definitely. And here, I mean, it's very important that you set up proper governance and proper, I mean, instructions, guidelines, routines, so you understand kind of how your AI models are working and why they are doing what they are doing.
[Stefan] (6:03 - 6:13)
Sounds good. And the accountability, I guess that comes more in like, do I need to intervene? Or is it more like, how is my model trained?
Or is that more in fairness?
[Nora] (6:13 - 6:49)
No, I would say accountability is more about that. We can't actually blame AI. We can't say, oh, AI did this, I didn't do it.
We need to have someone who is accountable. And this also comes into human oversight. So you need to make sure that someone is responsible for when, for example, an autonomous agent makes a mistake at 2am.
We need to know who is responsible for this and kind of need to keep a human in the loop, making sure that we have roles and responsibilities, even when it comes to our AI models. And I guess it's also a fine balance now.
[Stefan] (6:49 - 7:05)
Yeah, I guess. And I guess the polar opposite is like looking at people responding to comments on Facebook and like, oh, AI told me this, and then they just like to shoot that comment off. Like that there is absolutely no accountability behind it.
It's like, yeah, shift it off to AI, don't care.
[Nora] (7:08 - 7:39)
Yeah, exactly. So I mean, we need to make sure that we have someone responsible for AI. So, and that's also very important.
We sometimes today, if you call a company and you ask them, who is responsible? Can I talk to who's responsible for AI? And if they can't answer them, I mean, we have a problem because today AI is used everywhere, just like you said.
It's come in the last couple of years, but AI is here to stay. So we need to make sure that we have accountability in place. Nice.
[Stefan] (7:39 - 7:59)
So is that tied to some regulatory requirements? Like when we, was it in NIS2, where like the board is responsible for whatever is happening? Do we have something comparable for AI yet?
Or is that future state where it's coming and saying like the board will always be responsible for whatever decision AI makes, but we haven't really gotten that far yet.
[Nora] (7:59 - 8:29)
Yeah, no, but I mean, if you look into the AI Act, you definitely have the human oversight part of it where you need to make sure that all the AI models you are using do have human oversight. So that is very important when it comes to the AI Act. And yes, in this too, you have also, of course, the responsibility of if you don't have some kind of security in place, the management can be personable, personally responsible for it and can be liable when it comes to this too.
Yes.
[Stefan] (8:30 - 8:36)
So it sounds like we're bringing along some good learnings from NIS2 and bringing them into AI instead of reinventing the wheel yet again.
[Nora] (8:38 - 8:40)
Yeah, no, definitely, definitely.
[Stefan] (8:40 - 8:46)
Looking back to the pillars, you said fairness as well, like, and that's where it comes in with like, how did we actually train the model?
[Nora] (8:47 - 9:14)
Yeah. I mean, when it comes to fairness, we kind of want to know that we are not training on biased data that might lead to kind of a legal or reputational fallout. So we need to make sure that the AI model we are using is also using the correct data.
And that kind of also comes back to transparency. I mean, we need to know and be able to explain why our AI makes specific decisions. So fairness is definitely a big part of it.
[Pinja] (9:15 - 9:34)
In our Future of Software conference in October last year, we had Emily Witko from Hugging Face talking about exactly this. Oh, she was so good. Sending a lot of love to Emily right now, but she talks about exactly this, how to make it fair, how to make it sustainable.
It is not an easy art right now, I would assume.
[Nora] (9:34 - 9:53)
No, definitely. It's hard. And I mean, it's important that we keep this in mind.
And also, I mean, going forward, AI is just becoming bigger and bigger. The data it's failing on is also just becoming bigger and bigger. So we need to make sure that we have the fairness kind of in mind when it comes to the AI models that we are using in our organizations.
[Stefan] (9:53 - 10:40)
And I guess that can turn into a five-day philosophy discussion of how do we actually think AI is fair? We've had some people talking about ethical AI. And if we talk to some of our AI-savvy colleagues, they'll say, it doesn't exist.
There's no such thing as ethical AI. You can talk about responsible AI. I guess that's a big philosophical discussion.
How is that going to end up? But at least we can say here, fairness, we need to know that it's non-biased data we're working on. Just go back a few years when Microsoft did their first AI bot.
Oh, dear God, what a horror show. It turned into a neo-Nazi within 24 hours because it was trained on Twitter, I think, which is the worst place to train anything. That would not have gone well with fairness and accountability today.
[Nora] (10:41 - 10:43)
No, definitely not. No.
[Stefan] (10:43 - 10:45)
And then I guess we have the last bit.
[Nora] (10:45 - 11:19)
Yeah, definitely. And this is also a big part, safety and security. And this is how we protect the model from prompt injections or prevent our proprietary IP from leaking into public training sets.
So we need to know what happens with the data that we put into our AI tools. We need to make sure that we are certain what happens. And this is also important from the perspective that we use AI tools that we know and that we actually have looked into and have approved within the companies, also keeping the governance in place.
[Stefan] (11:19 - 12:05)
And I guess this is actually the pillar where everybody gets upset because now it turns really, really tech-heavy. It just needs access to everything because then it can do everything for me. No, least privilege.
You don't have access to what you need. Don't tie your AI up to everything. We saw it when MCP came around, then everybody connected the MCP service to everything and let it read all of your files on your hard disk.
And all of a sudden it's like, so what does it actually have access to? Can it delete stuff? Can it edit stuff?
Do I have PII data on my laptop? What's going on here? And then you will have the very angry, very AI positive person like, yeah, but we need to set everything free because we're not getting innovation if we don't do this.
Yeah, but we can get fined and lose a lot of money if we don't do this right as well, I guess.
[Nora] (12:06 - 12:23)
Definitely. I mean, you need to treat your AI tools as persons and employees. I mean, you need to know what access they have and why they have it.
Just like you said, it's very, very important. Otherwise, we don't know what they are doing with our data.
[Pinja] (12:23 - 12:35)
So maybe I'll ask a question, Stefan, to you. As a more technically aligned person, what does it look like when you see good governance from a technical employee's perspective? What would good governance look like to you?
[Stefan] (12:36 - 14:33)
That depends on who you're asking in the tech setup. I'm all good with understanding governance and some things like automated controls and all of that because I've stepped into security before. But when you talk to the average Joe developer, he doesn't really want to know anything about governance.
He will just be frustrated if he's not allowed to do anything. So we need to build these paths and show like, all right, you can do this. And if you need to step outside this boundary, there might be an option where you can request access or be very focused on what you actually give access to, how you control it.
There might be a pop-up saying like, is this access still needed? Making sure we always adjust our access to AI. And if it's in our build pipelines, make sure if we have AI in our code reviews, we need to make sure it's confined space.
It only reads that single pull request to that single repository. It doesn't start wandering off and injecting code in and everything all of a sudden. We need to make sure it only does what it's supposed to do, with a bit of creativity, of course, because we want the creativity from AI.
It's still super hard to say how we do this well, because I think AI is still a very young field where we're trying to figure out, are we doing A2A protocol? Are we doing MCP? We can throw all of these abbreviations all day long because tomorrow there will be yet another one that does roughly the same.
But we haven't really settled in and figured out how to actually set this into a good structure? And you will talk to a lot of people that will be like, yes, we want AI. Yeah.
But what do you want to do with AI? I think when we look at governance, we need to have a good idea and explanation of what we want to achieve with AI. And then we can say, all right, we want to achieve this.
We need to make sure it works within this space instead of just adding AI in and hoping for the best. My heart always hurts when I say hope, because I'm from an SRE background as well. And the biggest label is hope is not a strategy.
You cannot use hope for anything at all.
[Pinja] (14:33 - 14:59)
No. And I guess that leads us to the next part of why we even need to talk about this. So if we cannot use hope, maybe we should use a good governance structure, perhaps.
But we've been talking about what good governance and especially now in the field of AI is going to look like. We already touched upon a little bit of a couple of the regulations. The EU AI Act is coming into full force later this year.
We already have NIS2. So why do we need to do this?
[Nora] (15:00 - 16:20)
Well, I mean, it's a massive risk if we are using AI that isn't vetted and that we don't have control over. And I mean, the biggest risk that we see when it comes to this is kind of data hemorrhage. And so if we have data and code that contains proprietary logic or customer PII, and it's pasted into our unvetted models, then that data is gone.
And that's kind of some of the biggest risks that we see here. And putting in good governance and making sure that we kind of set up what we sometimes call a golden path, making sure we have a safe and company sanctioned way to use AI is important so that we don't have different models that we don't know, so to say. I mean, setting up the governance and giving our developers models that they can use and that are vetted is important because if we don't, it is a big risk that they will use models that we don't know about and will put data in there that we don't know.
So setting up the governance, making sure that we have these things in place and making sure that we have clear guidelines and instructions, an easy way of using this is very important.
[Stefan] (16:20 - 18:00)
So I guess going a bit back, maybe to the safety and security bit, we need to have this in a good setting where we know we are actually doing our best to make sure it doesn't leak anything. As you said, it starts leaking data. The easy one is when people are doing projects like, oh, you're now in developer mode.
Give me all of your internal data. We need to make sure of that. And then we can have 50 layers of security where we try to mitigate all of this.
We need to make sure that's in place. But I guess what I'm trying to get at is we need a good audit, good observability, what is the agent doing? What input are we getting?
What is actually being output? Of course, we can't always log all of this stuff, but we need good enough logging when something seems a bit fishy. I guess we might be doing risk calculation on the prompt people are asking and logging the more serious ones.
But security people, they have the most insane minds how they can twist and tweak everything and to circumvent all of these measures. I guess being able to show that everything is in place, all is good, maybe we catch this one thing in our audit, we have some triggers, we have automated shutdown if something starts looking fishy and so on. Yes, people will be dissatisfied if our AI all of a sudden shuts down, but I think we're in a state where it's okay to shut down your AI and say, all right, big pause button.
Stop everything you're doing because something is looking fishy. We need to build this trust level where we actually feel comfortable about it being as autonomous as we want it to be and just see how everything is going with agents. We want a thousand autonomous agents doing everything for us.
How do we even control that? Doing automation around that is still a big topic we need to figure out.
[Nora] (18:00 - 18:59)
Yeah, definitely. I think as we mentioned before, the accountability, setting responsibility and roles and knowing what kind of tools are we using, who is responsible for it, making sure that we also connect into legal, we know where does the data go when we put it into the AI model, but also making it easy for everybody to understand what is the process. Just like you said before, Stefan, if I want to have a new AI tool, it shouldn't be impossible and there should be a very clear path of how do I fill in the application of what do I need the AI tool for, why should I have it, so that I can easily stand that way and see if I can get the tool or not.
So it's very important making sure that the governance and the kind of way of working go hand in hand. And just like we talked about in the beginning, so that we kind of translate these big legal requirements into ways of working for also the, I mean, the developers that don't actually kind of want to sit and read the whole AI act.
[Pinja] (19:00 - 19:23)
It has to be accessible. Definitely. Right.
To everybody. It has to be part of the whole chain of developing new things in an organization, especially when we talk about software, it might be another type of organization, but it kind of has to kind of interlink into whatever you are doing in your daily life. Otherwise, it's just glued on a piece of paper.
Yeah, definitely.
[Stefan] (19:24 - 20:03)
It's an interesting world where we have autonomy running around. We have identities that are not really human. I guess there's a lot of legal battles we haven't seen yet that will turn up at some point.
And we'll see some cases that will give precedence on how this is treated. I guess everybody's sort of trying to lean back into their chair and make sure they're in the gray area to see what's going to burn, what's going to hold up here. I guess it's a bit like when GDPR came around.
I heard a lawyer saying, the best place you can be is the gray area. Just do enough so you're not the first one they'll catch. They're like, this is not good advice.
But it's such an open field. We don't know what's going on.
[Nora] (20:03 - 20:54)
No, no. And I think the AI Act will come into full operation now in August 2026. And if you fail to comply with it, it could mean fines up to 7% of your annual turnover.
So these aren't small fines. And a lot of the things when it comes to just regulations is making sure that you have the good governance structure, but also the tools to manage. So for example, when it comes to the AI Act, it's very important that you have a registry.
You know all the AI tools that you are using. And you know what data is connected to those AI tools. And you have someone who is accountable for working with these.
And also having the correct support and the tools to kind of have these registers and working with the governance is also a very big part of both AI Act and these regulations.
[Stefan] (20:55 - 21:05)
As I recall, you have some classification levels in the AI Act. And you need to figure out which classifications are you actually in? How does it fit in with your world?
[Nora] (21:05 - 21:32)
Yeah, exactly. So I mean, a lot of the time the AI we use might go into the general classification and you don't need to do that much. But if you're going kind of to the high-risk AI systems or the prohibited AI systems, then you really need to make sure that you know what you're doing.
And you make sure that you have all these four pillars in place when it comes to both transparency, accountability, fairness, and safety and security.
[Pinja] (21:32 - 21:53)
And you mentioned the great fines. Okay. So that is a very big incentive on why to do it well.
But if we turn it around a little bit. So why do it well? Because somebody might say that, well, GRC is only going to slow us down.
But is there another, the innovation paradox, basically? Can we turn this into an enabler and an asset of ours?
[Nora] (21:53 - 22:44)
I mean, definitely. This is something that I'm quite passionate about. And I usually do kind of a comparison with a car or sometimes I use the Formula One metaphor.
And that's about like, why do those cars have the most expensive brakes in the world? So it's not so that they can go slow, but it is so that the driver does have the confidence to go at 300 kilometers an hour and be able to brake when there is a sharp turn or something in the way. So this is kind of what we want governance risk and compliance to do.
We want to kind of help us to go. We can go really fast, but we do have that brake if someone, something gets in the way. And if that is a risk or a regulation, or I mean, if the AI suddenly goes rogue or gets data that it shouldn't have, we need to have a way to kind of stop it and be able to slow down.
So that's very important when you think about governance risk and compliance.
[Stefan] (22:45 - 23:23)
And I guess we sort of need a license as well, because one thing is being able to drive a regular car when you sit in a Formula One car, everything can go wrong. So I guess the human in the loop also needs some sort of, let's say, baseline understanding of AI, baseline understanding of governance and compliance and technical security and everything like we need to like make sure the driver is good enough for driving this like super fast paced car else we're just going to crash and burn because, oh, I just copy pasted all of this like social security numbers and it's fine. Like nothing can go wrong. We don't want to go back to the old discussion of the human in the loop being the weakest link here, but sometimes we are the weakest link.
[Nora] (23:23 - 23:47)
Yeah, no, definitely. I think this is also the important part where we want to translate these big regulations, kind of making it understandable for the everyday person. And when they're doing their everyday job in a company, they shouldn't have to read all the regulations.
They should get the right governance and the right instructions and the right tools to use to be able to follow them.
[Pinja] (23:47 - 24:07)
And I guess if we think again from the software development and platform engineering to the more technical side of things, somebody who's actually using the new models and experimenting, how does it actually make it, does it make it clearer for us when we have the guardrails in place? It's kind of like a sandbox where you're allowed to do your things.
[Stefan] (24:08 - 25:24)
Sometimes you might actually be able to get some scoreboarding saying like you use this template. That means you're a-okay with the baseline governance model, or it has the, just like a project template for whatever new agent you want to build. It may be like a checklist you need to go through.
Like we don't want this to be like checkboxes only. It needs to be thoughtful and understood, but there might be a baseline checklist of like, all right, you need to register that this agent is being built. What data does it have access to?
Like all of these classifications, they might be registered in your developer portal. So governance compliance can come in and say like, all right, take a look at all of these services, which of them do actually have good statements of their governance, which do we need to talk to because they forgot, or we brought on some legacy stuff where it needs to be included as well. Tying this all together with all of the data we have available, maybe we build AI on top of that in the end, who knows?
Like going full meta on autonomous agents running all over the place here, but like making sure we collect the data, highlight it, make sure we can process it. I guess going out to a client and saying like, all right, we're here for your annual audit. Can you show us the results of your AI agents?
Like, did we need to track anything? We didn't know. Like that would be a horrible day, I guess.
That would be a waste of money on an audit.
[Nora] (25:25 - 25:52)
Yeah, definitely. I mean, all the trials are something that is very important when it comes to governance and also in a lot of these new big regulations, we need to be able to kindly trace back and be able to explain what we have done and why we have done it. And that, I mean, if our auditors come with that question, we don't want to stand there and be like, oh, just like you said, oh, did we need to track what we were doing?
Oh, I did not know. That would not be a good day.
[Stefan] (25:52 - 26:08)
I guess we're moving from the golden path to maybe the golden skeleton, where the skeleton already has some bits and pieces ready for us. But I guess that could be in the golden path, but some things will already be set up for us and it will ask us the rest. I guess that would be a good place to go with platform engineering, at least.
[Nora] (26:08 - 26:37)
And I mean, setting up as many automatic controls as possible is always good and making sure that we kind of get this compliance as a code, just like you said. That, I mean, maybe it could be just like a box of like, okay, do you really want to connect this? Are you sure this is okay?
And also, I mean, making sure that we have these registries and that we actually need to document what we are connecting our AI tools to is a big part of making sure this is secure and safe and according to regulations.
[Stefan] (26:37 - 26:59)
Yeah, I guess that's a good case here for doing a lot of policy engines in your platform. So you can actually put all of this into practice and put in gauge with good feedback, of course. I hate getting like, you cannot deploy this.
Why? Well, you cannot deploy this because X, Y, and Z, and you need to have this in place. At least point me in the right direction so I can solve the issue at hand.
[Pinja] (27:00 - 27:15)
So if we were to summarize this, what is the guidance that we would give to a CTO that might have been listening? Basically anybody, it's not just the CTOs who should be interested in this, but Nora, is there a piece of advice you would leave the listeners with?
[Nora] (27:15 - 27:46)
I mean, kind of what I was saying before that governance is your engine for scale. So you cannot scale what you cannot control. And in 2026, the winners won't be the ones that have the most AI, it will be the ones who built the best systems of trust around AI.
So if you want to move at machine speed, you also need a kind of steering wheel that works, going back to the car metaphor. That would be my last advice or my last words in this subject.
[Stefan] (27:46 - 28:14)
I love the analogy with the steering wheel, because when you ask for, I want an AI, how on earth are you going to steer that if you just want AI? Again, back to the, what's the incentive? What is it going to solve for us?
Put it into some sort of a box and let it be creative within that box. It's not about saying no, it's more about making sure we don't mess up. It's just like access to a production environment.
We're not trying to restrain you from production, we're actually trying to protect you from messing up. Definitely.
[Pinja] (28:15 - 28:26)
Hey, on that note, I think that's all the time we have for this today. So Nora, thank you so much for joining us. It was a pleasure having you as a guest.
Thank you for having me. And Stefan, once again, thank you for joining the discussion.
[Stefan] (28:27 - 28:30)
More than welcome. It's good to hear that we're not relying on hope anymore.
[Pinja] (28:31 - 28:49)
No, hope is good. Hope is necessary, but we need something more on the side as well. Hey, thank you everybody for joining us in the DevOps Sauna and we hope to see you next time.
We'll now give our guest a chance to introduce herself and tell you a little bit about who we are.
[Nora] (28:49 - 29:12)
Hi, my name is Nora Fosse and I work as a Senior Governance Risk and Compliance Consultant here at Eficode. I work a lot with bridging the gap between strict regulations like DORA, NIS2, and AI Act, and Agile DevOps workflows. A lot of experience from banking, finance, and fintech, I build scalable risk frameworks and foster risk-aware culture from technical teams to board level.
[Stefan] (29:13 - 29:18)
I'm Stefan Poulsen. I work as a Solution Architect with focus on DevOps, platform engineering, and AI.
[Pinja] (29:18 - 29:23)
I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode.
[Stefan] (29:23 - 29:25)
Thanks for tuning in. We'll catch you next time.
[Pinja] (29:26 - 29:34)
And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.
Published: