
Security isn’t a cost—it’s your biggest growth engine

Security has long been treated like a necessary expense—but what if that thinking is holding your business back?

In this episode, Pinja sits down with Kalle Sirkesalo to challenge the traditional narrative around security. Instead of focusing on risk avoidance and compliance, they explore how security can actively drive revenue, accelerate AI adoption, and enable faster, safer innovation.

From the ROI trap of fear-based thinking to the reality of AI-powered development pipelines, this conversation dives into why organizations that treat security as a growth enabler—not a blocker—are the ones that scale.

You’ll learn:

  • Why security loses funding—and how to flip the narrative

  • How AI is reshaping both risk and opportunity

  • What “built-in” security actually looks like in modern pipelines

  • Why developer experience and security must evolve together

  • A practical way to align security with business goals

Security isn’t just about preventing loss—it’s about unlocking what’s possible.

[Kalle] (0:02 - 0:08)

AI's job is to get to the end result. It doesn't matter how it gets there from AI's point of view.

 

[Pinja] (0:11 - 1:09)

Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security and more as we explore the future of development. Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.

 

Hello and welcome back to the DevOps Sauna. This is a topic we have not discussed much recently, but one everybody can agree is very important: security. It is often seen by organizations as a cost, but I would like us today to take a very different approach, which is to shift the thinking and look at security as a key enabler for growth instead.

 

Today to discuss this topic with me is the Field CTO from Eficode, Kalle Sirkesalo. Kalle, welcome.

 

[Kalle] (1:09 - 1:13)

Hello. So I think I don't need to do any introductions here.

 

[Pinja] (1:15 - 1:35)

We'll save it until the end. But Kalle, you were one of the key people I wanted to talk to about this topic, since you speak with a lot of organizations, both our customers and other companies in the field of DevOps, mainly due to your role and your interests. But is it still true that we are not good at security?

 

[Kalle] (1:35 - 2:25)

Well, if we look at the news from the last month or so, it's all about how bad we as humans are at security. You can read the Claude Mythos article and think, oh, again, we suck at security. And then you can go to Twitter and read people saying Claude Mythos is nothing new.

 

And you're like, yes, because we don't actually invest in secure development most of the time. I think one of the nice hot takes on that was that Claude Mythos is nothing new: if you spent the same amount of money as was spent on Mythos, you would find zero-days for a whole month.

 

So it's an under-invested field in general, because it's kind of a supportive function. Same as when we look at IT as computers and access management: it's not an area where you want to put a lot of money. You want to put in enough that the company stays afloat.

 

[Pinja] (2:25 - 3:12)

And that's a minimum requirement, to be honest. Think about, of course, a company operating in a regulated business: you need to have your NIS2 and everything else related to it. You need to comply with the regulations when it comes to security.

 

But how can we get away from that, from security being bolted on and not built in? And why isn't it considered more? And let's set the scene real quick: what are the standards in modern security practices?

 

Everybody's talking about shift left. I hope people know what we mean when we say shift left, like we do with testing: let's embed it early.

 

Let's do threat modeling, have dependency scanning, continuous validation of the product, but also an automation-first mentality. Do we already see security testing being part of CI/CD? Or is it still a separate phase?

 

[Kalle] (3:13 - 4:50)

So it depends on which part of security. Because if you take the picture of all the different security tools, you have over 20 different tools in your methodology when you develop software. So if we take static code analysis, that is nowadays starting to become the standard.

 

If you don't have static code analysis, SAST in the terminology, you're probably behind the competition. But then we come to SCA, software composition analysis, which is package analysis: following the packages that go into your code. That is surprisingly rarely in place.

 

Most companies don't know what they put into the package that they release. And that's been visible for the last two months now, because everything is getting vulnerable. We have seen LightLLM getting vulnerable and hacked due to it.

 

And these go to the pipelines and cause problems. So that creates this platformization problem where if your platform is bad, it creates additional security vulnerabilities. If your platform is good and you have good golden roads, it's super easy to fix those problems.

 

So for automation first, it's kind of like: yes, you should have this, there are tools for it, but people are not putting them into place. On the other side, the whole platform then gets developed based on risk-based prioritization. So you only look at what's the highest risk and you only address the highest risk, but you don't really look at what on the security side actually creates the most value.

 

So we forget about following the value; we only look at the security risk and not also the value on the developer side. Our risk-based prioritizations lean heavily on one question: how likely are we to get hacked?
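The gating Kalle describes, SAST plus SCA running as automated steps rather than a separate phase, can be sketched as a small pipeline script. This is a minimal illustration only; the `bandit` (SAST) and `pip-audit` (SCA) commands are assumptions for a Python codebase, not tools named in the episode, so swap in whatever scanners your stack actually uses:

```python
import subprocess

# Example gate commands -- assumptions, not a fixed standard.
GATES = [
    ("SAST (static code analysis)", ["bandit", "-r", "src/", "-q"]),
    ("SCA (dependency audit)", ["pip-audit", "--strict"]),
]

def gate_passed(returncode: int) -> bool:
    """Scanners conventionally exit non-zero when they report findings."""
    return returncode == 0

def run_gates(gates=GATES) -> bool:
    """Run each security gate; the build fails if any gate fails."""
    ok = True
    for name, cmd in gates:
        try:
            code = subprocess.run(cmd, capture_output=True).returncode
        except FileNotFoundError:
            # Scanner not installed in this environment: surface it, don't crash.
            print(f"SKIP: {name} (scanner not installed)")
            continue
        passed = gate_passed(code)
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
        ok = ok and passed
    return ok
```

In a real pipeline this would be a required CI job, so a failing gate blocks the merge instead of relying on developers to remember to scan.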

 

[Pinja] (4:50 - 5:03)

Yeah. And developer experience has to be considered in this as well. We've seen that if it doesn't work and it slows you down as a developer, it is not going to be prioritized as a step, as a tool, or as a practice.

 

[Kalle] (5:03 - 5:20)

Yeah. People skip it. So that leads to the problem that zero trust security, for example, is done badly, which means nobody has access to anything and it all goes through automation.

 

Usually it gets skipped because the developer experience ends up being bad if you don't actually build the tools and capabilities for the experts.

 

[Pinja] (5:20 - 5:33)

Yep. Again, it's bolted on and is not actually built in. You mentioned Claude Mythos.

 

It was not very long ago that we were introduced to this. What other new tools have you seen lately come up in this context?

 

[Kalle] (5:33 - 6:52)

So there are a bunch of different tools, depending on what you are looking at in the LLM/AI scope of things. We have tools that try to imitate attacks, like Claude Mythos, red-teaming you and trying to create value there. Mythos, of course, is barely available, but then OpenAI released their version of security-tuned systems.

 

And there are many actors doing this. But on the other side, we also have LLM-poisoning-style attacks, where tools try to poison the data sources. There are tools that try to break your LLMs, and tools that try to evaluate whether you can be attacked with those techniques.

 

So I would say that the LLM space is getting a bunch of different tools that do different things. But then we have these basic tools that have not been even taken into proper use that are core for getting anything working. So we create these traps in the organization where we bring in new tools because it's exciting, but we forget that it's supposed to create value across the pipeline and especially to the customer.

 

So I like to call it the return-on-investment trap. The management team always wants to ask you for a business case: what's the business case for this? And then the security expert goes in thinking, okay, I need to make a business case.

 

How do I actually do this? And they go in and list all the risks that threaten the company.

 

[Pinja] (6:53 - 7:22)

It's the negative-effect approach, right? If we get attacked, we get fines for being non-compliant. Under NIS2, essential entities can be fined, is it 10 million euros or 2% of their global annual turnover, whichever is greater?

 

The data breach costs can be estimated quite easily, but at the same time, these are all approximations and they're all assumptions. And we look at it from the negative perspective instead of what it could bring us instead.

 

[Kalle] (7:22 - 8:42)

Yeah, exactly. Because if you look at it from the negative perspective... Like, I had this conversation with one of our security experts lately: he wanted to lock down skills.md, which is this mechanism for AI to do things automatically for you, and an easy way to share workflows and things across the organization.

 

And we have our internal skills.md repositories where we share things. He wanted to lock it down so that pushing data outside would be more controlled.

 

But if we started doing that, the business risk for us as an organization is that our business dies. If we stop doing AI development and stop being at the forefront of AI, we don't have a business. So his view, that this is a business risk where we could get 10% of revenue as a sanction, sounds big.

 

But when you think about it from the business leader's point of view, who is thinking, we just need to get AI into use, and if we don't, we lose the whole business, then security, of course, loses the conversation.

 

So then we started figuring out how to reframe this: how do we get the money from the customer? Because that's where the organization's money comes from. How do we really create value from the customer's point of view?

 

Like how do we make sure that happens? But at the same time, how do we also end up in a secure place? So what could we do?

 

[Pinja] (8:42 - 9:12)

It's a really fine balance. And as you say, security often loses this competition because it's seen as a feature and not an enabler. If you put feature against feature, and one of them is a so-called feature called security, and you don't have the ability to calculate the business benefits of it, you just see potential risks.

 

So we don't do it. But, for example, why would you implement AI solutions if your security is not well in place? There, again, is a benefit from security.

 

Yeah.

 

[Kalle] (9:12 - 10:16)

And that's a big question again. It's again a challenge you have to overcome. LLMs do hallucinate; they do have problems.

 

They solve problems in ways that break things to get to the end result. We've seen it in many models. And the big thing here is also the fact that the AI's job is to get to the end result.

 

It doesn't matter how it gets there, from the AI's point of view. And the big thing here, when we consider ROI: if you look at it from the security-risk side, you're kind of doing short selling. Your maximum return on investment is 100%.

 

So you have a cap on how much money you can save: if nobody touches anything, you save all the costs. But if you get it onto the revenue side, the increase is basically unbounded. This is why you don't want to short the stock market.

 

You want to go up because that is an infinite scale theoretically. Of course, you don't usually reach those infinite scales, but that's why people are invested in that side rather than cost savings, because the cost savings are always capped.

 

[Pinja] (10:16 - 10:47)

And if we think of the fallacy right now: as you say, the business side and management are looking at the ROI, at what they can get back. It was not that long ago I had a conversation with you at our office, and you said somebody at an event told you they're looking for at least a 20x return on investment from implementing AI solutions.

 

And that was not even high enough for them. So are we even realistic about the ROI requirements in current organizations?

 

[Kalle] (10:47 - 11:53)

I know there are a lot of people listening to this who think AI is not that big, it's not affecting my work. But if you're in the consulting field especially, when you see what it can do at the customers, there are customers that are completely eliminating process steps they historically had to do. I have removed weeks of work from my calendar with AI at the moment.

 

So the question is, how much do those weeks convert to in the long term? If I can get ten more people to save weeks of their time, we can scale the organization to another universe on the revenue scale compared to before. And that makes 20x not even sound that impossible, especially if we look at industrialization.

 

I remember when we started debating the 20x internally, four years ago I think now, and I pulled numbers from industrialization at the time. The lowest impact industrialization had was something like 40x on the economy of the places it touched. So I would say with 20x we are still just trying to understand where we can get.

 

The software is not the only thing that needs to change.

 

[Pinja] (11:53 - 12:12)

Of course, it cannot work in isolation. It's not happening in a vacuum. So again, coming back to security: what is the ROI from security?

 

Basically, the ROI you get through software is compounded with, for example, security as one of the elements.

 

[Kalle] (12:12 - 13:44)

So I think this is where we can actually start talking about how you would build an ROI case for a security team. You don't want to give people stuff for free, but an AI, if you prompt it, will give stuff away for free.

 

Because AI always tries to please us as humans, because it's based on humans talking on the internet. For the AI, the key question is: how do I please the people around me? So we really need to start thinking about the security controls already downstream.

 

Because when we let AI talk to a customer in a chat box, we've already seen customers asking for 90% discount codes, and getting them. But we have security and risk management practices, and we have tools for these things: we can use SIEMs, we can use the data and anomaly detection tools that already exist in security practice, which we often haven't purchased because they cost a lot of money. But if we can explain that we can use these in LLM development, for example, to find that someone broke our security guardrails, we could impact the revenue way more than just avoiding the negative cost.

 

We could actually be on the positive side, where you can create a chatbot that can be released online and still fulfil the controls that we have. So we can actually talk about being compliant, being regulated, and still providing the fastest and easiest possible security experience. But of course, it's going to require a lot of engineering around it, which means you could have security engineers sitting in the development team, as we've been talking about for years at this point.
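A downstream control of the kind Kalle mentions, catching a chatbot handing out 90% discount codes, can start as a simple filter on outgoing replies. This is a hedged sketch: the 15% policy threshold and the regex are invented for illustration, and a production guardrail would also feed flagged replies into the anomaly detection tooling he refers to:

```python
import re

# Hypothetical policy: the bot may offer at most a 15% discount.
MAX_DISCOUNT_PCT = 15

# Matches e.g. "90% discount"; deliberately simple for illustration.
DISCOUNT_RE = re.compile(r"(\d{1,3})\s*%\s*discount", re.IGNORECASE)

def violates_discount_policy(reply: str) -> bool:
    """Flag outgoing chatbot replies that promise more than policy allows."""
    return any(
        int(match.group(1)) > MAX_DISCOUNT_PCT
        for match in DISCOUNT_RE.finditer(reply)
    )
```

A blocked reply would then be replaced with a safe fallback and logged, so "someone broke our guardrails" becomes a visible, measurable event rather than a lost-revenue surprise.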

 

[Pinja] (13:44 - 14:12)

That would be a nice change to actually see in the organizations. Because as you say, this has been discussed so many times in the old tale of, hey, let's shift left. It is not just that we have a separate silo somewhere, a security silo that is building it, and we have to take it as developers and as a software team at an early stage, but as you say, actually implement it into the team and what the team is doing.

 

That sounds a bit more scalable than just having the silo.

 

[Kalle] (14:12 - 16:20)

So that's why, for example, we have partnered in the past with different security training systems; we've been trying to get developers to upskill themselves. But now the challenge becomes: how do you upskill developers who are not writing the code in the future?

 

At the moment, they need to code review things. But how do you scale the fact that you're supposed to read code that you're not writing? That makes the technical systems more critical, because we don't have that many controls on the technical completion there.

 

So we need better technical controls. We really need to have SAST and DAST in the testing stages. And your AI gets stuck on these things if you don't have them in place.

 

So if you're not feeding a spec to your development, and if your spec doesn't include security controls, it's really difficult to run faster. What we see in the data, for example, is that most customers put up the Copilot training and the analytics, and they see: we get a lot of requests, but the approved code keeps getting stuck. We're not seeing approved code going up, and we don't see more requests coming in.

 

And that's because you get stuck in what I like to call the assistant trap, where you can no longer push more into the pipeline, because the pipeline gets stuck on people reviewing it, on the security systems, and on process approval chains. So now is the time when security should be there, talking about how they would plug into the new processes and practices so that security is there from the very beginning of development. We would actually do a risk matrix at the beginning, like we keep saying we will, and map these different things so that they go straight into the spec, because the spec is written by AI anyway.

 

Let's not kid ourselves: we're not writing the whole spec. We write the guidelines, the AI writes the specification from them, and then it generates the code from that.

 

So our goal should be: how do we make sure security is there to define the guardrails that go into that specification, so that the code that comes out of it is secure? Then, once we get to production, do the technical systems that follow show we are still secure? And what do we do when those do get vulnerable, really?
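The risk matrix Kalle says should happen at the beginning of development can be kept lightweight enough to live next to the spec. A minimal likelihood-times-impact sketch, where the three-level scale and the example risk names are illustrative assumptions, not anything prescribed in the episode:

```python
# Minimal likelihood x impact scoring on a three-level scale.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Classic matrix cell value: likelihood times impact."""
    return LEVELS[likelihood] * LEVELS[impact]

def rank_risks(risks):
    """risks: iterable of (name, likelihood, impact) tuples.
    Highest score first, so the top entries are the ones whose
    guardrails should go into the spec."""
    return sorted(risks, key=lambda r: risk_score(r[1], r[2]), reverse=True)

# Hypothetical entries for an AI-assisted pipeline:
example = [
    ("prompt injection in chatbot", "high", "high"),
    ("vulnerable dependency shipped", "medium", "high"),
    ("secrets committed to repo", "low", "high"),
]
```

Ranking the list before writing the spec makes "risk-based prioritization" a concrete input to the AI-written specification rather than a document nobody revisits.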

 

[Pinja] (16:21 - 17:07)

Because scaling is one of the things, as you say: the amount of code we can now create so much faster. And if we don't have a safe and credible way to scale, number one, our customers are not going to trust what we put out and ship, and our developers internally are not going to feel secure working this way.

 

I was thinking about this before we started recording: even indirect enablement of revenue is something to consider. Even if you think, I don't care about scaling (and I really hope nobody's thinking that way), look at the actual revenue: enterprise sales and the procurement process would get a lot faster, and so would the time to close deals. Everybody would like to have that in their organization.

 

[Kalle] (17:07 - 17:55)

But this is the challenge: yes, we can probably make the answering faster, but then we get to the problem that even agile had, even SAFe had. Everything we have is tied to the fiscal year, to the revenue tracking period. We are still tied to a 12-month cycle on everything; our thinking still revolves around what we are going to be doing in the next 12 months.

 

And we've shortened it from five years to 12 months, which causes its own trouble, because we no longer have that long-term goal. Now we have these short-term goals, but the problem is that they are too long to be short and too short to be long.

 

So that means making any decisions gets really difficult.

 

[Pinja] (17:55 - 17:56)

That's true.

 

[Kalle] (17:56 - 18:50)

And that requires you to be capable of speed with stability: how do we make it so that we have very few interruptions? How do we make delivery predictable? Because then we can tie it back to revenue and make decisions faster. We keep talking about the quarterly economy, but we only do quarterly tracking of where we are against the goal.

 

But if we could also set the goal per quarter, we would have a very short and fast cycle, which means we could do better business prediction because we could really be agile constantly. But that requires predictable delivery, which we in engineering have historically been really bad at. If you ask someone to do an estimation, they can't.

 

So that was one of the first things I automated in our instance: read this ticket and make an estimation on it, or tell me what is missing to make an estimation on it.

 

[Pinja] (18:50 - 19:02)

Or if you do make an estimation, it's going to be so wrong. We always said that estimations are wrong, but historically we've seen that they are extremely wrong, not just off by a little.

 

[Kalle] (19:02 - 19:47)

And again, there's also the fear of saying, hey, we need to stop and specify this better; let's put it in the backlog until we have a better specification. Because then they ask you: what do we need to specify better? And these are the places where risk thinking, security thinking, and AI really help create better specifications.

 

Because we don't need to think about security only in terms of being technically secure. We can also think about it this way: if we as engineers understand the specification better, enough to make a better AI prompt against it, we also have a better basis for doing a security assessment against it. We don't need to think only about loss avoidance; we can think about how to make this thing good from the beginning.

 

[Pinja] (19:47 - 19:56)

And that is actually a strength towards customers, a brand strength. And if you think about it internally, it builds the willingness to experiment, exactly: you're in a better position to start creating and adopting new features.

 

[Kalle] (19:56 - 20:27)

Because you don't need to be constantly afraid of new features breaking something. If you have a good enough baseline and strength, you can really do a lot of good stuff on top of it. But if all of this is missing, if your pipeline doesn't have the security controls, if your AI use is immature, if your security controls are immature and your organization is immature, you're not going to get any of the benefits mentioned without taking significant risks, which takes you back to missing the business goals.

 

[Pinja] (20:27 - 20:59)

Yeah, because it's about having everything in place and having the options. If we go to the company strategy people, the scaling comes up again, but you also need to be able to pivot if need be. That's not possible in a fast-paced environment unless you have a secure environment; you're not going to be able to integrate new things unless you have a secure environment. So it gives you that strategic capability and the options to do these things.

 

[Kalle] (20:59 - 21:48)

And the nice thing is, it also creates cost control in the long term. What is this going to cost? How long does it take to recover from this incident? Well, we know, because we practiced it.

 

How long is this legal overhead going to take? Well, we know, because we have it automated and we have the controls in place. How much operational chaos are we going to create from this?

 

We know; it's going to be about this much. How much unplanned work would we have? Well, we don't have unplanned work, because we have specified things and we know what we're going to be doing.

 

So it creates all of these positive things that business loves. But because we frame it from the cost-control point of view at the beginning, we don't talk to them about the right thing, the revenue side. So they keep thinking of security as a cost instead of an enabler. And when you are a cost, you don't get the resources.

 

[Pinja] (21:49 - 22:10)

No, because it's seen as a negative instead of something bringing in good things. To wrap this conversation up, Kalle, what would you say to an organization that now gets inspired: hey, let's fix this, let's start looking at this from a more positive perspective and give security the place it deserves.

 

What would be the one place to start building from?

 

[Kalle] (22:11 - 22:15)

So first off, the Eficode sales pitch would be to come take the AI metric assessment from us.

 

[Pinja] (22:15 - 22:16)

Obviously.

 

[Kalle] (22:16 - 23:18)

I do that a lot.

 

So I will say that is the best place to start, because we will create a really nice roadmap of what is wrong in your organization. But personally, if I weren't working at Eficode and had to figure this out myself, I would start by talking to business people: find someone nice who sits with you at coffee, go to that VP and say, hey, can you tell me a bit about what you guys are doing? Then listen to what they're trying to accomplish, write it down, even on a piece of paper, and map it to security techniques you know could help them.

 

Then another day comes by, possibly not immediately, because if you can't map those right away, don't try to force it; just listen. And the next time you see them, say: hey, I took the ideas you mentioned last time, then play that back to them and listen. Do they get excited?

 

If they get excited, you're probably onto something that could get you funding or the organizational change that would help you. So that's how I would approach it as an individual.

 

[Pinja] (23:19 - 23:49)

If we turn this around: if you're a business person and you think, hey, now is the time to actually improve our security practices, and you want to approach a security person, they're not as scary as they might seem. We've laughed a couple of times on this podcast about how security people can be kind of grouchy. There's a reason for that.

 

But please also consider it from that perspective. Think about, again, what it is that you want to achieve from the business and revenue perspective, and turn that around. But starting the conversation might be the tip here, right?

 

Thinking about the goals.

 

[Kalle] (23:49 - 24:18)

I might also just ask the security person: what are you doing? And they will list all the bad things, because they're grumpy. But don't take it negatively.

 

Take it as a sign that this person really loves your company, because usually they're working hard. Then the question becomes: okay, is there anything here that sounds like it could help my team? And if you hear even one thing your team could help them with, or they could help your team with, you can say: hey, should we talk about how that could become part of our process?

 

[Pinja] (24:18 - 24:33)

There you go, there’s the start. Hey, Kalle, thank you so much for joining me today for this. And thank you everybody for tuning in.

 

And we'll see you in the sauna next time. We'll now tell you a little bit about who we are.

 

[Kalle] (24:33 - 24:59)

Hello, my name is Kalle Sirkesalo. I work as Field CTO at Eficode. I've been here for over a decade, and at the moment I'm mainly focused on AI-powered tooling, especially in SDLC pipelines.

 

I build very stable platform engineering platforms and DevSecOps practices, and my biggest job is scaling CI/CD and the SDLC in industrial and regulated industries.

 

[Pinja] (25:00 - 25:15)

I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode. Thanks for tuning in.

 

We'll catch you next time. And remember, if you like what you hear, please like, rate and subscribe on your favorite podcast platform. It means the world to us.

 

Published:

DevOpsSauna Sessions, Security, AI