
How do you make money out of AI?

In this episode of DevOps Sauna, Pinja and Darren sit down with Eficode’s Lead AI Consultant, Henri Terho, to unpack the hype and realities around artificial intelligence in business. They explore why so many AI projects fail, what true success looks like, and how companies can actually see return on investment. From data quality challenges and security concerns to the legal landscape and the so-called “AI bubble,” the conversation digs into both the risks and the opportunities of AI transformation. Whether you’re curious about strategy, implementation, or the future of work, this episode sheds light on how organizations can make AI truly pay off.

[Henri] (0:03 - 0:09)

What's the core value path in your company and what do you actually want to get done?

 

[Darren] (0:14 - 0:22)

Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security and more as we explore the future of development.

 

[Pinja] (0:22 - 0:45)

Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna. Hello and welcome back to the DevOps Sauna.

 

I am once again joined by my co-host, Darren. Hello, Darren.

 

[Darren] (0:45 - 0:46)

Afternoon.

 

[Pinja] (0:46 - 0:55)

And we have a very special guest here today. He has been with us before and we get to introduce you again. Our lead AI consultant from Eficode, Henri Terho.

 

[Henri] (0:55 - 0:57)

Hi, hello, thanks for being special.

 

[Pinja] (0:57 - 1:47)

Yeah, we said you were special and we're very happy to have you here as well. And since I already mentioned that we have a lead AI consultant here, guess what the topic of today is going to be? Our favorite AI, obviously.

 

And a lot has been happening in this field, go figure. And of course, everybody is at the same time talking about the possible AI bubble burst. But we do have a couple of statistics on how AI performs in organizations.

 

So should we talk about how to make AI work, how it functions, and how you can actually make money out of it? The first study we are referencing here is the report from MIT that everybody was talking about a couple of weeks ago, which said 95% of AI initiatives actually fail. This was The State of AI in Business 2025.

 

But to me, this failure rate sounds astonishing. So why is that?

 

[Henri] (1:47 - 2:21)

I think it's quite a typical failure rate. If you look at any historical data about software projects, everybody says software projects always fail. So I think it's just a continuation of the same thing: an AI project is a software project, and everybody's trying to do the same thing and figuring out why stuff isn't changing just because we've now added AI.

 

So my thought around this is pretty much: how do we even measure what's a success and what's a failure? If we tested out an AI copilot and realized it doesn't fit our use case, is that a failure or a success of the initiative? How do we measure failure or success in these cases?

 

[Darren] (2:21 - 2:41)

Are you then suggesting that the title of the MIT report was clickbait, using that 95% figure? I would genuinely be interested in what the failure rate of regular businesses is, because 99% of Finnish businesses are small to medium enterprises, for example.

 

So a lot of them, I imagine, end up not going as well as people would like.

 

[Henri] (2:41 - 3:13)

Yeah, exactly. Now I might be lying through my teeth, but there was a study a while back saying that 80% to 90% of software projects fail. And it's mostly about how those programs are scoped.

 

Are they open-ended, or do they even have a defined end goal? How do you say whether you succeed or fail? Typically it's one of two things: either you do bad goal setting, or you just start doing something and realize at some point that this is not at all what you want.

 

So it's the typical advice for any project: plan it out. What do you want to achieve, and all that.

 

[Pinja] (3:13 - 3:33)

This is one thing that really stuck with me in what you just said: this is just yet another software project, right? So are we now putting a lot more focus on this failure rate because it's AI? We would like to see it succeed, and then we have the people who doubt it.

 

So why do we put so much emphasis on the failure rate now?

 

[Henri] (3:33 - 4:08)

I think it's just that if you read through the paper, at least the MIT report, they actually go on to state the success metrics: how can you succeed with this, and this is how the best projects are doing it. And they're well-defined.

 

They take AI tools that fit their scope. They adapt AI workflows to fit their business, and not the other way around, where they try to change their whole business to fit a tool. Just typical stuff, and even that is right there in the paper.

 

So I don't even think they knew what they were doing from a marketing perspective when they raised that one sentence to be the headline of the paper, because now we're talking about it, everybody's talking about it. Success on that front, I guess.

 

[Pinja] (4:09 - 4:21)

And one thing is that we hear so many people just say: start with AI, just start adding it to everything. The phrase is something like "just improve it with AI". But where can one even start? What would be the questions to actually ask?

 

[Henri] (4:22 - 5:16)

Yeah, that's a good question. And I think it relates to a lot of the discussion I've been hearing. You go into a company, or they call us and say, hey, we are now doing this AI strategy and we need AI.

 

So then you ask, hey, what's your AI strategy? Or do you have an AI strategy? Of course we have, of course we have an AI strategy.

 

And then you ask, can I see it? And then it's, well, we don't really have it written down. It's just in our heads and we know we have to do this.

 

And a lot of the businesses coming to us, what's probably happened inside those companies is that the board has said, hey, okay, we are going to be an AI-first company now. Okay, guys, we are going to be an AI company. Peter, you are the project lead.

 

You now have two months to figure out how we are going to be an AI-first company. So go ahead. And then Peter, in his panic, just looks through the phone book for an AI consultancy, calls us or somebody else, and says, hey, I need help doing an AI strategy.

 

And this is what I see a lot in the market happening. So I'm quite positive that that contributes to the failure rate as well, as everybody is rushing to it.

 

[Darren] (5:16 - 5:38)

It sounds a little bit like Kubernetes back in 2016, where everyone was saying Kubernetes was the next thing they had to implement, not realizing Kubernetes was built for infrastructure at Google's scale. And so you had all these tiny companies implementing Kubernetes because it was the latest buzzword, and now they're regretting it.

 

Is that what you're saying is happening with AI? We are a mobile company first.

 

[Henri] (5:38 - 6:53)

We are a cloud company first. We are a DevOps company first. We are an AI company first.

 

So we've seen this before many times. This is exactly what's happening. But what I am actually positive about is that, because of AI, we now have a lot of people outside of IT asking: hey, could AI fix this problem that I have?

 

And that's the kind of positive spin and change that I've seen in the market, something I haven't seen in IT before. People from marketing come and ask, hey, you are the AI guy, can you help us fix this problem?

 

Or someone from accounting: hey, I heard that AI could fix this. Nobody has ever come to me asking, hey, could you build a piece of software that helps solve this problem?

 

So this positivity towards automation and building systems now just because it has an AI label, I think that's super good. So what I kind of see is that AI is now transforming a lot of these cases because everybody is now talking about it and everybody wants to adopt it. But what we also see is that yes, everybody is adopting AI tools.

 

Everybody is taking Gemini or whatever into use, because Google of course gives you that. But transformation as a company, where they've actually adapted to this new world, is still super rare. And I think that's going to take time.

 

But I'm positive about the spin that's happening. It kind of gives the whole idea that we can transform a lot of stuff with AI, which is positive. You're not just like, oh, it's IT. I hate IT.

 

I don't want to do that.

 

[Pinja] (6:54 - 7:08)

Is there something we could point to as the biggest thing holding back the transformational part of AI, and, for example, the quality of AI use? Are there one or two things we could name as the stumbling blocks at the moment?

 

[Henri] (7:08 - 8:05)

I guess people think it's the silver bullet. You drop an AI into the system and you get proper results. But the fact is that as a company, you have existing processes.

 

You might have a 20-year-old SharePoint instance where you have 20 years of collected documents, collected stuff that you don't even probably know what's in there. And IT doesn't know. Nobody knows what's in there.

 

And then somebody comes in, hey, let's put an MCP server on top of that. And this will tell us what our business is doing. And then you go ask it questions.

 

And then, well, it gives you answers; exactly the kind of answers you'd expect to find in that document pile. And I think one of the biggest things holding back the whole AI transformation is that people haven't realized it's a real transformation, not just gluing AI on. I think this is going to be a bigger change than buying an ERP system, doing a major version upgrade, or changing from SAP to Salesforce.

 

These projects are going to be huge because you have to go basically through your whole data infrastructure, security, and all of that.

 

[Darren] (8:05 - 8:35)

A question on that front. It's a little bit of a tangent, but we've talked before about data quality being too low. When I've done machine learning projects in the past with numerical data, there were various techniques to fill in the data with averages or generate the missing data.

 

Is there anything in the language model world that's heading towards including standardized versions of missing things or standardizing documentation and data?

 

[Henri] (8:35 - 10:33)

I'm pretty sure that there is. And basically, if you think about LLMs, that's what they are already doing. If they don't find information in your systems through RAG or an MCP server, RAG and MCP being ways to integrate your data into an LLM, they basically just answer based on the LLM's own training.

 

But because you are looking for answers that are focused on your business case, if you just use an LLM, you get generic answers. You get exactly the same answers as basically your competitors in the business. So this might also be an identity crisis for many companies that, hey, we're not that unique and we really have to think on what's the value of our company in this space as well and what's the process that we are bringing because that's what AI is now.
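The RAG idea Henri describes can be sketched in a few lines: before asking the model anything, retrieve the most relevant company documents and prepend them to the prompt, so answers are grounded in your own data rather than generic LLM output. This is only a toy illustration; the keyword scorer and all documents below are made up, and real systems use embedding-based retrieval.

```python
# Toy sketch of retrieval-augmented generation (RAG): fetch the most
# relevant documents for a query and build a grounded prompt from them.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (toy keyword scorer)."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in doc.lower())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the LLM call in retrieved context instead of generic knowledge."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

# Hypothetical company documents:
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The office cafeteria serves lunch between 11:00 and 13:00.",
    "Refunds are processed by the finance team within 5 business days.",
]
prompt = build_prompt("how long does a refund take", docs)
print(prompt)
```

The point of the sketch is the shape of the pipeline: without the `retrieve` step, the model can only answer from its generic training data, which is exactly the "same answers as your competitors" problem described above.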

 

Of course, there's a lot of companies now trying to help with data integration because that's pretty much the holy grail that you have to do. It's not about AI models. It's about how to do those integrations, how to exactly fix this messy data or understand that, hey, this document is a lot older than something else.

 

Please use the newer version rather than this one. That's the ground truth. You have to lay those rules out somehow.

 

We as humans can kind of filter that out. I think everyone has dug through a pile of old documents in Salesforce or somewhere, trying to figure out the state of a case. What you end up doing is asking the sales guy or somebody else in your organization, hey, can you tell me what's going on here?

 

I think that's also coming with AI. We need an AI agent that organizes that and then goes to ask people, hey, I saw your name here; do you know which of these is the right one?

 

And I think this is going to lead to a lot of interaction between agents and humans as well. But the second thing is actually access management. We've seen this happening in some companies: somebody just quickly says, hey, we want an MCP on top of this so we get access to this data.

 

And we say, that's a bad idea, because then everybody's going to have access to everything. And they say, no, no, we don't care. We just want access.

 

And then two or three weeks later, we've built it and they have access. And then I say, okay, we can demo this. Hey, can you give me information about how you decide to buy from us, or something like that?

 

And there's a lot of documents that we probably shouldn't see or show to the client. Is this actually what you wanted from us? There's a lot of depth to these kinds of discussions; it's not just throwing access open.

 

[Pinja] (10:34 - 10:54)

So you mentioned MCP, the Model Context Protocol, in relation to AI. So if we talk about the security issues, let's deep dive into one legal issue in a moment, but security first, perhaps.

 

MCP security is now a really hot topic, but it's still really new at this point in time. Is it an issue right now?

 

[Henri] (10:55 - 11:24)

Yeah. Is that an issue? Is security always an issue with new technologies?

 

I think everybody's thinking, again, that MCP will solve all your problems, but basically it's just telnet for your AI. It doesn't have any kind of security. It doesn't have access control.

 

It doesn't have any kind of security layer built in. You have to build that yourself. You have to figure out the security and the access levels.

 

Who gets access to the board emails? Who gets this? And you have to define it.

 

The AI can't do that for you.
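Henri's point that you have to build the access-control layer yourself, because MCP does not provide one, can be sketched as a policy check that sits in front of every tool call. This is a minimal illustration; the role names, resources, and policy table are all hypothetical.

```python
# Sketch of an authorization layer in front of MCP-style tool calls.
# MCP itself will happily serve everything, so a policy table like this
# is the only thing standing between a user's prompt and the board emails.

from dataclasses import dataclass

# Which roles may read which resource (hypothetical examples):
ACCESS_POLICY = {
    "board_emails": {"executive"},
    "sales_crm": {"executive", "sales"},
    "public_docs": {"executive", "sales", "marketing"},
}

@dataclass
class User:
    name: str
    role: str

def authorize(user: User, resource: str) -> bool:
    """Check the policy table before forwarding a request to the MCP server."""
    return user.role in ACCESS_POLICY.get(resource, set())

def call_tool(user: User, resource: str) -> str:
    """Gatekeeper: deny unauthorized reads, otherwise proxy to the backend."""
    if not authorize(user, resource):
        return f"DENIED: {user.name} may not read {resource}"
    return f"OK: fetching {resource} for {user.name}"  # would call MCP here

print(call_tool(User("Peter", "marketing"), "board_emails"))
print(call_tool(User("Peter", "marketing"), "public_docs"))
```

The design choice worth noting is default-deny: a resource missing from the policy table is readable by nobody, which matches the "who gets access to the board emails" question above.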

 

[Pinja] (11:25 - 11:31)

Let me rephrase that a little bit. Maybe it's not a silver bullet, right? Just to get one thing straight: it isn't, is it?

 

[Darren] (11:31 - 11:49)

From my perspective, the security of MCPs, or of AI in general in quite a lot of cases, is like someone selling a new car and saying: it goes 800 kilometers an hour, and we haven't included any seatbelts or any brakes. That's the state of MCP.

 

It's like an interesting situation to be in.

 

[Pinja] (11:49 - 12:13)

We talked about legal issues as well. One very topical case from not so long ago is the lawsuit against Anthropic, where we're now talking about a $1.5 billion settlement to a group of authors. How do we see this in relation to what might be happening? Do companies actually take the legal aspects of using AI and LLMs into consideration?

 

[Henri] (12:14 - 13:13)

I guess your previous comment, that we are going 800 kilometers an hour with no brakes and no seatbelts, weighs heavily on this world as well. And as Kelsey Hightower said on LinkedIn, the next big thing to happen in AI is your bill.

 

Because we've borrowed a lot from the future. We probably broke a gazillion copyright laws pushing these AI models to where they are, indexed half of the internet and used that as material. Those lawsuits are coming.

 

They've been building up, like this Anthropic case. And the same with all the usage around it, because most of the market is of course fed by VC money. Token prices are going down, but the prices of services are going up.

 

OpenAI is selling the $500-a-month AI and things like that. So that's also happening on the other front. I think it's going to be interesting times in AI now that legislation and everyone else are catching up on what's going on.

 

But the next thing is already happening too, so they are going to stay behind. We are going to be stuck in this loop for a while now.

 

[Pinja] (13:13 - 14:01)

So there are a couple of these "slow zones" holding us back; I'm using air quotes here. But this is something we have perhaps not talked about enough as an industry, because we say what is successful and what is not. And Henri, as you said at the beginning of this episode, it is hard to say.

 

Was it a success or was it not? But can we measure it by ROI? There is a Google report on the ROI of AI, and it measured not just the size of the return, but also the speed of return.

 

But the question framing in that report was more: do you expect to benefit from this in the next five years? It was a very interesting study. But are we somehow able to measure this?

 

And have the companies we talk to actually been asking for concrete numbers on what it brings back?

 

[Henri] (14:01 - 16:01)

Yeah, for sure. Everybody wants a number: hey, if I buy this $100-per-month license, how much am I getting back? And I think it's very clear on the tooling side: coders can produce value much faster.

 

Those numbers are all over the place. Microsoft has said 80%, somebody else said 20%, and so on, because a lot depends on what you count as the work of a coder, since most of it is not actually writing lines of code.

 

It's about thinking about the design, talking to people, figuring out what should be done, and everything around that. But those claims are easy to state: the ROI is that we can write code 40% faster.
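Henri's caveat, that only part of a developer's work is writing code, changes the arithmetic of such claims. A back-of-envelope sketch, with entirely made-up license cost, salary, and percentages, shows how the two factors interact:

```python
# Back-of-envelope ROI sketch for an AI coding tool. All numbers here are
# hypothetical illustrations, not figures from the episode or any study.

def monthly_roi(license_cost: float, salary_per_month: float,
                coding_share: float, speedup: float) -> float:
    """Value of coding time saved minus license cost, per developer per month.

    coding_share: fraction of the developer's work that is writing code.
    speedup: fractional speedup on that coding work (0.40 = 40% faster).
    """
    coding_cost = salary_per_month * coding_share
    # 40% faster means the same output takes 1/1.4 of the time:
    time_saved_value = coding_cost * (1 - 1 / (1 + speedup))
    return time_saved_value - license_cost

# 40% faster coding, but coding is only 30% of an $8000/month developer's work:
print(round(monthly_roi(100, 8000, 0.30, 0.40), 2))
```

Even with the same headline "40% faster", the result swings widely depending on `coding_share`, which is exactly why the published numbers are all over the place.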

 

But this, I think, is the transformative factor of AI: it's not just going to improve your systems or processes, it's going to totally change them. So how do you measure the ROI of changing the whole system?

 

Because you cannot compare the small bits that you're thinking about. You're basically changing the whole system, a totally different operational logic. And then you're comparing apples to oranges.

 

So I think that's also one problem with this AI transformation we're talking about: what do you compare against? I firmly think that AI might actually challenge the legendary Mythical Man-Month and "No Silver Bullet" papers from the software industry, which basically say that you cannot fix a late project by throwing more people at it, because you waste so much time communicating to those people what needs to be done.

 

But for example, with AI systems that know the state of the project, you'd have an AI project manager you could bother and ask: what the heck is going on here? Why is this code file here? One that knows the history.

 

You could actually start fixing that problem. So I think the change is going to show up in these kinds of communication and information-sharing problems, which are hard to express as ROI because they change the whole way of working. And a lot of it also comes down to timescale.

 

Even with our clients, some people ask how they can make a return on this investment this month, and some ask whether they can make a return on their AI investment in five years. And everything in between.

 

I think those are pretty much the extremes that we have. Somebody's thinking about savings on a five-year scale and some on a monthly scale. So of course that changes your perspective on the whole thing.

 

How fast do you want to succeed or fail with AI?

 

[Pinja] (16:02 - 16:38)

And maybe that is the fallacy we have at the moment: being very short-sighted. If we step back a little and think about what sets apart the 5% that, again in quotes, "succeed" in implementation, let's draw a comparison to a regular software development project. Would you expect ROI from a regular software development project immediately, within the same month?

 

What are the building blocks of success when you implement AI tooling in your organization?

 

[Henri] (16:38 - 21:30)

What we see a lot, or what I discuss a lot, is actually kind of the opposite of what we talked about with the Google paper: you figure out your goal setting first, before you jump into the AI tools. What's the core value path in your company, and what do you actually want to get done? And then figure out, around that, where can AI help?

 

Where can we help there? Not just shoving things in: hey, we heard about this Copilot agent, we heard about this and that, let's shove it in here and see what happens.

 

It's more about thinking: where can we help? What do we have on this critical path where we can use AI? And that requires you to talk not just with a single person in the company but probably with multiple people. And I think this is the really great thing about AI: it brings the business side and the development and IT people much closer to the same table. Now both of us can implement features, of course with different experience levels, but the feeling is that we can both talk about this, because it's just AI, and I can get it to do stuff.

 

Now we can talk together. So that's what we see in successful organizations: you start with point solutions to simply highlight the value. Hey, we can actually get some value out of this. Do a small-scope POC or pilot: okay, we can do this and this. Then, after experimenting, organizations typically realize, hey, these are the questions we have to ask when we go into a new AI implementation, and then they scale from there.

 

And at that point, when you have one, two, or three AI POCs going on in your organization, you've proven the value. It's weird to say, but many people have trouble showing their boss a successful business case: what exactly is the ROI of AI, and where do we find the business case showing this is actually profitable? That sounds strange when the whole media is saying AI is going to take all our jobs and AI is super profitable, but the number one question has typically been exactly this: how do you find the business case, how do you actually measure AI ROI?

 

And when you have these point solutions, you can typically prove it with them: hey, we automated creating this legal document that usually takes a team of four people. Now we can produce 80% of it in basically one day, and somebody just has to check it, check the translations, and so on. Then you have buy-in from the organization and can start asking: okay, we have the managers on board; if we want to go from 80% to close to 100%, what do we need to do?

 

And then you build the technical layer, the AI orchestration, where you have access to the IT infrastructure, you have the security, you have access control. When you have that, then comes the most difficult part: figuring out the transformation of your whole organization. Because making your organization AI-first, where AI can do all the jobs, is very easy from the technical side. I can give an AI agent access to my credit card and tell it to go, no problem.

 

I can technically do that today. But the support structures, the organizational trust that those systems will behave in the right way, we don't have that yet. How do we build organizations to actually have that? That's the coolest thing happening now, not even just in the IT sector but across all jobs, across the whole of humanity pretty much: figuring out which parts we can automate today and which we want to keep for ourselves. And there was a really cool report, I was just speaking at Alma Media about this, about how the job market is changing. Stanford did a really nice study on what's going on, and what they found, sadly, is that for juniors, at least temporarily, it's really hard to find jobs.

 

But seniors get hired much faster. And this is combined with the fact that the total workforce being hired into AI-centric areas is actually rising. So we're hiring more people into areas where we have AI.

 

So the net impact is still positive. And what's also happening is a shift away from roles that are automatable, even roles totally automatable by AI. Those are of course going the way of the dodo.

 

If you have people doing a manual process, something easily definable that you can write down on a piece of paper or two, that's automatable. But in roles where AI can augment you, like being a doctor, a lawyer, or a management consultant, we're actually seeing a trend upwards, both in AI usage and in the number of people being hired in those areas. So I think that's a very positive spin on the whole "AI taking all of our jobs" narrative.

 

What's also interesting is that in AI fields you're not actually losing your job. The transformation is happening more on the compensation front than in the number of jobs that exist. That's another interesting thing that came out of those papers.

 

So what's the value of one hour of work from just me, versus me plus my army of 20 agents? How should I price those two things? I don't know, and nobody else does either.

 

But it's going to be very interesting in the next 10 years to see what's happening.

 

[Darren] (21:30 - 22:27)

There's a curious meta situation happening in what I'm hearing from you: we have a lot of management and decision makers across companies all over the world pushing for AI, and a lot of the AI people saying, yes, but we have to take several steps back. So we have this tug of war where people are pulling in the opposite directions you'd expect: managers pulling towards AI they're unprepared for, and AI specialists pulling away from it, saying, you're not quite ready for this, and here's what you need to do.

 

And I think that's also causing things like these job losses. We all saw Klarna dive into headcount reductions, and they're now course-correcting; compare that to what you were just saying about how AI should be built and having all this data in place. It feels like we should be listening to the experts instead of the management in this case.

 

[Henri] (22:27 - 22:56)

You know how it is. It's always been like this. To any big revolution this is how it's going to be.

 

Everybody wants to go directly to the goal and not play the game, to get to the podium without putting in the effort. Of course that's natural. That's what I'd want to do too.

 

I'm lazy and greedy so of course I want to do that as well. But the sad reality is that you have to dig into those boring IT systems and figure out how things actually work to get it done or maybe get an AI to do it for you in the future.

 

[Pinja] (22:56 - 23:11)

So, from what I heard you say about how to succeed: you mentioned a couple of very basic ideas on how to run a regular software project. So basically, do your software properly, do your AI properly.

 

[Henri] (23:12 - 24:13)

But if we think about it, everybody is saying that all of this work is going to die and AI is going to replace all of us. I think exactly the opposite, because there's so much to do around this whole space. You're going to see more people working in consulting, more people working on how to implement AI, because we still need humans to figure out the implementation and all the support systems.

 

I see that as a big change, and you see it in the big American companies now as well. OpenAI just opened up their AI consulting business and hired something like a thousand consultants to do these transformations, because they also realized that just having the LLM product is not enough. You have to have people to help figure out the organizational changes and do the implementation.

 

So I think the key to succeeding with these bigger AI transformations is having people figure out the people process and be at the forefront of that, and then using the AI tools, building those connections, and understanding them.

 

[Pinja] (24:14 - 24:23)

So if we return to a couple of clickbait-y discussion points, of course we want to get more listeners to the podcast, so let's keep things clickbait-y at the same time.

 

[Henri] (24:23 - 24:25)

No, it's marketing, it's not clickbait.

 

[Pinja] (24:25 - 24:40)

Exactly. It's just marketing. This is how you learn to run a podcast. So the first clickbait-y, marketing-related question is: are we making money out of AI implementation, and if we're not, then who is, and how are they doing it?

 

[Henri] (24:40 - 25:57)

Of course the answer is: a shitload, for those who know how to implement this, and that business is growing. If we just talk about what we're doing, we're currently hiring more AI people, or even more plain DevOps and IT people who have an interest in AI. Because as I said, it's not about the science, how do I make a better LLM or do the neural net thing; it's more about how do I integrate databases, how do I get access to this SQL database and put an adapter in between so that I can feed it to an MCP server in a secure way and then get it to an AI.
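The "adapter between a SQL database and an MCP server" idea can be sketched as a narrow, whitelisted query layer: the AI picks a named query and arguments, and never writes raw SQL itself. This is a minimal illustration using an in-memory SQLite database; the table, columns, and query names are all hypothetical.

```python
# Sketch of a secure SQL adapter for an MCP-style tool: only whitelisted,
# parameterized queries are exposed, never raw SQL from the model.

import sqlite3

def make_demo_db() -> sqlite3.Connection:
    """Build a throwaway in-memory database with hypothetical order data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, "Acme", 1200.0), (2, "Globex", 450.0)])
    return conn

# The whitelist IS the security layer MCP does not give you: the model can
# only select a query name, and arguments are bound as parameters.
SAFE_QUERIES = {
    "orders_for_customer": "SELECT id, total FROM orders WHERE customer = ?",
}

def run_safe_query(conn: sqlite3.Connection, name: str, args: tuple) -> list:
    """Refuse anything that isn't an explicitly whitelisted query."""
    if name not in SAFE_QUERIES:
        raise PermissionError(f"query '{name}' is not whitelisted")
    return conn.execute(SAFE_QUERIES[name], args).fetchall()

conn = make_demo_db()
print(run_safe_query(conn, "orders_for_customer", ("Acme",)))
```

The integration work Henri describes is largely building and maintaining layers like this one, per database, per system, rather than anything about the model itself.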

 

So because of this, I think there's a huge amount of money here, and not just money going to the big AI companies to run their models and data centers. No, there's a lot of work to be done around these systems. That's the space we've been seeing a lot of growth in.

 

I think we'll see a lot of growth, because as everybody has said, these are big transformation projects on the cultural side and the software side. So you're going to see the Big Four consultancies and the management consulting companies jumping on the train, software engineers jumping on the train, UX designers and everybody around this, because it's such a paradigm shift in many places.

 

So there's a lot of money to be made from this whole AI gold rush. And I think if I have to toot our own horn a little bit, I think we are in a really good position as a shovel shop now to sell shovels to everybody who wants to dig gold.

 

[Pinja] (25:57 - 26:19)

And then the second part: we talked at the very beginning about how so many people are already afraid that the AI bubble is about to burst, and you just mentioned the gold rush. And we know how that ended. But do we need to be worried about the bubble bursting, and what does it actually mean if the bubble bursts?

 

So that's basically my question: what happens?

 

[Henri] (26:19 - 27:19)

Yeah, that's what I've been asking myself. Everybody says, hey, the AI bubble is going to burst soon. And I'm like, okay, what does that mean? What's going to happen? What even is the bubble?

 

Are people just going to suddenly decide, hey, AI is too expensive, we're going to stop, we're not going to buy these models?

 

Will I just uninstall ChatGPT, never go to that website, and go back to something else? I don't even know what it means for the AI bubble to burst, because I think what's happening is simply that it's going to get commoditized.

 

It's going to become boring because it's just the stuff that we do, like IT. It's just AI. It's the same thing as the dot-com bubble, the IT bubble. Yes, the stock market was maybe oversaturated with people giving money to crazy ideas, because nobody knew where the market was going.

 

Of course, if you have money, you try to front-run the market and give some crazy guys some money, so maybe they'll succeed. And some of those guys did succeed.

 

So of course, the same is happening now. And after the dot-com bubble, everyone was still using the internet, social media, all of that. AI is not going anywhere, right?

 

[Darren] (27:19 - 27:45)

Right. I don't know. Well, that's a good question.

 

I think the main problem I see is the ethical use of it, given the constant lawsuits against the big companies. We've already seen it with Anthropic. We talked about them before, but they're not the only ones.

 

It's a 1.5 billion dollar settlement, which is something they agreed to. So these things are not stopping anyone. So yeah, I don't think AI is going anywhere.

 

[Henri] (27:45 - 28:41)

And I think the Anthropic lawsuits are really interesting, because when you split the settlement across all of the parties involved, it's something like three thousand dollars per writer. For some, that's more money than they've ever gotten out of their books. And this is also a question of how we divide these kinds of lawsuits, and the money, and all of this, ethically as well.

 

Because if you just do blanket settlements like this, it's not going to change the way these companies operate. It's just going to be a cost of doing business. So how do we build all of these ethical systems and laws around AI so that they actually work?

 

And we've of course been discussing the AI Act a lot, and a lot of the other legislation around it. I think they're a good thing. I don't fear Europe ending up in a bad place in the market, because we've already been front-running a lot of these laws. We've had time to experiment and figure out how to do this without societal collapse, instead of just going for the clickbait route.

 

So how do we do this without societal collapse? We've just been front-running that side of it. So in the long run, I think we're going to do fine.

 

[Pinja] (28:41 - 28:48)

All right. On that note, I think that's all the time we have for this topic today. So Henri, thank you so much for joining us.

 

It was a pleasure.

 

[Henri] (28:48 - 28:53)

Thank you for letting me speak again. I'd climb out of a closet to come and speak, so of course I came.

 

[Pinja] (28:54 - 29:00)

And this time we even gave you a microphone. And once again, Darren, thank you for joining us.

 

[Darren] (29:00 - 29:01)

Pleasure as always.

 

[Pinja] (29:01 - 29:13)

Thank you everybody for listening, and we hope to see you next time. We'll now give our guest a chance to introduce himself, and we'll tell you a little bit about who we are.

 

[Henri] (29:14 - 29:45)

Hi, I'm Henri. I've done AI for, I think, 20 years. It started when it was still called ML or just statistics.

 

And I have a background in biology, so that's how I ended up doing all the statistics for my professor. At some point I realized, hey, this is boring, I want to do something cool, so I went into web programming. But somehow I'm back here doing all this mathematics again, and now it's cool.

 

So that's changed a lot. It's very nice to be on this side, building and rebuilding a lot of different companies, and now talking about what's happening. Somehow my quotes end up in Forbes about this.

 

I don't know how that happened, but it's cool.

 

[Darren] (29:45 - 29:48)

I'm Darren Richardson, security consultant at Eficode.

 

[Pinja] (29:48 - 29:53)

I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode.

 

[Darren] (29:53 - 29:55)

Thanks for tuning in. We'll catch you next time.

 

[Pinja] (29:56 - 30:04)

And remember, if you like what you hear, please like, rate and subscribe on your favorite podcast platform. It means the world to us.
