
April DevOps News

In this episode of the DevOps Sauna, Darren Richardson and Pinja Kujala discuss the latest stories and developments in the DevOps space in April 2025, including OpenAI and ChatGPT, open source vs. commercial software, and the new reveals at Atlassian Team '25.

[Pinja] (0:02 - 0:11)

It is now true that politeness is actually more expensive than you think. People saying please and thank you to ChatGPT is actually costing OpenAI millions of dollars.

[Darren] (0:14 - 0:22)

Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.

[Pinja] (0:22 - 0:32)

Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.

[Darren] (0:37 - 0:41)

Welcome back to the DevOps Sauna. I'm here once again with Pinja.

[Pinja] (0:42 - 0:43)

Hello. How are you, Darren?

[Darren] (0:44 - 0:51)

I'm doing reasonably well. I think it's about time that we start discussing the news once again.

[Pinja] (0:52 - 1:01)

It is the end of April. And as per usual, as we have done in the past, let's look at what has happened in this month. And guess what, Darren?

I think it's AI.

[Darren] (1:01 - 1:15)

Yeah, yeah. AI owns the news at the moment. There's really nothing else happening in tech.

It's just AI. AI won. Congratulations.

Thank you for joining us in this episode of the DevOps Sauna. We're going to go a bit more in depth than that.

[Pinja] (1:15 - 1:33)

But let's talk more. And yeah, sure. We do need to cover some of the news.

Staying in the realm of AI, let's talk about slop squatting. This was a new term for me, but it derives from a word that may not be unfamiliar to some of our listeners.

[Darren] (1:33 - 2:18)

Yeah. So for a while now, these typosquatting attacks have been a thing on the security radar. Basically, you register a package that is named something very similar to a common package.

And then when people typo it, they download your malicious package instead. It's a very common supply chain vulnerability that's actually been responsible for a lot of quite high-profile incidents over the last few years. But now, thanks to the amount of AI code being generated, we have this new threat called slop squatting, which is essentially the same attack, except that instead of a typo by a human, it's an AI hallucination referencing these non-existent libraries.

[Pinja] (2:18 - 2:38)

Yeah. And this is becoming a bigger problem right now because, of course, developers are using AI coding tools more and more. There was a study run by researchers at three universities that showed, was it a 20% tendency in LLMs to recommend non-existent libraries and packages?

[Darren] (2:38 - 3:08)

And I honestly think 20% might be low. I wonder about the study, whether they were using extremely common packages or not. I haven't read the full study, so that's something to check.

But yeah, anyone who's coded with AI will know one of the first things that happens is it will recommend libraries that just don't exist or functions within those libraries that don't exist. So checking that the libraries actually exist, that they can be imported and that they're not malicious becomes kind of a critical thing.
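To make that concrete, here is a minimal sketch, our own illustration rather than anything from the study or the tools mentioned, of how you might check that an AI-suggested dependency is actually published on PyPI before installing it, using PyPI's public JSON endpoint:

```python
import sys

import requests  # third-party HTTP client: pip install requests

# PyPI's public JSON API returns 200 for published packages, 404 otherwise.
PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_exists(name: str) -> bool:
    """Return True if a package with exactly this name is published on PyPI."""
    resp = requests.get(PYPI_URL.format(name=name), timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    # Usage: python check_deps.py requests some-hallucinated-lib
    for pkg in sys.argv[1:]:
        verdict = "found" if package_exists(pkg) else "NOT FOUND: possible hallucination"
        print(f"{pkg}: {verdict}")
```

Note that existence alone proves little: a slop squatter may already have registered the hallucinated name, so release history, download counts, and maintainer reputation are worth checking before you trust a package.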

[Pinja] (3:09 - 3:24)

And if we think of which languages are targeted here, the more widely used ones like Python and JavaScript are particularly concerning, precisely because they are so widely used. They were the ones most in scope here.

[Darren] (3:25 - 4:03)

Yeah. And that's the thing: you have languages like this that are heavily dependent on third-party software. I mean, I don't think there are languages that aren't dependent on third-party software these days, because especially when it comes to something like authentication, no one wants to, nor should they really, reinvent the wheel.

But that does mean everyone now has this SBOM requirement, a software bill of materials. These software supply chains are now coming to the forefront. And as things like AI copilots and vibe coding become more common, we're just going to see this threat explode.

[Pinja] (4:03 - 4:50)

Yeah. And we need to be more aware. Black Duck, who's been involved in this conversation, also urged companies to start educating their developers more.

And of course, for developers: be more aware of the risks when you're using AI-generated code. These tools are making us faster.

But at the same time, we need to be very aware of this. Because in this study, they were looking at, was it 16 code-generation AI models? And this included DeepSeek, Claude, OpenAI's GPT-4, and Mistral as well.

So the most prevalent models were being tested here. And if we think about how widely these are used nowadays, I think we need to be more and more alert to this threat.

[Darren] (4:50 - 5:15)

We do. And there's a thing people haven't really noticed about picking up speed: AI is allowing us to develop faster, but that just means we can crash far more spectacularly.

So if you want to hear more about this topic, and you happen to be in Hamburg, CTO of Managed Services, Kalle Sirkesalo, is actually speaking on this subject on the 3rd or 4th of June. So if you're in Germany, if you're near Hamburg, feel free to go check that out.

[Pinja] (5:15 - 5:17)

I think the event was called TechCamp Hamburg.

[Darren] (5:18 - 5:19)

TechCamp, yes.

[Pinja] (5:19 - 6:05)

All right, let's talk about open source versus commercial software. One of the favorite topics we've covered here is, in fact, open source. But there was an analysis published by ReversingLabs.

They scanned more than two dozen widely used commercial software binaries, including commercial operating systems, password managers, and VPN software. And they found numerous risks.

Among those risks: many of the packages received failing security grades due to exposed secrets and several actively exploited vulnerabilities, and the evidence of possible code tampering was quite obvious in the sample they looked at.

[Darren] (6:05 - 6:48)

Yeah, I don't think it's news to anyone. I'm glad that ReversingLabs did a study on this. But the idea that commercial software is more secure is not one that has ever really held up.

Open source software has community scrutiny on it. And one thing we all know from any software release is that as soon as a larger number of people get their hands on it, they start finding bugs that a small testing team never did. That's why we have things like canary testing.

It's why we have these A-B releases. We understand that more eyes will find more problems. And I think a lot of closed source commercial software relies on security by obscurity.

[Pinja] (6:49 - 7:34)

Yeah, and these findings were of course compared to what has been found in open source software. They also scanned, was it around 30 open source packages? Those account for more than 650 million total downloads across three leading open source package managers.

And those had, on average, six critical-severity and 33 high-severity flaws per package. So this was also a comparison, and as you say, Darren, it's not unheard of that when you have software, you need to check it for vulnerabilities. I guess the study also showed that no matter how the software was created, everybody is relying on code that is highly insecure at the moment.

[Darren] (7:35 - 8:09)

Yeah, it's not surprising and yet also kind of surprising, because you would hope that people had better supply chains at this point. You'd hope that the security gates would be built in, but they're just not, in a lot of open source cases and a lot of closed source cases. But we basically did learn that buying closed-source commercial software because you think open source is less secure or less well-tested is not an accurate assumption.

Shall we talk about the American political situation in a roundabout way? Let's not dive too deeply into that.

[Pinja] (8:09 - 8:39)

No, let's not dive too much into the debates, but this is something we do need to address: the tariffs that the current administration has imposed. Let's take one country as an example: the US tariffs on Chinese goods have now been set at 145%.

And this is not just an abstract example; it will have an impact, and has already had an impact, on AI development.

[Darren] (8:40 - 9:35)

Yeah, I think the fear here is that these tariffs are going to hit things like data centers and supply chains. If we're thinking about the chipsets that enable the development of AI, I think the, what is it, the 800, I'm not sure of the actual designation, but those were costing upwards of $40,000 a card, and they now have a 145% tariff on them. And China obviously struck back by imposing its own tariffs.

It was 145% when we were researching this episode a few days ago, so we don't know exactly where it sits now. But it raises uncertainty about whether data centers will be able to get the tech they need to hit their targets, and whether they'll be able to provide the power required.

So it's going to be interesting to see how this affects pricing for various cloud platforms, et cetera.

[Pinja] (9:36 - 10:19)

And there are already signs of caution in expanding AI infrastructure. Many of the tech companies that had plans to build data centers and look into new supply chain developments have not necessarily paused or abandoned them, but they are now asking: how do we do this in the near future with the rising costs? And perhaps there is some oversupply in some other countries.

So this is also hitting players such as Microsoft and Amazon, who are already scaling back on their infrastructure leasing. We will see what impact this has on AI-related business segments.

[Darren] (10:19 - 10:28)

Especially for Microsoft; didn't they just sign a 20-year contract to reopen the nuclear plant at Three Mile Island?

[Pinja] (10:28 - 10:28)

Yeah.

[Darren] (10:29 - 10:39)

Not that long ago. So I think it's going to be interesting to see how that shakes out. I think it was Three Mile Island, but they had a lengthy contract.

So we'll see what happens with them.

[Pinja] (10:39 - 10:59)

Yeah. It might have broader economic implications across the globe. There have been some market losses already, but I think investors are remaining optimistic.

We saw the global economy take a hit, but there has been some recovery since then. So time will tell how this will look for us in, let's say, even a month.

[Darren] (11:00 - 11:33)

Yeah. It's so hard to know with everything changing so rapidly. Okay.

Shall we talk about the United Kingdom for a moment? We found a story on developertech.com suggesting that UK teams are struggling to keep pace with software deployment standards, and that the average British organization deploys new software once every 29 days, which is considerably longer than what we would expect from a modern release cycle. And that when delays do occur, they add an average of 3.8 months to project timelines.

[Pinja] (11:33 - 11:50)

This is actually very surprising. I guess surprising might be the wrong word. It's surprising and not surprising, because we do see this in many companies.

And at the same time, if we compare this to the current DORA benchmarks, for example, deploying only every 29 days is a very long cycle.

[Darren] (11:51 - 12:18)

Yeah, I think that's a very slow release cycle. But the primary reason for this seems to be strain on development teams from skill gaps. It seems there is a lack of trained coders to keep development cycles on a faster release cadence.

Over half of respondents pointed to under-resourced teams, meaning a lack of appropriate personnel or tools.

[Pinja] (12:18 - 12:51)

Yeah. And at the same time, there is that disconnect between leadership and the IT teams. The study said business leaders believe only 10% of deployments are delayed, while team leaders report a 52% delay rate.

So there is also a disconnect in how the data and the current situation are read, and that might contribute to not bridging the tech skills gap in these companies.

[Darren] (12:52 - 13:28)

It does look like, yeah, as you say, a communication gap too, because 10% versus 52% is a huge discrepancy. And obviously, we expect the people who are actually coding and deploying to have more accurate information. So there are a lot of skill gaps.

And I'd like to get into this topic more on a research level, because I'm curious how much of this might be a consequence of Brexit and the loss of free movement: people and skills that used to come in from Europe are now moving elsewhere in Europe and benefiting those economies instead of the British one.

[Pinja] (13:28 - 14:10)

Yeah, this study was focused on UK companies. A deployment every 29 days, to those of us who work in a company that lives and breathes DevOps, sounds very far apart. And perhaps there is also this disconnect between IT and management, in that deployments might not be considered a core part of the business's wider strategy. Even now, in 2025, enabling teams to deploy on a shorter cycle is often not in the strategy documents of the tech companies we see.

[Darren] (14:11 - 14:26)

Yeah, and after the recent episode we recorded, it makes me wonder whether they're trying Agile and failing, or not trying Agile at all, and what kind of development process they're using. So it will be interesting to see if more news comes out about that.

[Pinja] (14:26 - 15:52)

Indeed. Let's talk about Atlassian for a moment. They had their Team '25 event in Las Vegas some four weeks ago, and there were some interesting announcements.

They made multiple announcements during that week. And yeah, let's talk about AI for a moment. Their AI agents, Rovo, will now become generally available.

So, regardless of what paid subscription level you have, everybody will have Rovo agents available. They also announced some product bundles. But the one thing that was most interesting to me, if we don't go into the AI agents in much more detail in this episode, is the Isolated Cloud that is coming next year.

There are still many organizations, depending on the industry they're in, that have been hesitant to move from Data Center to cloud when it comes to their Atlassian products. But now Atlassian has announced this Isolated Cloud, which might help those organizations. They might be in the finance industry or other critical industries, and this would give them an environment governed by Atlassian: the Atlassian Isolated Cloud.

It is a very interesting development, and let's see how many of their Data Center customers they'll be able to move to the cloud with this.

[Darren] (15:52 - 16:26)

Yeah, it's going to be quite interesting, especially in countries like Germany, where I don't think there are any specific restrictions, but the amount of red tape that needs to be cut to actually move something to the cloud has always been problematic. And I wonder if this will help move the needle for some of their clients. I feel like for a lot of the more security-conscious clients, the Isolated Cloud might not be enough.

But we'll see. I think it's a good move from Atlassian's side.

[Pinja] (16:26 - 16:57)

It is a gesture, nonetheless. I think it is very likely that we will see some industries that just cannot accept the terms of this. For example, there are data residency issues, and it might come down to regulatory requirements that simply aren't covered.

But we will see how this turns out, because this is the only information we have so far about the Atlassian Isolated Cloud, which is coming out next year, in 2026. So this was a teaser that they provided for the audience at the Team '25 event.

[Darren] (16:58 - 17:00)

Yeah, it will depend heavily on how it's implemented.

[Pinja] (17:00 - 17:01)

Yeah, exactly.

[Darren] (17:01 - 17:20)

Let's move on to talk about one of my favorite subjects: the explosive growth of non-humans. The GitGuardian State of Secrets Sprawl report from 2025 reveals that a massive number of non-human identities exist within our networks and software.

[Pinja] (17:20 - 17:43)

Yeah, and the report also revealed an alarmingly large scale of secrets exposure in modern software environments. On GitHub, in 2024 alone, 23.77 million new secrets were leaked.
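To give a sense of what such scanning involves, here is a minimal toy sketch, our own illustration and nothing like GitGuardian's actual implementation, which runs hundreds of patterns at vastly larger scale. It greps a directory tree for a couple of well-known secret formats:

```python
import re
import sys
from pathlib import Path

# Two well-known secret shapes; real scanners ship hundreds of patterns.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded key/token assignment": re.compile(
        r"""(?i)\b(?:api[_-]?key|secret|token)\s*[:=]\s*['"][^'"]{16,}['"]"""
    ),
}

def scan_file(path: Path) -> None:
    """Print every line in `path` matching a known secret pattern."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return  # unreadable file: skip
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                print(f"{path}:{lineno}: possible {label}")

if __name__ == "__main__":
    # Usage: python scan_secrets.py ./my-repo
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for file in root.rglob("*"):
        if file.is_file():
            scan_file(file)
```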

[Darren] (17:43 - 18:26)

And if you're wondering why these things are connected: these non-human identities are things like API keys, service accounts, and Kubernetes workers. And in DevOps environments, these now outnumber human identities by 45 to 1. Wow.

And I think that's notable because this is a report looking back; I don't think it's even including things like AI yet. So even before that, you have 45 different non-human identities for every one person.

And this is where something like Zero Trust networking becomes important, because I don't know who is keeping track of all 45 of those non-human identities.

[Pinja] (18:26 - 18:52)

No, that's true. And we've talked about the importance and urgency of comprehensive secrets management before on this podcast. I think in the news episodes in previous months, we've also covered reports about exposed secrets.

And I guess this new dimension of non-human identities is something that we need to take seriously here.

[Darren] (18:52 - 19:14)

Yeah. And this actually refers back to something we've talked about before, when we discussed Bluesky and bots. I'm actually going to be speaking on this subject in Edinburgh at the start of June, on June 4th at EuroSTAR.

I'm going to be talking about the dead internet theory and how we avoid it with this explosive level of non-human entities.

[Pinja] (19:15 - 19:50)

All right. For our last story, let's get back to AI. This is perhaps a little lighter than the other stories, but I think many of us have seen it.

It is now true that politeness is actually more expensive than you think. Many of us, when we prompt ChatGPT or whatever we're using, say: could you please, dear ChatGPT, give me this and this and that? Thank you.

And now we hear from Sam Altman, who's the CEO of OpenAI, that people saying please and thank you to ChatGPT is actually costing OpenAI millions of dollars.

[Darren] (19:51 - 19:56)

Yep. Tens of millions of dollars. Are you someone who says thanks to your AI?

[Pinja] (19:56 - 20:03)

I don't say thanks, but I say please, and could you, would you? I address it as if it were a human being. How about you?

[Darren] (20:04 - 20:28)

I'm similar. Yeah. I'm polite to my AI.

And a lot of people were trying to frame this as a negative thing, but it also came out that Sam Altman from OpenAI said it was money well spent. And he had the joke: after all, you never know. So I'm trying to avoid Skynet coming for me.

I want to be the guy who survives because I always said thanks.

[Pinja] (20:29 - 20:45)

I think the AI-powered cyborgs of the future will have a memory of us saying thank you and please, and they might save us. And you never know; as Sam Altman said, it's money well spent. So I guess it's okay for us to keep on doing that.

[Darren] (20:45 - 21:03)

Yeah. What was it? Roko's Basilisk.

It's the idea that an AI would punish people who didn't bring about the AI overlord. It's weirdly becoming more of a day-to-day thing, so we need to keep an eye out for the Basilisk. That's an interesting rabbit hole if you feel like diving down it.

[Pinja] (21:03 - 21:06)

All right. Stay safe out there, I guess, is the final message.

[Darren] (21:07 - 21:11)

Yep. Stay safe and polite, even if it does cost ChatGPT tens of millions of dollars.

[Pinja] (21:12 - 21:12)

Yes, please.

[Darren] (21:12 - 21:18)

And that's all we have for you today. Thank you for joining us again on this news episode of the DevOps Sauna. Thank you, Pinja.

[Pinja] (21:18 - 21:19)

Thank you, Darren. It was fun.

[Darren] (21:19 - 21:27)

And we hope you join us next time. We'll now tell you a little bit about who we are.

[Pinja] (21:27 - 21:32)

I'm Pinja Kujala. I specialize in Agile and portfolio management topics at Eficode.

[Darren] (21:32 - 21:35)

I'm Darren Richardson, Security Consultant at Eficode.

[Pinja] (21:35 - 21:37)

Thanks for tuning in. We'll catch you next time.

[Darren] (21:37 - 21:43)

And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.
