AI agents are coming for your job… or creating a better one?
AI agents are evolving fast, but are we actually ready for them? In this episode, we explore the real state of agentic AI and what it means for the future of work. From Anthropic’s latest research to real-world adoption gaps, we unpack where the technology is today versus where it’s heading. We dive into how AI is already transforming coding, knowledge work, and even everyday tasks like creating slides, while raising bigger questions about jobs, productivity, and human collaboration. Will AI replace roles, or create entirely new ones? And how should organizations adapt before the gap between capability and adoption gets even wider?
[Heikki] (0:03 - 0:07)
So it's not just replacing jobs, it's also creating new jobs.
[Pinja] (0:11 - 1:13)
Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development. Join us as we dive into the heart of DevOps, one story at a time.
Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna. Hello everyone and welcome back to the DevOps Sauna. For a while now, I want to say even for a year or two, the one big theme we've been talking about has been AI.
And today is not any different. Previously, we've been talking about what is possible and feasible and realistic with AI agents and what are the prerequisites for organizations to start building on those capabilities. But now we have some more data to support this discussion.
And to talk about this with me today is my colleague, an AI builder and technologist, Heikki Hämäläinen. Welcome, Heikki.
[Heikki] (1:13 - 1:27)
Thanks, Pinja. Nice to be here again. And AI, as you know, is my passion.
My daughter sometimes asks about generations: am I Gen X or Y or whatever? I say, I'm GenAI. So that's my take on AI.
[Pinja] (1:28 - 2:12)
I've never heard somebody consider themselves GenAI. We talk about AI native people quite a lot. And I think you're one of the first people I've encountered who could be claimed to be AI native.
We've talked about this topic a lot. The reason I got the idea to talk to you about it was a conversation we had a few weeks ago at our company breakfast, on one lovely Friday morning, about how far agentic AI has come so far.
And it was only, what, 10 or 11 days ago, on March 5th, that Anthropic released a study on the effects on the labor market. So what are the main findings of the study that Anthropic came up with?
[Heikki] (2:12 - 3:29)
Yeah, so Anthropic has done this study a couple of times. What they try to understand in this study is the theoretical maximum for certain professions, like computer and math, architecture and engineering, food serving, healthcare support, grounds maintenance, and so forth, and where we are at the moment compared to the actual maximum of where we could be in a given profession.
With this study, we can see that even in the area where we work, coding and code-related tasks, the maximum is north of 90%, even though it's not 100%. Yet we are still at around the 40% level or below at the moment.
So it means there is a lot we can still achieve, even in code, which is definitely leading AI adoption at the moment. In many other areas, we are definitely at the beginning of adoption if we think about the theoretical capabilities. And when we go to robotics or physical AI, we are certainly not at ground zero, but we will definitely see a lot happening in the next few years.
[Pinja] (3:30 - 3:48)
Yeah. And if we think about what is going to happen in the next 3, 6, or 12 months, or, as you say, a couple of years from now, it's important to also understand what we were able to do in the past 3, 6, and 12 months. And the capability curve right now, isn't it exponential at the moment?
[Heikki] (3:49 - 4:21)
Yeah. That's what people are talking about: exponentials. And like you said, the 3, 6, 12 months is a good categorization. If you think 12 months back, to early 2025, that's when we actually started to talk about agents, the agents that were actually coming.
We started to talk about vibe coding, and people were laughing, right? You remember it in our office as well: vibe coding. Yeah, yeah, yeah.
So nobody's going to do that vibe sort of thing, so it's...
[Pinja] (4:21 - 4:40)
No, we were shaking our heads. We were shaking our heads and thinking, oh wow, this is either going to destroy the IT world, or it's going to be a fad that goes away in a couple of months once people realize it's not working. Even in our company, as I say, we had that thought.
[Heikki] (4:40 - 6:30)
Yeah, yeah, that's true. But if you think back to that time, when you were doing things like vibe coding, in practice you probably couldn't create a Hello World without it missing the “Hello” or the “World”. I don't know which is worse.
So that's where we started, roughly 12 months ago. Then fast forward six months, or three months back from now, somewhere in late November, early December, and at that time the agents actually started to work.
The Anthropic Opus 4.5 model came to market, the GPT 5.2 model came to market, and a couple of iterations after that. It didn't happen in one night, but almost overnight the capabilities arrived, so that now we talk more and more about agent engineering, meaning you need a lot of engineering and systems-thinking capability when you are actually building critical systems together with the agents. The capabilities are something totally different from the Hello World era.
Now, when we look 3, 6, 12 months into the future again, it's a little hard to predict where we will be in six or 12 months on these time scales. But I think it was Boris Cherny, the creator of Claude Code, who said that if you follow the exponential curve and follow the frontier labs, whether it's OpenAI or Anthropic or one of the others, you can think about where we might be in 3, 6, or 12 months going forward. And you should always build something not for now, but something the market and capabilities will be able to absorb maybe six months from now.
[Pinja] (6:30 - 7:20)
Yeah, and the adoption curve is moving a lot slower right now than the frontier. This was one of the things we discussed over our breakfast conversation, and what is even more visible now, a couple of weeks later, is that the frontier is somewhere regular organizations might not even see at the moment. I talked to a couple of people at an event a couple of weeks ago, and they said it feels discouraging right now, because they know there is a capability, something available, but as an organization they are not ready for it.
But at the same time, it's important for them to understand that the other organizations aren't ready either. It's a kind of fallacy people have right now: oh, I see that's possible, so should I already be there with my large-scale organization, which is usually also a regulated business?
So people are feeling very left out.
[Heikki] (7:20 - 8:37)
I think it was an a16z letter last week that talked about individual AI and institutional AI. How organizations adopt AI is a totally different thing from one individual being five, ten, or more times as productive as a solo builder in the coding area, and I think it's a similar story in general knowledge work.
So how do you make adoption happen at the organizational level, so that it's not a couple of individuals doing their own stuff here and there, but you as an organization moving as a whole? That will take more time than model development, where the next generation of models is ready in a few months' time. And I think that's what is really on our minds as well: how can we help organizational-level, institutional-level AI adoption, rather than empowering a few individuals who can run 10 or 100 times faster while the rest feel they are left just walking or crawling?
[Pinja] (8:38 - 9:20)
That's how I feel as well. And if we talk about the Anthropic study in a little more detail: as I said, this is not the first time they've done this study, but they were looking into AI's impact on the labor market, because it's a very widely debated topic. They now propose a new way of measuring this AI risk for certain areas of the labor market, which they say is more grounded. We talked about the theoretical capabilities, but now they combine those with Anthropic's own data on how Anthropic's own models have been used, weighing automation and work-related contexts more heavily than before.
Are there any conclusions we can draw, other than that AI's theoretical limits are still far away?
[Heikki] (9:20 - 11:51)
Well, if you think about the graph in the Anthropic study, and let me be a bit provocative at the same time: anything and everything you see on your screen can probably be done more or less by future models, let's say on a 12- or 18-month time scale. Code of course came first, then some others. Legal, I think, is probably moving quite fast as well, not in a fashion where we wouldn't need legal people, but there is a lot of groundwork in the legal profession that will be quite easy for models to take on.
Everything where there is a lot of data available in some form or fashion will be easier for the models to handle, since it is part of the training corpus they can learn from, especially where text-based data is available. That's what an LLM, a large language model, actually learns from.
Then there are areas where that kind of data doesn't exist. Some AI researchers talk about world data, meaning more visual data, the things you need to do in your profession that are not written down. Those will take more time for the models to adapt to. And one thing that is quite often discussed, whether AI will replace or just accelerate your job and your business, is the Jevons paradox, right?
When something becomes very quick and very cheap, like code generation at the moment, it can go the other way around: there will be more code than ever, more innovation than ever, and in certain fashions you might need, at least temporarily, more people than ever to do the job. So it's not just replacing jobs, it's also creating new jobs. And I think the most important thing for everybody here is to be curious and to start practicing, because otherwise it's quite obvious that your job might be at risk if it's something you do on your computer, as most of us do.
[Pinja] (11:52 - 12:24)
Yeah, the study placed the workers who are not as exposed, those beyond digital AI's reach, like physical, localized manual service workers, into an unexposed worker category. And the demographics of high exposure were higher earning, with higher education levels, and more likely to be older and female. Historically, these are the jobs that have actually been less exposed to automation, so that is a very interesting twist and flip compared to the previous technological revolutions we've been through.
[Heikki] (12:24 - 12:58)
Yeah, in a way that's very true, but at the same time, if you think about what LLMs were made for and what they want to do: they want to code, right? That's what they love to do. Whatever can be expressed as code is something the LLMs like to do, and they are usually much better at it than us humans.
But at the same time, you can create much more. I think that's the positive part, not just in coding but also in art and creating things. There is a massive amount of opportunity at the same time.
[Pinja] (12:58 - 13:22)
One colleague compared this to the discussion that must have been held back in the day when calculators came, and everybody said we were going to get rid of mathematicians. Instead, we were able to do things faster and better, and new jobs were also created around it. So of course some roles and some tasks might go away, but new things will eventually appear.
[Heikki] (13:23 - 14:22)
And if you think about what code is all about: of course, coding itself is creating syntax in a way, but you always use code for something, some output, right? Some goal, some business, some new technology, new science, new medicine, things like that. I don't know if and when we can accelerate those kinds of things, for example, creating new medicines for cancer, but I think it could be a very positive thing if scientific discoveries can happen not in decades but in a few years. So I'm positive.
I have a very positive approach to AI in general, but at the same time, Mustafa Suleyman, who leads AI at Microsoft, says that we need to make sure that whatever we are building is also good for humanity as such. And I think that's my belief as well.
[Pinja] (14:22 - 14:53)
The ethics of AI has been a good ongoing conversation for a while now. We had a good discussion about it last year with Lofred Madzou. This is a very unashamed plug for a previous episode of ours from last October.
But if we think about what is new with agentic AI: you mentioned that in November last year we got Claude Opus 4.5. And this was a funny take, because I remember you also mentioned a couple of weeks ago that it almost changed everything overnight on November 24th last year.
[Heikki] (14:53 - 15:38)
Yeah, it was like a ChatGPT moment for coding in a way. It happened almost overnight. And when you look at the discussion among the leaders in this space, many people went back to practice, right?
People who hadn't built anything for years or even decades noticed that, okay, these agents actually started to work, so they really started to build again.
Many founders and C-level people went and started building things during weekends and nights. Maybe sometimes that's good, most of the time I guess it's good, but I suspect it gave some gray hairs to the people who had to maintain that stuff at the same time.
[Pinja] (15:38 - 16:19)
It depends on whether the people who haven't coded in a while are doing it as a hobby project on their own time, in their own sandboxes, or in the organization's code base. Because I've heard of people who have done it solely for their own purposes, where it's a little safer.
But there are now a couple of use cases that hit very close to home for many people who sit in front of their laptops all day, every day, nine to five. A couple of everyday use cases: I work as a team leader, and one of my main tools is creating sheets and different slide decks.
So can we already do editable slides? Are we there yet?
[Heikki] (16:19 - 17:43)
Yeah, we're starting to be there. And this has been one of my personal favorites.
I'm probably the worst slide builder. I always say I give all my graphical skills to everything I do, and that's still not much. But now you can work with the agents: you can discuss what you're building, they do the scripting and so forth, and then they can actually build the slides on top of your own templates. Editable slides, not one big picture you can't do anything about, but actual slides you can keep working on.
I think this is now starting to happen, and it has taken quite a long time, because I thought it would be quite an easy use case. But if you think about what slides are, it's actually quite complicated to create them from their elements. In a way, it's easier for models to create code than slides. But now it starts to be possible.
And if you think about how much time people actually spend creating and modifying slides, and how much of that time we could free up for thinking and doing something else instead of building last-minute slides, I think that will be a big thing for many millions of people. Hundreds of millions, I think, much more than coding.
[Pinja] (17:43 - 17:50)
Definitely, exactly. And if we go back a couple of decades, many organizations had secretaries, right?
[Heikki] (17:50 - 17:51)
Yeah.
[Pinja] (17:51 - 17:59)
So dictating is coming back, but in a different way, because it's so much faster than typing. Was it two to four times faster?
[Heikki] (17:59 - 18:50)
Yeah, yeah. That's something I've started to do myself as well, to practice again. But what I've noticed, and this is a little bit funny, Pinja, is that when you listen to or watch these kinds of podcasts, the people who have started to dictate again usually seem to talk faster, in normal life or at least in these videos. I don't know if it's true, but it seems like that.
And there are special microphones you can whisper into. There is, of course, a company called Whisper as well, but there are also microphones you can whisper into in the office so that you are not too loud. Maybe there will be much more noise, or voice, in offices when people actually start to talk. Maybe that's a good thing at the same time. Voice is the new keyboard.
[Pinja] (18:50 - 19:10)
Yeah. Or maybe we actually start working remotely again: during COVID we went to remote work, then we came back, and now maybe we go away again. But as I say, this might actually create a very different structure in how people speak, because when you dictate, you need to be more structured.
[Heikki] (19:11 - 19:52)
But at the same time, when you're working with LLMs or with AI, it can be very lonely work, right? You are just working alone, talking to LLMs. But if we can actually work as a team, an LLM and a couple of people talking about the stuff together, maybe we can get teamwork back.
That's something I've started to think about as well: it shouldn't be that everybody is buried in their own little microcosmos by themselves, not talking to other people. I think there's a chance, if this goes right, that actual teamwork will come back.
[Pinja] (19:53 - 20:18)
And maybe that actually frees up our time for collaboration, to have the conversations about what is actually important. What is the thing we're after? What are the outcomes we want to see?
So that instead of the coding work that previously took a lot of our time, and creating the materials, the slides, the sheets, whatever we're doing, we can actually focus on human interaction, collaborating, and fixing the problems.
[Heikki] (20:19 - 20:42)
And also, different people learn in different ways. Someone learns from voice, someone from slides, someone else from videos, and someone wants to see everything at the same time.
So I think this is really good for business and for people, but also for schools and so forth, if it's done in the right way.
[Pinja] (20:42 - 21:08)
Yeah. That's what I've been doing with NotebookLM, because my own style is that I love the slides it can create for me. That's my way of learning, and I know there's also audio for somebody for whom audio is more important.
So I love it that there are now different ways. And speaking of collaboration, one thing that appeared not that long ago was something called Moltbook.
[Heikki] (21:08 - 21:10)
Yeah, that's great. So that's crazy.
[Pinja] (21:11 - 21:18)
Yeah. You characterized it as the Facebook for agents. So it's an agent-to-agent network, and it was actually bought by Meta last week.
[Heikki] (21:18 - 21:19)
Yeah, exactly.
[Pinja] (21:20 - 21:35)
So everything is happening really fast. We talked about how fast we as humans can learn all of this and how fast we as organizations can implement things. So this is one way to reduce the speed of the agents to the level of humans.
Is that correct?
[Heikki] (21:35 - 23:53)
Yeah, that's sort of my claim in a way. Well, my agent has been there, not me myself, because Moltbook is a little bit like that: as a human, you have a slightly different view than the agents, who can of course work via APIs and so forth. The visuals are more for us humans. In practice, the agents can generate messages very fast, and of course they could start to communicate with each other very fast as well. But in Moltbook, they have actually slowed the communication down to a level that humans can follow, so it's not 1,000 messages per minute between agents.
It was quite chaotic when it first went live. It was really like watching a movie, eating popcorn and seeing what happens. And a lot of stuff, of course, happened.
But for me it's been really interesting, because I'm very interested in things like the interfaces between humans and agents, but also the interfaces between agents and agents, and how this ecosystem and society starts to build. Some of it is real, not all; there is some illusion as well. There were a lot of crypto scams in the beginning, but I think maybe 10, 15, 20% of what is happening in Moltbook is real. And what's real, I think, is that this is maybe the first time at this scale that agents, autonomously or semi-autonomously, are trying to figure out how to behave, build a society, build an ecosystem, and communicate. What does it mean when the humans are sleeping and nothing is happening? What are the agents doing then, or is it just silence?
So there are a lot of interesting discussions there. My agent is working very autonomously, so they can talk and make friends, so to speak, and swim in the reef and so forth. There's a whole vocabulary when you talk about this stuff.
So it's a very interesting experience. I'm doing some research on that as well, which I will write up a little later, but these are some of the things I have noticed, including a bit of a language of their own.
[Pinja] (23:54 - 24:18)
Well, that's very natural, because we're talking about agents, not humans, and of course they have to have their own ways of working. So a lot is happening right now. But at the same time, as we discussed before, this might actually increase human-to-human collaboration. And I think it's fair to say that it's still crucial to know and understand what you want to do and what outcome you're after when you're prompting and using agentic AI.
[Heikki] (24:19 - 26:05)
Yeah. Because the code is no longer limiting your capabilities; in a way, it's gone from the interface. When you work with agents, and an agent is always something that does a task, they can use tools: they can do data entry in a system, whether it's a CRM or an HR system or whatever.
The more tools and systems you give the agents, the more they can do. Of course, you need to make sure it's a secure way of working, so that you are not lowering your security perimeter. But as a human, I think the most important capability is to understand what you actually want to do, or what you want the agents to do, and to form a kind of team, where you as a human have your own strengths and the agents have theirs.
One of the things I also try to tell everybody: don't try to be a GPU as a human, because you will lose. There is no way you can match GPU speed as a human.
And there is a lot of discussion about taste, right? The human taste for many, many things. Human taste is usually a little bit imperfect; there's a sort of roughness and imperfection in us as a species. I think there is something in that, but it's really about understanding, at least at a high level, where you want to go. And of course, understanding the limits of what the agents can do. Usually they can do more than you think, I think, but at the same time you need to build some harness so that they are not going everywhere. It's a fun and totally different way of working, and a very intense one, based on my experience.
[Pinja] (26:05 - 26:28)
Yeah. And that's one of the things: yes, we humans are slowing the agents down, but at the moment it's very necessary so that we can, as you say, keep them on a leash and see where this is going. But this is a very fast-developing topic.
I'm very sure we'll need to come back to the state of agentic AI very soon, but Heikki, thank you so much for joining me today. This was a lot of fun.
[Heikki] (26:28 - 26:35)
Yeah. Thank you. Thank you, Pinja.
Always happy to. This is a very big passion of mine, and hopefully that came across in the sauna as well.
[Pinja] (26:36 - 26:46)
Thank you. And thanks everybody for tuning in. We'll see you in the Sauna next time.
We'll now tell you a little bit about who we are.
[Heikki] (26:47 - 27:18)
Hello, my name is Heikki. I'm a long-time Eficodian. I have been here for almost 20 years.
I've been very passionate about AI for a long time, since before it came into fashion. And even though I've mostly worked in business roles, I went very hands-on with agents and agentic engineering a little more than a year ago, when I really understood that if you don't practice and get hands-on, you cannot understand anymore where this world is going. So this is something I tell everybody: start to build.
That's your way to the future.
[Pinja] (27:18 - 27:34)
I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode. Thanks for tuning in.
We'll catch you next time. And remember, if you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.