AI vs humans? Why people still come first in the age of AI
AI adoption isn’t just about tools; it’s about people.
In this episode, Pinja Kujala is joined by Satu Kivioja-Ronkainen and Dmitry Tayya to explore what really shapes how AI works inside organizations. From fear and curiosity to unclear policies and uneven skill levels, the biggest challenges aren’t technical; they’re human.
We talk about why psychological safety, learning pace, and inclusion matter more than ever, and why the most successful AI strategies start with people, not technology.
[Satu] (0:03 - 0:10)
We need to understand that people are learning at their own pace, and everybody starts from zero at some point.
[Pinja] (0:12 - 1:24)
Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development. Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.
Hello and welcome back to the DevOps Sauna. If you are a frequent listener of ours, you know that we've been talking about AI a lot lately, and for a reason. It is something that has revolutionized the way we work.
It is changing everything about how we work, in such a short amount of time. But this time, our topic and perspective is not a technical one. We want to know what really happens, and what really shapes how AI works, in an organization.
So today we're going to be talking about human and organizational capabilities, not the tools. And I'm not going to be talking about this by myself; I have two of my lovely colleagues joining me today. First of them is Satu Kivioja-Ronkainen, who's a service designer.
Welcome, Satu.
[Satu] (1:25 - 1:26)
Hi, lovely to be here.
[Pinja] (1:27 - 1:33)
Hey, good to have you here. And my other colleague joining me today is Dmitry Tayya, who's an organizational coach. Hey, Dima.
[Dima] (1:33 - 1:35)
Hi, Pinja. Hi, Satu. Hi, everyone.
[Pinja] (1:35 - 2:06)
Hi. Hey, I'm so happy to have both of you joining me here today. There are many aspects to this and many conversations going on.
What is actually stopping organizations from implementing and adopting AI solutions? Because sure, it's "just a tool": I'm using air quotes.
But at the same time, do we need to think about how people and AI fit together? Dima and Satu, have you seen this problem with your customers? Is this perhaps not thought about so much in organizations?
[Dima] (2:07 - 2:50)
Well, definitely. Nowadays, people have strong feelings about AI. One part of the organization is frightened, people are a bit scared: hey, will it take over our jobs, will we actually still be relevant in the future?
However, there's a second group who are curious about the tool and the capabilities it can bring to the organization and to the teams themselves. So these two camps are quite different, and depending on the organization, you can find more people in the first group or in the second.
It 100% depends on the team and the organization itself.
[Satu] (2:50 - 3:08)
Yeah, and I want to add to that. I've also seen that people hesitate about what is actually allowed in the organization. The rules given by the organization: people don't understand them, or they haven't even heard of them.
So this is also a relevant point.
[Pinja] (3:08 - 4:00)
What I've seen with many customers in the past couple of years is, especially what you say, Dima, that there might be the pioneers who, in their own spare time, are studying the new tools. They might be tinkering a little bit on their own.
And at the same time, there are the people who fear they're going to be left behind. And if we take a typical organization, and now we're not talking about tech startups but about established organizations, which might even be in a regulated industry, take banks or healthcare for example, you're mixing those levels of knowledge and interest with, as you say, Satu, questions about what is allowed.
What is the guidance that we give? What is our risk-taking maturity and willingness when it comes to implementing new tools?
[Satu] (4:00 - 4:23)
Yeah, I had one discussion on exactly this topic just a few hours ago. And what I found was that the person was worried: she had heard that information from the company had slipped outside the company through AI, and she was afraid of that.
And that was one thing holding her back from using AI.
[Dima] (4:24 - 5:34)
Yeah, I couldn't agree more with you, Satu. And what I also heard from some of the customers is that they were quite cautious about how they can actually use the tool. Because sometimes, even though various AI systems or agentic AI tools are at their disposal and they could start using them straight away, they're not necessarily sure how to use them in the best or most optimal way, or what kind of information they can share.
So one of the tricks we have applied with my customers during these AI adoption, let's say AI acculturation, sessions is that we start every first meeting with a new group by letting them find their relevant AI policies on the intranet. Hopefully they have access to those and the policies are widely shared within the organization. And then we ask them to upload the document to an AI system and to communicate and interact with it, in order to understand what is allowed and what is not, what they can do, and what they should perhaps avoid.
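To make that exercise concrete, here is a minimal sketch of the "chat with your policy" step, in Python with the OpenAI SDK. The file name ai_policy.txt and the model name are illustrative assumptions, not anything named in the episode; any OpenAI-compatible endpoint would work the same way.

```python
from pathlib import Path

from openai import OpenAI

# Load the policy text that participants located on the intranet
# (file name is a hypothetical placeholder).
policy_text = Path("ai_policy.txt").read_text(encoding="utf-8")

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_policy(question: str) -> str:
    """Answer a question using only the policy document as context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, not from the episode
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer strictly based on the AI policy below. "
                    "If the policy does not cover something, say so.\n\n"
                    + policy_text
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


# The kinds of questions the workshop exercise is driving at:
for q in (
    "What am I allowed to do with AI tools at work?",
    "What kind of information should I never share with an AI system?",
):
    print(f"Q: {q}\nA: {ask_policy(q)}\n")
```

The design choice here is to pin the whole policy into the system message, so answers come from the document rather than from general knowledge; for a very long policy, a retrieval step over policy sections would serve the same purpose.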
[Satu] (5:35 - 5:50)
If I may comment on this, it's a great thing that you actually have them seek out the policies, because what I found in one customer case was that people didn't know whether they had a policy, or whether it was relevant to them, even though it was the company policy.
[Pinja] (5:51 - 6:54)
Yes. And I asked that question at an event some weeks ago, where we collaborated with the Finnish Women in Tech organization. We had a lovely afterwork panel discussion, and I asked the audience: do you have an AI policy?
And everybody raised their hand. Okay, let's twist it around: who does not have an AI policy?
One lady, really shakily, raised her hand and said: we don't. And she was afraid to admit it. Of course we said that, yes, it would be preferable if you did, but that is not always up to her.
And this is what we talk about when we go into an AI-first world, because this is not how we have designed the world so far. Yes, we have had policies before, so an AI policy in that sense is maybe not so out of the ordinary for the average person in an organization.
But because AI is changing so much about how we work, we need to figure out: do we want people to be for AI, or is it AI for people? Which way are we going to fit these two things together? And that's where all these questions come into play.
[Dima] (6:54 - 7:37)
In my opinion, AI is definitely created for people, to make our lives easier and to let us delegate those not necessarily most motivating or inspiring moments at work that all of us have. And I also think it's really important to align at some point within the organization on how we're going to use and utilize the tool in order to create new capabilities, or to achieve our strategy faster or with less effort: a more cost-effective approach where we generate value by investing less time, things like that.
[Pinja] (7:37 - 8:12)
Because we humans were basically not designed to be AI bots and to work in this way. If we really think about it, not that much has changed in the human brain in the past tens of thousands of years. So this is a big revolution for us.
There have been studies about how AI is being adopted, like any technology. So Dima, I understand that you've especially been looking into the AI adoption curve when you work with customers. What are the major groups of people and organizations that we see on the AI and technology adoption curve?
[Dima] (8:12 - 8:46)
Yeah. Surprisingly, the curve remains the same. There are the early adopters or innovators, who have been using AI for two, three, even five years, and they're so hands-on that they can teach others.
But then there is also the late majority, those who are a bit suspicious. They're not quite sure whether they should start using it, or how they should start, or they may be waiting for an invitation or some kind of training session led by experienced people, so that they're introduced to the tool and actually start using it.
[Satu] (8:47 - 8:56)
Yeah. And this is actually something you can compare to any tool that your organization introduces to you. Nobody adopts it immediately.
[Dima] (8:57 - 8:57)
Yeah.
[Pinja] (8:58 - 9:08)
Yeah. And the more complex and larger the organization, the harder it is, or maybe the more steps are necessary, to get it to stick.
[Dima] (9:09 - 10:04)
Exactly. What we usually recommend is to start with a strategy: what kind of capabilities would you as an organization like to develop within a six-month or one-year time frame? Then you break down this strategy and figure out where to start this AI journey, or adventure, with the people who are the early adopters, who are already curious and just need some support to start building something advanced with AI.
But what I also advise, or invite us to think about, is how to lower the barrier for the early-majority and late-majority groups and cohorts, so that they are included too. Because I think inclusion is the key, especially nowadays: we don't want people to be scared about AI. We want to give them a hand and help them climb this mountain together as an organization.
[Pinja] (10:04 - 10:32)
And now, if we zoom out a little to the organizational perspective, there's a question we always come back to: is it a bottom-up approach, where we let people figure it out first and then provide context for the organization, or a top-down approach, where it is mandated and governed from the get-go, from the very beginning? Have you seen either of those approaches, or a mixture of them, Satu or Dima?
[Satu] (10:32 - 11:09)
Yeah, well, I've seen it in real life both ways. Usually the management says: we want AI. We've all seen those funny memes.
The CEO says "we want AI" and the employees wonder: what do we do with that? And then the discussion goes from there. I think the CEO needs to say that we use AI, but the employees also need to study, learn, and find their own ways in which AI benefits their work and is useful to them.
[Dima] (11:10 - 12:11)
Couldn't agree more with Satu. Yeah, we need to prepare the organization for this adventure, considering the strategy, the governance model, how we're going to help our people align on what we're doing and what we're not doing, and risk management. And for sure, everything should be documented in a way that people can easily consume and understand what is expected and what is not.
Because I think this is the key. Well-informed managers fully understand that this deserves their full attention and hands-on involvement, and that they should prioritize it over other commitments, let's put it this way.
But at the same time, like Satu said, it's really important to onboard all the people, help them take those first baby steps towards AI adoption, and then simply support them along the way.
[Pinja] (12:11 - 13:12)
There was a study by the European Commission not that long ago about the AI implementation rate. The statistics in this report say that 19.95%, so we're not talking about 90%, but 19.95% of enterprises in the EU with 10 or more employees and self-employed persons use at least one AI tool. And we live in our own DevOps bubble; we work for an IT and DevOps consultancy.
From my bubble's perspective, this sounds very low, to be honest. The rate is a little higher in the Nordic countries, with Denmark at 42%, Finland at a little over 37%, and Sweden at 35%. So it sounds like the Nordic countries are a little more mature than the other European Union countries.
But does this number, only 19.95% of enterprises having adopted some kind of AI tooling, surprise you as well, Satu and Dima?
[Satu] (13:12 - 13:39)
Actually not, because as you said, we are living in an AI bubble here at our organization. When you think about the majority of people's jobs, they are doing manufacturing work, and work that doesn't require AI so actively. So I would reflect that the bubble is in our hands, not in theirs.
[Dima] (13:41 - 14:14)
Yeah, I would agree here. And when you open the report, you can find figure eight, about enterprises that have ever considered using any of the AI technologies. The reasons why they do not use them are actually fascinating: 71% stated that they have no relevant expertise with AI.
That's why they're not using it. And then some of them said: hey, AI technologies are not useful for our line of business.
[Satu] (14:15 - 14:48)
Yeah, I have one example of that. My physiotherapist told me that she was obligated by her organization, a big health organization, to use AI to make recordings when she meets her clients. But physiotherapists actually move around a lot, in different spaces.
So she was wondering how on earth she can record the discussion with the client, because the tool isn't relevant to how she works. It's not so easy to adopt AI in that line of work.
[Pinja] (14:49 - 15:03)
If you're not provided with the supporting technology and equipment, for example a mic on a shirt lapel or something, it's going to be harder to implement in a line of business like this.
[Satu] (15:03 - 15:10)
Yeah, the AI was developed for doctors who sit at their desks, not for physiotherapists.
[Dima] (15:10 - 15:48)
And I'd like to loop this back to the point about what kind of adoption model should be selected, bottom-up or top-down. In my opinion, both should happen at the same time, and be really well facilitated, because without the voice of practitioners like that physiotherapist, you'd never create a solution that serves its purpose. And we know from experience that people who create, or co-create, their own processes feel ownership.
Therefore, they're more likely to explore and improve those processes over time, so that you can build a better solution for both parties and achieve all the objectives you had in mind.
[Satu] (15:48 - 16:10)
Yeah, spot on. I totally agree, because in my work, my responsibility is to find what the actual problem in the organization is, and where we should create a new solution. And if we don't find that problem together with the organization and the employees, they won't adopt the solution, because it serves no purpose for them.
[Pinja] (16:11 - 17:04)
Exactly, and that's what we see with any kind of technology or process: if it doesn't work, we use workarounds and we don't go by it. It creates dissatisfaction. It is not a good experience for the people.
So again, as you say, Dima, I really want to highlight the hybrid approach, having both top-down and bottom-up. One thing I would like to discuss with the two of you is what happens in an organization when the speed of creating software increases. Some even say that productivity increases, but that is, I would say, debatable.
More lines of code at greater speed doesn't mean more productivity. But what do we do with this faster coding? Is there a way for an organization to handle it?
And what needs to be taken into consideration when this faster code creation is going on?
[Satu] (17:05 - 17:31)
Well, one thing we need to take into consideration is the people. There is a limit to how much a person can handle during a day. If people speed up their work several times over, or by a hundred percent, our brains don't keep up with that pace.
And that's going to become a problem a little later, because we end up overwhelmed and totally fatigued.
[Dima] (17:32 - 18:29)
Yeah. And if I may complement that: we also should not sub-optimize things. When you think about faster coding, it's just one part of the workflow, or the software development lifecycle.
What I mean by lifecycle is everything from strategy to idea management, and then down to product management, implementation, go-to-market activities, all the legal aspects, pre-sales, sales, and making our customer support representatives ready for the new features or products we are about to ship to our customers. So while we're improving just one thing, we create bottlenecks, or we shift those bottlenecks to other parts of the organization. And back to the earlier point: we make our people more and more dissatisfied, in that they feel overwhelmed, because they now have to adopt yet another new thing, and the pace just keeps rising exponentially.
[Pinja] (18:29 - 18:49)
I agree with that. And let's use this as a segue to the people's perspective, because we need to think about the people as well. With the organization we zoomed out, and now we're zooming in to the actual personal level.
Has how we learn changed recently? This is a loaded question, I think.
[Dima] (18:49 - 20:18)
I'd say that the learning process itself remains the same. However, the expectations have risen enormously. What we usually hear from our customers is that they want us to help their people adopt new ways of working with AI in a three-hour workshop.
In my opinion, that's absolutely impossible. In my practice, I've iteratively figured out that roughly 40 hours is enough to start experimenting, to bring some evidence that AI works as expected, and to see that it actually shifts the metrics selected by the people working in that subject area.
So if we want to change behavior in an ethical way, I mean, if we want people to start thinking like AI-first engineers, AI-first specialists or talents, we want them to first consider: okay, I should try this out and see how AI performs instead of doing it manually. That applies especially, as I said, to those parts of the process where they don't necessarily like the routine they follow, or where they don't actively think about the process or the outcomes and just do things automatically. This change takes time, but when you support your people and guide them towards that vision of perfection, they pick it up and never go back to the old ways of working, let's call them that.
[Pinja] (20:18 - 20:37)
Satu, maybe a question for you: how do we start to learn something new? What is the starting point? In service design you look at the process, and with an organization you look at those structures, but how do we fit in the person? How do we take the person into the loop here?
[Satu] (20:37 - 20:40)
Good question. I need to think about this a bit.
[Pinja] (20:40 - 20:42)
Usually we start with the familiar. Is that true?
[Satu] (20:42 - 21:09)
Yeah, yeah. Familiarizing ourselves with it, and of course trying it ourselves, AI for example. But there are people who don't want to try it by themselves; they need pre-made use cases from which they can start learning to use AI.
We need to help those people. Those are the ones we need to take on board in this AI journey.
[Pinja] (21:09 - 21:34)
Could there be something in the background for these people? It was mentioned earlier in this conversation that there is the fear of becoming obsolete and redundant. We have seen the statistics and read the news: yes, there have been many layoffs in IT across the whole world in the past couple of years.
That has been attributed to AI, for example. How can we tackle this side of it? Any thoughts on that?
[Dima] (21:34 - 22:49)
I'd say that fear kills the innovation process and active thinking, and whenever a person experiences emotions like this, they start repeating the familiar strategies they have acquired throughout their life. If we want to experiment, there's no doubt that we have to create an atmosphere where people feel psychologically safe. This must be communicated from top management down to the team leader level, because we want people to understand that our competitors, their competitors, do not sleep at all.
They're also experimenting with AI. They also would like to disrupt the market. They want to outsmart us, et cetera.
Therefore, we need these talents to keep working in our organizations, and to help the organization strengthen its market position and bring the most responsible solutions to our customers. Isn't that right? I don't know.
But I think that we still need engineers. We still need designers. We still need product managers.
All the roles that already exist in every organization are still valid and will be valid in my opinion.
[Satu] (22:49 - 23:23)
Yes, that's true. We can't forget that we are still people, not AI, and we have different styles of learning. And looping back to the original question: we need to understand that people are learning at their own pace, and everybody starts from zero at some point, on different topics. People shouldn't be afraid of becoming obsolete; jumping in, testing, and trying things out trial-and-error style with AI, that's the way into AI.
[Pinja] (23:23 - 24:25)
And I think one of the key words here is psychological safety: a safe space to do the experimentation, and the understanding that we're not going to be experts on day one, or even day two. Because now that AI has been brought into the workflow, we're expected to be on top of our game. There was a study from Qualtrics which said that managers are now actually expecting more from workers. And since we can create materials faster and create code faster, the expectation is higher.
So the speed has increased quite significantly. What are the human factors that create this divide between what management expects and the people actually doing the work? Is there something from an organizational coaching or service design perspective that we could highlight about this divide that has now been created?
[Dima] (24:26 - 24:47)
I think that no one is perfect with AI at this moment in time. Yes, for sure, there are early adopters or pioneers, but the majority are still not even practitioners.
So they're trying things, as Satu said, and they're not necessarily trying to build something at an enterprise level.
[Satu] (24:47 - 25:06)
I see that we are actually in a hurry to get something out of AI. And one thing that we haven't covered is the value: we need to find where the value is in what we build. That's something the CEOs should also understand.
[Pinja] (25:07 - 25:51)
And that's personal for the C-suite, too: the C-suite is also nervous on a personal level. It is not just an organizational thing; I feel there is a nervousness, because there is a really strong fear of missing out going on. We see where the frontrunners are going.
These are the really early adopters, the frontier people, and that's what LinkedIn is all about right now: what they can do. This is technically possible, but at the same time, we know that it's going to be slower for us.
And people, especially management, feel that, oh, we're going to be left behind if we don't do that. And then, as you said, they need to be able to show the return on investment: this is why we have these tools.
[Satu] (25:51 - 26:00)
That's why we need to seek out the points where the value is actually built. Dima, you've been doing these studies with customers, haven't you?
[Dima] (26:00 - 26:55)
Yeah. And preferably this should be done with quite a diverse group of people, representing higher management, the team leadership level, and the engineer and specialist level as well. The value points should be defined and designed by that diverse group, to ensure that we take the strategic level into consideration, that the structure is optimized for learning, and that the processes allow us to experiment.
People should use state-of-the-art practices and try to improve their own ways of working, and the reward system should be aligned with the goals I've just mentioned. Because when all these components are aligned and reinforce each other, the organization is at its most effective. If one piece is missing, then we cannot expect any positive outcomes or impact from this.
[Pinja] (26:56 - 27:23)
And at the very beginning, we talked about what comes first, or which one is for which. Is AI for people, or people for AI? If we summarize this whole conversation, how do we make sure that it is indeed AI for people and not the other way around?
What would be the starting points for an organization to, for example, facilitate this dialogue between top management and the people doing the work? Where should they start?
[Satu] (27:24 - 27:41)
I would definitely flag discussing with the employees, and within the organization, what would actually benefit them and what the gaps or pain points in their work are. That's the starting point from my perspective.
[Dima] (27:41 - 28:27)
And as AI has no pain points, but people do, we should start with people and understand what bothers them the most right now in their teams, units, departments, and organization. What should be immediately improved, and where can AI actually help? I think Satu and I are quite aligned here: it's really important to start with small baby steps, but thinking about people first. How can we improve their flow state, or how can we reduce the number of errors and mistakes happening, with AI?
And the more you learn and the more you experiment, the better the solutions you'll build. Then eventually we are no longer mere participants in all this; we are practitioners with solid experience in our tool belt.
[Pinja] (28:28 - 28:42)
I like those tips: be practical and talk to your people. Hey, I think that's all the time we have for this topic.
I've had such a lovely time talking to you, Dima and Satu. Thank you so much for joining me today.
[Satu] (28:40 - 28:43)
Thank you. It was lovely to be here.
[Dima] (28:43 - 28:43)
Thank you so much.
[Pinja] (28:44 - 28:57)
And thank you everybody for tuning in and we'll see you in the sauna next time.
We'll now give our guests a chance to introduce themselves and tell you a little bit about who we are.
[Dima] (28:57 - 29:06)
Hello, my name is Dmitry Tayya. I'm an organizational designer and coach at Eficode, helping people and teams to set up their state-of-the-art ways of cooperation.
[Satu] (29:06 - 29:16)
Hi, I'm Satu Kivioja. I'm a service designer, and my role is to help organizations to look for their pain points and create real solutions for those.
[Pinja] (29:16 - 29:32)
I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode. Thanks for tuning in.
We'll catch you next time. And remember, if you like what you hear, please like, rate and subscribe on your favorite podcast platform. It means the world to us.