What does a DevOps transformation look like from a practitioner’s perspective? Hear the experience of Henri from OP, one of the largest financial companies in Finland.

Lauri (00:05):

Hello, and welcome to DevOps Sauna. My name is Lauri, and I am the chief marketing officer of Eficode. When we hear and read about the benefits of DevOps, we often learn that DevOps accelerates software delivery and that this in turn drives business performance. We also learn that the best DevOps performers are more than twice as likely to achieve or exceed their objectives in profitability, productivity, and customer satisfaction. But what does a DevOps transformation look like from a practitioner's perspective? We had an opportunity to spend some time with Henri from OP, a customer of Eficode and one of the largest financial companies in Finland. Henri has firsthand experience of taking a large organization on a change management journey, and he's happy to share that with us. Let's tune right in and listen to his experiences. Good. So here we are, Henri Helakari from OP in Finland. Welcome. Glad to have you on our show.

Henri (01:14):

Thanks Lauri.

Lauri (01:15):

Today, we talk about a DevOps transformation. And before I let you define what a DevOps transformation is, why don't we give you a forum to briefly introduce yourself?

Henri (01:30):

Okay. First of all, Henri Helakari from OP. I've been working at OP for roughly three years now. My history is in the telecommunications business. I worked with the giants: Nokia and Ericsson are the telco houses I've worked for. And from there I moved to the banking sector, and have now been here three years. My responsibilities are engineering and DevOps, basically heading the technical transformation here at OP, the technical part of the entire OP enterprise agile transformation.

Henri (02:23):

DevOps: how do we work? How do we work from the technical point of view, the ways of working, the practices there, and also how do we combine development and operations? Breaking the silos, of course, as everybody knows. And then the engineering part: basically modernizing the way we do software, taking the cloud into the picture, and the ways of working we have on the software development side as we move more towards DevOps. That's the responsibility area. And then you asked, how do I see DevOps? If I put it into one sentence, increasing software delivery performance is my interpretation of it. Behind the scenes, of course, there are a lot of things that go into how we achieve that.

Lauri (03:31):

What caused you to start what was called a DevOps transformation at OP? What was the starting point for you to come to, let's say, a program decision that we need to do something programmatically called a DevOps transformation? Was there some underlying business problem that you thought this was specific to, something you wanted to solve?

Henri (04:01):

Yeah, there were actually several things that led us to the point of initiating that transformation. From my point of view, it was not a single thing, but several things. I think at OP, people had talked about DevOps roughly since 2016 already, before I joined here. My journey starts from 2017. We talked about DevOps during 2017, then we basically started some actions in 2018. And at the end of 2018, there was an initiative, and the whole OP enterprise agile transformation started. It kicked in at the beginning of 2019. So from that point of view, with the DevOps transformation we were in the correct place at the correct time, because the wider transformation, beyond DevOps, was initiated in early 2019.

Henri (05:27):

But of course there were also technical things that led us to start this transformation. Development and operations were really siloed, as is pretty usual. They were separated at an organizational level, and basically developers didn't know what was happening in operations and vice versa. So it was evident and clear that there was something to be improved there. Then, as I mentioned, we witnessed quite slow software delivery performance. If we treat that as a KPI, we were not, I would say, in good shape. The same goes for the developer experience: from the tooling point of view, how smooth your everyday work is as a developer. Those things were basically initiating the change from the tech point of view.

Henri (06:41):

Then if we shift to the non-tech side of things, we basically had slow learning and experimentation capabilities, as we were not that fast. So you are not able to learn from your customers or experiment together with your customers, to get the feedback loop in place. And if we go back to what the actual business problem was, at least from my point of view, it was that we wanted a better customer experience, and the problem statement related to that. We're talking about learning and experimentation, doing things together with the customers. And that then requires faster reaction and adaptation capabilities from us as an organization.

Lauri (07:49):

Interestingly, not too long ago, I talked to the CTO of WHIM, you perhaps know him, and we discussed agile and DevOps. I remember him saying that somewhere in his past, there was a situation where the team was falling behind their expected release cycles and they were not able to keep up with expectations, and then suddenly things changed. And when they had to explain the positive change to their management, that's when they revealed: actually, we just adopted some new methodologies, we just hadn't told you. Did you have similar experiences with individual teams or pockets in the organization where somebody had begun adopting DevOps or agile practices before you embraced this bigger transformation project?

Henri (08:53):

Yeah, of course we had some individual teams or pockets in the organization that had adopted these ways of working, and we actually used those as examples. We learned from them how they were doing things, and brought those stories forward: how things were going a bit better in those teams and pockets of the organization where they were doing things in a bit different way. And then, as we started to initiate the transformation, we piloted certain things and tried to increase the automation capabilities and methodologies. We piloted those in certain individual teams, and then brought those experiments to a bigger audience, to show the sorts of benefits that can be achieved by adopting these things and putting something new in place. It was crucial at the beginning to get those first small wins in individual teams; that built trust with the larger audience that, "Okay, this is the way to go."

Lauri (10:32):

You already referred to some of these objectives, or the problems and objectives: improving the release speed, improving the developer experience, improving learning and experimentation, the feedback loop, and the customer experience. How did you go about selecting the specific KPIs? Which KPIs were important for you? And then the follow-up question is, once you had specified those KPIs, how did you collect the information?

Henri (11:10):

Of course we thought a lot about the KPIs: what are the proper measures and KPIs, leading indicators, lagging indicators, that sort of thing. It was a thorough process of thinking and investigating what the proper measures are for indicating technical agility and those things. But we actually ended up selecting the DORA metrics, from Google's DevOps Research and Assessment team, which probably a big part of enterprises are using: the lead times, the deployment frequencies, and then of course quality is emphasized as well with MTTR, which is the mean time to restore, and the change failure rate. Those are the metrics and KPIs we selected. So it's about gaining speed, of course, but then you're not able to compromise on quality. Those need to be balanced.

Henri (12:30):

It's a holistic view that tells us what our software delivery performance is. Are we able to deliver? And in case we are able to deliver, are we able to do that in a sustainable way? What I usually say here in our corporation is that we should do sustainable development. Just as we discuss climate change on a global scale, software development should be done in a sustainable way as well. Then of course behind those metrics there are lower abstraction levels, so at a team level we have commit counts and measures like that, but the ones I mentioned earlier are organizational metrics, indicating how we are performing as an organization.
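As an aside for readers: the four DORA metrics Henri mentions can be summarized from a plain deployment log. The sketch below is a minimal illustration of those definitions, not OP's actual tooling; the record field names (`committed`, `deployed`, `failed`, `restored`) are assumptions made for the example.

```python
from datetime import datetime, timedelta

def dora_metrics(deployments, period_days=30):
    """Summarize the four DORA metrics from a list of deployment records.

    Each record is a dict (field names are illustrative, not a real schema):
      'committed'/'deployed' - datetimes, used for lead time,
      'failed'               - bool, used for change failure rate,
      'restored'             - datetime or None, when a failed change was fixed.
    """
    n = len(deployments)
    lead_times = sorted(d["deployed"] - d["committed"] for d in deployments)
    failed = [d for d in deployments if d["failed"]]
    restores = [d["restored"] - d["deployed"] for d in failed if d["restored"]]
    return {
        "deployment_frequency_per_day": n / period_days,
        "median_lead_time_hours": lead_times[n // 2].total_seconds() / 3600,
        "change_failure_rate": len(failed) / n,
        "mean_time_to_restore_hours": (
            sum(restores, timedelta()).total_seconds() / 3600 / len(restores)
            if restores else None
        ),
    }

# Example: two deployments in a 30-day window, one failed and restored 2 h later.
sample = [
    {"committed": datetime(2020, 1, 1, 0), "deployed": datetime(2020, 1, 1, 12),
     "failed": False, "restored": None},
    {"committed": datetime(2020, 1, 2, 0), "deployed": datetime(2020, 1, 3, 0),
     "failed": True, "restored": datetime(2020, 1, 3, 2)},
]
metrics = dora_metrics(sample)
```

The point of the sketch is Henri's balance: two of the outputs measure speed, two measure quality, so neither can be gamed in isolation.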

Lauri (13:38):

The DORA metrics are particularly interesting for me, as I have recently focused on a net present value calculator, not only for DevOps, but specifically for a DevOps platform. And it's interesting to see, when you really have to sit down and figure out the factors that influence a certain metric, let's say cycle time or lead time, how the business case really builds up. That is of course what's called a spreadsheet exercise: we can sit down, make certain assumptions, and build a business case with those hypotheticals. But then it's a completely different situation to go back to the organization and to your systems in place to try and get the real information out, to reflect what is actually going on in the world. So how accessible was the data that gave you the information you were looking for? And if I'm not mistaken, DORA has these four metrics: lead time, deployment frequency, time to restore, and change failure rate, if I remember them right.

Henri (14:52):

Yeah. The data we had was accessible, since the systems and tools we have in place are publicly known systems; we're talking about Jira and those kinds of tools. We had that data available and accessible, but of course when putting those measures in place, there was information missing. We didn't have all the data points in our systems that we needed in order to measure the things we wanted to measure. So some new data mining and data collection needed to be started to be able to put those measures in place and actually start to see what the current state is and where we should go.

Henri (15:59):

But yeah, some data was in place and some was missing, but then it was just a matter of starting to measure. And of course, in some cases, when new data needed to be collected, it needed to be produced first. So once we agreed that certain data points needed to be in place, it meant that teams basically needed to produce that data. And it always needs to be kept in mind that this sort of data collection is not supposed to generate a lot of new work, if any, for the teams, so that we're not consuming the teams' time with work that is not productive from their point of view.

Lauri (17:08):

What's your experience with harmonizing processes and harmonizing data? What I mean by that is: you sit down and decide you want a certain type of metric out, to measure the output or the effect of a change. And then you face the fact that you have disparate processes across different organizations. The tendency would be either to go and change the processes, so that the different processes are harmonized to deliver the same type of data, or you could try to solve the problem in the data layer, where even though the processes are different, the data they produce is comparable. How do you evaluate this challenge, changing the process versus changing the data that the process produces?

Henri (17:58):

I think harmonizing things, whether it's a process or data or whatever, is always an extreme challenge in a big organization. When you have tens and hundreds of teams, it's always really difficult. We have been streamlining some of the processes that need to be streamlined, and that is a challenge. But we are also doing some things in the data layer, so that not everything needs to be harmonized; we are doing our data mining and our tricks so that, when handling the data, we get the correct data points out of it, even though it's not produced in a really consistent and harmonized way throughout the organization.

Henri (18:57):

So probably we're doing both, and of course these things need to be automated as far as possible, so that we're not relying on manual processes or manual data collection but producing those in an automated fashion. Some work still needs to be done for the teams, but we're doing our tricks in the data layer so that we're avoiding extra work in the teams. It needs to be attractive for the teams, and easy.

Lauri (19:33):

Speaking of the teams, we haven't discussed where you are right now with your project. So, maybe before we go into how the transformation has turned out for you: where are you right now in this DevOps transformation?

Henri (19:55):

As I said, 2019 was basically about setting the scene and initiating the change in an official way; that's when our transformation started. Then 2020, this year, is about building up the capabilities and competencies of people and teams. We're educating our people so that they know about cloud development, about DevOps, about APIs, those sorts of things which are an integral part of modern software development. For that we have our own software academies and so on. We're also introducing these new KPIs to our organization: what is our way of evaluating and steering the change, for example, during the next year? What are our objectives there?

Henri (21:05):

Then we're increasing our capabilities from the tech point of view and from the non-tech point of view as well. So, what are the most efficient ways to handle the backlogs, and what sort of principles should we apply there? We're not just trying, but actually putting in place that teams have time for improving things, so that teams have permission and a license to improve things and to learn, and not just build new features day in, day out, because that's not a sustainable way of doing things. So we're putting the ways of working in place for that sort of thing. And we're also creating building blocks for our teams in a centralized manner. We're talking about the pipelines and that sort of tech stuff, which enables teams to be more productive, so that we're not doing those things over and over again in all of our teams, but building ready-made building blocks which teams can then utilize and deploy into their daily work.

Henri (22:42):

Then we're also getting more teams on board, starting their DevOps journey, whether it's about automation or backlog management or whatever; getting more teams, more ambassadors within OP, on board. And of course we are also doing a lot of communication about the success stories, about the failures and whatnot, so that people know what this is all about. We're spreading the message a lot during this year.

Lauri (23:24):

I have seen from the marketing team that there are a lot of practicalities or rules, that's a very wrong word, but anyway, that you can take from DevOps and apply to different areas. And specifically in marketing, what we have found is technical debt management. When we hand off things, they are not always as good as we would want them to be, and there's always a reason to go back and make them better later. There are a million examples of how you could do that. I remember reading, I think it was in Gene Kim's The Unicorn Project, where he referred to some of the underlying literature saying that if you want to keep your house clean, you have to dedicate some 20% or so of your time to technical debt management. It's not easy for the marketing team, and I can only speak for the marketing team, even when you have permission to do that. So I'm curious to hear how it has turned out for you when you give permission for technical debt management. Do you still find the time to do it?

Henri (24:35):

That's an interesting question, and I'm quite happy that you asked it. It's not easy. Our starting point was that it was not allowed; we didn't have the permission, or the teams didn't have permission. The way we did it was top down: there needs to be permission. So we went all the way to the roof, to the executive leadership, proposing that this model for the time division needs to be in place. We're talking about 70%, 20%, 10%. You can picture it, for example, over one two-week sprint: 70% of the time should be spent on normal daily work, 20% of the time on improving the daily work, and 10% of the time on learning, building up one's competence and experimenting with new things.
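For concreteness, the 70/20/10 model Henri describes is simple arithmetic over a sprint's capacity. The sketch below only illustrates the division he outlines; the function name and the 60-focus-hours-per-person figure are assumptions for the example, not OP's actual planning numbers.

```python
def sprint_allocation(capacity_hours, split=(0.70, 0.20, 0.10)):
    """Divide a sprint's capacity per the proposed 70/20/10 model:
    normal daily work / improving the daily work / learning."""
    daily, improve, learn = split
    assert abs(daily + improve + learn - 1.0) < 1e-9, "split must cover 100% of time"
    return {
        "daily_work_hours": capacity_hours * daily,
        "improvement_hours": capacity_hours * improve,
        "learning_hours": capacity_hours * learn,
    }

# Example: a five-person team with 60 focus hours each in a two-week sprint.
plan = sprint_allocation(5 * 60)
```

Framing it this way makes the executive conversation Henri describes concrete: the 20% and 10% shares are visible, budgeted hours rather than time quietly skipped when feature pressure rises.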

Henri (25:45):

That's the way we proposed it, and it was approved. And now, for the past half a year or so, we've been working with different parts of the organization on how we actually deploy this sort of methodology into our daily work. How should we plan according to this? What objectives should there be so that we are continuously improving things? It's not easy. Often I find myself discussing this with business directors and people who have a real business background and not that much of a technical background. So we are arguing and discussing things like spending 20% of the time paying down, for example, technical debt. What is the return on investment? Is it really paying off?

Henri (26:56):

But we are still having a hard time deploying that into the daily work of teams and tribes, but I think we're proceeding. We started from zero, and we've now increased the percentage of time we're using for improving things and paying down technical debt. It's definitely more than zero now, but we're not in the shape we would like to be in yet. Of course that's natural; it's a big change at an organizational level and it doesn't happen overnight. So patience is needed, but I think we're still making a big impact as we proceed with this initiative.

Lauri (27:46):

I don't know who originally said it, but I believe it has been attributed to Amazon: whenever you have to choose between doing the daily work and improving the daily work, you should always select the latter, because you are making tomorrow better than today.

Henri (28:07):

Yeah, exactly. We've used this as well: nothing is more important than improving your daily work, so that it's easier tomorrow than it is today, and a developer is happier to come to work tomorrow than they are today. But yeah, it's a big change in minds and mindsets.

Lauri (28:32):

Yeah, yeah. I think the furthest one can take that... And again, I'm not quoting anyone, because I would remember it wrong, but the quote was that the miracle of Toyota wasn't manufacturing; the miracle of Toyota was the improvement of manufacturing. So we are on to something when we think about the improvement of daily work. What I wanted to go back to a little bit is looking at the metrics from today's perspective. We discussed selecting the metrics, setting them in place, and going to collect the information. Are there some metrics that you can see are already bearing fruit?

Henri (29:13):

Yes. I think we are now at a point where we have all the data in place and we have visibility into where we are now; we know the current state. I would say that we haven't improved those metrics yet, at least not in a big way. That is actually the game plan as well, because those are rather new things. This year our focus has been very much on improving the capabilities of teams, and we're not yet measuring that much based on the hard data. This year, 2020, is about building up the capabilities. For that we have a self-assessment tool built for the teams, where they evaluate themselves. Of course we're calibrating that they are doing it roughly in a proper way, but they indicate their progress based on certain lists of things. Let's take an example: are they using a WIP limit in their backlog? If you go to the tech side of things, are they using trunk-based development, or is unit testing in place? Those sorts of things.
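To illustrate how such a checklist-style self-assessment can roll up into a coarse maturity label, here is a minimal sketch. The checklist items and thresholds are hypothetical stand-ins (OP's actual assessment is not public); the "MVP" and "optimized" labels borrow the level names Henri uses when discussing maturity later in the conversation.

```python
# Hypothetical checklist items with weights; OP's real assessment is not public.
CHECKLIST = {
    "wip_limit_on_backlog": 1,
    "trunk_based_development": 1,
    "unit_testing_in_place": 1,
    "automated_deployment_pipeline": 1,
    "monitoring_and_alerting": 1,
}

def maturity_level(answers):
    """Map a team's yes/no self-assessment onto a coarse maturity label."""
    score = sum(weight for item, weight in CHECKLIST.items() if answers.get(item))
    ratio = score / sum(CHECKLIST.values())
    if ratio < 0.4:
        return "starting"
    if ratio < 0.8:
        return "mvp"
    return "optimized"
```

Under these arbitrary thresholds, a team that has only adopted WIP limits and unit testing would land at "mvp"; the value of the exercise is less the label than the conversation the checklist prompts inside the team.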

Henri (30:46):

It's a self-assessment by the teams. Based on that maturity assessment, we're assessing teams and whether our transformation is proceeding during this year at an organizational level. Then, moving towards next year, we are taking the hard data, these DORA metrics we already talked about, more into use. So this year is a softer year, and next year is a bit... Of course we always want to use the hard data. So it's a bit harder: not assessing those soft things, and no longer based on self-assessment, but based on the data.

Lauri (31:42):

I personally think it's the right way to go, because you are also empowering the teams with skills. It's not about going back and saying, "You should have known how to do it."

Henri (31:53):

Yeah. Personally, I feel that if we had gone straight into the deep end of the pool this year, and the measures had been these hard data-based measures, that would probably have been a bit unfair to the teams. Because if you don't have the capabilities, if you don't have enough competence, let's say in cloud-native development, it's quite unfair to measure your deployment frequency when you don't have the basic fundamentals in place. Instead, give time, give the possibilities and resources to build up the competence and capabilities, and then shift the focus a bit more towards the hard data-based measures.

Lauri (32:45):

Yes. Said another way, perhaps, by one of my former colleagues... I posed the question to him earlier: which way should you go, culture first or tools first? He said that you should absolutely go culture first, because if you go tools first, all you are doing is building up measurement, building practices to prove what you already know. And that's not going to be very useful. I mean, there is value in confirming your hypothesis, but what do you gain by spending one year only to learn for a fact what you thought was true, instead of investing in culture, empowering teams, enabling them, and building the change of culture? What you said, that something that has been forbidden for a long time is suddenly embraced, is not a policy change, it's a culture change. That needs time.

Henri (33:52):

Yeah, it definitely needs time, but then as well... I definitely agree with that, but what I've recognized here at OP is that when we talk about culture, it's a fluffy term. What is culture, what is DevOps culture? It's vague and fluffy. The tools, on the other hand, are concrete. How I see it is that when you talk about culture, you should give a view of the culture, what it's about and what the building blocks of, let's say, DevOps culture are. First you need to seed the soil and talk about the culture, but then rather fast you need to get at least a bit more concrete: give the tools, talk about the automation, those concrete software-related things and matters, bring the tech in, and then bring some benefits out of those.

Henri (34:59):

And then again, show the gap between the culture and the tools, because the tools, the pipeline, all that automation, will probably reveal a gap that we need to fill by, again, talking about the culture. Let's say, as an example, that we need to allocate more time to continuous improvement. The pipeline can probably make it visible that our test assets are not reliable. The way I see it, the tech reveals that the culture is not in place. So it's probably a mix of things, and it might also depend on what sort of people you need to convince, whether you go culture first or concrete things first.

Lauri (36:03):

I actually had a question for you which I was hoping to ask at some point: what advice would you give to someone in the same situation you were in before you started the transformation? What you just said can be part of that answer: technology and tools reveal the shortcomings in the culture. Is there something else, based on everything you have experienced? If you went back to the same situation you were in at the beginning of the program, knowing everything you know now, what lessons learned would you take, and what would you do in a different way or a different order?

Henri (36:48):

I think we're learning every day and every hour, so there's a lot of learning. What has been really good from our point of view, from the DevOps transformation point of view, is that within the whole OP enterprise agile transformation, we had the buy-in from the executives already. We have a top-down mandate, and the executives are saying, "This is the way we go. Our objective is to be agile, and business agility is our priority." That means these sorts of things are prioritized at the tribe and team level as well. That is really key: it enables a lot of things and removes a lot of redundant discussions with people, because it is our common priority as a company.

Henri (37:44):

That needs to be in place. Also, before you start, learn and get the current state from the organization. There are various methods to get that, but get the feedback, try to understand what people think and what the most painful things in their daily lives are, so that you're actually solving the proper and right problem. Some sort of reflection is needed. One might have some hypotheses, but validate them together with the organization. Then what else comes to my mind is, especially in the beginning, focus on depth before width. What I mean by that is, get a really sharp focus on some areas, even if it's just one team. Focus on that one, get things running there, get a couple of small victories, and then celebrate and spread the message about it. Then people actually start to want it, once you have made it attractive. That's at least my takeaway to give, from my point of view.

Henri (39:17):

And then, once the transformation is panning out and spreading throughout the organization, you still need to select your battles. You're not able to go everywhere; you need to select and prioritize. And of course you're not able to prioritize only from the tech point of view; the prioritization should probably be aligned with the business priorities. For example, if this business segment or this product or portfolio is the most important for our company, then we should move it in the DevOps direction first, and then probably concentrate on other businesses. So it's about prioritization as well.

Lauri (40:12):

I would imagine that it totally depends on the team where they have the depth and where they do not. Depending on the team, some teams are really good at... Their capabilities are very high already, but they might have a development area somewhere else, whereas if you pick another team, their situation can be the opposite. Someone might say that it's an academic conversation, but I think the distinction between the terms maturity and capability is not only academic. I wanted to ask your opinion on this matter. In some groups or literature, the definition of maturity is that there is a certain level you have to reach: you take a set of capabilities and together bring them to a certain level, and that gives you a maturity level.

Lauri (41:15):

And then you need to take yet another incremental set of capabilities and develop them together, so you get to another level of maturity, and you could say those maturity levels are somewhat static. Whereas the capability point of view is that organizations and teams are in continuous flux, and you are constantly learning and constantly developing; there's no such thing as a maturity level, just an ever ongoing improvement of your capabilities. Where do you stand on this, considering that you have now seen tens and tens, or maybe hundreds, of teams and their capabilities? Is there such a thing as a maturity level, or is it just this continuous improvement of daily work and capabilities?

Henri (42:08):

A rather tough question, but if I start to digest it in small batches, as we do in DevOps, I think you definitely need both. You need the maturity and you need the capabilities. As we have done it, we have this maturity assessment of teams and there are some levels: when you have a certain capability, when you are at the MVP level, and when you are at an optimized level. So there are some levels, but I don't see those as really, really that important. Of course they play a certain role, and you need to have certain capabilities to reach the maturity. But since you ask whether it's a continuous evolution of things, I would say yes, and the maturity assessment for next year has probably improved because we have learned. So I would say the levels are not static, and we haven't grouped the capabilities in such a way that this set of capabilities forms this maturity level and so on; we have grouped them from a bit different point of view.

Henri (43:46):

And as for maturity, I think it includes the mindset part as well. You can have the capabilities, the technical capabilities from the architecture, testing, and development point of view, but when it comes to maturity, it needs the mindset too. And that takes time, as you said. Again, the capabilities might reveal the shortcomings in the maturity, as it includes the mindset as well. So I would say: continuously build the capabilities, and then the maturity comes as a team. At OP, we're aiming to build end-to-end competent teams, and it's a really tough journey for a major part of the teams. There are a lot of things to take into account when you're building an end-to-end capable team.

Henri (44:49):

And then probably one more thing: maturity in a team is not enough, because then you have the bigger organization. In our case, you have the tribes, and the organization has several tribes. Even if you have a team that is mature enough, a five-star DevOps team, it might still be that the way the tribe works is hindering the progress of that five-star DevOps team. So you need to scale this to the tribe level and the organization level as well. And going all the way to the corporation level, there might be, let's say, a budgeting process or whatever that hinders the agility of a single team.

Lauri (45:44):

Interesting. And come to think of it, this spring, it must have been the May issue of Harvard Business Review, there was an article about the Agile C-Suite, which goes back to the very point you just made: organizations can adopt an agile leadership style at different levels of the organization. It doesn't have to be at the team level; it can go all the way up to the executive team. And they all have something to adopt. It's a different thing for different levels, of course, but they are fundamentally based on similar things: give trust to the people, let people figure out how they should do things instead of trying to micromanage them, and give time for learning. The same patterns repeat irrespective of the level of the organization you're talking about.

Henri (46:43):

Yeah, exactly. And I think we are in a happy place from that point of view, since we're able to communicate with the C-Suite and the executives, if not on a daily basis then on a regular basis, and we have a connection and feedback channels in place, so we share the same understanding and are able to have good discussions and improve our things at all times.

Lauri (47:15):

Fantastic. Hey, we are running out of time. We had one hour booked for this, and we have used it effectively. I very much appreciate you taking the time. And if there are people out there who would like to get in touch with you, is there a way to find you online?

Henri (47:36):

Yeah, yeah. I'd say LinkedIn is the best medium for this purpose. I'm happy to share ideas and discuss this topic with people, so feel free to contact me.

Lauri (47:53):

Wonderful. Well, with that, I thank you for your time. It was an enjoyable talk.

Henri (47:59):

Thanks.

Lauri (48:01):

That was Henri from OP. We referred to quite a few terms and materials during the discussion, so be sure to check out the show notes for links to our take on the latest DORA State of DevOps Report, the Agile C-Suite article from the Harvard Business Review, and blog posts and pages about our DevOps transformation offering. And of course, Henri's LinkedIn profile. With that, I say thank you for listening. Remember to select your battles and use DevOps tools to reveal shortcomings in the culture.