DevOps Lead Tuomas Leppilampi tells us about starter metrics, feedback loops and self-service metric tools. Have you assumed linearity? Here’s a listen for those who might have!

Heidi (00:04):

Good morning, good afternoon, good evening, good whenever you're listening to this podcast o'clock. Thank you for tuning into DevOps Sauna, Eficode's podcast about all things DevOps, and automation.

Heidi (00:17):

We're sitting in the Sauna Lounge with Tuomas Leppilampi, one of our DevOps leads, and we're about to talk about metrics, feedback loops, and how numbers can make us better.

Heidi (00:30):

So, to start off the podcast, I want to talk about one of the new objects in my life, which is the Oura Ring. I got it last month and it's this Finnish invention that tracks how you sleep. It's a pretty little wearable. You wear it on your finger and it tracks how much REM and deep sleep you're getting per night based on your body temperature and heart rate, I think. Lots of the people in Eficode's Helsinki office have the Oura Ring. Turns out we're the target audience for that thing.

Heidi (01:01):

In my case, it's more about me getting one step closer to Idris Elba, because Prince Harry owns an Oura Ring and Idris DJ'd at his wedding, so we all have these reasons for doing these things. But it did make me think about metrics, because I'm not the best sleeper, and seeing how exactly I'm sleeping has been really helpful, because sometimes it's not only about quantity. Sometimes I've slept six hours but it's been a super-efficient six hours, so I've gotten an amazing quality of sleep. Sometimes I've slept 11 hours and it's been neither here nor there. And that's helped me feel a lot calmer about my sleep, and also see how my sleep is trending over the week. My readiness is up on a Monday and then it's down by Thursday, and that repeats itself. There's something about how concrete those numbers are that has helped me in my life, and that relates to what we're talking about today, because metrics are a really huge trend in DevOps and they really do up your game. They really do make you better.

Heidi (02:03):

I'm by no means the expert, which is why I'm very glad to be interviewing Tuomas today. Hi Tuomas.

Tuomas (02:10):

Well hi there Heidi. How you doing today?

Heidi (02:13):

I'm doing okay. I've got my Oura Ring to help me sleep, but now I've got you to help explain metrics to me and our wonderful listeners. So my name is Heidi Aho. I host this podcast. I'm the content writer at Eficode, and the more important introduction for us today is Tuomas Leppilampi, our guest. So Tuomas, tell us about yourself.

Tuomas (02:33):

Well, it's nice. You took me back to my childhood just now, because people used to call me Lempilampi, and lempi is like love and whatever, but actually, it's Leppilampi.

Heidi (02:45):

Oh my goodness. I got your name wrong. I was just like, yes, you do have a really cool last name. It's like favorite lake or something, and then I'm like, I made that up. Could you just change your name to Lempilampi?

Tuomas (03:00):

I'll be thinking about it. Lempilampi, Leppilampi, it will all work.

Heidi (03:05):

A couple of days into your next electronics festival, if you could just call your lawyer and change your last name, then that would make my work a lot easier.

Tuomas (03:13):

It's interesting that you talk about your Oura Ring.

Heidi (03:18):

Oura, is it Oura?

Tuomas (03:19):

Oura I think it is.

Heidi (03:20):

Yeah.

Tuomas (03:21):

So yeah. Those are the little metricy things in your everyday life that give you the possibility to push for actions. So metrics is not just about collecting data, it's about turning it into actions, and that's what my professional life has always been about: taking information, taking ideas from customers. Ideas that they didn't really know that they had in the beginning, but actually talking with them long enough to understand what they really need, and turning those ideas into actions. So it's really about the same thing. I'm thinking about DevOps. I'm thinking about software development in general. It's all about having good ideas and turning them quickly into something that we can, not necessarily measure, but feel, something tangible.

Heidi (04:19):

Makes that idea more concrete.

Tuomas (04:19):

Exactly, and we get the feedback. I think DevOps is all about getting the feedback loop shorter, from the ideation to production, to actually getting it out there. Metrics, I think, is a cornerstone in that area, and I just don't think we're doing enough of it right now.

Heidi (04:39):

In the Nordics or just in the world?

Tuomas (04:40):

In general in the world.

Heidi (04:41):

The world needs more metrics.

Tuomas (04:44):

Well, the world needs more actionable insights. It needs more ideas and eureka moments that actually help you act on your next question, in your next position.

Heidi (05:00):

In a fact-based manner and not just using your gut, as wonderful as guts are. Could you quickly do the obligatory this-is-where-I've-worked, this-is-how-long-I've-been-at-Eficode majiggy.

Tuomas (05:12):

So, I've worked for many years in a company called Assure that I was actually a part-owner of. We did a lot of analytics. We did a lot of analysis projects and consultancy, and also a product that was aiming to create easily accessible analytics for people. But even earlier ...

Heidi (05:35):

Big datay, or ...

Tuomas (05:37):

No. At that point, this was like 10-12 years ago, so big data wasn't really a word.

Heidi (05:43):

Pre-big data.

Tuomas (05:44):

Yeah. Kind of like that. And it was more in the task management, software development area. In that development loop, which also includes requirements analysis and testing, we worked for several years. Before that, I worked for Nokia for several years, and since my time at Assure, I've been working at Eficode, which is not too long, only some months, but I've known some of the key people here for a long time, and to be honest, I think I was one of the first customers of Eficode.

Heidi (06:25):

Really?

Tuomas (06:26):

Yeah. Back in Nokia days.

Heidi (06:28):

So you were on the customer side of things?

Tuomas (06:31):

Yes.

Heidi (06:31):

And now you stepped over.

Tuomas (06:33):

Yeah. So I know Risto Virkkala, from those days.

Heidi (06:38):

The CEO, yeah.

Tuomas (06:38):

Yeah, from those days, and I watched him and Marko Klemetti, our CTO, build this company from the ground up, so it's been really interesting to see how Eficode has developed and is now, I would say, one of the world leaders in the DevOps area. I'm really happy to finally work here.

Heidi (07:02):

That is so fascinating because as far as I know, definode?

Heidi (07:05):

Can I just make up words throughout this entire podcast? That would be great. Abstract realism. I was going to say that Risto, when he tells the story of Eficode, he's always explaining how we were doing DevOps before DevOps was a thing, back in 2005-2007, before the term came into existence, thinking about ways to make software processes more efficient. I think it was software process efficacy. Some awful, awful term. DevOps is better. However, it's so interesting that you were actually there at the very beginning of our little story.

Tuomas (07:45):

Exactly, and agility was coming in then. I also find that DevOps is just another way to reach agility in a better way. So agile is really where we want to be. We want to be able to move real quick and find new ways, find new avenues ...

Heidi (08:05):

So are you talking about agile as the adjective or agile with a capital A, like Agile Manifesto?

Tuomas (08:10):

Yeah. I think it's kind of the same thing in a way. In a way, agility is just ways of being able to move quickly towards new areas that we suddenly realize, okay, we need to go there, because suddenly something happened in the market. Microsoft put out a big software suite and suddenly it just bit off a big area of what was our business case, so we need to be able to pivot, and this is something that metrics will also give you the possibility to understand. Metrics is a great way to decide where you could pivot, when you need to pivot.

Tuomas (08:55):

When you suddenly realize that the market's been moving on, and the initial ideas that you had in the beginning are probably not the ones that you should continue with, then you can look at your data, you can look at your customer feedback, you can look at things like this to help you find a different way.

Heidi (09:14):

Yeah. So this agility and DevOps as agile infrastructure, not just agile in one bit of your software production but all the way through to again, your customer. It's really important.

Heidi (09:24):

That leads into a thought that I just had, which is, we've been speaking about the benefits of metrics almost in theory. Could you go into more detail as to why it's important for enterprise DevOps? And enterprise DevOps is one of those dirty words because it's a little businessy, but I'm talking about DevOps at scale, in a huge bank or something. How can metrics be helpful there?

Tuomas (09:50):

Sure. Well, there's an example: a large company in Germany called Bosch. They have a well-known DevOps transformation journey story.

Heidi (10:02):

This is public knowledge?

Tuomas (10:04):

Yeah. You can find it on the internet quite easily. I find it very interesting because they've been very open about how they failed initially. First, they thought, let's go towards agility and DevOps with the areas that are easily transformed.

Heidi (10:27):

What were those?

Tuomas (10:28):

Software development and this kind of area. It's not so easy to transform finance, HR, these kinds of things.

Heidi (10:38):

Yeah. Because it's more legacy systems. People are less tech-savvy. There are a lot more barriers there. Okay.

Tuomas (10:46):

Yeah. There's a lot of aspects that are more difficult in those areas. So they decided to split the company in two: these are the parts that we want to turn agile, and these are the parts that we find too difficult to turn agile, so we'll just leave them out for now.

Heidi (11:04):

They'll be brittle forever.

Tuomas (11:06):

Yeah. And essentially, they just failed that time. They're pretty open about it in this text that they've published. They took everything back and said, okay, this was not the way to do it, so how do we do it? What they started to do is push out waves of little agile teams working in DevOps ways, and they started to measure those teams and their productivity and their velocity and quality, and through those measurements and through those waves, they realized which ways of working would work in different areas and different organizations. And that's how measurements and analytics helped pave their way towards this successful transformation, after the initial blunder that they made with this split organization that they tried first.

Tuomas (12:12):

That's a concrete example of how it's helped an enterprise company reach a successful DevOps operation.

Heidi (12:21):

So when you speak of waves, you mean little tiny pilots almost.

Tuomas (12:25):

Like a bunch of projects.

Heidi (12:28):

Metrics

Tuomas (12:28):

20 here, 10 there.

Heidi (12:29):

And they're driven by metrics. These little balls of metricy driven piloty projects and you just throw them out there people gobble them up and some of them stick.

Tuomas (12:39):

Yeah. Some of the good stuff sticks, and they get the information about how it sticks in different areas, rather than trying to do a big bang in a large part of the organization. So rather, just push out 10 teams here, 5 teams there, collect information, collect the feedback, collect the metrics, and collate them to create a further strategy so that you can then take it to a wider audience.

Heidi (13:13):

Well, that sounds like a more agile way of doing it as opposed to planning out this major transformation of a huge department, kind of just iterating these little MVPs.

Heidi (13:22):

What I like about my job is that I get to interview customers for case studies, and a couple of months ago I was interviewing this Danish company. I can't say names, which is frustrating; however, I recognized the name, so it was a recognizable brand. And they were saying that making objective decisions, as opposed to subjective decisions, was a journey that they were on, and suddenly, when they have these metrics, it brings to the table the ability to make fact-based decisions. At first they were functioning as a startup, leading by the gut and trying things out, but the moment you have metrics, that really changed the way their leadership made decisions. I found that obviously encouraging, because this is something that we care a lot about, but also surprising, that a name that I recognize is only at the beginning of this fact-based decision-making journey, which I thought was relevant here as we're talking about enterprise DevOps.

Heidi (14:26):

Okay. Let's move it on. Let's move it on. What's the difference between metrics and spying on people?

Tuomas (14:37):

This is a common one. Common, not a misconception, but maybe a worry that "Okay, we'll increase data collection and we'll use that data to maybe drive teams or something like that." And I think that's the problematic area there. Are we actually driving teams from the outside? Are we actually externally driving teams, or do we give the metrics, and the tools we got in the metrics, as methods for people to improve their own work, rather than the manager from high above saying, guys, you should do this rather than that to be more productive? I'd actually think about the audience for the metrics in a very detailed way. If I talk about team velocity, how many story points, how many requirements are we churning through in a specific time frame? I don't think that's something that we should be looking at above the team, or at least not at the program level or the C-level. That's something that the teams should use as a method, as a tool for themselves, to understand what worked and what didn't in their own progress as a productive team.

Tuomas (16:12):

So spying on people is easy when you give the wrong metrics to the wrong people.

Heidi (16:19):

Okay. So there should be this transparency. Everyone should have access to the metrics that are relevant to them.

Tuomas (16:26):

Yeah. That's interesting, because I think there's a level of transparency that's needed, but I'm not sure if everybody needs to see everything. It's good that there's transparency to an extent. But velocity, for example, is a hot potato, because for one team, velocity is x, and for another team, velocity is y.

Heidi (16:52):

Could you give examples?

Tuomas (16:53):

So for example, team A is estimating their requirements on a one, three, five scale. They could say small ones are one, middle ones are three, and the big ones are five. The huge ones are eight. And another team could estimate with one, two, three, four, five, six, seven, eight, maybe in hours, so that two means two hours. So, if you try to compare the velocity between those two teams, you will fail, and you will get some people irritated at least, because you're comparing apples and oranges. If you go higher up in program management and so on, it's difficult to use these kinds of metrics as a ...

Heidi (17:52):

Performance management?

Tuomas (17:53):

Yeah. Yeah. Performance management is very difficult, and a lot of it should be left to the teams themselves, so that they can use these as tools. That's my main point there.
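To make the apples-and-oranges problem concrete, here is a minimal sketch with invented sprint data: one team estimates in story points on a 1/3/5/8 scale, while the other's "points" are really hours, so the two velocity numbers share arithmetic but not units.

```python
# Invented sprint data: team A estimates in story points (1/3/5/8 scale),
# team B's "points" are really hours. Same arithmetic, different units.
team_a_points = [1, 3, 5, 8, 3, 5]
team_b_hours = [2, 4, 8, 3, 6, 5]

velocity_a = sum(team_a_points)  # 25 "story points" this sprint
velocity_b = sum(team_b_hours)   # 28 "hours" this sprint

# 28 > 25 does NOT mean team B is faster: the units are incomparable.
# Each number is only meaningful tracked over time within its own team.
print(velocity_a, velocity_b)
```

The point of the sketch is that any cross-team comparison of these two sums is meaningless; each team can only compare its own velocity against its own history.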

Heidi (18:04):

Yeah. Because DevOps is like this continuous process of improvement, so organizationally but also individually, if you're not getting better, you're getting worse, so you want to have the tools you need to be getting better and to see actually I am doing that faster or that process is smoother now. [crosstalk 00:18:23] for me personally or us as a team.

Tuomas (18:26):

Or we're estimating things better now. We thought we were going to use 200 hours to do this and this thing, and we ended up always using 300 or 350, and the more you go on and get feedback on what you estimated, you realize that you're getting closer to that 200. When you're estimating 200, you're getting closer to that, and that's better for everyone, because then you'll know at least your own velocity, and that's plenty. That's enough.

Heidi (19:00):

That actually relates to an aha moment I had recently. It was actually the same interview, where this customer was saying that the main value isn't just the speed and the quality they've achieved. It's the predictability. It's that they can line up this entire corporate apparatus of marketing and sales, and they know that if they've lined it up, they've lined up the TV ads and everything, the product will be done on time. This predictability is so valuable to them. Also, he was talking about investors as well. He could send the right signals to investors, whereas if you're not sure when the thing is going to arrive, then that's a lot more chaotic and difficult.

Tuomas (19:44):

Yeah because you might lose a window of opportunity and if you think about the market these days, these windows of opportunities can be really tiny.

Heidi (19:53):

Tiny little slits. [crosstalk 00:19:58]

Tuomas (19:58):

If you're like, okay, we're going to do this in 200 hours or whatever, and then in the end, you realize, oh crap, we're going to need more time or we're going to need some more people, it usually doesn't really help to just push more chefs into the kitchen. So the predictability is very important.

Heidi (20:17):

So we're using metrics to get the big picture. In order to get the big picture, you need to understand the nuance of it. That's why this idea of measuring teams against each other is a bad idea. Spying on people, also bad; also, why would you do that? Kalle Sirkesalo, our platform director and a big fan of this podcast, was talking at a German conference last week, and he said that companies can measure test coverage and automation, but they shouldn't collect metrics with a view to comparing two teams against each other. So I think that rounds off our conversation about metrics and spying. Why not?

Heidi (21:00):

Let's move on to this topic of continuous improvement which we have touched upon. Is it possible without metrics?

Tuomas (21:07):

Yeah. I mean, we're collecting data at a really huge exponential rate these days. Data passed oil as the most valuable resource in the world a couple of years ago.

Heidi (21:24):

How was that measured?

Tuomas (21:26):

The economists measured it somehow.

Heidi (21:30):

The economists. I love it. Do they sit on a cloud in the sky?

Tuomas (21:33):

They have this magazine. The Economist.

Heidi (21:38):

Sorry. The Economist. Sorry. I thought you said the economists, there was an "s" at the end. Sorry. Words. Words are tripping us up but they're also providing us joy.

Tuomas (21:46):

Yes. Exactly.

Heidi (21:47):

Thank you for bearing with me. No. I love The Economist.

Tuomas (21:50):

Yeah. So that's a fact. And it's also a fact that we're measuring only just around one to two percent of all the data that we're ... I'm sorry, analyzing only one to two percent of the data that we're actually collecting. So there's a lot of opportunity there. So I would say continuous improvement without data, why?

Heidi (22:19):

Well, it's probably easier because you don't need to deal with the data. Sorry, that's probably stupid, but that's the main thing, you don't have time. You're just doing your job and you don't have time to do the data.

Tuomas (22:29):

I think that's a part of why we have 98% of the data that we're collecting still not analyzed, because people just don't have ... they have a nice bike right next to them, and they're pushing the bike without actually taking a moment to stop, hop on it, and just start riding faster.

Heidi (22:49):

Yeah. Pump the tires and ding the little bell. I don't know. I don't think all of the data needs to be analyzed. It's like a library when you're writing an essay: you've got all these infinite books, and you just have your question in mind, and then you read the relevant ones to answer it. So the goal is really important there as well. What do you need to know?

Tuomas (23:12):

That's a good point, because it brings me to the subject of where we actually start from when we start analyzing data. Do we start from, "Oh, hey, we have all this nice data, we should be doing something with it"? Or are we starting from, "We have this problem of not understanding, from the 50 features that we have in our backlog, what should we do next?" "Now we have some money, which ones of these 50 features should we do next?" That's a real, tangible problem for which data can provide a solution. And not just, "Oh, we have a bunch of data, let's start figuring out what to do with it."

Tuomas (23:54):

I'd always like to start from a tangible problem, a tangible challenge, and then use data and tools and technology and methodologies as a way to reach an answer for that. That's why I'm a little bit iffy when some people say, for example, "Let's do a machine learning application." There's a lot of talk about "Yeah, let's do an AI tool", rather than "Let's solve this problem. And if AI or machine learning can help, that's great."

Heidi (24:30):

I want this shiny thing, but I'm not sure what I want to do with it.

Tuomas (24:33):

Yeah, exactly.

Heidi (24:37):

What if you've got a company that hasn't gotten to grips with data yet, and they know neither what metrics they should start pursuing first, nor what questions they want answered? What are some of the starter questions about metrics that are easy to grasp and get benefits from?

Tuomas (25:00):

Sure. Well, for me coming from a test management and this kind of background, I've always seen bugs or defects as little diamonds or nuggets of gold that you sift through from the sand.

Heidi (25:18):

What if it's like a pearl? Because the sand makes the pearl, so a bug is like the pearl in the oyster.

Tuomas (25:24):

That could be. Well, the bug is a diamond, because if you find it, you found it before the customer, and that's so important. So I think it's really important that we first think about bugs, so that when we ship the product, it is as bug-free as possible. And tested in the right places.

Tuomas (25:50):

Of course, we cannot ever achieve a hundred percent coverage with testing, but at least we know, as Kalle was talking about the coverage, coverage is super important, that we covered the right places of the product. The nice-to-haves, we don't necessarily need ... A lot of people ship software these days in alpha or beta stage, and they'll just wait for people, the community, to find those less important bugs while still keeping them as their community. So it's kind of a fine line, but I think bugs and the so-called bug escape rate ... bug escape rate is quite easy to measure, because it's the relation of the bugs you found internally in your testing cycles to the bugs that customers have found after you have released. You just relate those into a percentage and see how you do as you go on with new releases. And then you see if your process is getting better, your testing and coverage process is getting better, and you're getting fewer bugs in production. I think that's a very natural way to start, but I would have to say-

Heidi (27:11):

It's a bug.

Tuomas (27:11):

Yeah. I have to say, a lot of companies are still struggling with actually achieving this metric.
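The bug escape rate described above is a simple ratio once you have the two counts. Here is a minimal sketch with invented numbers; note that the exact definition varies by organization, and this version divides escaped bugs by all bugs found (internal plus escaped).

```python
def bug_escape_rate(internal_bugs: int, escaped_bugs: int) -> float:
    """Percentage of all found bugs that escaped to customers."""
    total = internal_bugs + escaped_bugs
    if total == 0:
        return 0.0  # nothing found anywhere: define the rate as zero
    return 100.0 * escaped_bugs / total

# Track the rate release over release: a falling trend suggests the
# testing and coverage process is catching more bugs before shipping.
releases = [(40, 10), (45, 7), (50, 4)]  # (internal, escaped) per release
for internal, escaped in releases:
    print(round(bug_escape_rate(internal, escaped), 1))
```

As the conversation goes on to note, the hard part in practice is not this arithmetic but reliably attributing each escaped bug to the release and team that introduced it.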

Heidi (27:18):

Why isn't it just two numbers?

Tuomas (27:21):

Yeah. But if you think about modern software, for example, it's built of many releases coming together as a large set of components. And then when you ship that large set of components and you have a bug somewhere, how do you know which release actually introduced that bug, or which team? It's really hard to do. Of course, you can do a major ... all of these bugs that we found in all of these releases, and then on the customer side, but how do you know where that bug originated from? That's really difficult sometimes to say.

Heidi (28:08):

Okay. So if you chunked it into this village of mice, like your application as an elephant in a village of mice, it's still difficult to see exactly where one mouse came from? Because the team's working on so many releases.

Tuomas (28:23):

Yeah. And then there's often rolling releases. You have little hot fixes here and there; it becomes very spaghetti very quickly. For some situations it's a difficult measure, but it's something that you should try and catch, or you should try and measure, as soon as possible.

Heidi (28:51):

And what are some ways for companies to measure that successfully that you've seen or come across?

Tuomas (28:56):

If they have good processes for understanding how a requirement swims all the way to the product release, so that you have an audit trail, so to say. Then you can say, "Okay, this feature came from this release, and that was tested by this team and these methods with this coverage, and that coverage was covering this requirement." So you can actually see the whole-

Heidi (29:26):

Trace that.

Tuomas (29:28):

Yeah. Trace all the way to the ideation.

Heidi (29:30):

Do you need a pipeline for that?

Tuomas (29:33):

Pipelines help, because the quicker you do those mid-process things, the easier it is to track whenever it goes live. So yeah, definitely, continuous integration and continuous delivery pipelines help in understanding where that idea actually went in production, and in the features that we shipped.

Heidi (30:02):

I'm starting to understand why even a household name hasn't fully mastered this. Because it is like this huge project to get your metrics right. Yeah, it takes a lot of work because of everything.

Tuomas (30:18):

Yeah. And it's not just metrics, it's about the process in general, it's about the tools you have in between, and how well you tag your things. For example, with bugs, many times people are just raising a bug while they're doing exploratory testing, for example. They are just testing, trying out different things, and they don't necessarily know which release they're testing, or what the actual feature is that they're testing. They just found a bug, and then they reported it, probably even outside of JIRA or whatever system they're using. Then, if you have a bunch of those, how can you ever know where that bug originated from, if it's just floating somewhere in the air?

Heidi (31:05):

Oh, yeah. Especially if a consumer found it, then that's going through your customer care. It gets even messier.

Tuomas (31:14):

Yeah. There's a lot of steps on the way.

Heidi (31:17):

Okay. I can say this: starter metrics don't sound easy at all, but I guess maybe as a starter metric, you can just get the Oura Ring, because that's been pretty easy.

Tuomas (31:26):

Yeah. And then when we're talking, I mean, there are different kinds of products. I went immediately to the kind of enterprise deep end with complex systems because that's [crosstalk 00:31:37]-

Heidi (31:37):

That's where you like to swim around?

Tuomas (31:41):

Yeah. That's where I've grown. Those things are crucial and very important for the big ones, but for smaller companies it's easier to actually track, because then usually you have one product and a couple of components, and it's easier to do that. Maybe an even [inaudible 00:32:05] metric would be coverage, so that you will know which tests are covering your requirements. You have these automated tests and your manual tests, and when you tag all of them with the requirement: this is the requirement we're testing-

Heidi (32:23):

Stamping your stamping tag on it.

Tuomas (32:25):

Yeah. I love stamping, I love data fields. People filling data fields, that's music to my ears.

Heidi (32:33):

Electronic music to your ears.

Tuomas (32:35):

Yeah.
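The tagging being praised here, every test stamped with the requirements it exercises, is what makes a requirement-coverage metric computable at all. A minimal sketch, with invented requirement IDs and test names, might look like this:

```python
# Invented example: requirements, and the tests tagged against them.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
tests = {
    "test_login": {"REQ-1"},
    "test_logout": {"REQ-1", "REQ-2"},
    "test_export": {"REQ-3"},
}

# A requirement counts as covered if at least one test is tagged with it.
covered = set().union(*tests.values()) & requirements
coverage_pct = 100.0 * len(covered) / len(requirements)

print(sorted(requirements - covered))  # requirements with no test yet
print(coverage_pct)
```

The untested set is often the more actionable output than the percentage itself: it tells you exactly which requirements still need a test.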

Heidi (32:35):

That's some daily EDM in [crosstalk 00:32:39] that data. Which software development metric is easily deceiving?

Tuomas (32:46):

Well, as we were talking about velocity, velocity is easily deceiving between teams and organizations.

Heidi (32:52):

Yep, you've mentioned that. Yeah.

Tuomas (32:53):

Yeah, that's one.

Heidi (32:54):

Deceptive. That one is deceptive?

Tuomas (32:59):

Yeah, that's one deceptive one. And also, if you're trying to figure out when you are going to be ready with your release, and in the beginning, you look at your release burndown or something like that. Release burndown is like-

Heidi (33:15):

Which is?

Tuomas (33:16):

In the beginning, I had 50 requirements, like 50 requirements to develop. In the end I need to have zero requirements to develop. This will be my line from the 50 to the zero in the timeline that I've defined for myself for the release.

Heidi (33:33):

Mm-hmm (affirmative).

Tuomas (33:34):

So in the first week, for example ... let's say it's four weeks or four months. We have four months. In the first month, I did 10 requirements. And then you say, "Okay, I did 10 requirements. I can see the line starting from here, and I'm going to be ready at this and this date, because I did the first 10 in this and this time." So we assume linearity, we assume that the line will continue as it started.

Heidi (34:06):

Rarely is it a straight line.

Tuomas (34:09):

Rarely is it a straight line. So this is why estimation, again, is really important: that you estimate things in a realistic way, and also that you understand the characteristics of your release process. Maybe you do the easy ones first, you do the low-hanging fruit first, and then, in the end, you tackle the hard ones that you have to do. And then in the very end, you tackle the hard ones that are nice to have, so that if you have to drop them, that's fine. But then you have a characteristic, and release after release after release, you might learn from it. And then you can see it always has this jump in the end. So, that's the way you learn your own team and what they can do, but it can be very deceiving in the beginning if you assume the linearity.
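The linearity trap is easy to show with invented numbers: project the finish date from the first month's pace, then compare against a burndown where the easy items come first and the big push lands at the end.

```python
# Invented burndown: 50 requirements, a 4-month release window.
total_requirements = 50
done_per_month = [10, 8, 12, 20]  # easy items first, a big push at the end

# Naive linear projection after month one: 50 / 10 = 5 months needed,
# which would (wrongly) predict missing the 4-month deadline.
projected_months = total_requirements / done_per_month[0]

# The actual curve is not a straight line: walk the real monthly counts.
remaining = total_requirements
for done in done_per_month:
    remaining -= done

print(projected_months, remaining)  # projection says 5 months; done in 4
```

Here the straight-line extrapolation is pessimistic; with hard items front-loaded it would be optimistic instead. Either way, the first data point alone is a poor predictor until you know your team's characteristic curve.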

Heidi (35:05):

And that specific curve, I'm guessing, will vary per team and per project, and per company even.

Tuomas (35:10):

Yeah. But this is also a possibility for us to start using things like machine learning, so that we can find these patterns in a certain product area, for example. If you have a product area, you have internal development, just development of features within the product, and then you have integrations to other products. From the data that you collect, you can find patterns: in these internal features within the product, we always have this and this kind of pattern, but with the integration teams, there's a different kind of pattern. So, machine learning and these kinds of technologies and methodologies could help us find these characteristics more easily.

Heidi (35:55):

But how do you know ML isn't just measuring what's happening? Like, what if you're just doing it your certain way, and then machine learning analyzes that: "Okay, well, this is how you do it." Isn't it kind of circular? How do you know that the way you're doing it is optimal?

Tuomas (36:18):

Well, machine learning models, the way most of them work ... I'm not a huge expert in the area, so take it with a grain of salt, but they learn by assuming something. They are fed data, then they assume a scenario, like, this could be a scenario. Then they test it and they realize, "Oh, it wasn't right." So they fix it for the next iteration and they assume something else, and slowly they get to the correct assumption based on the data that you feed them. So, you just have to keep feeding them a lot of data, and eventually they will get better at predicting.
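As a rough caricature of that guess-test-fix loop (not a real machine learning model, just a single invented parameter nudged toward invented data):

```python
# Invented data: hours each finished requirement actually took.
observed_hours = [6.0, 7.5, 5.5, 8.0, 7.0]

estimate = 2.0       # deliberately bad starting assumption
learning_rate = 0.3  # how strongly each observation corrects the guess

# Guess, measure the error, adjust, repeat over many passes of the data.
for hours in observed_hours * 20:
    error = hours - estimate
    estimate += learning_rate * error

# The estimate settles within the range of the observed values.
print(round(estimate, 1))
```

Real models have many parameters and more careful update rules, but the shape of the loop, predict, compare against data, correct, is the same, and more data generally means better corrections.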

Heidi (37:01):

You need to be interpreting those predictions as well.

Tuomas (37:03):

Yeah. I think there's a big learning curve still, at least for us laymen, in this metrics analytics area. I know there are great machine learning and AI engineers out there, but I have yet to see really well-functioning examples of these technologies in software development metrics and analytics.

Heidi (37:33):

So we're not there yet?

Tuomas (37:35):

I wouldn't say that we're at least ... the world at large is definitely not there yet. We might have some gaming companies, for example, who-

Heidi (37:47):

Mm-hmm (affirmative). Yeah. Unity is super.

Tuomas (37:47):

Yeah. They have a very closed set of activity. They have their games, they have their communities, and then they have the marketplace and monetization, and they can play within that area. But compare that to an enterprise company creating complex systems with hundreds of teams, and it's a totally different ball game.

Heidi (38:11):

I count three more questions in my question list, which is a metric telling me that we are nearing the end of our interview. Although I did once ... I was transcribing an interview that I recorded, and I swear it was like an hour and a half of recording. Halfway through, I was like, "We're nearing the end of our interview." And it was 45 more minutes.

Tuomas (38:33):

You were assuming linearity.

Heidi (38:35):

I was, I was assuming.

Tuomas (38:39):

That was deceptive.

Heidi (38:39):

Yeah. To assume makes an ass out of u and me-

Tuomas (38:42):

Exactly.

Heidi (38:42):

... as the wordy wisdom goes. Okay. So this is a question I definitely wrote myself, because I will be able to pronounce everything in it. So the industry has caught onto the fact that metrics are important, and there's a need for more self-service metric tools such as Tableau and Google Analytics, et cetera. What are these tools, and the availability of these tools, doing to reporting culture in companies?

Tuomas (39:10):

Yup. That's a very interesting question, thanks for that, because I've been working with these tools for quite a few years now and they have really changed the way companies think about the availability of metrics. In the past, when we did a business intelligence project where we enhanced the metrics and analytics capabilities of companies, it would take years. In a certain large Finnish telecommunications company, it would take two to three years to create a data warehouse and some nice reports, maybe three or four of them in a year or two. They would spend seven-figure sums of money, big money, on these projects and roll it out to a large number of people. And once they'd done it, maybe the processes had changed, maybe the teams had changed to use some other tools than the ones this BI project used as its data source, and a lot of things changed [crosstalk 00:40:19]-

Heidi (40:19):

Good old Waterfall model back again?

Tuomas (40:20):

Exactly. So with these tools, stuff like Tableau, and now actually Microsoft's four or five-year-old tool that is really picking up popularity, Power BI. It's something that is really enhancing the capability of people to start doing metrics by themselves.

Heidi (40:45):

The Microsoft tool is popular? Could you explain?

Tuomas (40:48):

Yeah. Actually, I've never been that against Microsoft.

Heidi (40:56):

Okay, good. I'm not either. I'm just kind of [crosstalk 00:40:59] most people in the office.

Tuomas (41:00):

Yes, I know. I know it's a thing. But Microsoft Azure has provided a lot of interesting tools and services, and Azure DevOps 2019 has an Analytics capability that Power BI integrates with quite well. Anyway, what it's doing is that, for example, with Power BI you can download it from the internet for free and start using it yourself.

Heidi (41:32):

Wow!

Tuomas (41:34):

And that's pretty revolutionary for a company like Microsoft, and it's actually a pretty good tool. I've also used Tableau and Qlik and the others, but I would be really worried if I was working at Qlik at the moment, because Power BI is really powerful and anyone can start using it. And if you want to scale it up, it's not actually that expensive. What it really does is enable teams to create metrics that in the past took a lot of time to build. And if those metrics prove popular, you can easily share them with your organization, and even across organizations, with much less money than before in these big-

Heidi (42:40):

When you have to build it yourself?

Tuomas (42:41):

Yeah, exactly. And it's much more intuitive. They're actually fun to work with. So you can take data from any kind of source, slam it into your visualization tool, and you'll have some kind of metrics in minutes.

Heidi (42:57):

Metrics in minutes?

Tuomas (42:58):

Yeah, it's actually true. If you have data in a semi-nice format, you can get metrics in minutes. I think that's actually a good [crosstalk 00:43:10]-
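
"Metrics in minutes" is not much of an exaggeration when the data is in a semi-nice format. A hedged sketch with the Python pandas library, using invented column names rather than any real tracker's export schema: a flat issue dump becomes a throughput-per-month metric in a few lines.

```python
import pandas as pd

# Made-up issue export, standing in for a CSV dump from an issue tracker.
issues = pd.DataFrame({
    "key": ["A-1", "A-2", "A-3", "A-4"],
    "resolved": pd.to_datetime(
        ["2019-10-03", "2019-10-21", "2019-11-05", "2019-11-19"]
    ),
})

# Throughput metric: how many issues were resolved per calendar month.
throughput = issues.resample("M", on="resolved").size()
print(throughput)  # two issues resolved in October, two in November
```

In a self-service tool like Power BI or Tableau the same grouping is a drag-and-drop operation, which is exactly why teams can build these views themselves.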

Heidi (43:10):

Yeah. Do you want to set up your own podcast called Metrics in Minutes? Could be a two-minute podcast?

Tuomas (43:18):

I'll give you 20% of the income.

Heidi (43:19):

The income? Sure. Great. I'll buy another Oura Ring with that. So relatedly, why should companies invest in reporting or business intelligence platforms when software development management tools have metrics and reports available already?

Tuomas (43:39):

Yeah. This is a question that I get a lot when we start a project at a customer, and they say, "Oh, well, JIRA provides these and these kinds of reports." And, "From Jenkins, I get these and these reports." And from ServiceNow on the customer, like production, side: "Oh, they have nice metrics." Yes, all of these management tools have nice metrics by themselves. But when you collate, when you actually pull information together into one place or one portal, it can give you insights that you didn't see before. And also, what you can do, which I think is one of the key things in analytics, is balance your metrics. A good example is productivity. You can push for productivity. You can say, "We need to do 50 requirements a month, at least, and we want to rise up to 100 requirements." Okay, time goes by, you're at 50, you're at 100. Great. What else happened? Did we actually push the amount of customer bugs up at the same time as we were speeding ahead-

Heidi (44:59):

Probably

Tuomas (45:00):

... trying to do a really productive activity.

Heidi (45:03):

That shows how good your test automation is.

Tuomas (45:05):

Yeah. And it's also about the whole process. I mean, test automation, pipelines, anything. But if you get your customer bugs in ServiceNow, for example, and your internal information in JIRA, you can then put those side by side in a metrics portal or a similar kind of dashboard and say: while our trend of productivity went higher, our trend of customer bugs also rose. So you want to balance these things: predictability, productivity, quality, effectiveness, all these things.
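
The balancing act Tuomas describes, putting a productivity trend next to a quality trend, can be sketched with pandas as well. The figures here are invented; in practice one series might come from JIRA and the other from ServiceNow.

```python
import pandas as pd

# Invented monthly figures: requirements delivered vs. customer bugs reported.
months = pd.period_range("2019-06", periods=4, freq="M")
balance = pd.DataFrame({
    "requirements_done": [50, 70, 90, 100],  # productivity went up...
    "customer_bugs":     [5, 9, 14, 22],     # ...but so did escaped bugs
}, index=months)

# Side by side, the trade-off becomes visible: bugs grew faster than throughput.
balance["bugs_per_requirement"] = (
    balance["customer_bugs"] / balance["requirements_done"]
)
print(balance)
```

Either metric on its own looks fine; the derived ratio, climbing month over month, is the insight that only appears once both sources sit in one place.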

Heidi (45:45):

Yeah. A happy medium, and then you can see that big picture through a platform.

Tuomas (45:53):

Yeah. So that's one thing. And the other thing is that a lot of people, especially in program management or something like that, are not in these tools every day. They also want to see the metrics from somewhere, and then they're like, "Oh, what was my JIRA password? Oh, I'm just going to skip it for now," because it can take ages to log in or something.

Heidi (46:15):

Yeah. CBA.

Tuomas (46:16):

Yeah. And so in that sense, it's easier for them to look at the metrics from a single place. And when it's easier, they use them more actively.

Heidi (46:34):

Yeah. It's almost like this meta activity: you're doing your job, and then the metrics are analyzing how you're doing your job, which is why it feels like you'd never have time for it, because you're too busy doing your job. So if you have a dashboard that makes it really easy and visual, then you can actually be doing it constantly, and it's a lot quicker and more helpful.

Tuomas (46:54):

Mm-hmm (affirmative). True. And then you can jump into a meeting with your pad and you have everything there. You can just share that in the screen and say, "Okay, this is where we are." Rather than thinking about, "Okay, let's look at JIRA, let's look at this, let's look at that."

Heidi (47:08):

Yeah. You're not spending an hour creating some report for a meeting that happens four times a year.

Tuomas (47:16):

Yeah. That's a big one. We're moving from the act of reporting to the act of actually getting insights from data. Because the act of reporting was when we actually built these reports. But now that we have these automated systems, we have more time to actually look at the data and think about what activity we should do based on it.

Heidi (47:42):

So the energy that you were using in creating reports, you can now use in analyzing them, which is the far more valuable activity, because that's where you get the fuel to change your actions in a certain direction, and thus steer your team or organization, or just your own self, in a better direction.

Tuomas (48:02):

Sure.

Heidi (48:03):

We started off by talking about your role at Eficode and what you were doing before you joined us. I think that would be a nice way to end the official part of our interview.

Tuomas (48:14):

Sure.

Heidi (48:16):

What does a consultant like you do in an analytics or metrics project at a customer, and what does success look like in that project?

Tuomas (48:25):

Great. Well, we go and interview loads of people: what do they do currently? What does good look like for them? What does trouble look like for them? How are they actually measuring whether they succeeded, how are they measuring whether they're in trouble, what should be the red alert, the red flag, for them at the moment? We interview, we talk a lot, we try to understand every step of the delivery life cycle. So it's not just test automation, it's not just automated deployments or releases or something like that. It's also requirements analysis, and also the customer side: customer feedback, bugs in production. We try to understand the whole nine yards of their product development, release, and maintenance life cycle.

Heidi (49:30):

What's the difference between that and a DevOps assessment or a DevOps development plan, which are also things we do?

Tuomas (49:35):

Yeah. A DevOps assessment has a lot of similarities. Our DevOps assessment guru, Mika Mattila, was one of the first guys I talked to when I started off at Eficode. We actually happened to work at the same customer when I started, and he was starting a DevOps assessment there. And we kind of ... what's this word?

Heidi (50:04):

Realized?

Tuomas (50:05):

No. We calibrated each other so that we wouldn't be asking the same questions, because they are very similar. He wants to know about the daily life of developers, what they do with their Git commits and these kinds of things, which is not so much my area. My area is more about, say, customer feedback or a business analyst or something like that. I try to take it more towards the business end, while the DevOps assessment takes it more towards the development area: how can we help developers, who are the goldmine of any company? I mean, they are the guys who are actually developing this stuff.

Heidi (50:52):

Valuable assets.

Tuomas (50:53):

Very valuable assets. So I try to think more about the business and the external, the outside, whereas the DevOps assessment is looking more, I think, from the inside: how are things working in there?

Heidi (51:08):

Definitely, yeah. Mika Mattila is an elusive figure, because he's always at a customer and pops up in the office once every two years. It's like, "Oh, he's here." He's like the snow leopard of a person.

Tuomas (51:19):

And then about success? I start with the interviews and move on to create something I call the metrics catalog. It's like a pizza ingredient list, which has just loads of different metrics, like I was talking about before: velocities and bug escape rates and these kinds of things, coverages and whatever. I just write a lot of definitions of metrics. I might do them in Excel, so that I create some dummy data and sketch what it would look like: it would have these kinds of bars and this kind of a gauge, it would be meant for this kind of business activity, we would be trying to drive this kind of success, and these kinds of people would be using it.

Tuomas (52:16):

And then I go back to those people and say, based on the discussions we had, I made some examples of what your metrics dashboards could look like. And this could be created, as I said, in Excel and PowerPoint or whatever, just drawing it there and showing: is this where we want to go? While they are reviewing and we're having this open discussion, we're already starting to look at the data. What does the data look like? What does it allow us to do? And then you see the gaps: "Okay, so for this nice metric here, we are missing a crucial piece of information, which you are not yet collecting." You might think about changing your process to collect this data as well.

Heidi (53:05):

And you might then do that?

Tuomas (53:07):

Yeah, as a process change. And that's again, where we touch with the DevOps assessment. [crosstalk 00:53:12] Because it's more about process changes as well.

Heidi (53:16):

Yeah, pick up on that as well.

Tuomas (53:17):

And the successful outcome would then be a dashboard, or set of dashboards, that is useful for them immediately out of the door, but also a platform and a dataset that they understand: how was this data calculated? Why was it calculated like this? And what can you do going forward when you want to develop it, take it apart, and put it back together again as a better one? They need to understand why we did it the way we did it in the first place, how it's calculated, and all that. So as important as the dashboards is the customers' understanding of how it's actually built from the ground up.

Heidi (54:01):

So you're teaching them how to fish for numbers?

Tuomas (54:04):

Yes.

Heidi (54:04):

So they can keep...

Tuomas (54:06):

Yeah, they keep developing in their own right.

Heidi (54:09):

Yeah. And their metrics, right?

Tuomas (54:10):

Yeah. That's true.

Heidi (54:13):

Thanks, Tuomas.

Tuomas (54:17):

Thank you, Heidi.

Heidi (54:18):

I think we're done with the official section of our interview. I was curious to talk to you about Robocon. Robocon, am I pronouncing that right?

Tuomas (54:40):

Yeah, Robocon. Yeah. Rabucon.

Heidi (54:40):

Rabucon. What I hear is that you are organizing it?

Tuomas (54:41):

Yeah, from Eficode I'm responsible for helping the organization of Robocon 2020, and I'm very excited.

Heidi (54:42):

That's in January?

Tuomas (54:43):

It is, yeah, in the beginning of January or February. But it's a very exciting seminar, a couple of days in BRX, and then we're going to have some cooperation workshop things, a lot of pizza and drinks afterwards. So I really wish that people would come not only to enjoy what you can learn and experience at Robocon, but also to get to know Eficode during those couple of days.

Heidi (55:13):

And the wonderful people that Eficode consists of. What is Robocon for people who don't know yet?

Tuomas (55:20):

It is a conference for the Robot Framework test automation platform. Robot Framework is interesting: it originally started to be developed at Nokia, when I was at Nokia as well. Because I come from a test management background, I was following its inception, in a way, from afar. So I'm really happy to see how it's grown to be an internationally acclaimed test automation platform with this keyword-driven concept.

Heidi (55:59):

Natural languagy.

Tuomas (56:00):

Natural languagy, so simple people like me can also understand what the automated tests consist of and what they actually do, without having to jump through the hoops and loops of scripting.

Heidi (56:16):

And it's also applicable for robotic process automation, or RPA, for that same reason?

Tuomas (56:21):

Yes, that's very true. Yeah, you can do a lot of things with it if you let your imagination run wild. But yeah, you should come and listen to the smart people at Robocon 2020 to really understand where it can lead you and your organization.

Heidi (56:38):

I like smart people, and I want to learn more about Robot Framework. So maybe we should have a podcast there or write something. And I know Dila, who is sitting here recording us, took a really cool video of Robocon to capture the vibes. So I'm sure we'll be involved.

Tuomas (56:55):

I might invite someone who is new in Eficode, a new friend and a person from my team who is really knowledgeable about that and really fun to talk with. So I might invite him as well.

Heidi (57:07):

Great. That sounds like fun and games. And I'm guessing there are tickets available for this Robocon?

Tuomas (57:16):

Robocon. I think they're available. Let's provide the links and information with this podcast.

Heidi (57:24):

I know we tweeted about it. They're on Twitter, the Robot Framework Network Twitter. Just Google Robocon and you will find it. Google does all that for us.

Tuomas (57:34):

Yes. And more.

Heidi (57:37):

Tuomas, I think we are done here. This has been such a pleasure. I know it's quality over quantity, but can we just say that this is probably the longest podcast we've done to date? And it hasn't felt like a long time.

Tuomas (57:50):

Well, thank you so much.

Heidi (57:51):

I feel like the time has flown, the time has flied.

Tuomas (57:53):

The time is flying.

Heidi (57:56):

I know, the words have morphed into other words. And on that note, I think we should probably say goodbye for the day. Thank you for listening. We appreciate every single one of our listeners. And please do give us feedback at podcast@eficode.com. Just email us, give us your ideas for topics you want us to do, or any other tips, hints, tricks. We know all our listeners are super bright.

Tuomas (58:23):

Yeah. And it would be very interesting to hear if there's a wish for a deeper subject within metrics, analytics, reporting, these kinds of areas. What would float your boat as a future subject?

Heidi (58:41):

Yeah, definitely. I think for a second season, if we could have you on again because you were so very quantifull and qualitative-full in your words and podcasting, then that would be great.

Tuomas (58:57):

Excellent.

Heidi (58:58):

I'm ceasing to be able to string a sentence together. So let's say farewell for the day and have an amazing day or evening or night. Bye.

Tuomas (59:07):

Thanks, bye.