Measuring the overall success of product development can be frustrating because there are so many metrics to choose from, just like there are so many tools to collect, analyze, report, and act on the metrics. We invited Eero Jyske, VP of Engineering at ICEYE, and Henri Hämäläinen, Agile Coach and Product Organization Coach at Eficode. They discuss R&D effectiveness from their experience, and what software organizations should do about it.
One of my favorite metrics, certainly, is just number of releases that your organization is able to push out per day. That's the mindset to have, or per hour, because I'm never happy, but that's kind of the mindset of... Get away, certainly, from any thinking of having a release every two weeks or even less than that. Just every single change you do, just ultimately, just push it out, and that's a healthy organization. That drives so many good behaviors. It drives so many good indicators in the company if you have that in place.
Hello, and welcome to DevOps Sauna. Measuring the overall success of product development can be frustrating because there are so many metrics to choose from, just like there are so many tools to collect, analyze, report, and act on the metrics. We invited Eero Jyske, VP of Engineering from ICEYE, and Henri Hämäläinen, Agile Coach and Product Organization Coach from Eficode, to discuss R&D effectiveness and how to measure it. Let's listen in to what Eero and Henkka have in mind.
Thank you for taking the time, Eero.
Thanks for having me.
And thank you for coming, Henkka.
Thanks. Thanks. Very pleased to be here.
We have done something like 50 or so episodes now over the past 16-17 months, and this is the first time I've done this face to face. I almost didn't realize that all of the episodes have been done remotely, and it feels different to be here, but it's great to have you.
I specifically actually asked Henkka if we could have this live.
It actually feels so much better to just meet face to face. We've been meeting so much online already that it feels like it's enough.
Yeah. So you know each other from way back.
Yeah. Years back. I don't even know from how far. Maybe 2005-ish, something like that.
2005, yeah. The Nokia project where we worked together. I joined in 2003, and then Henkka sometime later after that.
Yeah. A bit later, yeah.
But somewhere around. That's quite a long time. 16 years, maybe. 15 at least.
Yep. Today, we talk about R&D effectiveness, and people who have been listening to DevOps Sauna for a long time might realize that this is a slightly different topic than what we've had before. DevOps, by definition, we haven't been talking about this level of conversations. We have been more into the nitty-gritties of Agile, Cloud, DevOps, technologies, culture, but R&D effectiveness is at a far, far higher abstraction level, because we talk about the organization and how organizations should be doing things, and there's always this debate over whether it's R&D effectiveness or R&D efficiency. Well, we are going with effectiveness, but I think both ways are right. Can we get the record straight on what R&D effectiveness is? What does that mean for our audience?
You want to go first?
I can try, yeah. For me, effectiveness overall comes from the end result. You need to be successful in whatever you do as a whole, and that's effectiveness. That's basically why we're here today: how to measure the whole is very, very difficult, so of course, we're going to have to take smaller pieces from it, but overall, you kind of need to look at how your products and services are actually doing in the field, and that's how you can measure your effectiveness. Of course, we can kind of go deeper on this, but that's the difficult part always for me, is that when you try to look at too-small pieces, you always miss something from the effectiveness. That's the thing, that you need to first understand the whole, and then you can start kind of going into details.
Yeah, I guess I agree. I mean, start with the end results, right? So ultimately talking about the products that you're building, and if somebody's willing to pay for them, that's a really good metric. Are you doing well? Are you making profit? But then even if you're making a ton of profit, like, let's say, Apple, I guess it's hard to argue that they wouldn't be doing well, but could they be doing even better? You don't know, really, and that's the trick with everybody: are we actually doing well? Okay, we are doing better than the competition, but if everybody else is just way worse, how would we improve? I guess internally, as well.
I mean, the thing that we'll certainly come to later here, as well, is that at least one really important thing for any organization is that you constantly keep improving against how you're performing internally, as well as externally. You want to make sure that you fix any issues, and then you measure yourself against how you were before. If you're better, great, but you're still like, "Okay. Well, I was..."
Let's say you'd be measuring with story points, which by the way, I wouldn't recommend, but if you use story points, if previously, you were doing 10, then all of a sudden you are doing 30, is that good? Oh, it's probably better, but is it still okay, or are we actually doing well? As to Henkka's point, there are just many, many, many metrics that are things that you want to pay attention to. But I think in the end, still, if your company is making profit, probably okay.
But maybe I'll just continue a bit on that topic, because there's no ultimate level of effectiveness. You'll never reach it. Or if you say that, "Hey, now we're effective," and then you stop improving, then you're going to be in trouble. That's the thing. One measure of effectiveness is that you constantly keep on aiming and going towards effectiveness, and I think that's one of the metrics. If you're not improving, you definitely are not effective.
Yeah, absolutely. The thing I keep telling my teams, and just important thing to know about me, that I'm never happy, you know? The positive thing about that is that you constantly want to do things better, and it is, in fact, something I keep telling people, that, "Look, we need to have people in our team who want to celebrate successes," because that's ultimately not something I do well, right? But it's coming from this mindset, really, of, "It's never good. We could always do better. If we did something great, then we could always do better."
I guess there's the thing that they say about Finns in general, that if you have some other nationalities, they may celebrate, and it's a good thing if you celebrate these successes, but Finns are like, "You're splitting an atom," and you're like, "Okay. Well, maybe we should split a smaller atom," or something like that. There's always something you can do better. That's very low-key.
Yeah. The first thing that a Finn says when somebody reminds us that we are the happiest country in the world, it's like, "No, it's not happiness." It's contentment, though, but it's definitely not happiness. We are not the happiest country in the world.
It's like whatever that is, but it's not happiness.
Absolutely. I remember while living in the States, there was this happiest country on the planet ranking. Finland wasn't number one, then. They were probably third or something, and Denmark was the first one. Then 60 Minutes, they sent these reporters to Copenhagen to figure out like what's going on. USA, of course, the best place, they were like 17. Then they were interviewing these people on streets in Copenhagen, and yeah, people were like, "Oh, wow. We are the first. The best country in the world. The happiest. Hmm. Well, okay. I wouldn't have known." And then they said that, "Well, I guess we think it could always be worse."
Right? So you're happy with what you have.
So effectiveness has parts to do with what you do, but it also has parts to do with how you do. Is that the right interpretation? So it's not only that, "Okay, whatever we are doing, we're just doing it in a great fashion," but it's also that we are doing the right things.
Yeah. Absolutely. Yeah. Yeah, the end result does matter, for sure, and I think you always need to have an eye also on the end result. Okay, there might be some wrong ways to go somewhere, but definitely, there's no one right way to go somewhere. I think that's the difficulty overall in product development. I used to say, 10 years ago when I started to be a kind of a coach for organizations, that I definitely knew how every organization should work, and ever since then, I haven't.
I had the best way to do everything in mind 10 years ago, but then all the organizations have something different, and then you kind of realize more and more, and you find another way to be effective and great. I think that's the beauty and the difficulty of it.
Did you still find some like fundamentals that have not changed, or?
Yeah. I guess one thing has always been about this vision. There kind of needs to be this product vision, and there needs to be this way, like where we are heading, and then being firm on vision and flexible on details. I think that's one of the key things, right? You need to know where you are going, and then you actually can find the ways there. I think that's one of the main things I've seen. If the company has a good vision, then it definitely is possible to get there, but then how they go there might differ.
Yeah, absolutely. Yeah. Often, one of the... Well, maybe I'm like Henkka. Today, something that I believe in firmly is... I keep talking about vertical teams and then horizontal teams, where vertical is the product vision, like what are we building, who is going to buy this, and then what's the value here? Then you have the horizontal, like technology teams, who focus really heavily on the how, right? And then ultimately, if you're VP of Engineering like myself, the role is about balancing the two. I mean, I want the product side and the what to push extremely hard on what it is that we need to be building, and then I bring the balance of like, "How much? Okay, well, we can do that, but this is how much we should invest in scalability and quality and things like that," so that there's a trade-off.
I don't necessarily want the product team, really, to care about those things that much. I mean, of course, tell us what's the vision, like how many users are we going to have, but I want them to be really gung-ho on, "These are the things that we really need to build for something to create value." But equally, I'll push back that we have to do this kind of technology investment, as well. If you visualize it in a way, it's that if you only do the what, you're going to ultimately have, maybe quoting some famous Nokia CEO, a burning platform that's just going to collapse, so it's never going to... You're going to go bankrupt in the worst case.
Then if you only do the technology investment, you're just going to build some... What's the cathedral in Barcelona or whatever? You're just never going to be done with it. The sweet spot is somewhere in between, right? Focusing on the how part, I see it as: I want to make the angle as steep, as vertical as possible. There is no inherent value in doing any of this how. The only reason you do it is to maximize the speed of development on the product side. If you find an organization where you have a really nice, good collaboration between the VP of Product and the VP of Engineering, that's probably going to do well.
Yeah. Definitely. I think this goes back to the kind of organizational force theory I quite commonly bring up, which is that you cannot ever build an organization perfectly. You need to have the balancing forces, and that's what you actually meant.
You need to have a force that goes for the how, and you need to have a force that goes for the what, and maybe then also for the quality, which kind of is similar to the how, but you need to have these balancing forces there. I think that's the difficulty. You cannot ever build it so that the processes and organizational chart will actually do it all, but you need to have these strong forces that actually will have some kind of a... Maybe fight is not the right term, but these kinds of strong opinions on where this is going, and that kind of balances it out.
Constructive conflicts, as they say.
Yeah. So it reminds me of this yin and yang image-
... in that there's black and white areas, but then inside one is a black spot and inside the other one is a white spot, so there's a little bit of an overlap there. Of course, from an economic standpoint, you could make it very simplistic. You're saying that the question of whether we are doing the right thing can be seen in the top line, because people are willing to buy it, and the question of whether we are doing it the right way can be seen in the bottom line, because we can actually make some money out of it. That would be overly simplistic, and I wouldn't be willing to subscribe to that, but to extrapolate the question about how to recognize an organization that is or is not effective: you both have seen countless organizations, so how do you tell an effective organization apart from an ineffective one, from an R&D standpoint?
You want to go first?
Yeah, I can do it this time. I think, going back to Henkka's comments earlier, there are a number of things you want to observe that are going well. Obviously, the things that you already quoted: having products that we deliver that somebody's willing to pay for, quality at the right level, et cetera, et cetera. But then things like employee happiness, so people are actually happy working in your company. Generally, also, maybe another rule is that you want to have the product vision, of course, so you're doing some right things, but after that, if you have smart people working for you, talented people who are happy, you're also probably doing very well.
Then if you go for raw metrics, I'll kick off a few, but flow of just getting things through the door. How quickly are you able to push stuff out? One of my favorite metrics, certainly, is just number of releases that your organization is able to push out per day. That's the mindset to have, or per hour, because I'm never happy, but that's kind of the mindset of... Get away, certainly, from any thinking of having a release every two weeks or even less than that. Just every single change you do, just ultimately, just push it out, and that's a healthy organization. That drives so many good behaviors. It drives so many good indicators in the company if you have that in place.
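In code terms, the "releases per day" metric he describes could be computed from deployment timestamps along these lines. This is a minimal sketch; the timestamps, and the idea of pulling them from a CI/CD system's API, are invented for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical deploy timestamps, e.g. as pulled from a CI/CD system's API.
deploys = [
    "2024-03-04T09:12:00", "2024-03-04T11:40:00", "2024-03-04T16:05:00",
    "2024-03-05T10:22:00",
    "2024-03-06T08:55:00", "2024-03-06T14:30:00",
]

# Count deploys per calendar day.
per_day = Counter(datetime.fromisoformat(ts).date() for ts in deploys)

# Average deployment frequency over the observed days.
avg = sum(per_day.values()) / len(per_day)
print({str(day): count for day, count in per_day.items()})
print(f"average: {avg:.1f} releases per day")
```

The same counting trivially extends to per-hour granularity, which is the mindset he is pushing for.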
Yeah. I'll continue on that thought. Of course, just to kind of, again, get everybody understanding: releasing doesn't always mean a commercial release, really. It means that it's potentially releasable, right? That's what you should measure. Not how often you release to the customer, because that might be a product management decision. A product decision, right? But how capable you are of releasing, even every hour, like you said.
Absolutely. Here, I'll maybe just clarify. Instead of "release," maybe a better word to use would even be "change."
Yeah, yeah. Yeah.
So you're changing something.
Yeah. Build, then from the build, have the release, and then have the release available for customers. You would look into the cycle time versus lead time. How long does it take for the organization to get a requirement from the backlog and get it checked into a code base ready for deployment-
... and how long does it take for a requirement in the backlog to satisfy the customer requirement in the front line? So the end to end.
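The cycle time versus lead time distinction can be sketched in the same spirit. The milestone names here are invented for illustration; real tooling would pull these timestamps from an issue tracker and a CI system:

```python
from datetime import datetime

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds() / 3600

# Hypothetical milestones for one backlog item.
item = {
    "added_to_backlog": "2024-03-01T09:00:00",
    "work_started":     "2024-03-04T09:00:00",
    "merged_to_main":   "2024-03-05T15:00:00",
    "live_for_users":   "2024-03-06T09:00:00",
}

# Cycle time: from starting the work to code ready for deployment.
cycle_time = hours_between(item["work_started"], item["merged_to_main"])
# Lead time: end to end, from backlog entry to value in the customer's hands.
lead_time = hours_between(item["added_to_backlog"], item["live_for_users"])

print(f"cycle time: {cycle_time:.0f} h, lead time: {lead_time:.0f} h")
```

The gap between the two numbers is exactly where the go-to-market delays discussed next tend to live.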
Yeah. But there might be reasons why you would delay the release to the customer, because of your go-to-market activities or something like that. That's the reason I would take that out of the capability to actually release.
Yeah. I think, technically, just to go into a few nitty-gritty details: I'd build the capability of still being able to push all the changes out to production and have them available under some condition or flag or something like that. None of these activities would hold you back from releasing-
... so your R&D engine can just keep on pushing stuff out, and it's actually somebody else's decision, or concern, even, when to publish it to end users. But don't build branches in your software version control; just work truly on one branch and push everything out as quickly as you can.
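The "push every change to production, publish later" approach usually rests on feature flags. A minimal sketch, where an in-memory dict stands in for whatever flag service or config store a real system would use:

```python
# A toy flag store; in practice this would be a config service or database
# that product (not R&D) can toggle without a new deployment.
FLAGS = {"new_checkout_flow": False}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def checkout(cart: list) -> str:
    # The new code ships to production on the main branch, dark by default.
    if is_enabled("new_checkout_flow"):
        return f"new flow: {len(cart)} items"
    return f"old flow: {len(cart)} items"

print(checkout(["boots"]))          # old flow until the flag is flipped
FLAGS["new_checkout_flow"] = True   # a product decision, not an R&D one
print(checkout(["boots"]))          # new flow, with no redeploy needed
```

This is how the release-to-production cadence gets decoupled from the publish-to-users decision he mentions.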
Yeah. Or did you have something on that?
Yeah. You mentioned story points earlier. I sensed that you have a strong opinion about story points.
Well, I don't know. I mean, it's just never proven to be very effective. I mean, many of these... It tells you something, but I'd much rather just focus on having, to Henkka's point, a clear vision of what the team is actually building, and the team will know whether they're being effective or not. Measuring story points, yeah, I don't know. You can game these and...
That's what I was about to say. It's like if you measure it, that's always a game, right?
Always. Every measurement is a game, and then you can play your story point game, and then you kind of end up... If someone puts focus on story points, you actually end up gaming it, like, "Hey, let's split it. Let's have it as a bigger one," and then, even though it's not intentional, you can show progress, and everybody's pleased that, "Hey, now you get better numbers," but it doesn't tell anything about your capability, actually, to get the release out or something like that.
Absolutely. And think about this: especially when you start tying these into any bonuses, that's when it really goes wrong.
And the same thing with finding bugs or fixing bugs being some sort of efficiency metric.
The worst idea ever. You want to have... and maybe I'm still searching for the best possible of these hard metrics, but certainly, these haven't proven to be the ones.
But if I can go back to kind of where you started, the whole answer was this kind of happiness of the team, the kind of product-capability-building team. I mean, the whole team. I think if you measure the product people and the ones who are building the system, and then maybe, if there are separate people on the DevOps and testing side, those too, then if all of those are happy, you for sure are very effective.
I think that's this soft metric, even though it's not really used, this measuring of the happiness of people. It would need to somehow be connected to the work, because there might be some things affecting the happiness that are not related to the building of the product, but if you can somehow connect how happy you are doing this product, then I guess that would be a very good metric. Again, I don't have the perfect solution for this, but I think that will tell you a lot.
Yeah. And maybe you want to ask how you measure this happiness, but I think one practical thing, at least, that I've learned is that people generally just want to work smart, right? Nobody wants to do stupid, repetitive work, and that comes from what we covered earlier, or mentioned earlier, about the willingness and desire to continuously improve. Generally, if you have competent people working for you and you let them invest in doing this kind of improvement of ways of working, then they will be happy. There's not much more that people actually even need.
I mean, of course, one key thing, I think, in an organization that makes a big difference is whether you're making a profit or not. If you are not making a profit, that often leads to suboptimal work being done in a number of places, because the organization is in a rush to get stuff out to the market. But truly, when you start to be profitable, or break even, and you can just have the team invest in continuous improvement, that's the best source of happiness I've found, at least for technical folks.
Yeah. You talked about... What did you say? Rigor on vision and flexible on the details? I would argue that when teams and organizations understand what their vision is, and they can see themselves being part of that, then that invariably leads to their happiness, especially if they have the freedom to decide what to do and how they do it.
That's the purpose, right?
Like, "Why do I do this stuff?" Yeah.
Yeah. When we were preparing for this, there was something that Henkka, you said, and I'd like to repeat that and then ask you for more, but you said that the biggest problem with measurement is that it works.
Yeah, exactly. It might sound funny, but that actually is the problem: there's quite a lot of proof that when you start to measure something, people will actually start investing time and effort in it. To some, it might sound funny, because that's why you measure things, but that's the difficulty of it. Because, like what we've been discussing, you cannot measure the whole thing, which eventually means you'd need to wait half a year to see what's your bottom line, right?
You need to have some measurements in between, and all of those will actually start guiding the actions of the people. I think that's the difficulty of it: metrics work, measuring works, and that will lead to some behavior, and quite easily, that behavior is wrong. I think that's the difficulty of all of these things, and I think we can get back to this a bit later, but you need to be very, very smart about your measurements, because those really guide behavior. Those are good in change projects, but if you are just measuring the status quo, then that might lead to the wrong behavior.
Yeah, absolutely. And think about... You've already mentioned this, like bug counts. Why wouldn't the team, in an extreme case, just keep implementing issues into their code so that they could fix more bugs, if that's the specific metric?
Yeah, if that's the metric.
Well, it's been a long time since I've been in an organization where that was a thing, but: lines of code produced by a single developer.
Right? Because ultimately, you want the exact... Well, not the exact opposite. You want people to produce code, but you certainly don't want them to implement long solutions; you want them to really go for tight, optimized solutions.
Yeah. Or even, if possible, reduce the amount of code, then it's like...
That's always true. If you add one line of code, it always slows you down. Always. Every line of code slows down the development. That's what I've said: if you remove stuff, that would actually improve your efficiency.
Let me try to connect the highest possible level of abstraction, which is R&D effectiveness, to probably the lowest level of abstraction, which is the selection of programming language. You alluded to the thing that if you measure by bug count, then people might be geared towards writing code that increases, not decreases, bugs. Roughly speaking.
People are not malicious, but they might inadvertently do that. Statistically speaking, I heard something like one in every 20 lines of code has a bug. So if you select a programming language which is dense rather than verbose, so it takes fewer lines to implement a function, statistically speaking, you get fewer bugs because there are fewer lines. That would suggest letting go of high-level languages and going into the lower-level languages, or the other way around, and just evaluating it on the basis of how many lines it takes to implement the function.
And I think it's a fact that if you can write it in fewer lines, the likelihood of having bugs should be less. But of course, often, these constructs can be rather complicated, so you need to know what you're doing. Let me make a couple of points. I read something, I think it was yesterday, on LinkedIn, or somebody posted it on some other social media. I forgot what the law was called. Something like flops law or flips law, something like that, but it basically stated that a good programmer programs well in any language, and a bad programmer programs poorly in any language.
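The back-of-the-envelope arithmetic behind this: treating the quoted one-bug-per-20-lines figure as an assumed defect density, expected bug counts scale linearly with line count. The line counts below are hypothetical:

```python
BUG_RATE = 1 / 20  # assumed defect density: one bug per 20 lines

def expected_bugs(lines_of_code: int) -> float:
    """Expected bug count under a constant per-line defect rate."""
    return lines_of_code * BUG_RATE

# The same feature, hypothetically: 400 lines in a verbose language
# versus 100 lines in a denser one.
print(expected_bugs(400))  # 20.0
print(expected_bugs(100))  # 5.0
```

The model deliberately ignores the caveat raised in the conversation: denser constructs can be harder to get right, so the per-line rate is unlikely to stay constant across languages.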
It's still a competence game. I think the strong opinion I have, though, and it has become increasingly strong as I've progressed, or gotten older, is strongly-typed languages versus loosely-typed. I think loosely-typed languages just lead to this illusion of productivity and getting a lot of stuff done very quickly, at the expense of then building systems that have really complicated issues.
That's why I'm welcoming Golang and all of these with open arms: bring back the strongly-typed languages and just reduce that. Let the compiler actually help you do your programming job.
One thing that often is not thought of is that if you think about bugs, or things that are considered bugs, I can't recall the exact number, but I think it was something like 20-25% come from something that does not exist in the code, meaning that it has never been written, and it's not working because of that. Then it doesn't matter what's the tool or what's the language, you know? It's because you never thought of it.
Is that about the fact that the requirement is not well-defined?
Something like that. The requirement, and also the happy case, or the non-happy cases, are not thought through, really. They are this kind of a-
Yeah, or some timing issues and stuff like that, which quite often actually cause many of the bugs, where it's not something that you've ever written in the code. You haven't thought that this kind of thing might happen. I can't recall the exact study, but I think it was something like 25% of the bugs come from something that does not exist. No one has really thought about it at all.
Yeah. It is the sun-is-always-shining, everything-is-great, nothing-ever-goes-wrong scenarios. Those tend to be implemented. For me, that's clear. That's easy to understand: it needs to be able to do this. But then you have all these different paths that can go wrong.
You could argue that much of that can be solved with automated testing, because when you have fabricated test contents, for instance, you have automated test cases and things like that, and you'll very effectively be able to test the conditions that... You cannot conjure those thoughts yourself, but you can develop a test setup and test contents that, statistically speaking, happen to catch that situation. That would be one...
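One way to read "fabricated test contents" is randomized, fuzz-style testing: generating inputs nobody thought to write by hand. A minimal sketch, with a deliberately naive function standing in for real code; the seed and input ranges are arbitrary choices:

```python
import random

def average(values: list) -> float:
    # Deliberately naive: nobody "thought of" the empty-list case.
    return sum(values) / len(values)

def fuzz_average(runs: int = 1000, seed: int = 42) -> list:
    """Feed generated inputs to average() and collect the ones that crash it."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        # Fabricated inputs, including lengths a human might not try.
        data = [rng.uniform(-1e6, 1e6) for _ in range(rng.randint(0, 5))]
        try:
            average(data)
        except ZeroDivisionError:
            failures.append(data)
    return failures

crashes = fuzz_average()
print(f"{len(crashes)} generated inputs crashed the function")
```

The generator stumbles into the empty-list case purely by volume, which is the "statistically speaking, happens to catch that situation" point.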
I think that's a good point of discussion: how much you actually can build up with testing, how much of this effectiveness in finding bugs you can build up with testing. I've even gone to the extreme of saying that the idea of testing isn't to find bugs, or to build quality. It is actually to make sure that there are no bugs. In a way, you have it in order to make sure that the quality is there, but you wouldn't ever rely on testing to find any bugs. This goes back to the thinking about whether you can measure the amount of bugs somehow. Have you been effective? In an ideal world, you shouldn't ever find any bugs in testing, because then you've actually done proper work on developing it.
And you're doing quality assurance.
You could see the act of finding bugs as basically quality control, where you're controlling that nothing goes out with bugs, right? But the assurance part is actually building, as Lauri said, just having enough testing in place to ensure that when you're making changes, they function, there's no regression, and things work as expected.
I think this goes back very deeply to this effectiveness discussion: if you have to rely on your testing for the quality, then most probably, there's something wrong in how you're doing things. But going back even further to this force discussion: I honestly have seen an organization that had a quality force where the quality was the limiting factor. They were finding lots and lots of bugs, and they were investing heavily in automation and quality.
What it actually caused is that the effectiveness was very, very low, because they held the whole organization back from getting these releases out, because they had such an extensive daily set of testing, something like two days of it. That's a very difficult question overall, I think. In an ideal world, it sounds like it's good to have a lot of automated tests, but then in the real world, that might also have a negative effect.
Well, I mean, I think still, the problem you're quoting, of having multiple days for testing, would mostly imply that it's manual, to a large extent.
No, it was actually automated in this case. It just took... There were something like 3,500 automatic test cases at the UI level.
Yeah, but I mean, maybe not knowing the exact details here, I would still think that it's more of a... You want to pull in the QA teams as well as the developers to the goal of being able to release multiple times a day, and that should imply that you know you don't need to test everything, so you can do focused tests-
... or then your test automation just runs blazingly fast and you're able to execute it that fast. But these UI test cases can often be... They're bound by real-life execution times, right? Because you want to simulate real-life human cases, and they just take time, unless you can honestly scale it to be executed on 1,000 parallel servers or whatever. You need to bring the QA along for the ride. It's not a silo where you have QA separate from the developers; they're working together on the same goal.
How do you approach the sort of metrics and effectiveness when you think back to your introduction of vertical versus horizontal teams?
If it is so, let me just make it black and white: the vertical teams are responsible for having a good what. Okay, we have the functionality, which to the customer is a functional requirement, and the role of the vertical team is to have a good response and a good solution for that functional requirement. Then you have the horizontal teams that are responsible for non-functional requirements. Whatever that functionality is, it needs to be secure. It needs to be scalable. It needs to be available. It needs to be highly performing, things like that.
So when you look at the effectiveness metrics, how does that conversation vary talking to the vertical teams versus horizontal teams?
Oh, well, that's a good question! Yeah. I was thinking about that while you were asking it, like, "Would I have a really good slam-dunk answer for that?" But I think I was revolving towards the thought that these metrics, which you want to identify and have the teams execute towards, should be shared, in a way. There aren't some things that are for the horizontal team and some for the vertical team, right? The nature of the things that they want to do in terms of the what for the product is different.
Specifically the scalability aspect. That's typically where things come from: the scalability of being able to serve more users, the scalability to have more data in the system, and the scalability of the organization to have more developers who can actually work on this asset. So it's not like everybody's trying to change the same components, but you can actually have an architecture where different teams can work on different parts.
But things like bugs, they should certainly be common. They should not be like, "These are your bugs and these are our bugs," with you fighting over them. For some of these metrics, you certainly want to make sure that the goal is the same for everybody. I don't know if I answered your question there, but... Siloing is what you want to avoid. I mean, there's a healthy friction there, but the mission is still the same. It's not different.
It's Lauri again. To succeed today, you need to make the most of agile practices at scale. A successful DevOps transformation starts with enabling the leadership team and upskilling every role throughout the organization. Successful change at scale relies on the teams' capabilities and tools. Many teams have also found the recipe for efficiency by adopting a managed services approach to software development tools. You can find links to our training and managed services offering in the show notes. Now, let's get back to our show.
Now, I recall what I was about to say a bit earlier, and this goes back exactly to your point about effectiveness. I think it's important to understand that in the software world, like in nothing else, you always build on top of something, right? When you measure effectiveness in a product line or something like that, you are always building a new thing from what exists. You can measure the end result, right? "Hey, how well is this end result actually working?"
But in the software world, your end result always has the effect of reusability, right? How you can actually build on top of it. And now it goes back to the coding language. If I try to squeeze it down into something very short: the reusability part of it is affected. In my thinking, in R&D effectiveness, you need to think about the customer value, of course, but you also need to think, "Well, how does this affect our future?"
I think this is the important part that has to be understood, and it relates to the coding language discussion, too, right? You need to always keep an eye on the fact that, "Hey, we will continue tomorrow and the day after tomorrow on these things we've done," so you should always have that reusability part as an important metric of your effectiveness.
I think it comes back to another really important point: for example, these big players in Silicon Valley quite aggressively reimplement large parts of their systems on a regular basis.
Pretty frequently, actually. They change programming languages. Some of the biggest players implement their own, like Google with Golang and whatnot, so that they optimize for their needs, but some other companies change programming languages also to ensure that things actually get reimplemented, and you don't just carry stuff over from your previous systems and copy-paste the code from one to another, right? Because that's one of the biggest challenges that I, at least, feel I've been tackling most of my career...
I mean, you're working on a train that's moving, right? It might be missing wheels, or it's certainly missing some compartments and toilets and whatnot, and you're building them as you go, and some people are actually using it at the same time as you're trying to do your development work. That's always the biggest trick. Things are blazingly fast to develop if nobody's using your product, right? Then it's just like, "Keep on going." If it breaks, you're like, "Okay. Well, fine. We'll fix it."
But the moment you have thousands, tens of thousands of users, you need to be pretty careful how you go, and that will really start to slow you down. But maybe this is one of the vertical/horizontal big investment discussions, to your earlier point, that happens: when do we just bite the bullet and rebuild the whole thing? Because the thing that drives complexity is, maybe back to Henkka's point on the requirements, just, "Okay. Let's build this." You're not necessarily thinking about, "What are we going to have in two years?"
And then you're building and building, and ultimately, you have something really complicated, and you're like, "I can't possibly extend this anymore." But what you do have is a very clear understanding of, "This is what the product needs to be," so you can just ditch everything you built and rebuild it from scratch, because software engineers are incredibly fast in doing things if the requirements are really clear. You get tons of stuff done. You really burn most of your time going back and forth on, "Should this button be red or green, or should it be here?" Or what the flow actually should be. That's really where you spend most of your time.
We have this thesis. We have this future of product development thesis, and one of our theses is that your current technology will kill your effectiveness, and I think this is poorly understood.
Whatever you are doing now will kill your effectiveness in, whatever, three to five years, or seven years, depending on what you do. I don't have an answer for who decides... I think this goes back to the forces. You need to have the product force and you need to have the technological force. If you only rely on the product force, meaning that product management gets the say, they will never quite understand this part, and that's the reason you need to have a strong technological force in your company that will drive you to actually build things up again.
Yeah. I think it's like... Maybe, even if they would understand, which some of them probably can-
... it's just the fact, as I was saying earlier, that I want these people to focus. Focus is incredibly important in any organization. You can't do multiple things at once. One person is doing one thing at a time; they won't be doing multiple things at once, and on an organizational level, that also scales to be true. The organization can really just do one thing at a time. You can, though, split the organization into smaller pieces.
Another law that I always preach is Conway's law, right? Your architecture looks like your organization and vice versa. You can change either one, but that's the way to build multi-threading into your organization, so you truly have areas that are not dependent on each other. Then they can execute, they can plan on their own. They don't need to wait for something from some other team. Otherwise, individual teams or units working on stuff can only do one thing at a time. That's just the way it is.
We could easily go into the scaled agile frameworks and all of those... Actually, their whole idea is that we build an organizational model for how to handle complexity, but you should actually build an organizational model that doesn't have the complexity.
And I think that, again... Maybe that's another discussion overall for other guests to have, but the idea that came to me from this is that we're talking so much about collaboration between all the teams, and then suddenly you say that... By the way, I do agree, but you're saying that, by Conway's law, if you build an organization that is very connected, that means your system will be very connected.
Absolutely. It will be.
I think that's a good finding: there's the other side of the connectedness inside the organization, that you will actually end up building a system that is very complex and connected.
Think about, again, the people leading an engineering organization and the people leading a product organization, right? The conversation to have, with the product team, for example, is: what is actually the vision? What are the products that we truly want to build? Because we could build an architecture to match that if we had that vision, but we rarely have a strong enough understanding of what that needs to be. Equally, we'll come up with something on the architecture side which is good for those use cases we're building, with no scalability and all that.
And the whole kind of platform thinking. For me, quite often platform means that we have no idea of the vision.
Like, "Yeah. Let's build a platform so then we can use it for whatever."
Yeah. It is a bit of a cop-out in the organization, right? We don't know all the use cases, so we'll build a... So maybe the chances are that you don't know any of the use cases.
Yeah, exactly. There are very good platforms, don't think otherwise, but too often it is used like that... Because if we don't want to choose, we build a platform, and then we can build whatever on top of it.
Yeah. What I often say myself: we can do anything. The important thing is that we all collectively understand what we are doing and why we are doing it in a certain way, and that we made these assumptions today. They may change, and if that happens, we do need to invest in rethinking this... I mean, we're not building some massive, great system that will scale to everything. We'll try, but we probably won't scale into all the ideas that people may have. In that sense, it's just fair to be able to tell the engineers, "Look, folks, these are the new requirements. If we can reuse what we have, great. We can also blow it up and just rebuild it."
What I said earlier, it's really fast to build stuff when you know what you're doing, if you have a good vision.
There must be a trade-off between... Let's be prepared to whatever happens, and let's decide to be really good at something.
And generally, that's the Silicon Valley mindset, right? Be good at something. I mean, that's how you build a business. You do something really well and then you replicate it.
You don't try to do it like, "We'll prepare for everything you can possibly do under the sun." But yeah.
You, Henkka, mentioned the concept of measuring change versus indicators.
So if we think about all of this, what we have discussed, if you could open that a little more, and then we could seamlessly transition to the follow-up question about the North Star metric, because I believe some of that is connected.
Yeah. It's important to understand, again, that metrics work, right? Metrics will affect behavior. For me, that means if you want to have a change, let's say that you want to invest in quality, then you can measure quality very closely with the number of test cases, your pass rate, your bugs, and you can get the change to happen. But then it is important to understand that that's a change metric.
After you get to a certain level, you should stop measuring it, or at least transition it into an indicator metric. What I mean by an indicator metric is something like the metric of how warm your house is, right? If it is somewhere around 22 to 24 degrees, or whatever the normal is, you don't need to care, right? That's the indicator metric. But if it goes high or low, that's the point where you should have a plan, right? What do I do if it is 19 or 18? That's what I mean by an indicator metric.
You have certain indicator metrics in your organization, but those are just in the background and you don't need to care about them. And then you have these change metrics, which are purposefully on red, meaning that you are going to change something. Then when you get somewhere, you stop treating it as a change metric, and maybe think, "Should this become an indicator metric, so that we won't ever drift in that direction again?" That's what I mean: there are change metrics and indicator metrics. They are different.
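Henkka's thermostat analogy can be put into code. The following is an illustrative sketch by the editor, not anything the speakers described; the metric names, thresholds, and targets are all hypothetical examples.

```python
# Hypothetical sketch of the indicator-vs-change metric distinction.
# All names, bands, and targets here are invented for illustration.

def check_indicator(name, value, low, high):
    """Indicator metric: stays in the background and only demands a plan
    when it drifts outside its healthy band (like room temperature)."""
    if low <= value <= high:
        return f"{name}: OK ({value}) - no action needed"
    return f"{name}: OUT OF BAND ({value}) - trigger the plan"

def check_change(name, value, target):
    """Change metric: purposefully 'on red' until the organization reaches
    the target, after which it is retired or demoted to an indicator."""
    if value >= target:
        return f"{name}: target reached - demote to indicator metric"
    return f"{name}: {value}/{target} - keep driving the change"

print(check_indicator("room temperature", 23, low=22, high=24))
print(check_change("test pass rate", 0.91, target=0.95))
```

The point the sketch makes is that the two kinds of metric have different life cycles: an indicator is watched forever but acted on rarely, while a change metric is acted on constantly and then retired.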
So the North Star.
North Star is the concept of: what would be the one metric to tell about the effectiveness if there's only one thing you can measure? So what would be a North Star metric for product organization effectiveness? Do you already have one in mind, Eero?
Well, with the way you framed it now, I was thinking about this for the development organization alone... If I'd have to pick one, and I don't think it's realistic to have just one, it would be the number of releases and the pace. The flow is really a good way to look at it, because that leads to, or should lead to, a set of other good things, and that's how you want to pick the North Star metric, obviously: it leads to a hundred different good things, so you go and select it that way.
I don't really know if I have anything better than that, but listening to Henkka talk there, you can't neglect the importance of just thinking, "Are we making money with these products, and how are those business metrics looking?" That is really, really important. I don't know if you have anything better.
Yeah. I think we could maybe answer it like this: if you are in a SaaS business or something like that, where you actually get quick feedback from the customers, then I think you should have the North Star metric somewhere in there, right? You can get the quick feedback from the customer. But if you are in the B2B world, or some other world where your sales cycle is so long that you can't rely on those customer metrics, then I tend to agree with Eero that this ability to release to the customer, meaning the readiness-to-release point, would maybe be the best, because it's hard to see how an organization could be too good at it.
I mean, what's the limit? If you can release four times a day, why wouldn't it be better to release five times a day? There's basically no limit to better on that one. You should always have well-working software and the capability to release. I would maybe go with one of those.
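The releases-per-day North Star the speakers converge on is, in practice, the deployment-frequency computation. This is an editor's illustrative sketch, assuming your deployment pipeline can emit a log of ISO-8601 timestamps; the log entries below are made up.

```python
# Sketch of the "releases per day" metric: count deployment events per
# calendar day. The timestamp source is hypothetical; in practice you
# would feed this from your CI/CD system's deploy-event history.
from collections import Counter
from datetime import datetime

def releases_per_day(deploy_timestamps):
    """Count deployment events per calendar day from ISO-8601 timestamps."""
    return Counter(datetime.fromisoformat(ts).date() for ts in deploy_timestamps)

# Hypothetical deploy log
log = [
    "2024-03-01T09:12:00", "2024-03-01T14:30:00",
    "2024-03-01T17:45:00", "2024-03-02T11:05:00",
]
for day, count in sorted(releases_per_day(log).items()):
    print(day, count)
```

Tracked over time, the daily counts show the "flow" Eero describes: a rising trend suggests the organization's release capability is improving, regardless of any fixed target.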
Last April Fools' Day, we announced a press release for a continuous deployment keyboard. That was, of course, tongue in cheek: every time you press a key on the keyboard, it builds and deploys. That's the ultimate outcome you're heading towards. Of course, a lot of the code is not going to compile because you couldn't get to the semicolon yet; only once you press the semicolon does it actually compile.
I mean, it's original.
But that's why it was April Fool's Day.
Yeah, but you get feedback on every...
Yeah. But it's a good point, actually, that Henkka is making here. This number of releases per day is obviously very SaaS-centric, which is, granted, my background for the last 10 years or so. If you are doing something else, it's a bit different.
There are still many, many organizations where... When there's hardware included, it might take even years to build up the product, and that's all right, but that doesn't mean you wouldn't have the capability to learn fast.
I mean, you have some prototype version of the hardware. You release against it, and you learn and test, and I think there is the capability to actually get the release integrated into the system and then get the feedback. I think these are the same: even though you cannot release to the actual customer outside, you should be able to release to an integrated environment and get the feedback from it.
Yeah. One more item I do want to mention: the business metrics are important, of course, but I am a firm believer, and I think this is the third time I've mentioned it, in employee happiness, the happiness of the people working there. It is just a really good indicator if people are competent and people are happy working. You could argue, "Well, they might be happy because they don't have to do anything, or they can do whatever they want," but that's not a driver.
Yeah, I haven't seen that.
No. I mean, people want to do a good job and they want to do meaningful work, and if you publish your financial results, they'll also be happy if they're good, and they'll be unhappy if they're-
They are proud.
Yes. So maybe that's the thing. If you measure one thing, it's just employee happiness.
If I can spend a few minutes on this: too often, it is seen as an HR thing, right? Are people happy? It's seen as, "Do we have all the bells and whistles?" I think those don't really matter. If you're doing an interesting job in an efficient organization, with a good purpose, the people will-
The good colleagues.
You will be happy, and then the bells and whistles don't matter. By bells and whistles, I mean things like what type of coffee machine you have.
These perks don't really matter. I agree that that is very, very important, and I think that should be something to understand and measure.
Yeah. We can introduce a new term: bring your own perks. We'll just give the teams funds and say, "Okay. You don't like coffee? Well, you go get your own perks. Here is the money."
There was one, as we mentioned, at Netflix specifically, right? Their compensation philosophy has always been, "Okay. We'll just give people a big salary." I mean, they have perks, and they have free lunches, I believe, but for some of the things that would normally get dictated or given to you by US companies, they would just say, "Here's $30,000, and you can spend it however you want."
Specifically, of course, you spend most of it on healthcare, right?
Because you could choose not to have healthcare at all, or you could spend it all on really good healthcare. But the culture that they wanted to build was independence in every single thing in the organization, and to encourage the fact that, "We expect you to make decisions on a daily basis, every single one of you, and it starts with your compensation package." These are all cultural things, of course. Not everybody can or would do the same, but that's something they did do.
One last thing which we cannot ignore is the considerations for the technology solutions around measuring R&D effectiveness. Of course, we can walk around and ask people how happy they feel, and then we can take a sample and calculate different factors, and then we can feel good about that. I wouldn't consider that a technology solution. Especially considering that some of those metrics are going to be change metrics and some are going to be indicators of the current situation.
Some of them have to do with the culture and the organization and the people, and some of them have to do with what is actually going on in your pipeline, for instance. What are your considerations for the technology solutions? And then maybe, if you have them, good examples of measuring R&D effectiveness.
Yeah. I think we all have good examples.
I won't answer the good examples part directly yet, but for me, it's always that when you are a SaaS company, you basically have no excuse for not having the metrics there, right? You have live data from your system and live data from building the system: all of these agile and ALM tools, product development tools, and build pipelines. You have all the possibilities to actually have the indicators there, to establish that, "Hey, everything is fine. Our temperature in the organization is good." So you should be able to have all of these in a SaaS environment. And now, getting back to it: has anyone done it properly? Not really.
Yeah. It's still a lot of work to do and to build, right? Maybe one point I want to make is that I have sort of found myself to be a scale-up guy, right? At the previous organization where I was, when I joined, we were, I think, five people in Helsinki, and we were 160 globally when I left. Then also in the current organization, we have tripled the team during the COVID time, so there's heavy growth, right?
It has a big impact on what you want to measure or don't want to measure. It's a completely different organization if you have five people, or 15, or 50, or 100, or 150, right? So that has a big impact on this as well. Maybe I'll use that as a bit of a cop-out for my own organizations, where we haven't done this because there were just tons of things to do. But I know that in the current team, we are investing a lot in continuous deployment, and we are investing a lot in monitoring, just getting this data exposed. The biggest thing slowing you down is, again, the metaphor of the moving train which has no wheels: you have to change those wheels and build stuff as you limp along. That's what slows you down when you do these things.
Yep. One good example I could tell... I haven't asked them, so maybe I won't mention the name of the organization, but they were also in the scale-up phase, as a SaaS service. In their office, they had these daily metrics about the customers, like how many customers came in and how many left, on a big TV in their office spaces. I think that was something that everyone at least knew, so they had an idea of where they were with the system. "Hey, now we are getting customers. Now we're losing customers." And you actually got the feedback into the product development organization. That is definitely something where, in SaaS, especially in SaaS consumer business, you should have the data easily available.
Yeah. As long as everybody in the organization, also, you couple that with the feeling of being empowered to take action immediately on this-
Yeah. True. True, true.
... and not just observe like, "Ooh, things are going wrong. I'd better update my CV." That's maybe the classic reasoning. People tend to think that information is not being shared because, well, it's power, but the other reason is just that there's something bad about it, which is, of course, a really bad reason not to disclose anything, because however bad it is, people generally assume it's even worse if you're not disclosing how things are actually going.
I'll just continue on your earlier thought on why building metrics is never really a thing when things are going all right. I think that's one of the things: you always tend to invest in metrics and measuring when you have a clue that something isn't going well and you need to figure out what it is. I think this goes back to the very, very original question: what is effectiveness, and how do you measure it? When something tends to be wrong, you tend to build a metric in order to understand it.
I think then, as a kind of final thought: maybe you should have these very simple indicator metrics from the very beginning, so that, "Hey, let's just let this whole thing keep running if these indicator metrics are all right. We get the releases out. The customers are okay, and the people are happy. If these are all right, let's focus on the content. Let's focus on the vision. Let's focus on continuous improvement." Maybe that would be my final thought on this topic.
Yeah. Not to try to dodge the question that Lauri asked, "Do you have any good examples?" Maybe I'm just uneasy answering confidently that I have good examples, because with my teams, for example, we've emphasized many of the things that we're talking about here. We've emphasized the importance of employee happiness. We've emphasized the number of releases going out.
That's actually something we did measure as an actual metric at AlphaSense, my previous company: how many are we pushing out? There were some other metrics as well, but that was the only one I really remember as a key metric for myself. That did work very well in driving, again, the overall organization to behave, or to be able to work, towards a common goal. Sure, we had tons of pain in the beginning, and folks also questioned, "Is this really a thing? Why are we doing this? What's the value here?" But eventually, it did work out well as a good metric for us to push for.
But in addition to that... Maybe that's something to take away from this conversation for my organization as well: that we should really have more rigor in tracking some of these things, and not just, "Yeah, we kind of want to improve these things, but are we really measuring them?" And just going back to Henkka's point, you made it a couple of times: if you measure it, it will improve. Once you set the metric, you will be-
You will change the behavior.
Let's put it this way. And improve, or...
Yes. Yeah. True. Yes. Something will change, most likely, so you want to make sure that what you have as a goal is something that you truly think will make a positive difference.
Yeah. Exactly. That's the reason I wanted to say that improvement is always an opinion, right?
In this context, right? Because the only thing that you genuinely improve is your bottom line, maybe, and everything else is kind of an opinion as to whether this takes us in that direction. That's the reason I always say that metrics change behavior, and that might take you away from the right direction, and no one knows until the bottom line actually shows up.
That's the difficulty of all of this.
Yeah. Not long ago, we had an event with our customers in Switzerland, and somebody used a number from one of our customers, saying that they do 90,000 builds a day. You got this wave of "Whoa" in the audience when that number came out. First of all, you can put out the number, and second, the number is big. It's so big that if you haven't built a pipeline like that, it blows you away. Like, "How can anyone do 90,000 builds a day?" Anyway, that's a good number to bear in mind, and I think, as you alluded to earlier, the number of builds is a good metric there.
Yeah. Though a build alone doesn't mean that it's releasable, right?
You'd probably get some feedback out of it, and you can react if something's gone wrong. But I remember, maybe 20-ish years ago, at the company we worked for at that point, the mindset was still such that there were build machines dedicated to doing builds of the phone software. We were trying to set up a system where you push a change and it would always do a build, and we would get a new version. The conversations back then with the version control and configuration management teams were like, "No, we do this every two weeks, so you give us the latest and then we integrate."
Then we were like, "So why can't we just do it like this? You have the machines. The CPU is idling. It's right there." I think we ended up doing it ourselves. We bought a machine and we just did it ourselves, and it was hugely helpful in getting instant feedback. We had some test automation, and we had some folks who were able to test the intermediate builds and whatnot, but the whole organization was wired towards this every-two-week cycle, and then there was this massive integration, smoke test, and then pray, sacrifice a goat, and let's hope it works, right?
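The shift Eero describes, from a two-week integration batch to building on every single change, can be sketched as a simple polling loop. This is an editor's illustration, not what they actually set up; the repo path, build command, and polling interval are all hypothetical placeholders.

```python
# Sketch of "build on every change": watch a git checkout and run the
# build for each new commit, instead of batching changes for weeks.
# Repo path, build command, and interval are hypothetical examples.
import subprocess
import time

def head_commit(repo="."):
    """Return the current HEAD commit hash of a local git checkout."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout.strip()

def should_build(last_built, current):
    """Any commit we haven't built yet triggers a build."""
    return current is not None and current != last_built

def watch_and_build(repo=".", build_cmd=("make",), poll_seconds=30):
    """Poll the repo and build each new commit as it arrives."""
    last_built = None
    while True:
        current = head_commit(repo)
        if should_build(last_built, current):
            print(f"New commit {current[:8]}, building...")
            subprocess.run(build_cmd, cwd=repo, check=False)
            last_built = current
        time.sleep(poll_seconds)
```

Modern CI systems do this with push webhooks rather than polling, but the design point is the same one the speakers make: the feedback loop shrinks from a two-week integration to minutes after each change.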
The whole idea of agile is about learning, rather than just delivering value. Of course, delivering value is there, but the most important part is fast feedback, which actually equals learning. I think this is misunderstood in so many places. You concentrate on so many other things instead. That's the reason for the builds and releases we've been discussing, right? You should have the fastest possible way to get feedback from the bigger environment. I think that's the point.
We really could go on forever.
Yeah. I said this earlier, like, "I guess this episode could easily take all day."
But I think it's our time to stop. I'd like to thank both of you, Henkka and Eero, for joining. It's been a wonderful conversation on R&D effectiveness. I'm going to reveal that we have a secret somebody else coming to talk on the same subject, R&D effectiveness, but they are not going to talk about it in terms of metrics. They're going to talk in terms of how you run the change through your organization, how you figure out what to do, and then how you make it happen. It's going to be a different perspective.
Thank you for listening. If you want to continue the conversation with Eero and Henri, you can find their social media profiles in the show notes. If you haven't already, please subscribe to our podcast and give us a rating on your platform. It means the world to us. Also, check out our other episodes for interesting and exciting talks. Finally, before we sign off, let's give the floor to Eero and Henkka to introduce themselves. I say now: take care of yourselves, and remember to push out new releases like there's no tomorrow.
My name is Eero Jyske. I've been a software guy for as long as I can remember. I was just shocked to realize that I'm going to reach my 25th anniversary of working in the software business next May, in fact. I have the anniversary date in my calendar, actually. Before that, I had an Amiga computer and PCs, and did a lot of programming before going to university and then joining the software business. I did hands-on work in the early stage of my career, and then moved on.
I drifted to management, and worked a long time at Nokia, and then, after moving back from the States, did a gig at a scale-up here called AlphaSense, which I mentioned earlier. We grew from five to 160. After that, I guess I got the seven-year itch. I got the opportunity to join a space startup, so I'm currently working at a company called ICEYE. I'm based in Espoo. It's a similar scale-up journey, and I'm taking a lot of the learnings I had at AlphaSense and trying to apply them to the space industry as well. We are hiring, so check our website.
I'm Henri Hämäläinen. I'm nowadays working at Eficode. I've been an Agile Coach and Product Organization Coach for 10 years. Before that, I had a career at Nokia, where I met Eero. I have had the luxury of working with so many different organizations, like Finnair and ABB and KONE, and also with real SaaS companies. I've seen maybe 50 to 100 different kinds of organizations, and that always gives you perspective on how to do things differently. Yeah. Like I said, I knew how things should be done 10 years ago. Nowadays, I've learned that I don't know it all, and I'm always learning, which relates to this subject, too. Thanks, everyone, for listening.