We invited some exciting people to our podcast to share some backstories around the themes we covered at The DEVOPS Conference. In this episode, we have Lauri Huhta, Data Engineer and MLOps consultant from Data Nuggets, and Yishai Beeri, CTO of LinearB, discuss data-driven engineering, what is behind the old problem of data integrity in the software industry, how to solve it, and how to approach metrics right.
We're getting those kinds of questions from, you know, people that want to go into metrics and Dev metrics. Like, how will my engineers feel? Is this going to mess up my culture? And our answer is: your developers need to choose the metrics. Okay, we provide a set of metrics. They need to choose which metrics to focus on, which metrics they believe in, and where to start. And if you're doing this as a bottom-up approach, then it's their choice. And it's a tool for them to improve, not a tool for management to say you're doing better than she is.
Hello, and welcome to DevOps Sauna. This year, The DEVOPS Conference happened on March 8th and 9th. And if you happened to miss the event, don't worry: you can listen to and watch all the speeches online. We invited some exciting people to our podcast to share some backstories around the themes we covered at the event. Today we have Lauri Huhta, a data engineer and MLOps consultant from Data Nuggets, and Yishai Beeri, CTO of LinearB. Lauri and Yishai discuss data-driven engineering, what is behind the old problem of data integrity in the software industry, how to solve it, and how to approach metrics right. Join me in the conversation. Thank you, Yishai, and thank you, Lauri, for joining the DevOps Sauna podcast.
Thanks for being here.
Lauri H. (01:37):
Thanks. Good to be here again.
Yes, actually, we had a session with Lauri talking about AI in DevOps. And of course, when we talked about AI, it relates to the subject of today, which is data-driven engineering.
As I was preparing for this episode, I took a look at what LinearB has said, and there was one particular sentence that stood out, verbatim: "We analyzed 733,000 pull requests."
What did you find out from that analysis?
First of all, these pull requests — this is data that we have collected from our customers.
They represent the work of Dev teams — you know, hundreds or thousands of Dev teams working on what is mostly commercial software. So this is not open source, and these are all people paid to work on software, for companies that build products or write software for others. And when we looked at those pull requests, we were looking mostly at the dynamics — not the content, not the actual code, obviously, but the dynamics: how does a change to the codebase progress from initial commit, through the pull request and a review cycle (or merge request, if you happen to use that term), back and forth, additional commits, eventually getting merged?
And when we look at those — you know, very different pull requests in terms of size, in terms of how many files are involved, and other aspects — I think the most striking thing that we found is that there is a lot of wait time, a lot of idle time, in those pull requests. Pull requests are async by nature, right? I'm putting up a change, waiting for someone to take a look and review my code. And there is a back-and-forth conversation, which is almost by definition asynchronous.
I think pull requests really started as a way to do software with open source and distributed, very loosely coupled groups of people working together on a codebase. And even when they're done inside the same room or the same virtual room, now that we're all remote, we're seeing a lot of idle time, a lot of wait between the stages.
And it was interesting for us to try and understand what's going on. What are the types of idle time? Is it one person that has started to work on something and then comes back after a day or two, or is it back and forth between people? We've seen both, and we can quantify the different behaviors — when things are transitioning between people, or when the same person has just been disrupted, needs to do something else, and then gets back to continuing a code review or adding more commits, and so on.
But I think, again, the most striking thing was that so much of the time, this pull request is doing nothing. Nothing is progressing. No one is working on it. It's just waiting for someone to pick it up.
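As an editorial aside: for readers who want to try this kind of analysis on their own repositories, here is a minimal sketch of how idle time in a pull request might be estimated. The event list and the two-hour activity threshold are illustrative assumptions, not LinearB's actual method.

```python
from datetime import datetime, timedelta

def pr_idle_time(events):
    """Given a chronologically sorted list of (timestamp, actor) PR events
    (commits, review comments, approvals), sum the gaps where nothing
    happened for longer than a threshold -- a rough proxy for wait time."""
    threshold = timedelta(hours=2)  # gaps shorter than this count as active work
    idle = timedelta(0)
    for (prev_ts, _), (ts, _) in zip(events, events[1:]):
        gap = ts - prev_ts
        if gap > threshold:
            idle += gap
    return idle

# Toy example: PR opened, first review a day later, fix pushed soon after.
events = [
    (datetime(2022, 3, 1, 9, 0), "author"),     # PR opened
    (datetime(2022, 3, 2, 10, 0), "reviewer"),  # first review comment
    (datetime(2022, 3, 2, 10, 30), "author"),   # fix pushed, then merged
]
print(pr_idle_time(events))  # -> 1 day, 1:00:00
```

In this toy case the 25-hour wait for the first review dominates, which matches the pattern Yishai describes: the work itself is quick, the waiting is not.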
Lauri H. (04:35):
Yeah, I remember talking with my colleagues about this in the past. When we were following these same kinds of metrics, you could say you can follow these things in your project management system, or in Git, following your repositories — and you can find the parallels: how long it takes for a feature to start, go through the process, and get released. The same thing, but from the perspective of the code.
And I feel like this distinction is quite important to make. When you're following a ticket through a project management system, it's usually something that has been decided; it follows a structure that has been planned in your sprint meetings. It usually comes from higher up: what you have to actually produce. But when you are analyzing pull requests, or this kind of Git data in general, it gives, I think, a much better view of how the processes really are — because that data is usually quite natural.
It shows clearly how the developers work there and what happens there. My colleagues sometimes put it like this: a JIRA ticket and the data behind it show how the processes go, but the Git data shows how happy the people working on it are. And if you can see that no one is reacting to your pull requests, you are probably not going to be a happy developer.
Yeah, I think one other major difference we see between these kinds of systems is that each represents a partial view into the Dev process. If you're trying to understand what a development team or group is working on and how they're creating value, you can look at Git, you can look at JIRA, or whatever the relevant systems are for these roles — the data is going to be split between them. And there's a major difference between the Git side of the house and the JIRA side of the house: JIRA depends on people saying what they're doing. The actual work is not in JIRA; it's only being tracked there.
People need to move issues or tickets between statuses and assign them. Everything is about reporting and showing that I did something, or that I plan to do something. Whereas in Git, it's the actual work that gets logged: the actual commits, the actual conversation, if you're using a layer on top of Git for pull requests.
And we have found that, as a source of truth — a ground truth for what's going on in the Dev process — you want to focus on sources that do not depend on people reporting. So our solution begins with Git — GitHub, GitLab, or whichever review provider you're using — and then looks into additional systems to tie all the information together.
But if you're relying on people saying "this is what I just did" or "this is what I intend to do," then you're getting patchy, late, and incorrect information — because we're all human, and we don't like to report on time on everything that we do.
Lauri Huhta (08:03):
Yeah, exactly. Thinking about JIRA tickets: if sprint planning is every two weeks, you don't really feel the need to move the ticket anywhere in the UI. But when you are thinking about a piece of code connected to an issue — to move forward with it, to be happy with it, to complete it — you will commit it. All developers will commit their work.
Let's say at the end of the day, or when the feature is ready, or when the bug fix is ready. But with JIRA, you don't really live in it — it's easy not to care. And then you've been working for two days on a ticket that is still in the backlog. And here comes the user error you mentioned: you do the trick of, okay, let's put it into in-progress for a while. And then, well, it's been there for a minute now, I'll move it to done. And that's a pretty fast cycle time for you.
Yishai (09:06): Yeah — you know, there are all those sayings about what is infinite and what is not. I think developer ingenuity in bypassing processes and rules is one of the infinite things. You want to get down to understanding why developers, and other humans, behave like that. And you solve these things not by just adding more process, or even by enforcement. I've seen companies that have a merge gate in GitHub that forces things on the JIRA ticket, on the issue.
It has to be in progress, it has to be assigned to you — otherwise you can't merge. It's doable, but it's going to force people to cheat as little as they can get away with. Instead, focus on understanding why they can't be bothered to update the status of the JIRA ticket, or to create a JIRA ticket for their work in the first place.
You know what we've seen — back to all those hundreds of thousands of PRs and commits that we've researched: even in organizations that really try hard to tie their Git and JIRA work together, referencing the JIRA ticket from the branch name or the PR or the commit — even when they're working clean, there is still 20–30% of Git work that isn't logged anywhere and is not connected to any epic or ticket.
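As an illustration of how this kind of unlinked "shadow work" could be detected, here is a small sketch. The PR fields and the issue-key regex are assumptions for illustration; real JIRA project keys and the places teams put them vary.

```python
import re

# A JIRA issue key typically looks like PROJ-123:
# an uppercase project key, a dash, and a number.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def shadow_work_ratio(prs):
    """prs: list of dicts with hypothetical 'branch' and 'title' fields.
    Returns the fraction of PRs with no JIRA key in either field."""
    unlinked = sum(
        1 for pr in prs
        if not JIRA_KEY.search(pr["branch"]) and not JIRA_KEY.search(pr["title"])
    )
    return unlinked / len(prs)

prs = [
    {"branch": "feature/ABC-101-login", "title": "Add login form"},
    {"branch": "fix-typo", "title": "Fix typo in README"},        # shadow work
    {"branch": "hotfix/speedup", "title": "ABC-202 cache results"},
]
print(shadow_work_ratio(prs))  # -> 0.3333333333333333 (one PR in three)
```

A number like this is what lets a team see the 20–30% figure Yishai mentions for their own repositories, before deciding whether and how to close the gap.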
It's just shadow work no one knows about. And it's because we're developers: there's a bug, or I'm seeing something bad in the code, and I'm going to fix it. I'm going to be nice — I'm going to open a PR, get some code review, run all the tests. And then the JIRA ticket that needs to be created to match all of this is the least of my concerns.
That's the non-fun part of this crazy fix or improvement I made that, you know, made the app run 10 times faster or did some other nice thing. So you want to solve this at the source. Ask: is there another way I can cause this information to get created, and streamline my experience as a developer,
so that this happens organically? Don't force it on me, because I will cheat — I can create a dummy JIRA ticket, I have a script for that. Yeah, that's easy. But if you can be smart and just help me create the tickets automatically, just give me a nice button to approve it, and bring this to where I live.
I don't want to open a browser — bring this to my Slack, or my IDE, or whatever window, or my shell, so it's just a click away for me. And, oh yes, that's great — we've started to rely on this flow ourselves. We still create these JIRA tickets to begin with, but then we just add them into our flow automatically, because we're dogfooding our own product. That's the way to solve for some of these discrepancies and late data.
Lauri H. (12:06):
Yeah, I think what you said is good: bring it to their field — bring it to the IDE, bring it to the shell. That helps if you just need a reminder to do it. But what if you wouldn't care about it anyway? Because these metrics are usually followed by management, project managers, those kinds of people — developers, by default, don't really care about the stats behind a JIRA ticket.
But to close that gap: we can remind them, and if they still don't care, we can help by making the reporting as transparent as possible — showing that these are the metrics we are tracking the whole team with, and actually showing why they matter. That could help in that situation. Still, since this is so prone to user error, you can't get it perfect. But there are ways to help it in different areas.
Yes, I totally agree. I think a reminder is not always enough. If you're just reminding me of a task that I hate, and that is going to distract me, then yeah, maybe you've helped me avoid some mistakes. But instead of just reminding me, combine that with making the task easy: I've already gift-wrapped this for you, just click OK and it's done. You don't have to actually do the task — you just confirm it. When you take that extra step, and you do it correctly and smartly, this no longer becomes a tedious "okay, I got the reminder, but now I have to go and do this." No — it's already done.
And all of these data-reporting tasks — there's nothing in them that needs my intelligence. All of the information about what I just did, about my code, is in the branch, in the PR. You just want me to wrap it and give it another title for JIRA.
Come on, you can do this for me. Request my time when you need my intelligence; otherwise, do this for me. Maybe just let me control it with an OK, and maybe tomorrow let me toggle something that puts it on autopilot. I can fix it in retrospect — there's always an undo.
Just do this for me, and don't ask me to do these mundane things anymore. But I want to maybe differ with Lauri on whether Devs care about metrics. There's a level at which, yes, developers do not always care about the metrics their managers care about. But from what I've seen, it's a combination.
If the metrics are the right ones, ones the team believes in — not just some top-down management motion that says we need to measure A, B, C, you need to be turning out more lines of code — if the metrics really represent better work, better productivity, happier teams, then I, as a developer, would care at least enough to make sure we're doing good. If we can get those metrics to align with what I want to happen — I want to push my work fast, I want my PRs to be shiny and great and never fail, I don't want to get up at 3:00 AM for downtime and production bugs —
that's what I care about as a developer, right? I want to finish my work. I want to do great stuff — great features, fixing all those nasty bugs — and I want it to happen quickly. I want to help my teammates do the same. If the metrics align with my basic desires here, then yes, I will not just cooperate — I will own those metrics as part of the team. If that's the culture, if that's the way metrics are introduced to the organization and then managed — not handed down as a directive from the gods above, "you need to measure this, and this is your goal for the quarter" — then I've seen this get embraced by the workers.
Lauri H. (16:11):
Yeah, you're right. Not all developers care, but there are people who do. I think it's more that, from a developer's point of view, you really need to put some effort into the reporting and into the KPIs that you are presenting to the team.
Because if the people who collect the metrics about the progress of the project don't understand its full life cycle, it can be demotivating to follow the metrics. For example, say we are tracking productivity with some metric — making new features, following feature cycle times — and lately you've just been getting a lot of bugs that you are fixing, working as hard as you can.
But it doesn't show in the report card: you are not shipping any features. So I think it's really, really important to carefully choose the metrics, and always, always look at the big picture. And this is where I, as a data engineer, find myself — because there is a lot of data.
All the services you use produce tons of data, and you can just dive into it. But when do you stop? How deep should I go? I could get all the information about this one developer's work — what should I do with it? How should I present it to the team? And the answer is: you don't take it down to the one person. And I think back to what you said in the beginning, that you analyzed all those 700,000 pull requests.
And you mentioned that you didn't go into the code — you didn't go that deep, to see what they are writing. So it's really important to think: which KPIs are we presenting? Which of these show the whole picture? And do we have to go any deeper than that? Because if you start to single out a person instead of a team, that can be really risky — if you want to follow these metrics for long, or if you want to motivate your developers to actually use these systems in a way that produces good data.
Yeah, I totally agree. There are multiple things to think about when you're talking about how deep to go with metrics. On the code side — we don't have access to codebases; it's not part of what LinearB does. And while there is probably great information in understanding the kinds of changes made in PRs — are people refactoring functions, adding parameters, writing new code, new functions, changing APIs — that's not part of what we do.
We're more interested in the process side. But when looking at how to use metrics — you're not measuring just for fun; you need to be measuring things that you can use to improve.
So the choice of which metrics to use, what to measure, what to show, and what to actively avoid measuring — first, it comes down to this: you're going to get more of what you measure. So measure the right things if you want that behavior. If you're measuring code lines, you will get more code lines.
Do you want more code lines? I'm not sure. The same goes for a lot of technical metrics that used to dominate this field 15, 20 years ago. The things that are easy to measure are probably not the ones you want, and your developers will either game them or, even unknowingly, get biased towards them if your organization leans on them. You want to make sure the metrics get you the right behavior, rather than optimizing a technical artifact — lines of code, number of commits, or even the number of PRs that people merge. On their own, these metrics are dangerous in the sense that, yes, you'll get more of those.
Your metrics will improve — but does that mean your process has improved? And the other angle you should be aware of is culture and the personal level. If you're using metrics to stack-rank developers, saying this person is better than that person, then you're probably missing the point.
You're going to encourage the wrong behaviors. We believe that most of the metrics that actually matter are team metrics — they're not even measurable at the personal level. And trying to go for personal metrics and comparisons is going to kill your morale. It's going to kill your culture.
You end up with, I don't know, backstabbing, or a complete break of trust between management and developers — and not even for a good reason, because all of these personal metrics are much less powerful than team metrics. Yeah, we're getting those kinds of questions from, you know, people that want to go into metrics and Dev metrics.
Like: how will my engineers feel? Is this going to mess up my culture? And our answer is: your developers need to choose the metrics. We provide a ton of metrics; they need to choose which metrics to focus on, which metrics they believe in, and where to start. And if you're doing this as a bottom-up approach and letting the team decide — okay, I'm going to focus in the next iteration, or quarter, or whatever, on this thing where I think we have a problem,
because the numbers show us we have some issue — our PRs are too large, or our pickup time for PRs is too long, or anything — then it's their choice. And it's a tool for them to improve, not a tool for management to say you're doing better than she is. And we need to remember that development is kind of an art, right?
We're not just stacking bricks on top of each other. So recognize that metrics will never capture everything about how people work — but they are a lens that lets me see where we have problems, where we can improve.
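To make the idea of a team-level metric concrete, here is a minimal sketch of one that comes up repeatedly in this conversation: pickup time, the wait between a PR being opened and its first review. The field names and data shape are invented for illustration.

```python
from datetime import datetime
from statistics import median

def team_pickup_time(prs):
    """prs: list of dicts with hypothetical 'opened' and 'first_review'
    datetimes. Returns the team's median pickup time in hours -- a team
    metric, deliberately not broken down per person."""
    hours = [
        (pr["first_review"] - pr["opened"]).total_seconds() / 3600
        for pr in prs
        if pr["first_review"] is not None
    ]
    return median(hours)

prs = [
    {"opened": datetime(2022, 3, 1, 9),  "first_review": datetime(2022, 3, 1, 11)},
    {"opened": datetime(2022, 3, 1, 10), "first_review": datetime(2022, 3, 2, 10)},
    {"opened": datetime(2022, 3, 2, 9),  "first_review": datetime(2022, 3, 2, 12)},
]
print(team_pickup_time(prs))  # -> 3.0
```

Using the median rather than the mean keeps one stalled PR from dominating the number, and aggregating over the whole team avoids the stack-ranking trap discussed above.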
It's Lauri here again. Many businesses are in the middle of what's called a DevOps transformation, or they're considering launching an enterprise DevOps initiative. If this includes your business, we have a webinar recording for you. In this webinar, we present topics related to how key metrics can help you better understand the progress of your DevOps transformation, and how to use them to shape the right behaviors within your organization. You can find the link to the webinar in the show notes. Now let's get back to our show.
Two things come to my mind from this conversation. I have a five-, six-year background with the purchase-to-pay business process, which is basically: you, as a company, want to buy something, you receive those goods, you receive the invoice, and then you pay that invoice.
So you have three important steps in that process: you have the permission to make the purchase and you receive the goods or services; the second step is you receive an invoice related to that delivery of goods or services; and the last one is you make the payment to the supplier. Basically three steps in the process.
And invariably it happens in three distinct services, integrated together to deliver this purchase-to-pay process. Now, I have heard you talking about Slack, about JIRA, and about Git — and maybe the IDE is the fourth one. I'm trying to piece them together: what is the universe where this data is going to be collected?
And then the second question: what is the relationship between the engineering team's metrics and the engineering leaders' metrics? Is one an abstraction of the other, or are they distinct sets of metrics derived in two different ways?
Well, that's an interesting couple of questions. First of all, I like that you mentioned a universe — I like the analogy, because I think about this kind of like a solar system, or any kind of stellar formation. The way we think about it, there is a core. And if we're talking about software engineering, that core is going to be Git: the source of truth for what is actually happening to the codebase.
And this is how Dev teams typically deliver value: incremental code changes to a codebase, and deployments of those changes to the world. Then there are systems like JIRA, and signals from your delivery pipeline, that tell you about the planning side: what do we need to do?
What do we plan to do? Timing, order of things, priorities, and so on. That lends context to the work — otherwise, you need to read the code to understand what's going on. And then there are additional satellites that are going to appear in some constellations but not in others.
And the more you have, the better picture you get. Things like Figma — a planning and design system, an artifact where some people work on things that eventually become code. Or higher-order roadmap-planning systems like Aha! and whatnot. These all interact as layers on top of what I call the left side of the cycle time: what happens before you start coding.
Then you have feature flags and other runtime or semi-runtime artifacts. A feature flag is a change to your product — say you have a feature or a fix that you've deployed, so it's in production, but it's not open to anyone yet. So there are additional stages beyond deployment that relate to feature flags, or even to how many customers or users have access and are actually using this.
So, information about actual runtime usage: is it getting used by a thousand customers, by nobody, by a million? These are all signals that people care about as part of the development process, even though they come after development. Then you have testing systems, or, you know, Excel sheets where people do QA or manage their quality work.
So there is a variety of disparate systems that can provide some signal, some information, about progress in parts of the process. We've obviously not integrated with all of that universe yet, and there's always a point where you say: I have the core; all the rest is enrichment that I'm not going to rely on. But I think one should always think about all of these signals as additional context — additional information that, if you can be smart about it, lets you do so much more to help the developers, to help the team work better, to understand bottlenecks.
Is this waiting in a QA queue somewhere, and why? That's information you can maybe get from a testing-orchestration or manual QA tracking system, if that's what you're using in your organization. Is this thing stuck in CI, because my CI is very heavy? That's a different kind of bottleneck, but it's relevant: longer CI means less productive developers — we all know that.
If my CI is a 30-minute thing and I'm waiting for the machine to churn, I'm not going to be as productive as with a 30-second CI cycle. That's a fact of life. So there's the core, and some very common sources that cover most of the process.
And then there are a lot of satellites that you want to be able to pull in as additional sources of light and information for how you understand the larger process — from initial idea all the way to a happy customer or happy user.
Lauri Huhta (28:32):
Yeah. In today's age, when you are following metrics, or team metrics, around software development, it's usually project management — JIRA — and code management, which is going to be Git, and then continuous integration, as you said, which is whatever — Jenkins, GitHub Actions nowadays. What I've been battling with lately: I've been trying to read a lot of books around these topics — how to build effective teams — and thinking, in the data world, how do you measure, and how do you build productive, highly efficient teams?
And I think this comes back to the topics we discussed: how deep do you want to go with your data? Are you going to measure one person? One book I want to highlight is called Codermetrics, written by Jonathan Alexander; it's in the O'Reilly media library. The book draws a lot of parallels with the book and movie Moneyball, based on the famous baseball analytics, where they measure every single movement of the people on the team — and it tries to use those analogies to collect data from this universe that we want to measure.
That's definitely not the core information you need, but the satellites you mentioned — bringing in some additional information. And I've been going back and forth with myself, because I've really liked this book; it says good things. My biggest takeaway is the kind of metrics it measures: interpersonal metrics. Because, as we've discussed, CI you can't really affect directly — if it's good code, a good product, the CI will probably be good, reflecting that.
And we've been talking about Git data, which is about the code, or JIRA, which is about the project and how you move the tickets there — it's reliant on that. But this book brings in the actual ways of working, because it talks about defensive and offensive metrics, and one of the most important defensive metrics was: how well do you support the team?
And that was a huge thing in there, because some senior members of the team might not deliver big tickets, but they are the most valuable members of the team, helping the juniors and the other people. And how do you measure that? How do you pull that information in and use it?
So — maybe you have opinions on it — but it sounds good, it sounds useful, because people don't only code and move tickets; they do other things in the workday. That is, if we're going to the level of measuring people at the person level, so…
Yeah. First of all, I think another constant fact of life is that the most advanced metrics are in the sports industry, in particular in the US — all those stats about combinations of people and teams and weather and results. There's probably a lot of very deep work going on there that we can learn from, and there are a lot of good analogies, because we think development is a team sport. So yes, you can count rebounds for a specific person, but the interesting things are in the combinations. We have found that at least two-thirds of a developer's time is not spent coding.
Right — we all do other things that are part of our work, part of the larger coding effort, without coding directly. The whole review process is not coding. And then there's the planning, the meetings; sometimes you need to think about solving a problem. That's part of the reason why we at LinearB did not focus on the code as the main analysis point.
There are a bunch of very useful code metrics you could use if you wanted to go into complexity, security signals, code smells — there's a lot you could measure there. But we've focused on how to optimize non-coding time — all the rest — because that's where most of us spend most of our time.
And while I agree, I think the useful metrics are the ones that look at the combinations. When you're looking at how someone is performing, the question should be: how are they performing in the team, for the team? One example I use when showing customers and prospects around LinearB: one of the metrics you can see about specific developers is the number of reviews they've provided.
And that's a good way to understand whether someone is ramping up in a team. If you have a new developer, a new person on the team, and they're beginning to ramp up their reviews — that's when you know they have established a place, they have learned enough, they are now able to contribute on a higher level.
Typically, you see that number going up as people gain the confidence, the knowledge, and the standing from which they can review other people's code — in a new team, or even just as junior developers growing up. So that is an example of a personal metric, but always through the lens of how this person is helping the team. And I totally agree with your point that the best and most crucial developers on the team are not necessarily the ones writing the most code.
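A tiny sketch of the reviews-given ramp-up signal described here, assuming a simple export of review events. The data shape, names, and months are invented for illustration; this is not LinearB's implementation.

```python
from collections import defaultdict

def reviews_per_month(review_events):
    """review_events: list of (reviewer, 'YYYY-MM') pairs -- a hypothetical
    export of who submitted a review and when. Returns counts keyed by
    (reviewer, month), so you can watch a new teammate's number climb."""
    counts = defaultdict(int)
    for reviewer, month in review_events:
        counts[(reviewer, month)] += 1
    return dict(counts)

events = [
    ("maria", "2022-01"),
    ("maria", "2022-02"), ("maria", "2022-02"),
    ("maria", "2022-03"), ("maria", "2022-03"), ("maria", "2022-03"),
]
print(reviews_per_month(events))
# -> {('maria', '2022-01'): 1, ('maria', '2022-02'): 2, ('maria', '2022-03'): 3}
```

A rising count like this is read here as a ramp-up signal, not a leaderboard: the point is how the person is contributing to the team, not how they rank against teammates.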
I think sometimes there is an art in writing less code, or even negative code — I think those are the PRs we like best, where the codebase shrinks. And that glue — the way people help each other and help the team eventually do better — is the secret sauce you want to be measuring. Another thing that is sometimes different from what you typically see in sports metrics: you should be asking why you are even measuring things. In sports, there are two main reasons to measure and talk about metrics.
One is entertainment, right? People are talking about those metrics so that I, as a couch fan, can have more context, and it's fun. And then there are the serious metrics that go into the actual team building and coaching and strategy. They overlap, but they're not the same, and there's a lot of depth and money and technology and deep data going into the second type. In Dev, you should be asking, what am I going to do with that information?
Right. Just having data for the sake of data, for fun, is nice, but is this something I can use to improve? A metric is useless if you're not going to be able to take action on it, either to learn, to try to improve, or to track improvement. Otherwise, it's just a number on a dashboard no one cares about. That criterion really narrows down the number of metrics you need and the type of metrics you can use.
Because if you're just measuring things, okay, nice, but can you do something with the result? What is the insight? What is the action you can take? And we really like to focus not just on metrics you can learn from, but also on whether, for every one of those metrics or numbers, we can provide something that will solve or help mitigate the problem to begin with.
Right, not just in retrospect. I don't want to see in a month that I had a problem somewhere. Show me the PR that is beginning to become a problem while it's still small, so I can fix it, and then the metric improves on its own. Ideally, you can prevent or preempt most of the problems. And you do that by asking, what is causing those metrics to go up?
Let me focus on those instances, the specific PR, the specific instance of a behavior you want to change, and let the people change it and fix it directly. So, you want leading indicators, and you want actionable insights that help you solve the problem. Don't show me that my car overheated. Show me that it's heating up right now, and what I should do about it.
Lauri H. (37:25):
Yeah, you need some experience there. If you have a good metric, it will speak for itself most of the time. Actually, there's a funny thing I've noticed when I'm presenting a new metric or a graph, when I'm visualizing data and presenting it.
I've yet to give it a name, but I feel like it's some kind of syndrome. If you have a one-hour meeting where you're going to present a graph, talk about it for like 15 minutes before showing anything, and only then show it. Because I've noticed, and this applies to people at all levels, managers, developers, everyone: if you start your meeting, share your screen, and there's a graph, everyone's eyes will be glued to it without actually understanding what it is or what it's trying to say. Of course, this also speaks to the quality of the metric, or of the visualization. But I've noticed that the more you can talk about it, plan it, and actually put it into words,
and get everyone on the same page before actually showing anything, the more beneficial it is when building these KPIs. Because if you start by just showing the graph, everyone's eyes will be glued to it, and there will be questions about it for the remaining 50 minutes of the meeting. We will never get to the topic we wanted to be the end product. So, it's like…
And most of these questions are going to be: why this data? This is not right. This is not correct. The 10 you're seeing here should be an 8. Yeah. People love data, love talking about data, love arguing with data. That's a natural phase we all go through.
I totally agree. And this goes back to context, right? The data is useless without the context, and you should be good at setting the context: what is this actually good for? Why are we measuring this? What is the meaning of this going up or down, or staying constant, or changing? In many cases, that is more important than the actual numbers.
Let me try to make an interim summary of what we have discussed, and then we'll see if there are more gaps to fill. The data has been spread across multiple different systems, and it looks like a constellation. There must be a big sun in the middle of the solar system, and then you have these planets around it. In this case, the sun would be Git.
You don't want to build yet another analytics layer, which is basically the approach today, one that relies on people's discipline to carry out a separate round of reporting information. Rather, go to where the work is happening, and then find a way to guide the organization through those supporting systems, knowing that they are supporting systems.
And then, to find the right metrics, there was a wonderful quote on it, and I want to go back and check it: engage the team to define their own metrics at the team level. Then they don't feel like those metrics have been handed down to them from the gods,
but are actually something that is meaningful to them, something they want to own and start driving. And I think that's pretty much it. I also heard you say, Yishai, that bringing the context of work into the IDE itself, instead of making a separate tool, would be great as well. That would be my desperate attempt to condense all of this into one minute. Am I missing something?
No, I think this is a great summary, Lauri. Maybe one additional aspect we've touched on is going beyond metrics. You covered where the metrics come from and what to measure, those solar systems, and obviously, like in any good story, there are multiple suns, because organizations have multiple Git instances, they have acquisitions and history, so you have multiple suns on your horizon. But I would add going beyond metrics into the area of how I help developers by automating away mundane tasks and by solving problems when they're still small, which is a jump from metrics to action.
And in many cases, it comes before the metrics in the flow of the day. So I know that some metric is important, I can identify early problems or instances that someone should care about now, and then I can do some real-time orchestration between the humans and the machines involved in the process, to make sure things are okay or things are improving.
And you know, the easiest example: we talked about PRs waiting for someone to start reviewing them. If a PR is waiting too long, alert the team, let the team know. Then they can solve this by starting to review the PR. And this is done without thinking about the metric; it's just thinking that PRs should not wait too long.
The metric will improve because now PRs are not waiting too long. So that final piece of automation or orchestration is, I think, what is needed to complete the whole view of how to use data to improve Dev teams.
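[Editor's note: the stale-PR alert Yishai describes can be sketched roughly as follows. This is a minimal illustration, not LinearB's implementation; the PR dictionary shape, the 24-hour threshold, and the `notify` callback (e.g. a Slack webhook call) are all assumptions for the example.]

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=24)  # team-chosen threshold (assumption)

def find_stale_prs(open_prs, now=None):
    """Return IDs of PRs that have waited too long without a first review.

    `open_prs` is a list of dicts with 'id', 'opened_at' (datetime), and
    'first_review_at' (datetime or None) -- a simplified stand-in for data
    you would pull from your Git provider's API.
    """
    now = now or datetime.now(timezone.utc)
    return [
        pr["id"]
        for pr in open_prs
        if pr["first_review_at"] is None and now - pr["opened_at"] > STALE_AFTER
    ]

def alert_team(stale_ids, notify):
    """Send one message per stale PR via `notify` (e.g. a Slack webhook poster)."""
    for pr_id in stale_ids:
        notify(f"PR #{pr_id} has waited over {STALE_AFTER} for a first review.")
```

The point of the design is the one made above: the team acts on the concrete PR, not on a dashboard, and the cycle-time metric improves as a side effect.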
Lauri H. (43:00):
Yeah. The challenge usually comes, well, I'm lucky in that I do consulting, so I can build a solution that works in one place. Of course, you always want it to be generalized to work anywhere. But with LinearB, you have to build a product that works everywhere. And when you talk about these integrations, bring it into my life, into my IDE, into my communication system, in this case Slack, that's a huge challenge.
And I feel like building those kinds of integrations, or trying to help in those areas, can feel like kind of nitpicky work. Let's say it's a Slack bot, whatever it's going to be. But when it plugs into the ways of working, it's a much bigger thing. If it's in your world, in your IDE, in your Slack, the meaning is much bigger than the work may seem to be, and so is the effect on the data. We can actually improve the data if we just follow the processes properly; that's just a side product of making the work easier for you.
Yeah. We really believe that if I can make the day-to-day work of the developer easier, more streamlined, reduce friction, remove context switching, defend their calendar, defend their time, all the things developers care about, if I do that right, I will have the hearts and minds of developers, and their fingers. And the metrics, the Dev process, the managers will be happy as a result. But that is our core approach to improvement: you make it easier for the developer.
You remove friction, and then organically and naturally the important things surface and get better. So once you find that bridge between what makes the developer happy and what makes a good process, then you've won. That's our approach.
Lauri H. (45:19):
Then you've won.
Then you've won. And we believe that you can only win when you have the developers. If you're just building dashboards with metrics for managers, you're not going to win. You're not going to really succeed in fundamentally improving the way the Dev team works.
Thank you for listening. As usual, we have enclosed links to the social media profiles of our guests in the show notes. Please have a look. You can also find links to the literature referred to in the podcast in the show notes, alongside other interesting educational content. If you haven't already, please subscribe to our podcast and give us a rating on your platform. It means the world to us. Also, check out our other episodes for more interesting and exciting content.
Finally, before we sign off, I would like to remind you again that all the speeches from The DEVOPS Conference are available online to listen to and watch for free. You can find the link, where else, in the show notes. Now let's give our guests an opportunity to introduce themselves. Until then, take care of yourself and keep on zero-day delivery.
I am Yishai. I'm the CTO for LinearB. I'm based here in Tel Aviv. I've been a software developer and then manager in various capacities my entire career. I love thinking about problems and solving problems and love my work at LinearB, where we focus on helping developers improve.
Lauri H. (46:44):
I'm Lauri Huhta. I do consulting as a freelancer through my own company called Data Nuggets. Mostly I do team metrics, project management metrics. At the moment, I'm also finding it really exciting that I'm working in completely different fields: I've been working in the mining industry, I've been working in healthcare. All that data is really exciting, but I think I excel in team metrics.