Marc and Andy are joined by Cisco Senior Developer Advocate Adrienne Tacke. They discuss her upcoming talk on code reviews and more at The DEVOPS Conference Live, scheduled for October 23rd, 2023, in Stockholm and October 25th, 2023, in Copenhagen.

Adrienne (00:06): In terms of the actual reviewing, I don't ever think that can be AI-ed away. You need the human to make that judgment. That's where, I think, it falls short.

Marc (00:21): This season, Andy and Marc are back with a fantastic group of guests.

Andy (00:26): I've been to depths that remain classified. And Marc keeps his head in the clouds. With our combined experience in the industry, we can go from the bare metal to the boardroom. Enjoy your time in the DevOps sauna.

Marc (00:46): Welcome to the DevOps sauna pregame podcast. We are back in the sauna, and we are interviewing the guests for The DEVOPS Conference Scandinavia, which is held in Copenhagen and Stockholm. We're super happy to have Adrienne Tacke in the sauna today. Hello, Adrienne.

Adrienne (01:03): Hello. Thank you for having me.

Marc (01:05): It's so nice to have you. We made time across the globe today. Where are you today, Adrienne?

Adrienne (01:12): I am based in sunny Las Vegas.

Marc (01:14): All right, sunny Las Vegas. We are in dark Helsinki at six in the morning today. I'm here with my usual cohort, Andy Allred.

Andy (01:23): I would like to say hello, hello, but I'm going to get some coffee and come back to that. 

Marc (01:30): All right. Good morning, and good evening, Adrienne. You are speaking at The DEVOPS Conference in Stockholm and Copenhagen, and we're really excited about your topic. Could you tell us about your topic and what you are going to be talking about?

Adrienne (01:45): Gladly. My talk is called Looks Great To Me: Going Past a Bare Minimum Code Review. And it's an amalgamation of a lot of things. I'm actually writing a book on code reviews. And through my experiences and the research that I've done, I just want to tell everyone what we're kind of doing wrong in code reviews and what we could be doing better. And then, more currently, what's affecting us today? Maybe people have heard of the DORA metrics or not, but I'll go a little bit into what those metrics are, how they relate to code reviews, and why we shouldn't necessarily be worried about those metrics. There's a recent article that came out that said, "Yes, you can measure developer productivity." And people are now worried that we're going to be tied to these metrics solely, which is a valid concern. But we're going to talk about a bunch of those things. So if you're interested in any of that, definitely check out my talk.

Marc (02:46): All right.

Andy (02:46): We talked last season a lot about open source, and how important it is to contribute. Anybody can contribute in any way, and there are multiple ways. I kind of took that to heart. I was going through some open-source tools and working on one of them, and I noticed that the documentation was wrong. So I thought, hey, I know what to do: I'm going to make a pull request, update the docs, and push it off. And I did the pull request. And I got back LGTM. And I was like, what the heck is this? Looks good to me. Okay. Yeah. Great. So when I saw your topic, I was like, okay, yep, there's definitely more we can do here than LGTM <enter>.

Adrienne (03:28): I'm glad you brought that up. That's the reason the title is a play on that phrase: we use it so often, and the way that we use it, it's kind of lost its meaning. When we see it now, it's like, either you really didn't do a thorough review, or is it really good? Did you actually take the time to check that everything is okay and great? And the other side of that is, there's a lot that the author of the PR also has to do and is responsible for, to make sure they earn that "looks good to me." So I'm really happy you brought up that phrase; there's a lot behind it. And I definitely want to make it worthwhile when we actually do end up using that phrase in the context of code reviews.

Marc (04:15): It's an amazing topic. And the funny thing is, I started doing code reviews with my team developing code in the 90s. And this was before a lot of the buzzwords that we use today, agile, and I'm not sure if extreme programming was really well-defined in those days, but we did a lot of that. We did a lot of ensemble programming: the whole group of people looking at the code while one person types nervously, takes a lot of feedback, and comes around. In the early 2000s, we started doing formal inspection processes where your code doesn't get through unless someone else has looked at it. And it was interesting to see how many different types of interactions people could have around reviewing one another's code.

Adrienne (05:00): I agree with you. In the process of researching my book, which is called "Looks Good To Me: Constructive Code Reviews", I also found a lot of what you're talking about. The first code review, I think, is attributed to Michael Fagan at IBM, where they called it the Fagan inspection. It was this really formal process. It was a full meeting, there were people with roles, so there was a moderator, you had to have printed copies of your code, and you had to go there and pretty much present your case. It's almost as tense as a courtroom. And you really needed to go through and look for anything that was wrong with the code and see how many errors were in there. So to come from that really, really formalized process to all the different kinds of review that we have today: it could be the tool-based one, which is the most common, done through a pull request in a tool or facilitated online, or even what you're talking about, where pair programming is also a type of review, because somebody is looking at your code, making suggestions, and pointing out those same things; you do the same activities in this kind of review process. So it's funny that you bring all those up, because when a lot of people complain about the code review process, it's like, you should have seen how it was way back when. I think we should all be very thankful it's not as stringent as before.

Marc (06:29): One interesting thing that you made me think of: when I moved to Finland in 2006, I moved into a very international team, and some of those people grew up where you would get a very limited amount of computer time. So they would essentially write their code offline, and then they would go in, quickly type the code in, and get a result, and that's all that they could possibly get. So those people did a lot of reflection on the code, and they did a lot of peer review of code. And they didn't really debug the way that we do today, where you have this fast IDE and you can just run through sloppily, make some code, and see if it behaves vaguely like you wanted it to or not. Those guys were like, debugging your code, what are you talking about? Our code just works when we submit it.

Adrienne (07:21): That is really, really interesting to hear. I mean, we're spoiled today, like you said. We have all these tools to help us, we have a lot of things that can, even now with AI, write code for us; there's a lot more that we're offloading to other things, and there's a lot less put on us to actually do the proper writing of code or the proper debugging. And I kind of wish we would take the time and have the due diligence of those developers or programmers that you spoke of, because given the context and the environment they had to code in, they had to be extra careful, right? You only had this limited time to use the computer, so you had to make sure everything was correct. That, I would argue, made them more diligent in how they wrote their code; they tried to make it as great as possible and be their own first reviewer. Now, it's kind of like we just throw it over to the reviewer, or throw it over to the IDE and say, hey, just take care of this for me. So a lot of what I talk about in my book is that we need to take more responsibility. There's no other way around it: we just need to be more diligent in how we prepare our code. When we prepare a pull request, sometimes we don't fill out the description, or the titles are really poor and you don't know what it's actually about, or the changes that we submit are sometimes 50 files. And it's like, really, you really think a reviewer is going to go through all of this thoroughly? So a lot of what I talk about, and what I will talk about at the conference, is that we need to be more diligent about what we do, because it's our responsibility. It's our job to prepare a proper pull request and to make sure our code reviews are as successful as possible for both parties, the author and the reviewer, and for the greater team overall.
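
One concrete form that due diligence can take is a pull request template. GitHub, for example, pre-fills new PR descriptions from a file such as `.github/pull_request_template.md`. The sections below are a hypothetical sketch of such a template, not a prescribed standard:

```markdown
## What does this change do?
<!-- One or two sentences a reviewer can read before opening the diff. -->

## Why is it needed?
<!-- Link the issue, bug, or decision that motivated the change. -->

## How was it tested?
<!-- Unit tests, manual steps, environments covered. -->

## Notes for the reviewer
<!-- Context that lives only in your head right now: trade-offs made,
     areas you are unsure about, which files to read first. -->
```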

Marc (09:18): I haven't written much production code for a while; I'm mostly working on the QA process and the cultural level of things. And Andy wrote some code for one of my customers recently in Go, which is a language that I haven't worked with. I first looked at his code, and I was like, there are no comments here. So I said, Andy, let's do a pair programming session where I'm just going to go through and comment your code. He agreed to it, and we sat down to look at it, and he was like, "Hey, man, there are different schools of thought. One school of thought is that above every function block or class or whatever, you've got a comment that explains what you're doing, but I tried to write my code so clearly that it explains itself." And I'm like, well, let's see about that. <laughs> And actually, when we started reviewing it, with the way that he structured his functions and his variable names and things, it was really clear what was going on. And I learned a huge amount by just looking at it: I took all of my background in being a developer, which is pretty long now, and when I applied it, I learned this language through his code. We used to say that if you want to learn about the world, read; if you want to learn about yourself, write. But you're limited if you don't do the reading part. If you don't read a lot of other people's code, then the code you produce is not going to be of the same... it's not even about quality, it's about the efficiency of how you do things.

Adrienne (10:46): I agree. And that's another point that I will make: a lot of the code that we write is not for machines, it's for humans to read. So comprehension should be of the utmost importance. And I love that you brought up being faced with something you are not familiar with, because a lot of the time the excuse is that somebody gets assigned a PR to review and they're like, "Well, I'm kind of unfamiliar with this part of the codebase," or, "I'm actually not familiar with XYZ." Here's an opportunity, instead of reassigning it to someone else, to take that chance to learn. And the other point that you brought up is actually holding a discussion offline to talk about this more thoroughly. You can go back and forth through the tool, through comments or through emails, and try to figure it out that way, but that does not lend itself well to your education about the code. By being able to pair with Andy and actually go through the code together, that's beneficial for everyone. There's knowledge transfer there: you've gained more knowledge about the code you're actually reading, and you expand your knowledge of the codebase overall, because now you are familiar with this, and you're not just pushing it off. And that's something I see a lot: I'm not familiar, I'm not going to be the proper person to review this. So they push it over to someone else, and then they never learn it. And so there's a constricted pool of reviewers who can review the code, and you get into these isolated scenarios of very specific people, or only one person, what I call single senior developer syndrome, who's stuck as a bottleneck, always reviewing all of the PRs because they're the only one knowledgeable enough. So expanding that, trying to transfer that knowledge to the rest of the team, and, if it's available to you, sitting down and having a conversation like that about something you're unfamiliar with is really, really great, and something I encourage for everybody that faces this issue in a code review.

Andy (12:59): And Marc has told that story a couple of times, and I continue to be humbled when I hear what he learned from it. But also, from my point of view, when we went through that review, I thought, well, I know what these things are, this is so clear. But it was clear to me. I try to use variable names that mean something, not just x or i, but something descriptive, so it's easier to read, because we spend a lot more time reading code than we do writing code. So making it clear for the human, not just the compiler, is very, very important. As Marc was walking through the code with me, saying, so where does this go? What does this do? I also learned to look at it from a different point of view: this is how someone else, who's not familiar with my thought process when I wrote it, looks at it, and if I just change this little bit, it's even clearer to me, who wrote it. So the review process can be good not just for getting quality code, but for understanding how different people would approach the same problem, how they look at the functions and variable names and what those mean to them. And then you're able to get a much better understanding, on both ends, of what that code is trying to do and what it's actually doing.
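
To make that concrete, here is a minimal Go sketch of the naming idea Andy describes; the domain and every identifier are hypothetical, not taken from the project he mentions:

```go
package main

import "fmt"

// Opaque version: it compiles fine, but the reader has to guess
// what x, r, and t stand for.
func calc(x []float64, r float64) float64 {
	t := 0.0
	for _, v := range x {
		t += v
	}
	return t * (1 + r)
}

// Self-documenting version: the names carry the explanation a
// comment would otherwise have to provide.
func totalWithTax(lineItemPrices []float64, taxRate float64) float64 {
	subtotal := 0.0
	for _, price := range lineItemPrices {
		subtotal += price
	}
	return subtotal * (1 + taxRate)
}

func main() {
	prices := []float64{9.99, 4.50}
	// Both functions compute the same thing; only one explains itself.
	fmt.Printf("opaque: %.2f, readable: %.2f\n",
		calc(prices, 0.24), totalWithTax(prices, 0.24))
}
```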

Adrienne (14:21): That is an excellent point that you brought up, and I love that you are focused on making it readable. You'd be surprised how many people use the code review to say, hey, look at how clever I can make this code. And then you get a bunch of comments back like, what does this mean? What does x mean? What does this variable mean? So it may be clever to you, as someone writing awesome code that's only five lines, but here's the point you bring up: in the moment that you are writing, you have the context of that code, and someone with fresh eyes, who is not within that same context, will bring up all of those points that you've mentioned. So you're already doing half the work, which is thinking about it and really trying to make your code clearer for someone else to read. And part of that is, when you do end up opening the pull request, it's part of that due diligence to ask: what are the things currently in my headspace, in this current context, that should be either written into the description or added as documentation or XYZ? What are the supplemental pieces of context, nuance, and decisions behind why I'm writing this code the way that it is? Adding all of that into the pull request would give a reviewer like Marc all that extra context. It's not to say that you would eliminate the discussion that you've had, because it's obviously beneficial to have that discussion, but it helps put them in your frame of mind and helps them review with all of the context you had. So that's another point that I will really drive home: make sure to have that context available for the reviewer and for others who may be reading the code that you write.

Andy (16:08): And I've found that a lot of times the worst code I have ever read was written by myself six months ago, when I was in a different frame of mind. So it's a little bit of a selfish thing too: just for my own benefit, I need to learn to write clearer code and better comments and better variable names. And the review process, it's like, yeah, never mind what this does for production or for Marc to understand; this is for me to understand when I need to fix it in six months.

Marc (16:37): There's an old phrase that I learned a long time ago: I explained it once, I explained it again, I explained it for a third time, and then I understood what I was trying to say. <laughs> And there are so many wonderful things here. I think this is such a great conversation. One of the things that you brought up, Adrienne, was that having the discussion might not be part of the native workflow of doing a code review. It might be: okay, I've got a list of pull requests, and I go through this stuff, and I understand some of it, and it looks good to me, and I don't understand some of it. And it reminds me, there was one tool that used to have, by default, "I'd rather you not submit this" as one of the default responses, for when you weren't ready for that pull request to go through. And some people would take such offense at just the label, like, why doesn't it say something else? But the point I was trying to make is that you can still have a hangout, or you can still call the person or have a Zoom call or whatever, and go through the code, even if your native process is not necessarily to do that, and you learn even more. But some people have such a narrow view that, okay, well, the code review is a pass-or-fail-with-comments kind of thing.

Adrienne (17:57): I'm glad you brought that up, because there are so many points to this. It depends on your team. Are you a small team that's all in the office, where this is way more accessible, and you can just tap someone on the shoulder and say, hey, I'm looking at your PR right now, there are a couple of things I don't understand, do you have time for a quick coffee chat to talk about it? Then you have teams who are global, on different time zones, where it may be harder to coordinate a Zoom call. But trying to have some shared time where that may be possible, and having all those questions ready for that agreed-upon time, could be a way to make it work. But yes, the very, very narrow view is: I write some code, I throw it in a PR, I throw it over to the reviewer, the reviewer hopefully looks at it, hopefully catches everything that I did not catch, then I go and fix it, and then "looks good to me," and everything is great. That's the very base process, and even just understanding that is a good place to work from. But there are so many side quests, so many things and plugins you can add to this process to make it so much better, and that is one of them. So I'm really trying to encourage people to expand on what works for their team. If what works for your team is to say, hey, we're very communicative, we're all on the same page, and we're all okay with having these one-off conversations because it benefits the whole team, then continue to do that, even though your formalized process may just be through the tool and getting these approvals. If it's not that way, there might be a different way to go through these things, and you may have to stick to the online process through comments. So it really depends on your team. The last point I'll make about this: having your team understand what the process is, and outlining it, could even be a good place to start. What are all your different states? Do you have a draft PR state? What does "ready to review" mean? What does it take to get an approval? How many people have to approve? Even just outlining the steps of your team's particular process is very helpful for anybody coming on new, or anyone who is currently on the team. And if they decide something needs to change, they can always go back and rewrite that team's default code review process.

Marc (20:26): Hi, it's Marc again. The DEVOPS Conference is coming to Scandinavia on the 23rd of October in Stockholm, and the 25th of October in Copenhagen. We can't wait to see you there. Now, back to the show.

Marc (20:44): And I just want to repeat something that you just alluded to, and I think Andy might have brought up before: the sharing of this type of information. You never know when the person who wrote that code isn't going to be working on it anymore, and somebody else is going to have to maintain it. And this idea that the code is not for the compiler, the code is for the humans, because we're the ones who are going to have to maintain it in the future, and who knows what type of conditions we're going to be maintaining it in. The more that we share this, and the more that we work to improve our ability to understand it, the better life we're going to have, the less suffering we're going to have.

Adrienne (21:23): Absolutely, just get all of that information out, even if it is for yourself. Once you've had that experience where you go back to your own code, and you have to fix a bug or refactor something, anything where you come back to your own old code, you're like, "Well, at the time, I knew everything about this. I could have told you anything about it in five seconds." But even if it's a month from now that you have to go back to it, all of that context is lost. So even if you wanted to be selfish, let's say do it for yourself a month from now. But obviously, the bigger benefit is for whoever reads this; it'll be much better for them in the future with as much context as possible.

Marc (22:11): Excellent. You brought up the word context, and we're using the word context a lot right now in terms of things like AI tools, ChatGPT, and Copilot X. One of the things that prompt engineering is really about is creating context for the machine, so it can narrow down its choices and give you the best possible information, be it code or something else you're working on, process-related or whatever. Have you seen or worked with anything related to AI in the area of code review?

Adrienne (22:46): This is one area where I have not, but I've heard and read about several tools, and I've been in discussions where people say we can just completely automate the code review process. I do not think that is possible, not today, and I don't think it should ever be fully automated. What I will say is, there are AI tools that, say, summarize: okay, this PR has about five minutes' worth of reading time, or here is a bit more context for you. Maybe it will tell the author creating the PR, hey, you're missing documentation, or hey, you're missing unit tests or code coverage. You have a lot of those automated checks already in place. In that sense, I like that, I actually love that, because it helps us with our due diligence of making sure everything is set up for success for both parties. But in terms of the actual reviewing, I don't ever think that can be AI-ed away; you need the human to make that judgment. That's where I think it falls short. One of the really popular examples is static analysis tools. They can do what we can: look at every line of code over and over and try to find code smells or spot complexity. If we were to do this on our own, it would take a very long time, so we take advantage of them to do what they're very good at, mundane tasks that are impractical for a human. But a tool can't tell us the intent of what the developer is trying to write. If there's a misspelling in a particular variable, and you meant to refer to something else, it can't tell you that. Or if you have a rule on your team that says, make sure your variable names are meaningful, how does a static analysis tool do that? How does any tool right now determine what "meaningful" means? That's where humans still play a very valuable role, in the reviewing part and in writing code that makes sense. So I love the boom with AI. My stance right now is: let it help us, let it make us better at making our code, and all of the information and context around it, more detailed and more complete, so that everyone reading and reviewing our code is supplemented with this information. It will aid us, but it will never replace us.
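
As a hypothetical Go sketch of that gap: a static analyzer can flag mechanical issues (ignored errors, dead code, high complexity), but only a human reviewer notices that a consistently misspelled name obscures the author's intent. All names here are invented:

```go
package main

import "fmt"

// "Reties" is a typo for "Retries". The code compiles, the name is
// used consistently, and no linter will object; only a human reviewer
// notices that the word itself is wrong and the intent is obscured.
func fetchWithReties(url string, maxReties int) error {
	for attempt := 1; attempt <= maxReties; attempt++ {
		fmt.Println("fetching", url, "attempt", attempt)
		// ... a real request and success check would go here ...
	}
	return fmt.Errorf("giving up on %s after %d attempts", url, maxReties)
}

func main() {
	// An error-checking linter would flag this discarded error,
	// which is exactly the kind of mechanical issue tools are good at.
	_ = fetchWithReties("https://example.com", 3)
}
```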

Andy (25:25): I've started using GitHub Copilot quite a lot, pretty much all the time I'm coding. And it's just amazing how I can say, give me a function to do this, and it will give something, and most of the time it's mostly correct. It's basically never absolutely correct, but it's close enough: that's right, that's how you do this, and if I tweak this and tweak that, then I get it. But the other thing I really like doing with it is, when I jump into a new project with a new client or something, I open it up in the IDE and just tell the chat, explain this to me, and it will go through and give what it thinks is going on. I think that's very, very useful in these kinds of reviews too: okay, this is what the machine thinks is happening. Does that match with what I see? And if not, why?

Adrienne (26:15): That's an excellent point that you bring up: with most of these tools that we use, we still have to review the output. GitHub Copilot and tools like it are excellent at creating structure for us. If it's a lot of mundane stuff, like setting up unit tests that you have to write multiple times, yes, use it for that. But as you said, you still need to look at it to make sure it's correct. So in that sense, it's very good for getting you started and cutting down the time spent on mundane tasks, and it lets us focus on what we're really good at as humans: reviewing it, tweaking it, making sure it's correct. And take advantage of it for your understanding, like you said, with explaining something. It could be right, but now that you have a base understanding from how it has explained things to you, you can go from that point and either Google something more precisely or have it narrow down what you're trying to do. So again, these things are always aiding us and making our jobs easier, but not replacing everything we do, especially the judgment part.

Marc (27:26): Absolutely.

Andy (27:27): I've said a few times that for writing code, I think senior developers should be required to use something like GitHub Copilot or some other copilot, just because it speeds you up so much if you already know what you want and what you're looking for. But I think junior developers should be forbidden from using it. You need to go through the thought process, you need to figure out how code works. Where you cross that line, I don't know, but I think there's a difference: when you're learning how code works, you should be generating it yourself, but when you're trying to make the code work better, yeah, take all the shortcuts, use all the hints, use all the guides. Then on the other side, when you're explaining the code, I think everybody should use it. Why not? Explain this to me. What is this meant to do? How is this supposed to work? Use all the hints you can to get the best understanding you're able to have of what the code is doing or trying to do. Then you can make your review with that understanding: do we need a comment here, or wouldn't it be clearer if we renamed this variable?

Adrienne (28:35): I totally agree with you. And I also agree with the valid concern you brought up. For a lot of developers who are starting out now, in this age, I can only imagine what it's like to learn how to program. There are so many things out there, and if you are using AI tools, how do you know what is correct? Right? It is generative. If you ask it to explain something to you, that's one thing, but I know a lot of developers now, no matter how many years of experience they have, are using it to write their code. Like you said, as a senior you can easily point things out, right? You can say, this is not correct, or this is not, let's say, the conventional way something is written in a particular language. You see those immediately because you've had the experience of writing it yourself. I'm happy that you brought that up, because I'm not sure what to do when people who don't have as much experience depend on these tools to do that for them. I would hope that they use them as much as possible for the explaining, and then try to work it out themselves. I think that would be the best of both worlds: you use something to aid your understanding, but then you actually do the hands-on part and write it yourself, so that knowledge sticks and you can find out what's correct and what's not. So ideally, there are mentors or senior developers guiding the developers with less experience to say, hey, don't fully depend on this, but absolutely use it to aid your understanding.

Marc (30:21): Fantastic. I'd like to touch on the DORA metrics. This is something where we're still talking about the base four DORA metrics, even before it became five, with lots of companies, and helping them set a north star. Even before Accelerate, that advice is in The DevOps Handbook: pick one metric and use that. And then there's the interesting thing with this measuring of developer productivity, this new idea that is out now. We used to say that measuring velocity is team-local; it's only for a team to optimise, it's not something that you put up on a big dashboard where management looks and says, hey, the velocity of this team is really high, and the velocity of this team is really low, we need to go do something. But this measuring of developer productivity is really a touchy subject. Could you give us your take, Adrienne? First, maybe you could explain for the audience what's going on: what is this new measurement of developer productivity? And then, what concerns might we have, and how do we address them?

Adrienne (31:21): Sure. So the four DORA metrics, and I'm sure the listeners of this podcast know what they are, but very quickly: deployment frequency, how often you deploy to production; lead time for changes, how quickly you can get a commit into production; change failure rate, the percentage of deployments that cause a failure in production; and time to restore service, how long it takes you to recover from a big failure in production. These are the original four, the ones that I will talk about, and how code reviews impact them, specifically how long it takes for you to get changes out, and deployment frequency. So there's this new report, was it McKinsey, I forget who it was, but there is a new report out saying, yes, you can measure developer productivity, and it's very much tied to these metrics. The very valid concern that a lot of engineering teams have is that if these are the only things you're looking at, and these are the things you are measured against and that are tied to, let's say, performance bonuses, or how well one team is doing against another, that's a very, very narrow view of how well a team is performing. The good part is: let's say your team has no information at all about how it is doing. This would be a great start, and like you said, pick one, focus on it, and use it to improve your own internal processes; see how you can make your team better and optimize your workflow, and then maybe start trying to make the other metrics better for your own team. When I first read this report, I said, this is great for a team to all get together, agree on it, and agree to work together to hit whatever metrics they are trying to achieve. Some will say, yeah, we need to be considered an elite team. And sure, if you want to use that to help the team move together in a single direction to make their processes better, great. But the concern is that if you're not rated an elite or high-performing team, and that causes negative impacts on the team, that's where a lot of people worry. There's a lot of nuance that is not captured in these four metrics. There could be some random peak event that goes against everything, and you need to scramble; that doesn't capture all of the overtime people may put in to fix something. And there may be pieces of, let's say, your pipeline that were not as robust as you thought, but you were able to find them through these lower measurements. The point I'm trying to make is: upper management, or whoever, looks at these four metrics and just sees some random numbers go up and down, and if these are all down, then that's not good for your team. That narrow-sighted judgment of a team is what is scaring, or really upsetting, a lot of developers about using only these four metrics. There are other things that we should be looking at: how well does the team work together? How good is the developer experience? How robust is your pipeline? How quickly can you roll back from an outage? Are there pieces of your pipeline that allow you to quickly pivot if you need to? There's a lot that is not captured in the four metrics, and I think that's what is concerning a lot of teams.
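
For reference, here is a minimal Go sketch, assuming a hypothetical in-memory deploy log, of how two of those metrics could be computed: deployment frequency, and lead time for changes as deploy time minus commit time.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// Deploy is a hypothetical record of one production deployment.
type Deploy struct {
	CommittedAt time.Time // when the change was committed
	DeployedAt  time.Time // when it reached production
}

// deploymentsPerWeek is deployment frequency over a given window.
func deploymentsPerWeek(deploys []Deploy, window time.Duration) float64 {
	weeks := window.Hours() / (24 * 7)
	return float64(len(deploys)) / weeks
}

// medianLeadTime is the median of (deploy time - commit time),
// a simple reading of "lead time for changes".
func medianLeadTime(deploys []Deploy) time.Duration {
	leads := make([]time.Duration, len(deploys))
	for i, d := range deploys {
		leads[i] = d.DeployedAt.Sub(d.CommittedAt)
	}
	sort.Slice(leads, func(i, j int) bool { return leads[i] < leads[j] })
	return leads[len(leads)/2]
}

func main() {
	now := time.Now()
	deploys := []Deploy{
		{CommittedAt: now.Add(-50 * time.Hour), DeployedAt: now.Add(-48 * time.Hour)},
		{CommittedAt: now.Add(-30 * time.Hour), DeployedAt: now.Add(-24 * time.Hour)},
		{CommittedAt: now.Add(-10 * time.Hour), DeployedAt: now.Add(-2 * time.Hour)},
	}
	fmt.Printf("deploys/week: %.1f\n", deploymentsPerWeek(deploys, 14*24*time.Hour))
	fmt.Println("median lead time:", medianLeadTime(deploys))
}
```

Even this toy version shows why the numbers alone are easy to game: skipping review shortens lead time while saying nothing about what the change broke.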

Marc (35:02): The first thing that comes to mind for me is what usually happens in so many organizations: they reach a critical mass where the technical debt and the maintenance burden come from having had really bad incentives for a long time. It could be that you've reached a 1.0 situation where everybody just smashed things together in order to make a deadline, because otherwise the company's not going to have a product, and then you have a huge burden of interest to pay on that technical debt. And then: hey, let's start using DORA metrics. Some teams pushed back really hard in the beginning, built their pipelines well, and put just the right amount of unit tests or smoke tests in there to keep things rolling. That team that was really slow in the beginning, and was getting beaten up because of it, now has a little bit easier time. And the high performers from the previous context, who were smashing things together to get to production, now their numbers come down and the other team's numbers come up, and people think this is a problem with the humans. But instead, the environment has created this, and it's about the working environment: the developer experience, the psychological safety to make changes, and the safety net of having enough time to experiment, to do proper test automation, and all of these things. They don't see this as an investment opportunity; they see it instead as, we have good people and bad people.

Adrienne (36:37): I appreciate you articulating that much better than I could, because now I have a more coherent thought. It's absolutely the environment. Looking at just these four metrics, the environment is going to change for the developers if they are tied to these four metrics alone. And so there are things that would contribute to better outcomes but are not held in high regard, like code reviews. People argue that code reviews are going to decrease your deployment frequency and your lead time for changes, because depending on how long it takes for a PR to get reviewed and actually go through and get deployed, that's going to lower those metrics. So if raising these vanity metrics becomes the incentive, if that's how they are used, that's where the shortcuts come in. That's where the, let's just skip this code review so we can increase this particular metric, or let's just push these through: looks good to me, looks good to me, looks good to me, approve. And then they don't realize this might affect other things, where maybe more bugs are not being caught, or other things are passing through without being seen or reviewed. The other part, about psychological safety, is absolutely true here. If these are the four you're going to be held accountable for, there are people who are going to make mistakes and fear for their job, because in that very extreme situation they're now going to lower a metric. So instead of focusing on how to make it okay to fail, and making your workflow and pipeline more robust, people are going to be incentivized to game the metrics to make them higher, and that usually involves a lot of shortcuts that are not a good investment in the long run. So thank you for bringing up the environment, because that's what I wanted to say: that's a lot of what the concerns are around these four metrics, now five, if people are tied to them so closely, and only to these metrics.

Marc (38:53): I think you put it beautifully; you just inspired me to summarize a bit. We have a tradition on this podcast: we ask two questions of all of our guests. And we have a new set right now that was inspired by a thought experiment I started performing on leadership. So I have two questions to ask, and these are tough, I warn you up front. The first one is: Adrienne, you are the leader of a team, and I am a trusted team member, and the rest of the team is gathered, and they are complaining that there is a problem. And I put up my hand and say, Adrienne, I can take care of this. As the leader in this thought experiment, what would you say?

Adrienne (39:39): Let me think about that for a second. Well, I'd likely try to talk to the entire team to gather as much information as I can about what the problem is, because I don't like not knowing what is happening. And then, to the person that I do trust, who said they would take care of this, I'd probably suggest working together to take care of the problem. I don't want to imply that I don't trust them or that they are not able to handle this problem on their own; I want to approach it as, let's try to fix this, whatever this problem is, together. I feel like that would help me not only understand what is going on, but also show that I'm willing to help fix the problem with the team. So I think that's how I would approach this scenario.

Marc (40:31): Beautifully put. So the second question, and maybe the more interesting one: okay, we've established a situation, and now what we would like to do is think about the future. How could we change the behavior of those others who were just complaining about the issue? And how could we make them behave more like you or me in that first part of the thought experiment?

Adrienne (40:55): I mean, if there are only a few who step up, which in my experience is very realistic, there are a lot of people who don't want to do the due diligence to actually do something. I would try to communicate with my team to find out what is stopping them from having some accountability in the project or in the application, and not in an interrogating way, or an accusatory way of, you are all lazy, you should be more like me and this other team member. But try to see: are there bottlenecks that prevent them? Is there something about the process that just makes it easier to complain than to actually do something? For example, if there's a change that needs to be made, and it's a super cumbersome process for someone to do, they'd rather not do it, but they do get impacted enough, either because customers are complaining or it's causing a lot of extra work. Those kinds of things I would try to find right away and nip in the bud. What are those things that people are avoiding because they're hard to do? If it's a more difficult thing, maybe a process dealing with upper management, or a more cultural thing, that takes a little bit longer; you can't change that right away. But it still fully relies on communication with the whole team. So again: what are these problems? Are there easier things that we can fix on our own, internally, process-wise, to make it easier for others to take more accountability and make changes? And then see if there are any cultural things that prevent them. Or, I don't know, is there animosity between team members, where they're like, okay, well, this is not my job, I'm just going to let the other person do it? Try to see if there's any of that, and then find a way to fix it. That's easier said than done, of course, but having an understanding of how your team is working is crucial to solving any of this.

Marc (42:55): Brilliant, it all comes back to the humans and the environment that we're working in. Absolutely. It's been a wonderful experience, Adrienne, to have you on the podcast today. I can't wait to hear you speak and to meet you at The DEVOPS Conference Scandinavia in Stockholm and Copenhagen in October. I'd like to thank you so much for taking the time and staying up late in Las Vegas while we get up early in Helsinki to do this with us. Thanks so much for being on the podcast.

Adrienne (43:27): Thank you. It was a pleasure being here. I'm really excited to talk a lot more about code reviews. So you got a little teaser here. But yes, I will look forward to meeting both of you. And I can't wait to share everything that I have about code reviews, including my own personal experiences in Stockholm and Copenhagen. But thank you, this is really, really fun.

Andy (43:48): Fantastic. Thanks a lot, Adrienne.

Marc (43:50): All right. And thank you, Andy. So that's The DEVOPS Conference pregame podcast. Thank you, and we'll see you at the conference. Before we go, let's give our guest an opportunity to introduce herself, and we'll tell you a little bit about who we are.

Adrienne (44:08): Hello everyone. Kamusta. My name is Adrienne Braganza Tacke. I am a Filipino software engineer and currently a Senior Developer Advocate at Cisco. I do a lot of things. I fell into tech accidentally, and a lot of the things you'll hear from me are also accidents, from speaking at conferences to writing books and creating courses. Even software development was an accident, but it was a happy accident. The more important things you should know about me are that I absolutely love desserts, so if you know any good dessert places, always hit me up and tell me, and I absolutely love playing old-school computer games; Age of Empires II is my favorite. So hello.

Marc (44:55): My name is Marc Dillon. I'm a lead consultant in the transformation business at Eficode.

Andy (45:00): My name is Andy Allred and I'm doing platform engineering at Eficode.

Marc (45:04): Thank you for listening. If you enjoyed what you heard, please like and subscribe, it means the world to us. Also, check out our other interesting talks and tune in for our next episode. Take care of yourself and remember what really matters is everything we do with machines is to help humans.