
Evolving teams beyond DevOps | Johan Abildskov

At The Future of Software conference in London, Johan Abildskov covered some of the trouble that comes from the disconnect between how we perceive making software and what reality shows us, emphasizing concrete examples and practices. In his talk, Johan highlights topics every software development team should be aware of, including playfulness as a mindset and psychological safety versus high performance. About the speaker: Johan is a Software Engineer at Uber, where he helps build Odin, the stateful orchestration platform that is part of the infrastructure helping Uber serve more than 1M trips per hour, every day. Before becoming a software engineer, Johan spent a few years as a DevOps and Continuous Delivery consultant, gathering experience across the industry.

Transcript

Hello, I'm Johan. I'm a software engineer at Uber. I help build our internal container orchestration platform for stateful workloads. I work in Denmark, where we have an office of around 150 engineers. And that's part of my story, that I'm a software engineer. Before that, I did DevOps consultancy at Eficode. So I've gone from being around a lot of different organisations to being in a single organisation, more focused on my own success and my ability to execute rather than on enabling others to succeed. I will be talking a bit about evolving teams beyond DevOps, and let's see where that story takes us. Let's call it my opinion on modern software engineering.

To address the "LLMEPHANT" in the room: this talk is not about AI, because AI doesn't matter. It's just the ball that we're currently chasing. It used to be platform engineering, or Agile, or DevOps, SRE, cloud. The problem is that if we just keep chasing the ball, and the way we chase Agile, for instance, is by doing Scrum, we don't get better at chasing the ball. We don't become better at being agile by doing Scrum; we become better at Scrum. So at some point we've just kept chasing balls and never gotten any better at ball chasing, or at figuring out that magical thing: when we get the ball, then what? So, this talk is not about AI.

When I make the statement "let's evolve teams beyond DevOps", the assumption is that we've gone through a few states, that we're now at DevOps, and that we want to evolve beyond it. One of my problems with that is that we as an industry, whether we do DevOps or not, suck at making software. We make bad software. We are bad software engineers, as an industry.

My older son, soon seven, started school. They get an app for reading practice. Excellent: games for reading motivation. The login flow is such that, first, I have to figure out which of three different login methods I need to use. When I select the right one, I get a drop-down of around 200 entries, where I have to pick the right region of the country I'm in. Then I'm prompted for the username and password. And if I tick the checkbox after putting in the username and password, maybe we would expect that it saves the login on the device. That would be common sense. Instead, it changes my password. This is an iPad app, on the most user-friendly platform in the universe, and it is targeted at kids. Why is it so bad?

It feels like, if the folks there had ever just held an iPad in their hands, they would have had to make it deliberately bad for it to be as horrible as it is. That's how it feels to me. But we have to realise that's probably not the case, that there isn't a team of evil engineers whose primary focus is to make me angry while I try to log into a kids' app. If there were, I'd be kind of flattered. There are smart people who want to succeed, and they make horrible stuff. Not horrible in the sense of evil; it's not trying to get me addicted to a cocaine-like trip of social media, or to influence my politics, or things like that. It's just bad. Objectively, it feels like.

One thing we could say is that we build complex sociotechnical systems. It's commonly accepted that software engineering is building and operating complex sociotechnical systems, but that doesn't make any sense; no one understands what it means. What it means is hard-to-reason-about sociotechnical systems. That's what "complex" is.
When we do something, we as engineers like to think there's a reasonably causal link between cause and effect. But when we enter complex systems, that's not as true. We have emergent behaviour, we have vastly distributed systems, and it's hard to reason about what my change actually does. So we have hard-to-reason-about sociotechnical systems. The "socio" part means hard-to-reason-about technical systems with humans involved: the worst thing you could do to engineers. Add the technology, and it's hard to reason about computer things with humans involved. That's roughly what software engineering is. We're building software, building systems where computers do stuff. In practice they're hard to reason about, thus complex. And unfortunately there are humans involved. That means politics, it means economics, and worst of all, it means feelings. But that is our domain. So, let's talk about soft skills. No, no, no, just kidding.

But we are still working with complex systems. In general, we're working with complex online systems: the software is running, it's always online, and if it's not, someone is angry. And systems tend to grow larger and larger, at least most non-trivial systems do. Often software will grow larger than a person. It will grow larger than a team. It may even grow larger than a substantial part of an organisation. The obvious question to answer, then, is: so what? Why does this even matter? I hope we can talk a little bit about that.

We talk about these giant systems and how we want to be productive, how we want to impact our users, how we want to create value. But we've done this kind of thing before as an industry. We have DevOps, we have Agile, we have cloud, we have functional programming, we have automated tests. We have all these things where we, over the last, let's say 50 to 70 years as an industry, feel like we have grown. But it's still all the same problems, frustrations and confusions. What is frustrating? Unclear requirements, changing requirements. I'm still surprised that my software breaks. Why can't we meet a deadline? Nothing has changed. And then we say we want AI, or we want DevOps, or something like that. But is that really where we want to spend our innovation tokens? Are we so afraid of being left behind that we keep chasing the new ball?

Part of the problem is that we (and I see myself as an engineer, even though I say fluffy things) identify ourselves in terms of the code that we write. What do you do? I write code. Hopefully that code gets deployed to production. But maybe I shouldn't see myself as a programmer. Maybe I should see myself as an Uber engineer. I work on a container orchestration platform, particularly in terms of capacity; my team builds services that manipulate capacity. So maybe it's better to say I'm a capacity engineer than a software engineer, because my code doesn't matter. Writing the code is just one way of interacting with the system. But it feels like the most important part of the system, because it's what I manipulate directly.

In many organisations we've seen a team become the team that runs Jenkins or Jira or something like that. We built some internal infrastructure, and then somehow figured out we need to do this on Kubernetes. So we spent a lot of time getting Kubernetes up and running tools on top of it. And at some point in time we forget that our purpose is actually to run Jira and Jenkins.
Because we spent all of our time running Kubernetes. At least if we agreed that we were Kubernetes engineers, we would be consistent about how we work and what kind of problems we're trying to solve. But our mental models, in general, are just wrong. We think in true and false. We think in absolutes. For the engineers among us, that's likely part of what attracted us to this industry. As Kelsey said, we've spent so much time and effort trying to make our machines deterministic. But we're also building such large systems that they will always be broken. Everything is always broken, and look at all the magical stuff we do anyway.

We spend a lot of time thinking. We plan. We want to be sure that we do the right thing. We worry: what if we do the wrong thing? We think that would be the worst thing that could happen, doing the wrong thing. But I think we will never run out of things to do, if we assume we have hired more or less competent people. Doing something that doesn't sound too stupid is probably much better than doing nothing. The only point where we run out of things to do is when we're trying to figure out what to do. We try to sense in order to do things, but maybe we need to do things in order to be able to sense.

I don't know who came up with the term "ready for work". Like a Jira ticket with a label: is this ready for work? Someone wrote the ticket. You can work on it. No, no, no, we have a process for getting stuff ready for work. Just do it. Do things.

Making software is hard. It's easy to make it sound easy: we just write the code, you get data, you touch it, you put data back. But it is so hard because we have all these tensions. Will we run out of things to do? Why does the system not work as it should? Are we falling behind on AI? And we take our constraints: we are in a highly regulated industry, so we can't. There is a system. We are a large enterprise. We have bureaucracy. Those are the rules, those are the constraints. Play the game. Optimise for it.

In many ways, we've had a series of years where we think more about mindfulness, being present, being in the moment. A term I've started to really like is playfulness. It has been shown that when we interact with puzzles, like doing a Sudoku or a crossword, or playing a video game, we have a completely different mode of understanding failure and experimentation. We interact with the systems and rules in a playful manner. We try to figure out the limits. Where can I break them? We're not afraid to try something out. But in many cases we get so paralysed during our daily work, so afraid we'll do the wrong thing, that we don't do anything at all.

One of our heuristics, or rules of thumb, as engineers is what we call "make a diff a day". We don't have pull requests, we have diffs. Everyone should make a diff a day, completed, full lifecycle. I've been in a bunch of organisations where it doesn't really matter if there's a day where you're not there, because everything moves so slowly that no one notices. Do something every day. Make something to completion. I don't care how many story points your ticket was. Build something. Do it. Get it into production. I know we have reasonably mature test pipelines, deployment strategies, all the right things. But we got there by doing. We like to think about software engineering or programming as a mental discipline.
And right, we are reasonably smart people, we think. But execution is the only thing that matters. If you don't deliver anything, if you don't change anything, you don't have any impact. So make a diff a day, change something, build the momentum. Build the muscle of doing something.

We have the DevOps paradox: someone figured out that we can either change things a lot, or we can have reliability and stability. Then other folks said, well, if we change things all the time, we actually end up with a lot more stability and reliability. And that kind of also works for psychological safety versus high performance and high standards.

Generally, we have this tendency (vastly overgeneralising, maybe imposing Nordic values on everyone; I'm from Denmark, I apologise for that) to say: either we can have a safe and healthy culture, where we're kind to each other and treat each other with respect, or we can have high standards, high expectations, high performance. That a safe culture and a performance culture are at odds. And I think that's just wrong.

The problem is that we (again, overgeneralising) as an industry, as a profession, as the stereotypical nerd, are unable to have hard conversations. I'm unable to say, "These are the expectations. We've talked a bit about this. Maybe a diff a day isn't a hard-and-fast rule, but maybe you should do a bit more." Or, "I know you're very productive, but you also need to consider how well you help elevate the team around you. Maybe you're lagging a bit there. Let's agree that this is a focus area for you." The thing is, when we're being explicit and transparent, then we can have those conversations. That feeds into not tolerating the brilliant jerks, those who are very productive and impactful but create a toxic culture, and it also goes the other way around, to those who actually drag on the team. What we can do, what we should do, is try to set higher standards, and be transparent about it. My naive hope is that we can use higher standards, higher expectations and more accountability to actually drive psychological safety up, because we've reduced the friction between reality and our ideal, between what we're actually saying and what we're doing.

How many here write code? How many write code on a weekly basis? Okay, about a tenth of the room. Usually I talk to a bit more "doey" folks, so I had to realign.

Motivation is the key thing. How do you motivate people? There's a theory, from the book Drive: The Surprising Truth About What Motivates Us. We are in an industrial society with knowledge workers, so we more or less have the shelter-and-safety part under control, especially in the tech industry, with high salaries and things like that. The basics are there. So what motivates people? Purpose: do they see the meaning behind the work they do? Mastery: can we become good at what we do? Are we good at what we do? Where's the skill? And autonomy: to what degree do I decide my own work? And I buy reasonably well into this way of looking at motivation.

If we then look at this chart, there's a difference between intrinsic motivation and extrinsic motivation. Basically, does my motivation come from inside me or outside me? The extrinsic things are things like being punished: I don't want to be beaten with a stick, so I am motivated to do this. I need my salary. I like praise.
So I will do this, and so I'm motivated. Those are things that we can explicitly impact; they come from the outside. As a manager, as a leader, these are things that I can impact, that I can manipulate. Manipulate sounds like a bad word, but I can interact with these parts. The problem is that if we look at all the things on the intrinsic side, the things that actually matter for knowledge workers at a high level, those are all things that we have no chance of directly impacting. How do we motivate knowledge workers? We can't. We can certainly do things badly: if the need-to-haves aren't in place, that will be bad. But motivation is about creating the space where folks can be motivated, creating a culture where folks can have impact, can solve challenging problems, and can see the meaning in the work they're doing.

For me, as a very low-level engineer, thinking about how the work we do impacts the ability of restaurant owners, of drivers, of eaters to have a business, to have a day-to-day life, to get food, is probably more useful in motivating me than thinking in terms of servers and CPUs. And I think one of my key points is that motivation and control are very highly linked. Do I experience the world as things being done to me, or as me doing things? This, of course, relates to autonomy. But it also relates to management.

Many folks have had the experience of having a brilliant jerk on their team, in their organisation. There's this one person who's been here for seven years, knows everything, and we're afraid to hold them accountable or responsible for anything, because we don't dare fire them, because we'd be lost without them. But we also don't dare try to get them in line. So we as a team, as an organisation and as leaders have completely yielded our control to that person. They can just leave. We are powerless. What does that do to our motivation? What does that do to our culture? We need to take back that control as leaders. We also need to take back that control as engineers.

The next time someone comes to you and asks, "Is it okay that we spend time on refactoring or automated tests?", or whatever boring internal detail, you say, "I don't know. You decide." That's what being a professional means. But if we don't ensure that we get that control back, as organisations and teams, then it doesn't even matter, because then we are still just running after the next ball without getting any better at running after balls.

But none of that matters if we don't care. Like, give a... something, a bad word, about it. [imitating, sarcastic] "Oh, it's so annoying that we have bureaucracy, and this system works badly, and..." Fix it. Do better. Stop wasting taxpayer money on random software systems that we built. Be efficient. Be professional engineers, managers, leaders. That needs to be the motivation that we bring. We care about the work we do. We care about the impact we have.

My failure here, my worry, is not being boring or wrong or irrelevant, or that you all feel I'm stupid and look forward to telling me right after this. My failure mode is that I stand here, I bring these opinions, make these statements, and nothing changes again. We all feel good because we heard him rant; we can go and say, "Ah, yes, he was right." Then you go back, and nothing changes. So I want you to go back and do hard things.
Raise the expectations. Organise not around services or code or programs, but around charters, capabilities and outcomes. The code is worthless. This talk is not about AI. It's the impact that matters. If you want to evolve beyond DevOps, grow your evolving muscle, not your Agile muscle, nor your DevOps muscle or your AI muscle, because you know in three years it will be something else. Focus on the fundamentals. Hire good people. Get out of the way. And then, of course, I don't know what will come next. But I hope that you can grow to build it. Thank you for your time. [applause] [outro music] [music ends]