The implications of edge computing for DevOps, AI-generated code, open-source licenses, the rise of start-ups automating DevOps tasks, widespread reliance on Gen AI among developers, responsible innovation with Generative AI: it's all in this episode. Join the conversation at The DEVOPS Conference in Copenhagen and Stockholm and experience a fantastic group of speakers.

Darren (0:00:06): It's going to be interesting and I hope that it doesn't make the internet unsafe or less safe for people, but we'll see how it goes.

Marc (0:00:21): Welcome to DevOps Sauna Season 4, the podcast where technology meets culture and security is the bridge that connects them. Back in the DevOps Sauna. We're trying something new at the moment: we're going to have a look at the news that is interesting to Darren and me and some of our colleagues and customers. We're going to give you a little bit of commentary on some things that we see happening in DevOps and software development around the world. I've got my usual cohort, Mr. Darren Richardson, on the line.

Darren (0:01:03): Afternoon, Marc. 

Marc (0:01:04): Good afternoon, Darren. Always a pleasure to speak with you. Let's dig into the news. One of the things that's happened recently, there's been a lot of talk on edge computing and edge computing in terms of AI. We recently had Cheryl Hung from Arm on the podcast talking about edge computing, Arm doing a lot of work moving from edge to servers and doing a lot of support on edge itself. The news item that we flagged here is edge computing requires DevOps at scale. I think this is a really interesting topic. How does it strike you, Darren? 

Darren (0:01:41): Yeah, I think it's something that's not actually happening as much as it should. 

Marc (0:01:44): Yes. 

Darren (0:01:45): I mean, as you're saying, we've had Cheryl Hung. It's great to see players like Arm leading the way into the edge because, as we know, DevOps tends to be something that's slowly adopted. It's never really something that's done right from the start, or if it is done from the start, it's not done correctly. That's probably not the way it should be, but edge computing is going to become huge as we start shifting AI workloads to be more efficient. It's going to involve moving them closer to the person accessing them. What this means is AI workloads on CDNs, content delivery networks, and AI workloads locally at the edge with edge installations, maybe even on embedded systems. When we're talking about this, we don't really have a great model for DevOps at the scale it will require to have all these systems being moved back and forth onto edge-based systems. That's something that I think we see companies struggle with now. I've had the privilege of working with a few companies who are doing it from the start, and they're making some interesting steps towards it. I think it's something that we don't really have a mature model for yet.

Marc (0:03:04): Now, you brought up two of the most important items to me: embedded and software updates. When I look at this news item, it's talking about how IT needs to work more closely with DevOps. I'm not sure what that means exactly. It also talks about a different approach to how storage is managed. But to me, the interesting thing here is transforming from doing large-scale MLOps, machine learning operations, whereby we're training models in large-scale systems, lots and lots of GPUs, lots and lots of horsepower, to being able to have edge-based computing that has some learning capabilities, and then the ability to update the software on those edge devices efficiently at scale. There's the marriage of embedded software, which is a very large and important software niche in and of itself, with autonomous driving being one really good leader in this, and then looking at how MLOps, the training of deep learning models, is moving from the data center onto these very tiny microcontroller platforms. I think this is really, really interesting. The software update, to me, is going to be one of the big things to look for here.

Darren (0:04:23): Definitely, but I think we should clear one thing up in that I think the models are always going to be trained in data centers. The machine power required for the training of models on a large scale is always going to require centralization, but then you move those models to the edge to be used, to be leveraged, because once a model is trained, you take it elsewhere to have it function. It's going to involve, as you say, this MLOps between generating a model in these data centers and then moving it closer to whoever is needing it, moving it onto these distributed platforms where it can be leveraged. It's going to be MLOps, which is in its infancy, learning from DevOps, which is often, I don't want to say an afterthought, but often considered later. We're going to have to merge these two together and come up with ways of storing things closer to where they need to be. Training, in my opinion, will continue to be data center. Use of machine learning models, once they're trained, will move closer and closer to people. I think quite recently there was some discussion of executing these models on phone hardware. Obviously they aren't trained on the phones, but when they're being used, if we can run them locally in our hand, that will be a huge win for things like privacy, since we don't have to pass data off.
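
A minimal sketch of that handoff, train centrally and then package for the edge, might look like this in Python, assuming PyTorch and onnxruntime; the model architecture and file names are hypothetical stand-ins:

    import torch
    import torch.nn as nn

    # Stand-in for a model already trained in the data center.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
    model.eval()

    # Export to ONNX, a portable format that edge runtimes can serve.
    dummy_input = torch.randn(1, 128)
    torch.onnx.export(model, dummy_input, "model.onnx")

    # Shrink weights to int8 so the model fits constrained edge hardware.
    from onnxruntime.quantization import quantize_dynamic, QuantType
    quantize_dynamic("model.onnx", "model.int8.onnx", weight_type=QuantType.QInt8)

The DevOps-at-scale problem Darren describes starts after this step: versioning that artifact and rolling it out repeatably to a fleet of devices.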

Marc (0:05:52): Absolutely. This leads to our next news item: StarCoder2. LLMs designed specifically to generate code are coming. This is something that we hear so much about, and some of us even fear: are we going to lose our jobs as developers, as DevOps practitioners, as software professionals, to the robots? But now ServiceNow, Hugging Face, and NVIDIA have combined their efforts for advancing generative AI. We're going to see a lot of these big partnerships, aren't we, Darren?

Darren (0:06:27): I think so, and given that NVIDIA can kind of control, really, they control who can use AI at the moment, given that they're making the chips. So I think everyone's going to be lining up to partner with NVIDIA to get their hands on the GPUs they need to build these models. But yeah, we're going to see a lot of iteration in coding spaces. So we have StarCoder2 coming. There was an AI software developer by the name of Devin that was coming out as well, and I think we're going to see a rapid increase of this, even in things like Copilot, for one simple reason: it's so damn useful. I was coding in Python the other day, and I hadn't coded in a few months. Having ChatGPT, even the basic ChatGPT, just to give me a leg up, get me started, and remind me of what I used to know saved me hours. For people who are doing this daily, the return on investment is so massive that it's too large a market to leave untapped.

Marc (0:07:33): Absolutely. And the neat thing for me: there's a lot of us in our generation that cut their teeth on software, and then, the way this industry goes, you end up leading others, or maybe you get into architecture or other technical fields. So there's a lot of us that spend less time coding than we used to, and these tools allow us to take the knowledge we have from a higher, more holistic point of view and actually start generating code that can potentially be used even in production systems now. And I think it's really cool. It's impacted me in a huge way as well. One of the things to watch for here: those of us that learned computer science the hard way over the years, with a lot of experience, compared to the juniors of today. There's been a lot of debate over which is better: the senior folks who know what they're looking for and know what they want using AI tools, or the junior folks who may not really recognize when the machine, we talk about hallucinating in AI, and I'm not sure I really like the term, is not generating the most sound code choices. Any kind of thoughts there?

Darren (0:08:48): Yeah, it's going to be both and it has to be both. And that's actually the huge advantage we have in DevOps in that people are going to use AI to put out code and juniors are going to use it to put out bad code and seniors are going to use it to put out better code. And then we put it all through DevOps to put it through gates. We put it through checks, we put it through testing so that the junior starts to learn what is good code. It doesn't change anything except speed of delivery. And to expect that both sides of this coin wouldn't use it evenly is kind of wrong. And it actually leads into something that our CEO said in the DevOps trends we put out at the end of last year. So we have the idea of the way forward in AI actually being prompt engineering. That's going to become key. And it would not surprise me if we started actually seeing college level courses appearing in the form of prompt engineering in the not too distant future. And I think that's really going to be what separates the seniors and the juniors. 
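
As a rough, hedged illustration of that discipline, here is what a test-first prompting loop can look like in Python. The slugify function, its tests, and the workflow are invented for the example; the point is that the human writes the acceptance test, and generated code is only accepted if it passes:

    import re

    def test_slugify():
        # The contract we hand to the model before asking for any code.
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  DevOps  Sauna ") == "devops-sauna"

    # A plausible implementation a code model might return for that prompt.
    def slugify(text: str) -> str:
        text = text.strip().lower()
        text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics
        return text.strip("-")

    test_slugify()  # the gate: only generated code that passes gets merged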

Marc (0:09:53): I would love to see a test-driven, disciplined prompt engineering course for developers. You know, be it plain test-driven development, acceptance test-driven development, or behavior-driven development, what have you. I think there could be some really interesting content there on understanding how we generate test cases and then generate code that fulfills those test cases in an intelligent way using AI. Okay, let's move on. So, a huge topic in the open source world at the moment, and nearly everybody is using HashiCorp, so there's a lot of people affected here. The news is that the fight between HashiCorp and OpenTofu is getting ugly now. How do you see this, Darren?

Darren (0:10:36): Yeah, so a bit of background. Essentially, HashiCorp swapped the open source license on its Terraform product for a more business-centric one, the Business Source License. And what this caused is what it always causes in the open source community: a group created a Terraform fork called OpenTofu. So this is standard. But now what's happened is HashiCorp have accused OpenTofu of taking code from Terraform while it's under the Business Source License. And obviously, OpenTofu have denied this and said, yeah, perhaps they're confusing it with code that was part of Terraform while it was still open source. And it raises two interesting questions for me. The first is: is open source inherently anti-business? Because we seem to see this a lot, where a company switches over from an open source license to a business source license for one specific and obvious purpose: to generate revenue. And I liked to use HashiCorp as the example that you didn't need to be business source or closed source to be profitable. And then they switched, which kind of threw my game off. Do you have any thoughts on whether open source is anti-business?

Marc (0:11:54): On the Business Source License: this is anti-competitive for me. And I think it's kind of unfortunate that people are starting to confuse this type of license with actual open source. As a reminder for our listeners, open source means that the software is available to anyone for any use. And saying that you can't use it if you might compete against the maintainer is pretty much not open source at all. But the idea that open source is not pro-business, I've heard this misconception for a very long time. Most open source developers, or I should say very many, are paid professionals sitting in organizations being paid to work on that software. And business can still provide enterprise-level support arrangements based upon open source software. There's a huge amount of business here. Sometimes when I see this, my knee-jerk reaction is that somebody comes in and says, oh, we'll make even more money if we basically close the door for the early adopters, close the door for the growing SMEs that could use this offering and then pay for support in the future, and instead try to hog market share. And I don't think there's a good ending here.

Darren (0:13:10): I think that's a great answer. The second thing about this one was the case itself, because obviously there's been some back and forth, with HashiCorp making the accusations and various people coming down on whether they support them or not. In my opinion, based on what I've seen, the files don't substantiate it. And I worry that the large company in this case is going to use its considerable power to basically crush the open source innovation that took them as far as it did. Now that they're business source, they basically lose money with everyone who chooses OpenTofu over Terraform. I have some concerns that it's just going to be a case of volume, this kind of pressure from an overpowering external force. And I hope it's something that OpenTofu has the ability to stand up to. But I think we also covered this quite a bit in our episode with Amanda Brock.

Marc (0:14:11): I believe we did. And for avoidance of doubt, companies can make their choices as they see fit. People will vote with their confidence by either subscribing to those options or not. I don't take a strong stand on this case in particular. But I do believe that open source is a model we should be supporting more, and I think it's good for business. And that's based upon my experience. There's kind of a wild card here in the news that I'm just putting up a little bit. I wish these guys the best. I don't know what this case is really about, but here it is: Vilnius company CTO2B secures a million euros of pre-seed funding for their startup to automate DevOps on cloud infrastructure. And it's funny to kind of single this one out. I expect that there are thousands of these types of things coming out all the time, but you never know which one's going to be the next unicorn. And this is a little bit of an interesting case. Did you have a look at it as well?

Darren (0:15:14): I did, yes. And there's a quote from the co-founder that says we need to break free from the cycle of solving the same infrastructure problems for every new company. And when I read that, I was just like, this guy gets it. I understand why they ended up with a million euros of pre-seed money.

Marc (0:15:33): And I've seen a few different companies that are looking at similar, related things. You know, we have the cloud providers, AWS, GCP, Azure. They all have their own tools. It can make things a little bit interesting: you get into their infrastructure and it's so easy to use the tools that you get a bit locked in. But yet, when we start looking at deploying DevOps toolchains, even ones from the big platform providers, we end up having to do an awful lot of work to get infrastructure up and running and repeatable across them, not to mention the actual workloads for the applications themselves. So I think there could be something really interesting here, and I'd like to see what happens with these guys.

Darren (0:16:16): Yeah, I don't disagree. There's the DevOps Pro event happening next month. I'm kind of hoping I run into these guys there, because I'm going to be presenting, and I kind of want to know how it's going. They've nailed something that we've discussed before, which is that everyone thinks their problem is unique. I'm sorry, I'm sure that everyone's special, but these problems are problems that we've solved before and will solve again. So I hope they have something behind their claims, because this could be very cool.

Marc (0:16:53): Absolutely. So look out for Andrius and Aleksej, bring them on the podcast for us. I'm sure we would love to learn more about what's going on there. Okay, there's a survey that indicates widespread reliance on Gen AI already among developers. Docker did a survey of 885 developers and found that nearly two-thirds of the respondents are using AI for coding, and a third for documentation. Think how many people were doing documentation before AI made it easier. It was, I think, less than a third.

Darren (0:17:30): Yeah, I'm pretty sure it was 5%. 

Marc (0:17:32): You know, nearly a third for conducting research, nearly a quarter for writing tests, and about 20% for troubleshooting and debugging, which are slightly funny numbers. One of the things that I like most about AI tools in development is being able to say: explain this code to me, or help me understand these references, or show me this code in a language I'm more familiar with. So this troubleshooting and debugging, I would expect a little bit higher. Most widely used Gen AI platforms: ChatGPT, Copilot, and Google Gemini. But really-

 Darren (0:18:11): I do think that- 

Marc (0:18:12): Yeah. 

Darren (0:18:13): I do think the percentages are accurate. My use case is either generating the code or telling ChatGPT that I'm too lazy to document it, please do it for me. So it doesn't surprise me that documentation is one of the largest tasks there. 

Marc (0:18:28): And then, kind of a neat coda to this: nearly half said there's too much emphasis on AI. I think we're all suffering from that at the moment, but still, the numbers tell the story. And I wonder about this third of the population of developers that are not using AI for coding: what are you guys up to?

Darren (0:18:51): I think, as a bleak response, it's likely they're getting left behind. It's not an ideal situation, but AI is going to become more and more important. It's already in use by two-thirds of coders. If people aren't adapting to the use of AI, it's going to continue to be detrimental to their workday and detrimental to their CV. It's, in my opinion, not the kind of thing that an individual can stand up against. Maybe entities like the EU can stand up against it, but not an individual.

Marc (0:19:26): You know, it reminds me of XKCD 1205, where, if I remember right, there was a chart that describes how long it takes to do a task versus how long you can afford to spend automating it. And using AI to automate something, or to take care of a tedious task, is one of my primary use cases. If I have an operational problem, I just need a script that's going to make this problem go away rather than doing a whole bunch of manual steps, or I just need a little bit of code that I can dump into Excel that's going to sort this stuff in a unique way and take care of these things. Those are some of the cases that I see. And I think the people not using AI are getting left behind, not even able to take care of these small things that could bring a big benefit to their overall productivity.
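
That XKCD trade-off reduces to simple arithmetic, sketched here in Python; the numbers are made up for illustration:

    # Automation pays off when the time invested is less than the time
    # saved over the period you'll keep doing the task.
    def worth_automating(minutes_per_run: float, runs_per_week: float,
                         horizon_weeks: float, automation_hours: float) -> bool:
        hours_saved = minutes_per_run * runs_per_week * horizon_weeks / 60
        return automation_hours < hours_saved

    # A 10-minute chore done every workday for a year: even 8 hours of
    # scripting pays for itself many times over (about 43 hours saved).
    print(worth_automating(10, 5, 52, 8))  # True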

Darren (0:20:18): Yeah. I saw an example kind of out in that field, but from the 3D modeling industry where basically senior modelers who weren't adopting AI were being left in the dust by junior modelers who were just using AI tools to increase their output. So I think we're seeing more examples of that and we're going to continue to see more.

Marc (0:20:38): Yep. And I'd like to see this third of you out there as well, kind of renewing your skillset in computer languages and technologies as well. And AI could help you probably do an awful lot of that. Okay. KubeCon EU. There's a call to action to innovate responsibly with Gen AI. How are we being irresponsible now, Darren? 

Darren (0:21:02): Well, there's the fact that most of these models are just built on scraping huge portions of the internet. 

Marc (0:21:09): Yeah. 

Darren (0:21:09): Which they do largely without permission, without any kind of attribution. And then you end up in situations where only huge companies can manage to wring some kind of deal out of them. We had one with the New York Times and OpenAI not too long ago, leaving the people who actually make most of the internet, individual people just doing what they do, in the dust, because there's no model for remuneration for them. So we're using it irresponsibly in that way, on the input side. We're also most likely irresponsible in using the output, in that we're not doing things like checking sources, checking accuracy. There's definitely no fact checking coming out of ChatGPT. So we end up with irresponsibility in both directions.

Marc (0:22:00): There's a few different things that come to mind. The open source aspects of the things that we're building with AI. The more decisions that are going to be made for us, the more we go forward as a civilization based upon digital technologies, including artificial intelligence. Today, we don't necessarily know that many of the tools we're using have AI models behind them. And of course, we're talking about doing some labeling to say that there's AI inside, which basically means that everything is going to carry some label, hopefully not as bad as the cookie notifications we have today. I'm from California, and everywhere you go, it says this area is known to contain chemicals, known to the State of California to cause cancer, birth defects and other reproductive harm. That was Prop 65, I think, the law that was put into force to try to inform the public. And what happened instead of informing people in an intelligent way is that every garage, gas station, spray can, and cosmetic has this label on it. I'm afraid that everything is just going to have this AI-inside label now, or point to one thing like a ChatGPT model or what have you, and it's not actually going to give us the transparency we need to understand what bias was in the data set that is making decisions about my life right now, or giving me advice, or any of the other things that AI is capable of today.

Darren (0:23:38): And that's one of the real challenges. We talk about opening up the black box of AI, but it's very difficult to determine from an AI why it reached a conclusion. So having visibility on the data set is one thing, but in order to interpret that data set, what we will need is AI. It won't be human-interpretable. So we'll be able to make some assumptions, which is really all the AI is doing. But there's also another, more sinister thing I think we need to discuss when it comes to acting responsibly with generative AI, and that's cybersecurity. Because right now it is so much more efficient and effective for people to generate phishing attacks using AI. And with the availability of open source models, or let's say freely available models like Llama, people are going to be able to generate more official-looking phishing. They're going to be able to replicate voices. I've already seen, in a couple of presentations, people generating their own voices saying things they haven't said, and it's believable and accurate. So generative AI is going to have huge implications for the cybersecurity world. And this is the lowest level, when it comes to language models. When we talk about automated tooling, that's when things start getting frightening from a security perspective. I don't know that we have examples of automated tooling yet, and we can see that because, when it comes to logging, attackers still respond at the speed of a human: you block an IP address, and the attacks stop while a person organizes a new IP address to attack from. However, if we start seeing this shift happening instantly, we might be seeing the dawn of automated attack tooling powered by AI.
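
For contrast, the defensive side of that speed race is already easy to automate. Here is a minimal, fail2ban-style sketch in Python; the log format is sshd's, the threshold is arbitrary, and the iptables call assumes a Linux host with root privileges:

    import re
    import subprocess
    from collections import Counter

    failures = Counter()
    THRESHOLD = 5  # failed attempts before we block

    def on_log_line(line: str) -> None:
        # Match sshd-style 'Failed password ... from <ip>' entries.
        m = re.search(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)", line)
        if not m:
            return
        ip = m.group(1)
        failures[ip] += 1
        if failures[ip] == THRESHOLD:
            # Drop all further traffic from the offending address, instantly.
            subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"])

Darren's point is what happens when the attacker's side moves at this speed too: the human-paced loop of noticing a block and finding a new address collapses to milliseconds.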

Marc (0:25:35): The individual level here is the first thing that comes to mind. More than 15 years ago, when social media was coming up, everybody was like, well, I don't have anything to hide, so why would I not give Mark Zuckerberg everything that I'm into, pictures of my children and all of these kinds of things? Now think about the power that AI brings to all of this data that we've basically given away for free, just so that we can look at cat pictures and find that person we had a crush on in high school. Think of the ability to tailor-make content that will directly push your buttons by combining all of the data about you that has been publicly available on all of the different social media platforms and every forum, plus everything that can be scraped from breaches, and sometimes even databases that have become public due to data breaches. Think how well-targeted phishing attacks are going to be for you. Your mother might call you on the phone and tell you that she's stuck somewhere, somewhere she very likely has been before, and needs you to immediately wire a few hundred dollars. And when you call her back, there she is: yes, yes, what do you want? I need the money now. I mean, it's astonishing to think of how bad this could potentially get.

Darren (0:26:53): And it's worth saying that these are nothing new. These are attacks that could always have happened.

Marc (0:26:58): Yes. 

Darren (0:26:58): All that's changing is the time to delivery. So we're not talking about years of preparation anymore. We're talking about minutes. 

Marc (0:27:05): Yep. And the scale that's possible at the moment, how many of these can go out, because there's a conversion rate to every type of scam, and the greater the volume, the more conversions you're going to get.

Darren (0:27:18): And it's going to be kind of an arms race between AI that generates things like that and AI that protects against them, building on things like public key infrastructure and ensuring the identity of people. It's going to be interesting, and I hope it doesn't make the internet unsafe or less safe for people, but we'll see how it goes.

Marc (0:27:40): All right. And we've got one that's a little bit clickbaity and I don't usually like using this kind of language at all. I like being a positive and forward kind of person, but I'm going to play the card. And what we saw on Reddit this month was why does DevOps leadership suck so much? 

Darren (0:28:00): Well, I mean, I kind of liked some of the answers on Reddit, but mostly because they made me laugh. I feel like leadership is actually not that common a trait, not that common a skill. I feel like people who can lead are actually in a minority compared to the people who actually try to lead. So it doesn't surprise me that people would have poor experiences with management wherever they are, because I feel like chances are, if you have a manager, they are doing their best, but they may not have had formal training. They may not have a leadership mindset. So it doesn't surprise me that people are having issues with DevOps leadership. I don't know if I would go so far as to say DevOps leadership sucks. I work with some extremely knowledgeable people who I think show tremendous leadership in this area. 

Marc (0:28:54): I agree. I don't think DevOps leadership sucks at all. I think that there are companies, and management within companies, that have seen a lot of difficulty. I think it was Dave Snowden who talks about how the reasons for your success in the past may very well be the reasons for your failures in the future, or the things that have worked for you are the things that are actually going to hold you back. Companies have built up a business. They've invested an enormous amount in what they view as IT. IT has been perceived as a cost center in a lot of companies, like banks: they're actually building products for their customers and trying to differentiate, but you end up with this IT-driven organization that's supposed to be building products. And the management of these areas just doesn't really understand how to transform into a product-driven company. They oftentimes need to look outside and find the right types of partners in order to do these types of transformations. There's also one thing I see, and it's actually the top comment here. It's like: let's put a bunch of high-functioning, low-empathy, autistic-spectrum folks in an environment that's constantly burning, and then tell them that it's solvable, or pressure them or make them feel that it's their fault, something like this. While I don't agree with labeling people in these ways, when you have people with very strong talents in an area, if the environment is not supporting the growth and use of those talents, then they're not going to do well there. They're either going to suffer a great deal or they're going to leave. And I see a lot of companies that have been faced with this. It's like: why are we not able to hire? Why are we losing folks? And why are the people we have not performing well? And it's like: well, because you need a more open and DevOps-driven mindset to be able to drive your company forward. We've learned a lot in DevOps in the last, what is it, 15 years that this has been a thing, and we're able to lead a lot of companies forward. I think it's really, really important to understand that, as you said earlier, Darren, many companies are facing similar problems. There are well-known solutions for many of these, and the return on investment is there. We just need to help others understand that there is a path forward.

Darren (0:31:18): I agree with all that. And I know this was a question you didn't really want to answer. So thanks for letting me put it in. The clickbait always makes me laugh. So I figured it was good to end on something entertaining. 

Marc (0:31:29): Yeah, absolutely. Okay. So Darren, edge computing requiring DevOps at scale.

Darren (0:31:35): Yes. And it's only going to increase as more edge requirements become visible. 

Marc (0:31:41): So StarCoder2, LLMs designed specifically to generate code are arriving.

Darren (0:31:46): Yep. We're going to see more and more of them and people should be leveraging them because they are extremely useful. 

Marc (0:31:52): HashiCorp versus OpenTofu? 

Darren (0:31:54): Well, I tend to support the open source projects, even if I complain about open source at times not being usable. So yeah, it's a bit of a difficult situation. 

Marc (0:32:06): CTO2B seeing a need to break free from the cycle of solving the same infrastructure problems for every new company. 

Darren (0:32:13): I have nothing but congratulations for CTO2B because this is a phenomenal goal and a phenomenal start. And I hope we see more from them. 

Marc (0:32:22): Docker survey on widespread reliance on Gen AI? 

Darren (0:32:25): Two thirds of respondents being active in AI seems like observation bias to me. There's a part of me that thinks it might be higher. 

Marc (0:32:34): I sure hope so. So KubeCon keynote takeaways, call to action to innovate responsibly with Gen AI. 

Darren (0:32:42): There is so much work to do with responsibility and generative AI, especially given the effects on privacy and psychological safety. And everyone should be reading up on what they can do towards generating responsibly. 

Marc (0:32:59): All right. And I'm going to phrase this a little bit differently. Darren, does DevOps leadership suck? 

Darren (0:33:04): No. At times, leadership can be difficult and everyone's doing their best. But DevOps is quite a high pressure environment. 

Marc (0:33:13): Yeah. All right. Thank you once again, Darren. That has been the news for April from the DevOps Sauna.

Darren (0:33:20): Thanks, Marc. Always a pleasure. 

Marc (0:33:25): We'll now tell you a little bit about who we are. Hi, I'm Marc Dillon, lead consultant at Eficode in the advisory and coaching team. And I specialize in enterprise transformations. 

Darren (0:33:37): Hey, I'm Darren Richardson, security architect at Eficode. And I work to ensure the security of our managed services offerings. 

Marc (0:33:44): If you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.