Amanda Brock, OpenUK CEO, is back in the sauna with Marc and Darren discussing the impact of AI-generated code on open source software. Licensing, ownership, regulations, challenges with Generative AI, security in private sectors, job displacement worries—it's all in this episode! Join in the conversation at The DEVOPS Conference in Copenhagen and Stockholm and experience a fantastic group of speakers.

Amanda (00:06): Who owns it? Is it the creator of the AI or the framer of the question? And is liability going to be the same? So if you own the copyright, are you liable? 

Marc (00:21): Welcome to DevOps Sauna Season 4, the podcast where technology meets culture and security is the bridge that connects them. This is the DevOps Conference Global post-game podcast. I'm back here with my dear colleague, Darren Richardson. Hi, Darren. Are you recovered from the conference? 

Darren (00:48): Afternoon, Marc. Yeah, I'm starting to get there. It's always a bit of a long recovery time. 

Marc (00:53): It's amazing how much work we put into getting one of these conferences ready: getting everything set up, getting our fantastic guests ready for the stage. And then when it's actually happening, it's like, oh my gosh, you just have to hold on and enjoy it while it lasts. And the next thing you know, we're post-game. So one of my favorite keynotes from the conference was from a regular guest on the DevOps Sauna podcast. I have here Amanda Brock, the CEO of OpenUK. Hello, Amanda. 

Amanda (01:24): Hello both. Great to be with you again. 

Marc (01:27): We always enjoy talking with you and you gave a fantastic keynote at the conference and it had a little bit of an interesting title. So will opening AI destroy open source software? Would you like to give us a little bit of an angle on what you talked about and what you're up to? 

Amanda (01:46): Absolutely. I think there's sort of two or three main themes that ran through it. And the first was looking at the impact that using AI to create code has on open source. And we've seen a great deal of discussion around the licensing and whether licenses carry through and who owns copyright. And there's a piece there that definitely will come from our regulators, our governments, our lawmakers. And then there was a second piece that was more focused on the issue of being a maintainer and receiving contributed code that has been created by AI, and how already overburdened maintainers are going to manage that. And then I think the third theme, which is probably the biggest one, is that suddenly everybody wants to talk about the words open source. They don't always add software to the end of it, but in the context of AI and what that means, lots of people want to be relevant in that conversation, at least in the experience I've had over the last year, which means that there are many actions going on and many conversations going on around the merits or problems of AI openness. And it will be very interesting for us all to see where that goes in the next period of time, and that period of time is days, weeks, months, and probably a year or maybe slightly more, but it's certainly not years where we see the impact of it. 

Marc (03:08): Things are moving so fast and open source has been around for quite a while. And I'm so happy that you reminded our audience of what open source really means. Would you like to elegantly put that as you usually do? 

Amanda (03:22): Elegantly. Let's see if I can be elegant. So open source is a lot of different things depending on who you are and how you've come to it. But at its heart, there is the legal and licensing requirement, which is that the software is not only made open, but it's distributed on a license which complies with the open source definition. And I'm sure you all know that the open source definition is in the custodianship of the Open Source Initiative. And the easiest way to make sure that it complies is to check whether or not the OSI has approved the license. And if they've approved it, then you have the rubber stamp that it complies with the open source definition. And there are, I think off the top of my head, around 80 of those licenses, but half a dozen that we see constantly used. And that's the basic definition of open source software. The reality is if you just stick code on GitHub with an OSI-approved license, you are not really creating open source. You're not really getting the value out of it. And the value comes from the ecosystem and that's the contributors, the collaboration, making sure that you've got code that's in good shape, that's documented, et cetera. So it's a much bigger thing than just that legal definition. 
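
To make the licensing point concrete, here is a minimal Python sketch of the check Amanda describes: is a project's declared license one the OSI has approved? The hard-coded set below is only an illustrative subset of the commonly used approved licenses, an assumption made for the example; the authoritative list is maintained by the Open Source Initiative.

```python
# Minimal sketch: check a declared SPDX license identifier against a small,
# illustrative subset of OSI-approved licenses. The authoritative list lives
# with the Open Source Initiative (opensource.org/licenses); this set is not it.
COMMON_OSI_APPROVED = {
    "Apache-2.0",
    "MIT",
    "BSD-3-Clause",
    "GPL-2.0-only",
    "GPL-3.0-only",
    "LGPL-3.0-only",
    "MPL-2.0",
}


def is_probably_osi_approved(spdx_id: str) -> bool:
    """Return True if the SPDX identifier is in our illustrative OSI-approved subset."""
    return spdx_id in COMMON_OSI_APPROVED


if __name__ == "__main__":
    print(is_probably_osi_approved("Apache-2.0"))  # True: OSI-approved
    print(is_probably_osi_approved("SSPL-1.0"))    # False: SSPL was not approved by the OSI
```

The point is the same one made above: a license with added restrictions may look similar, but it fails the OSI-approval test and so the software is not open source in the legal sense.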

Marc (04:33): All right.

Darren (04:33): And there's actually something kind of interesting here to me, because you talked on stage about how people are starting to use the word open out of context with open source. Do you think they're trying to generate a kind of fake feeling of open source, of openness and transparency? 

Amanda (04:53): Yeah. And I think there are a few different things happening there. I think there are people who are being disingenuous and I think there are people who do not know what they mean. And I think there's a good mixture of the two. And unfortunately the term open source has become used in all sorts of different ways. So when we talk about open source software, we mean what I've just defined. And for OpenUK, we've tried to look at the broad range of opens and called it OpenTech. Now it could be just the opens, you know, you could call it whatever you want to call it. But increasingly we see people talking about open source, meaning that whole space, and that causes huge confusion. And then we see people who don't really know what either means. So they're actually implying it's open source software, it has all the value of open source software. Yeah, it doesn't. And there are these sort of shades developing, or that have developed over the last few years, not just in the AI context. We've seen a lot around cloud, and we've seen open source companies, you know, we've seen Elastic, we've seen HashiCorp, and Redis last week, moving away from proper open source licensing to something similar. So it's not just an AI issue. And the easiest thing in the world is to say, Amanda, stop worrying about the definitions. Amanda, this doesn't really matter. But it does. And the reason it does is that at its heart, open source software allows an unrestricted free flow. So within the 10 points of the definition, at points five and six, it means anyone can use it for any purpose. Now that lets everyone rely on the fact that they don't have to go and check that they're not on a list of excluded people or categories, or using it for something they shouldn't. And that's at the heart of the free flow of open source software and its success. And what we see is restrictions that are often around commercialization. Sometimes they're ethical, you know, sometimes people feel they have a really good moral reason for it, but whatever the justification, any restriction like that stops the software being open source. So if we look at AI, last year we saw Meta take quite a brave step and open up Llama 2. And they did it on something called the Llama Community License. And that has an acceptable use policy and it has a commercial restriction at 700 million monthly active users. Now that's important because that's different from an open source license. And actually OpenUK, I think we were the only open organization to support that launch. And it's not a decision, as you probably heard me say, that I made on my own. My whole board agreed that we should support it, and we supported it because we felt that as an organization supporting all the opens, it was a positive step in the right direction. I think a week or two ago, we saw Elon Musk open up Grok on an Apache 2.0, OSI-approved, proper open source license without those restrictions. And it's a step further. But if Musk wanted to make a point without having had that happen with Llama, I'm not sure that he would have gone as far as he did. So I do think that it has been steps in the right direction to get to that point where we see Grok, and also Falcon in the Middle East, opened up properly on OSI-approved licenses. 

Darren (08:07): So you feel like without the initial drop from Meta, they would have been less willing to kind of open up so fully. 

Amanda (08:17): Yeah. It's boundaries, right? And when you push a boundary, you can't really go back from it. And I feel like they pushed the first boundary, so that if Elon then wanted to push a boundary, he had to go the whole hog. There was no middle ground where you could be disingenuous or confused or whatever you want to call it. And with Meta, just to be clear, in the run-up to launch, and if you look at their website, on the Llama 2 website, which has the OpenUK logo on it, it's very clear that the partners have all signed up to support open innovation. It was never described as open source. It was only, I think, Mark Zuckerberg's Facebook post to launch it, and then, you know, communication since then with him and Yann LeCun, the product director, I can't remember his exact title, but, you know, they talk about it being open source, which is different. And again, I don't know whether that was actually intentional or whether it's just that open source is the word or the phrase the market expects to hear. They won't know what I mean if I say open innovation. I think it really comes to matter when we start to look at how we regulate. 

Marc (09:21): Well, words matter. And I think that has to have been intentional. How could it not have been intentional? 

Amanda (09:26): Yeah, I'm not going to say because I honestly don't know. I think sometimes you'd be surprised by people's lack of understanding of things, but I'm not making excuses for it. You know, the community was very annoyed and felt it was open washing. It wasn't on the terms of what had been carefully crafted prior to launch, you know, that I can confirm. 

Darren (09:46): Yeah, I think you actually bring up this thing about definitions, which is useful. I was actually talking with a lawyer in tech the other day who was saying how difficult it is to engage the extremely technical nerds in things like definitions, the drafting of these acts, and actually pushing them forward. So I do think there's some requirement for strong definitions there. And it all comes back to this idea of trying to use the word open to cover things that it's perhaps not. But, you know, I guess you feel somewhat vindicated in your choice of backing Meta. 

Amanda (10:24): Yeah, no, totally. I've always felt it was the right thing to do. Even when we were being given a hard time about it, I've always felt that going in that direction was a major step forward. Because you'd seen Llama 1 last March. It was released for research only, I think it was in February, and then by March it had been leaked. Now, Zuckerberg was hauled over the coals by various governments about how that could happen. No idea how it happened. But what you saw between that sort of March leak and May was advances in AI technology that hadn't been matched by the corporates doing this alone. And it really showed the value of an open collaborative community around innovation. But it also, for me, shows, I suppose, that we need to learn from history, right? You know, I'm aged; I've been around tech since the mid-to-late 90s, when I worked in internet stuff and the dotcom boom. And there are lots of things, if we had a crystal ball and understood how that was going to play out, we wouldn't have wanted to happen or allowed to happen. We have the benefit now of history and hindsight. And I think it's super important to not have the eight or so companies who've got the money, the staffing and the compute power to do this innovation become guarded by a moat, effectively. You know, that leaked Google memo, we have no moat, meaning that the IP wouldn't protect them enough and enable enough revenue to match the innovation. Now, we've got to understand that there's a cost to that innovation and that they've invested it and they're going to have to make some money somewhere along the way. And business models and openness, we all know it's tough, it's the Holy Grail fixing that. But the fact that you have so much more innovation, and that you need to democratize this next technology that's going to be such a big part of our future, really has to be high up the agenda. You know, we have to understand that none of us want that to end up in a few companies' or a few individuals' hands and that we do need to democratize it. Now, that comes with a different set of risks. And I'm always banging on about the fact the UK is too risk averse, and that risk shouldn't be a bad thing so long as it's managed, and that we just need to understand what the components are and make an assessment based on our risk tolerance of what is acceptable. And I think that, with AI, that has to be done on a progressive basis. We understand the AI of today, we might have an idea of where it's going in the next month, six weeks. We don't know where it's going to be in a year. We genuinely don't. So, we need to sort of plan so that we have some mitigation around extreme risk. None of us want HAL taking over the environment that we live in and making our decisions. On the other hand, that's a long way off in real terms. And what we need to do is be looking at the actuality and hard fact and technical understanding. And I'm interested there in what you're saying about the real nerds not wanting to be part of the discussion. There needs to be sort of stepping stones around that, I think, with people who can engage that community and be translators, which is kind of what I view myself as. But then they need to represent the breadth of that community. 
And one of the problems that we're seeing, not just in AI, but with things like the Cyber Resilience Act, is a lack of representation of SMEs, of developers, folk who really understand A, policy, and B, the issues, who can be a sensible voice into government lawmakers, regulators on what's going on and represent that community. Because most of the people with the skills and the understanding are working in big companies or the foundations, I guess. They have some of them. But what they represent is primarily their own interests. 

Marc (14:10): Do SMEs even have a chance to maintain certification, or maybe compliance is the right word, to maintain compliance and competitiveness in the landscape that we have ahead of us? 

Amanda (14:23): I'm really concerned that they won't. And I'm really concerned they won't because of things like the Cyber Resilience Act. In a way, sitting here in the UK, I see it as a moment of opportunity, where Europe has always been considered to be a leader from a regulation perspective. And I'm not sure that what the Commission is producing, unless there's something I don't understand, which there could well be, is driving forward that reputation, or even the ability to build that digital future that they want. I think they're accidentally closing things down. And I think they're closing down innovation. So if you're an individual who wants to create something, or build a small business, or build a big business through stepping stones, or who just wants to build something and then naturally wants to be able to earn enough from what they've built to eat and do the normal things in life, the fact that you want to earn around your software, that commercialization, whether it's a royalty in a proprietary context or selling services or subscriptions or whatever it is as an open source business, is going to get you captured by European regulation. And that regulation is going to be really hard to comply with. And I think that the phrase that the governments use is regulatory capture, where they capture the market through regulation because it's too hard for small businesses or individuals to comply with. And it looks like that's where Europe's going at a pace. 

Darren (15:54): Yeah, I would agree with that, especially with the dawn of the AI Act that they just put into place. I mean, we look at AI and a lot of us will just equate it with OpenAI and ChatGPT, but most of the AI tools out there are run by small and medium-sized companies who, now that they have to deal with this EU AI Act, may end up just being crushed under the weight of the new compliance required. Because if you're running a team of 20 people, you can't afford three of them working full time on compliance with a standard. So despite the EU being quite aggressively anti-monopoly, I feel like they are creating the conditions for a market monopoly in AI because of this act. And similarly, they might be running into the same thing with something I think we could discuss more at this point, which is the Cyber Resilience Act. 

Amanda (16:45): Yeah, I sort of share your sentiments on this. I think they've tried very hard to accommodate open source within what they're doing, but they've driven it at an aggressive pace without full understanding, for some reason. And there's a sort of shift going on. And that shift seems to be in the way software is categorized legally: they're recategorizing it as a product or a good, so like a tangible thing, where it has always legally been categorized as a service. And what that means is that this sort of certification-type model, where you put something on the market and it has to be compliant with your kind of regulation, seems to be coming through. And the way that they're dealing with the implementation of the legislation is through standards. And for the Cyber Resilience Act, we're expecting 44 standards. Now, the open source communities will not have a voice in that process. And the pushback to what I've just said is that you can go along as a member of the open source community to a standards meeting as an observer. And what that means is you don't have a vote. You can ask questions, I believe, but you've got to get yourself there. These meetings are frequent, they're meetings that you have to travel to in person really to have any impact. And you would have to be a member of the standards process and body to be able to have a vote. And even if we got one or two organizations representing open source along, you are still looking at a handful at most of organizations versus all the enterprise players in the room. And to me, it's regulation by the back door, where big companies with the resources to sit in those standards meetings are going to be able to heavily influence it in their favor. They may also be able to influence it in terms of their patents applying, but it's not going to be good for small companies. It's not going to be good for individual innovators. And ultimately, I think it's bad for the whole tech ecosystem, including the big tech companies. Because if you think about it, most of their innovation, or a lot of it, comes from acquisition. And if you're not able to grow small businesses to a point for fear of being captured by regulation, then you will not have as much. You may still have some, but you won't have as much innovation. And I think it's bad for Europe, because if I'm sitting here in the UK and our regulation is much more liberal, and the UK government keeps saying again and again they want to be pro innovation, which is why they're moving slowly on regulation, then you would do something here. And when you release it on GitHub, it's available in theory anywhere in the world. Well, I would just block it going into Europe so that I could continue to innovate. And potentially that is very bad for the European Union. So the most practical, simple solution to me is that a 404 error message just comes up when you try to download it, the same way as when you deal with export control. If you're not allowed to download it in a country, that's how you manage it. So it's quite worrying. And honestly, I cannot believe that it's what they're trying to achieve. 
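
On the practical point about answering downloads from a given jurisdiction with a 404, the sketch below shows one way that could look, in Python. It is only an illustration under stated assumptions: an upstream proxy or CDN is assumed to do the GeoIP lookup and pass an ISO country code in a header (the hypothetical X-Country-Code), and the blocked-country set is a placeholder, not a policy recommendation.

```python
# A minimal sketch of jurisdiction-based blocking for downloads, in the spirit of
# export-control handling: a request from a blocked region gets a 404, as if the
# artifact simply did not exist.
#
# Assumption: an upstream proxy/CDN performs the GeoIP lookup and passes the
# ISO 3166-1 alpha-2 country code in the hypothetical "X-Country-Code" header.
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_COUNTRIES = {"EX"}  # placeholder country codes, not a real policy


class DownloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        country = self.headers.get("X-Country-Code", "").upper()
        if country in BLOCKED_COUNTRIES:
            # Indistinguishable from "no such file": the artifact is not served.
            self.send_error(404, "Not Found")
            return
        body = b"pretend this is the release artifact\n"
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), DownloadHandler).serve_forever()
```

The design choice mirrors the export-control handling mentioned above: the blocked requester simply sees the file as not existing, rather than receiving an explicit refusal.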

Marc (19:52): You talked a lot about the standards meetings and standards bodies. And I understand some of this from my past, but what can the average community member do? 

Amanda (20:02): I really don't know. 

Marc (20:04): There's a lot of us out there. 

Amanda (20:06): Go home. No, I don't know. There has to be something, and there has to be some way to pull people together. And I think that's what everybody's working on. I was on a call a couple of weeks ago with somebody from the Commission, and he was basically saying you need representation to go along and be an observer. Well, to me, that doesn't feel like enough. And the same week, there was a court case against CEN-CENELEC. And if you read their newsletter, it sounds like they won, but they didn't. And there was a decision made that you couldn't be charged for access to CEN-CENELEC standards. Now, historically, you've had to pay just to get the standard information so you could comply. So that's a step forward. I expect there'll be more litigation around all of this. But it does feel strange that regulation, that law, is being made through standards, and the standards come back to being around enterprise engagement and funding, as opposed to the lawmakers who make decisions about the lobbyists. So I don't know. I think we'll see a lot more engagement across groups of people, and potentially in the software space I think there might be room for the open source organizations to collaborate more with the representatives of other SMEs, because we're all going to have the same issues, and to try and create some sort of force in numbers. You see, if you look at the Eclipse Foundation, they've employed a very good guy, actually, on standards. And so, not on standards, sorry, on policy, who ultimately will deal with standards too, I guess. But I expect we'll see more of that from the foundations. But I'm not sure that the foundations fundamentally have a representative role for the individual developers. They're not a union, they're not a guild that represents everybody. And the Cyber Resilience Act and the pushback last year really showed that. So if you look at the press, you would think there have been massive steps forward in the last 12 months. And there have been, and it's not as bad as it was, but it's still bad. And the foundations which were going to be captured as commercial entities appear now to be let off the hook as stewards. But this steward thing hasn't been defined yet, and we don't know if there's any way for companies to do it. So you may find that a lot more stewards are going to appear, because the model of putting your code into a steward who will not be liable, while you commercialize it by earning enough to eat yourself, may become more common. Now, if we look at the foundation market that's out there for open source just now, there are a few foundations. The Linux Foundation is obviously by far the biggest. Then we've got Eclipse, Apache, and a bunch of others. But they can't take everybody's code. They can only take a certain amount. And they generally only take projects that are either critical to the ecosystem or that are funded by collaboration between multiple companies. So you buy your way in or have something essential. Now, if you're evolving something, A, it's not essential at that stage. But B, you're not going to have that financial buy-in from multiple companies. So where are you going to put it? And that, I think, is something where one solution will be more foundations being set up. And I've been telling people for years, please don't set up more foundations. But actually, that might become something that we have to do to protect the communities and to enable them to find a way to still commercialize but have their code held somewhere safe. 
And until we actually know what the definition of a steward is going to be and what the steward has to do, we don't know quite how that's going to work out. And not knowing isn't great for planning and for people's business models. 

Marc (23:45): Interesting place that we've come to. I'd like to go back. We had a few questions from the conference that we didn't have a chance to go through in a Q&A with you, Amanda. I think a few of these are quite important. And one of them you just touched on, which reminded me of this question, which is: who should enforce control if a developer creates an open-source project which, in the wrong hands, can be destructive to specific companies or other entities? So rather than the damage falling on yourself as an open-source contributor, what happens if someone does something malicious or tries to be destructive towards other companies? 

Amanda (24:23): Yeah, so I guess we fall back to regulation and law, right? And something that isn't always understood is that the distribution of open-source on a license, which is your choice of license, is required because of copyright, right? So we put a license on to enable others to be able to use it and to share, because we want them to use it and to share. But whatever we say in our licenses, that's still subject to law. And if the law says you can or can't do something, you can or can't do it. Now, we've always sort of relied on a disclaimer saying we supply this as is and you, the user, are responsible for how you use it, which pushes the liability and the risk onto the user. And to me, that makes sense, right? So the user is the one who has to comply with law. So, say I choose to use Linux on my server in the financial services sector. I need to ensure that how I use it works with the financial services regulation. If I used it in a health context or for a medical device, I would need to ensure that it met the regulation of that. But if I used it in a mainstream business, I'm much less regulated. So I don't have to worry in quite the same way. And that's because I'm an informed user in a business context making that decision. And it seems right to me that that's where liability sits. But now what's shifting, and I think it's an attempt to catch the big companies, is that the liability will sit with the commercialiser. So whoever is distributing and making something financial out of it, whether it's a smaller or larger amount, there's no discrimination there on the level. And I think who then enforces, and who makes sure that the behaviour is acceptable in the eyes of the law, is the regulator or the court system in whatever country you're looking at. The other way round, when it comes to enforcing your copyright and your rights as the creator of something, we then rely on foundations and organisations, which I suppose I haven't mentioned already, like the Software Freedom Conservancy, who have brought litigation against Vizio recently and who represent the individual copyright holders who have created the code. So I think ultimately it comes back to: law trumps licensing. And that's why what's happening in Europe is so concerning: the point of risk and liability shifts from the end user to the commercialiser. 

Darren (26:47): Thank you for that. We also have a second question, which is regarding generative AI. It states that these generative AI engines like ChatGPT and Copilot are trained on what's available, which also includes open source software. But often, because of the nature of AI, they don't tell the users how the data was obtained or how it was processed. And the output doesn't mention that it's AI generated or that it's gathered from open source data. So do these engines potentially put users in non-compliance with open source license terms? 

Amanda (27:21): Indeed, they do potentially. And there are a number of different court cases ongoing around this, one on Copilot. And I think we will see some outputs in the not too distant future around whether or not copyright exists in the outputs of AI. And if it does, who owns it? Is it the creator of the AI or the framer of the question? And is liability going to be the same? So if you own the copyright, are you liable? And I suspect that that might not be the case, because I suspect that whoever is going to be liable is whoever chose the data on which it was trained. So it will come down to a very complex 'it depends' answer, where if you have framed a question and you've chosen the data that the tool gives you the answer based on, then you're likely to own the copyright or not, depending on the law, and be liable or not, depending on the law. But if you use pre-trained AI and you frame the question, I suspect that you will or won't own the copyright depending on the law, but that you won't be liable, because the trainer and the person or organization who selected the data are likely to be liable. And if that hasn't already confused you, it may not be the same in every country. So this is going to be a minefield, and it sort of brings me back to why I personally have avoided AI for as long as I possibly could and sort of encouraged others to do the same. A, because you have to know what you're doing and very few do, but B, I don't think it works to create regulation that isn't global. We sort of need a UN of AI which can create cross-border regulation: laws, principles, codes of conduct, whatever it needs to be, that also spans geopolitics. Because if you set it up for one area, whether it's the whole of the EU or the whole of the US, even big areas, you're always going to have this friction, and it's something that's released globally. And if you can do something with it in one country that you can't in another, if one country says HAL can release a nuclear weapon but in another it can't, then it gets a bit confusing, right? And it's not going to actually have the effect you want it to have in your jurisdiction by creating restrictions. And I've always been hopeful that if we can make this happen in AI, the next stage would actually be that governments would start to realize that a lot of regulation around software, and in particular around open source, because it's built collaboratively and across borders and across geopolitics, might also need to follow the same vein. And when we go back to what we were talking about before around treating software as a service as opposed to a good, that's been done historically because it's got a different nature. But if we think about how we collaborate to build it, we don't do that with physical, tangible things, because they have an output in one place, whereas the inputs and outputs of software really are global now, and AI is much the same. 

Darren (30:17): Thank you. We have a question that's particularly close to me, talking about security vulnerabilities with regards to open source. One of the core concerns is that resolving security vulnerabilities lacks a framework, ownership, and prioritization in open source projects. So do you think, in sectors like government, financial, and medical, that the use of open source is responsible and scalable? 

Amanda (30:44): I think it is, but there's a caveat, and this goes back to where the liability should sit. One of the reasons I think that liability should sit with the end user is that they ought to know what it is they're using and how it's being used. And if they don't know, they ought to be either employing a third party and paying them to understand it and do it for them, or skilling up internally. And that to me is something called the curation of open source: good technical processes and hygiene and good governance. And to get that, you really need to know what you're doing. You wouldn't want anybody in that kind of environment to be using software that they didn't understand, or to have infrastructure that didn't make sense to them. And I think that this is something that's been problematic, because the scale of adoption and the pace of adoption over the last decade, but particularly the last three to five years, has just outpaced the scale of understanding. And it's a sort of few-to-many issue, where there are a few of us who understand the bits that we do in the open source ecosystem versus the many users, and how we get that data and good practice to those people at scale is something that's really been worked on. And I suppose things like KubeCon last week, the 12,000 people in Paris, start to show you the scale of engagement and people trying to do that learning and that understanding, and the value of the expertise and the salaries that some people command. But I think it does work. And I think what we see is an ecosystem that responds. Look at foundations and programs like the OpenSSF, projects like Alpha-Omega, and the amount of money that has started to be put into this ecosystem to ensure that we have the right reaction to vulnerabilities. And I would almost argue that open source vulnerabilities ought to be easier to deal with, and better dealt with, than proprietary ones, because you have that scale of response, with the old cliche, the many eyes. What is it? Many eyes make bugs shallow. I think it's attributed to Linus. I don't know if it was Linus that said it or not, but that effect should still cascade, and the transparency should effectively enable trust, particularly when you see ecosystems shift, or sectors or verticals in industry shift as a whole, like the finance sector is. Healthcare is one that's going a little bit more slowly, but we're seeing it happen. Mobile also, very regulated sectors. Automotive is already there, having moved over to this collaborative model and open source. So I don't think that it's a problem, beyond the fact that security is a problem with all software. 
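
As a concrete illustration of the "know what you're using" hygiene described above, the sketch below checks a single dependency against a public vulnerability database. The OSV.dev query API is used here only as one example of the tooling this ecosystem (the OpenSSF and others) has built up; the package name and version are illustrative, not a recommendation.

```python
# Minimal sketch of dependency hygiene: ask a public vulnerability database
# (OSV.dev) whether known advisories exist for one package at one version.
# Assumption: the package/version below are purely illustrative examples.
import json
import urllib.request


def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the list of OSV advisories recorded for the given package version."""
    payload = json.dumps(
        {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    ).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])


if __name__ == "__main__":
    # An older release is used only to show what a non-empty result looks like.
    for vuln in known_vulnerabilities("requests", "2.25.0"):
        print(vuln["id"], vuln.get("summary", ""))
```

In practice this kind of check would run across a full dependency inventory in CI rather than one package at a time, which is the "curation" point: you can only react to vulnerabilities in software you know you are running.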

Marc (33:18): You just reminded me, Linus's father, Nils, is in the EU Parliament. I wonder if we have an ally. Yeah, he was at some point. Someone asked him how his son is and he said, which one? 

Amanda (33:31): I once met, this does not equate, but I once met Boris Johnson's dad at a party and he was pretty much the same talking about my boys, you know? So yeah, I'm not equating Linus to Boris. 

Marc (33:47): No, no. 

Amanda (33:48): I'm not equating Linus to- 

Marc (33:50): Or Nils to Boris' dad. 

Amanda (33:54): Exactly, exactly. 

Marc (33:55): All right, we've got one more for you. Just interested in your opinion here, since your keynote at the DevOps Conference Global was, will opening AI destroy open source software? And the question from one of our listeners is, perhaps the question is wider, will AI kill all software? 

Amanda (34:14): So I think all three themes that I started with come back to this lack of understanding of open source that is sort of run through our discussion on legislation, on using it in different sectors, on dealing with vulnerabilities, you know, there's that lack of understanding. I don't think software generally has the same lack of understanding because it doesn't have the same lack of interaction and exposure. So I think that question isn't framed in the same way. My concern about open source really comes down to understanding and managing risk and managing its future in an appropriate way and legislators and others not understanding it. When we talk about software more generally, I think the question, although it's framed in the same way, I think the question is actually fundamentally hugely different. And what we're asking here is, is AI going to take my job? And we could say the same for anything where you have people creating outputs that AI can chew up. I think I used my favorite analogy for AI, which is the mincemeat machine in Pink Floyd's Another Brick in the Wall. And I think of the machine as the AI. And then I think of what goes into it, preferably not humans, but what goes into it as meat that comes out as mince or filet mignon. And whether it ends up being a fancy dish or a burger depends on the quality of what goes in, right? And that's what we're worried about. We're worried about any job that has an input and an output, whether that's a lawyer with a contract that, you know, data goes in and a contract comes out, whether it's an author with the words going in and a storyline going in and a book coming out, or whether it's software. And I honestly don't know. And I don't think any of us know. I think the concern is, for me at least, discernment. And whenever I try and use any form of GPT or equivalent to get myself an easy route to having a press release or an easy route to an article, I'm always put off because within the first paragraph, there'll be what I know are factual errors. Now, I can only identify that because I've spent 30 years building up know-how and knowledge. And to do that, you have to start by doing what are frankly the shit jobs. And you have to learn somewhere to be able to build your expertise and your experience. And if you don't do those low-level jobs, you know, there's a benefit of maybe not having to do so many of those low-level jobs or for such a long time, but you have to do some of them to build. And I'm sure we can have some flex and we can build differently, but I can't see how we can exclude all of that in our process of learning. And without that process of learning and gradual build of understanding, I don't see how we evolve expertise. And maybe that's my lack of vision. But if you don't have some expertise, how are you going to be able to exercise discernment and know when the AI is right and when it's wrong? And I think that applies not just to code, but to contracts, to books, to anything that AI can generate for us. And I suppose a lot of it is like white-collar jobs, right? You know, you're not worried so much. Maybe you are, but I don't think we're quite so much worried about whether AI is going to take a cleaner's job as whether it's going to take a software engineer or a lawyer's job. And it's a sort of automation issue that people in factories have had to deal with for years. So there is a rationalization, which I don't necessarily think is bad for society. And there may be a shift where we just do less of it. 
So we work less days, which wouldn't be a bad thing, right? So long as we still earn enough to keep ourselves. But I'm not sure that the level of work any of us does really will matter if we can't exercise that discernment in our activities and in our choices around the AI. It's the best I can do.

Marc (37:57): It's pretty good. And it's from someone with a great deal of experience in this area. I have exactly the same fears. Like when I was talking with Andy over the weekend, a good friend of ours: you know, the AI is good for the seniors who know what they're looking for and how to, for example, code. And for the juniors, it can be a really scary thing. 

Amanda (38:19): And that goes back to, you know, is it just going to become overwhelming for maintainers? Because so many people are going to contribute AI generated code because they've done it to be lazy or they've done it to save time or to be efficient, you know, whether it's a bad or a good reason to use it. They're creating code that they don't necessarily understand or that's beyond their understanding or even it's within their understanding and they just don't check it and contribute that. And is that going to just destroy the whole ecosystem? You know, lots to worry about. I'm sure we've got something happier to end with. 

Marc (38:51): Well…

Amanda (38:51): No? 

Marc (38:53): I think-- 

Amanda (38:54): You've made me the voice of doom and gloom. I don't want to be that. 

Marc (38:57): I just opened the channel and... Amanda-- 

Amanda (39:02): Let me go. 

Marc (39:03): And let you go. Amanda, I see a lot of positive here. I see that we have the greatest tools ever available to humankind to make a better world. And there are people like you out there. I wish there was more we could do as a community. And I think that the most important thing is to be informed and to be active. And you make me want to be more informed and more active. So that to me is a really positive thing. And I think it's really positive for our listeners as well.

 Amanda (39:32): Thank you. Thank you for that. I think you're absolutely right. That being informed is something that really matters. And if we can make sure that more and more people are informed, then we're sort of doing our jobs. 

Marc (39:42): Absolutely. Amanda Brock, thank you for coming to the sauna with us today. 

Amanda (39:46): Thank you for having me. 

Marc (39:48): And Darren, thank you so much for the gregarious conversation. 

Darren (39:52): It was a pleasure as always, Marc. 

Marc (39:53): All right. We will see you next time in the sauna. Thank you. Goodbye. We'll now give our guest an opportunity to introduce herself, and tell you a little bit about who we are. 

Amanda (40:07): Hi, I'm Amanda Brock. I'm the CEO of OpenUK. We are the organization in the UK for the business of open technology. I am a former lawyer and got into open source through Canonical. 

Marc (40:21): Hi, I'm Marc Dillon, lead consultant at Eficode in the advisory and coaching team. And I specialize in enterprise transformations. 

Darren (40:29): Hey, I'm Darren Richardson, security architect at Eficode. And I work to ensure the security of our managed services offerings. 

Marc (40:36): If you like what you hear, please like, rate, and subscribe on your favorite podcast platform. It means the world to us.