Will AI destroy open source because it will end up in the hands of a few companies that can afford it? Amanda Brock, CEO of OpenUK - the UK organization for the business of open technology - is excited about working out how open source will survive as a mainstream thing and continue to be the bulk of our infrastructure, so that it belongs to all of us instead of a few.
Amanda (00:04): We are at a moment in time, a moment in the history of tech where if we can influence governments in the right way, we'll see a shift. And if we don't influence in the right way, I think we'll end up back at proprietary.
Marc (00:21): This season, Andy and Marc are back with a fantastic group of guests.
Andy (00:26): I bet the depths remain classified. And Marc keeps his head in the clouds. With our combined experience in the industry, we can go from the bare metal to the boardroom. Enjoy your time in the DevOps Sauna.
Marc (00:46): Okay, we are back in the sauna. We have a very exciting guest today. Amanda Brock is on the podcast. Hello, Amanda.
Amanda (00:54): Good morning. Thank you very much for having me along.
Marc (00:57): It's lovely to have you here. And as usual, my cohort Andy Allred.
Andy (01:01): Hello-hello. Good morning.
Amanda (01:03): Good morning, Andy.
Marc (01:03): Now, we had an interesting conversation just before we started recording about how you two met, Andy and Amanda. Would you like to start, Andy?
Andy (01:15): Yeah, I've been going to a few more conferences now that that's a thing again. And I went to OpenUK's conference in London. I wasn't sure what to expect, but it was a great conference - a lot of people talking about open source and the different aspects of it. It was just really wonderful. After one of the sessions, I stepped up to Amanda, who was organizing it, and said, "Hi, I'm Andy, would you like to be on the podcast?" And then we talked for a couple of minutes. And there were a couple of things that came up that really stuck in my mind. I was like, "That's really cool. We have to talk about that." Therefore, the invitation came, and I'm really looking forward to this.
Amanda (01:56): It's great to be invited along. The conference, State of Open Con - we had our first one. I don't know if you knew it was our first conference, in February. And we're planning now for next year. But it was all a bit of madness in that we organized it in 10 weeks. It was very much focused, as next year's will be, on bringing together people from across the UK and beyond who are interested in open tech. And we were really delighted that a lot of people like yourself traveled. About 30% of our delegates came from other places. And I think tying it into FOSDEM, being part of the FOSDEM fringe, was really important to that.
Marc (02:31): And there was an interesting thing. I do some hosting and some speaking at conferences, and I have some recognizable physical features that people often remember. Oftentimes people know me and I don't know them. But then, as it turns out, you didn't remember meeting Andy, Amanda.
Amanda (02:51): But I knew I'd just said yes to the podcast. Yeah, I had a bit of overwhelm going on. I didn't meet a lot of people on the first day because I was either speaking or speaking to press. And then the second day, I met so many people - just so many people I didn't know - and trying to leave was difficult. We'd given a lot of tickets away, and we'd given tickets to students for free. And leaving the building, there was this line of people waiting to say thank you for having me, which was just insane. I'm not always very good with memory. I'm usually pretty good with faces, but I think it was just the overwhelm of the scale of people I'd met in a very short space of time. You're not the first person this week, even, Andy, who's said to me, we spoke at the conference. And I'm like, did we, or have we met before? It's happened two or three times just among the people I've been speaking to this week.
Andy (03:39): I can only imagine. There was quite often a line of people waiting to talk to you, shake your hand, say thank you, or discuss other topics. And I had to wait for the right moment to get up there.
Amanda (03:51): Well, it worked, and I'm sorry I didn't instantly remember.
Andy (03:55): And then I just moved to a new team inside of Eficode and did a little introduction of myself to the new team. One of the things I brought up was that I call myself an introverted extrovert. What that means is that I'm totally fine being in front of other people, but it's draining, and I need to get energy by going off and being by myself. Just being at the conference was a little bit overwhelming for me with all the new people, so I can only imagine what it would be like with the number of people I saw interacting with you.
Amanda (04:30): It's funny, I'm the opposite. I've worked in a lot of businesses over the years, and they've done Myers-Briggs on me so many times, and I come out as an extrovert - usually in the tests I'm 10 out of 10 as an extrovert. Not that the number makes me more or less extrovert; if the majority is extrovert, you're extrovert. But what I've learned in the last few years - I had an autism diagnosis a couple of years ago, and I've spent a lot of time trying to explore what that means in my head versus other people's - is that I have to decompress. I found it hard in lockdown, because we would do digital things and I would give all my energy and get nothing back, because I wasn't with people. I'm sort of a vampire - I suck up energy. Even though I didn't remember the interaction, I've sucked all the energy out of you and built my strength from it. But I can't not have a lot of time on my own, which then confuses people. I think it's the autistic need to decompress. After a conference like that, or if I'm at a conference in another country with people constantly, at the end of the day it takes me an hour or two. Whenever everybody else goes to bed, I then need an hour or two where my brain is just decompressing and processing everything. I can't just sleep.
Marc (05:45): Yeah, I have something similar, where when I'm working around people, I always have more energy and more ability. It's kind of my superpower - communicating in a group of people about what we're trying to talk about or what we're trying to do. But then in the absence of that, it all just kind of collapses.
Amanda (06:02): Yes, hard. But I think also, I suspect you might be a little bit like me in this way. I also need time to think. And in your youth, you don't necessarily recognize that as an extrovert, and sometimes push yourself too hard.
Marc (06:15): I'm quite interested - well, we'll get onto one of the topics here, which is open source in general. I started in open source and Linux and things in the mid-to-late 90s - Slackware and things like this. What attracted you to open source, Amanda?
Amanda (06:34): Absolutely nothing. I knew nothing. It was an accident. I probably incorrectly became a lawyer and taught myself how to be one and how to deal with the stuff that lawyers do, but I never enjoyed being in a law firm. Quite quickly, I moved into companies. And I also found it quite hard to fit. I only had a couple of jobs that were more than two years. I had one for five years at a retailer called Dixons, where I did all sorts of tech things. I worked in an ISP, and I worked on a digital transformation team. I did a couple of things after that and had some time where I was doing short-term roles. And I was offered a three-month role at a company called Canonical - I couldn't even say the name; I couldn't get my tongue around the word. I joined Canonical, and I was meant to be there for three months to scope the legal team. And then I was going to go to Amazon to work on their first electronics product. I've been meant to go to Amazon three times and not gone - I guess I'm never going to work for Amazon. But at Canonical, within six weeks they offered me the head lawyer role, the General Counsel role. And there were a couple of things. First of all, I'd worked in the OEM space at Dixons, so I had worked on distribution of software, but from the other side - building computers and desktops and laptops and all that stuff. And I understood how that worked and how the contracts and the commercials around it worked. Suddenly Canonical had somebody who understood that when they were dealing with Dell and the likes. Not many people had worked in that space. And then the real thing was that I absolutely felt, for the first time in my life, like I was somewhere I belonged. It was the strangest thing. I felt like I fitted with the environment I was in, with the engineers that I was with, and my value set aligned with theirs.
And I completely fell in love with this open-source thing, or the way I see open source at least: this idea of collaboration and sharing and working together and doing something that you're totally passionate about, being willing to do it in your free time as well as your work time. And wanting to change the world and make it more - I use the word competitive, but it's not really right - allowing more options for people, taking lock-in away. All the stuff that the engineers in the very early days of Canonical were talking about, and the Debian principles they were bringing to that, just really aligned with my value set, which I think was a shock, because they'd hired me in as somebody who'd worked mainly in big companies.
Marc (09:09): I went on a similar path, which I think is interesting. When I was first exposed to open source, I felt a bit like a charlatan - okay, here's something free, and woohoo, we can just use this and start to build products and different types of things upon it. Then I actually started working in an open-source organization. Many people don't understand that the average open-source developer is a paid professional working in an organization, who may or may not be using that software within that organization - oftentimes, yes, they are. But then there's the passion of these people and the willingness to fight, like we were talking about Richard Stallman in the pregame and his Free Software, Free Society. If we don't have the ability to see what's inside of our machines, and our machines are effectively controlling parts of our lives, how will we ever know?
Amanda (10:04): It's a really interesting topic at the moment. There are a lot of different things going on in the world around the policy space, which this role at OpenUK has led to me being involved with. Because of digitalization, we see people - governments, policymakers - focusing on things like security, and security is the one that's really brought it up to the surface. But there are lots of other things about digitalization and software that are really relevant. If you digitalize, you're software-defined, and if you're software-defined, you're open source today. And we're seeing these people starting to work out the challenges of using open source and not really understanding it, because of the pace of adoption and the value of what it brings. To me, we're at a moment in time - and it's not a second or a minute, it's a moment in the history of tech - where if we can influence governments in the right way, we'll see a shift. And if we don't influence in the right way, I think we'll end up back at proprietary. But one of the pieces that really relates to what you just said is that there are a lot of ways in which open source has been held to account in the commercial world and the public sector for the last decade, where people want to know what's in there. We're talking about SBOMs today - software bills of materials, if you haven't come across them. An SBOM is a list of what's in the code, done in a way that an engineer can read. Whereas for me at Canonical 15 years ago, we were having to type out lists to attach to contracts of what was in the code because it was being distributed. We weren't distributing it as part of our commercial arrangement - it was there anyway, but we were providing services for it - and companies would want to know what it was that they were using. And I always felt, and I feel even more strongly now, that the proprietary code in that black box you refer to hasn't been inspected in the same way.
And there is no understanding of what's in there. You've got your escrow agreement if something goes wrong, where you can open it up, find out what's in there, and maintain it if the company providing it goes bust or something - and that's if you've got one. But with open source, you have this absolute transparency. The level of governance and the level of disclosure, and the requirements around it, are actually much greater. And I suspect that it will help us over time, because I think being held to that higher account now will make it easier in the future, and the proprietary folks are going to have to catch up with it.
Andy (12:31): I come from a telecoms background mostly. And in that industry, it's quite often, "Okay, where did this come from?" "That guy in the corner over there - he's the one who wrote it." So the idea of using something open source used to be just: no, if it's not written here, we're not allowed to use it. We just have to be in absolute control of everything. And now, working as a consultant, I'll go to some company and it's like, "Okay, well, we can't use open source." "You have Linux, right?" "Well, yeah, but --" Okay, so let's shift our discussion from we won't use open source to where will we use open source? And then start digging into where they already have open-source influence in basically everything they're doing, and realizing how key it already is - and they never even realized it, because it's just a package we got from somewhere. Where did the package come from?
Amanda (13:29): Telcos are really fascinating, because it's a very young industry - it's about 40 years old as an industry, I think, from the 80s. It's massive because of the scale of adoption and everybody having a phone. And there's been a lot of money made in that space, which has allowed massive expansion, but at the same time it's quite immature, because it's only 40 in terms of other industries. And there's this massive shift going on. It's been going on over a few years - I've been watching it for three or four years. I was at MWC, Mobile World Congress in Barcelona, for the first time this year, and there were lots of announcements. There's the API gateway the GSMA announced, opening things up. Rakuten is very much talking about open source in the telco space. Even Nokia, who hasn't always been a friend to open source, was talking about open collaboration. And I think it's great to hear all of that, but I'll also watch with interest how they're going to actually be open, because, as you say, it's almost antithetical to the way that the telcos have worked. And they're a sector that's been very disrupted by lots of different things. They've lost the revenue from roaming; the OTTs, the over-the-tops, people like WhatsApp, have taken away the data revenue; and now they're looking at losing patent revenue because of open source. It will be really interesting to see how that interaction evolves.
Marc (15:03): I think it was about 10 years ago now that I was at MWC on stage with Mark Shuttleworth. Were you there at the time?
Amanda (15:14): No, I never got to go.
Marc (15:16): I tried to make a joke at the time - I asked him if the Soyuz had dropped him off, but nobody thought it was funny. I was already interested then in the idea that telcos would become a commodity. And then along came Android, and to a certain degree Apple. And Android turned the mobile phone into a commodity, I think, much more than Apple did.
Amanda (15:38): Yeah. Well, I think 2008 was the first Apple phone, the first smartphone.
Marc (15:44): Not the first smartphone.
Amanda (15:47): No, you're absolutely right. And then the first Android released that year as well, right?
Marc (15:52): Yeah.
Amanda (15:53): And that whole thing - I can remember being at Canonical with Jane Silber, who had gotten an iPhone, and we were all standing around looking at it: what does it do? Why is that so special? Well, it's good. And I think it took like two generations of apps, because at first everybody made an app that was useless, and then the second generation of apps started to permeate your life. I think that's a huge shift in everything in society, because that's where digitalization comes from. Once you've got the UI working with the individuals, then all the infrastructure sitting behind it becomes relevant.
Marc (16:28): Very, very excellent point. Yeah, the commoditization has gone much, much further.
Amanda (16:36): The whole AI piece will commoditize a lot more. And that raises all these challenges. I've avoided it like the plague because I thought it was too complicated to get your head around, and now I'm having to start actually engaging with AI. There are some things going on now that have forced all of the open-source people to start to deal with it.
Marc (16:53): We're all being forced to get into it now. What are your initial impressions? What are you doing, and what are you seeing there?
Amanda (17:06): It's frustrated me hugely that government has invested so much money in it, and not in other things like more skills development around open source. And I viewed it as a bit of a hoax, because it was ahead of itself - it wasn't as usable as we'd all thought it would be, or hoped it would be. I felt it was immature and not ready. But ChatGPT has shifted the conversation. And I think it's going to be super interesting to see how we, as humans, manage this. Because when I talk to young engineers in particular who are using it, they're saying they're using it like a smart Google search, and they're using it to get information that they then have to read, assess, and validate. And they can read, assess, and validate because they have skills. The danger now is that we allow those skills to disappear because we rely too much on this tool, which will do a lot of different things, including removing the individual's ability to discern. But it will also remove the data set from the conversation - the debates people have about the rights and wrongs of things and how to do things, which we train the AI on. If you move too fast, or if you allow it to completely dictate, you're going to lose all of the data for the future that you need to train it, so that it can give you the information to discern with. I don't know what we do with this gap.
Marc (18:28): That's really, really interesting.
Amanda (18:30): It's not an entirely original thought from me. Somebody shared with me the other day something from Peter Nixey - I don't know him. He's talking about Stack Overflow, saying he's in the top 2% of users on Stack Overflow, viewed by 1.7 million people, but he doesn't think he'll be doing very much there anymore because of AI. And he's asking, where's this content going to come from? Really, really fascinating stuff. That's the thing that, over the next few weeks, I'm going to really be spending some time on - trying to get up to speed on AI, because we want to bring it into the State of Open reporting that OpenUK does. And again, it's not entirely altruistic. I want government, the public sector, and enterprise to understand the place of open source in all of this. But I was asked a question, which I don't know if you can answer: will AI destroy open source, because it will end up in the hands of the few companies that can afford it?
Marc (19:25): This is the normal dystopia of science fiction, like in books like The Three-Body Problem and these kinds of things. But yeah, that's one absolute fear. We used to talk about how you'll get the Wikipedia chip in your head, and then you'll basically have access to what's on the internet at the speed of thought, or greater than the speed of thought, and you'll be able to speak a different language depending on the context. And then what will happen is that humans who don't have the chip in their head will basically become like puppies. It's like, "How cute are you? You only speak English." While they're speaking German for engineering and Japanese for finance, I don't know, and French for love and Italian for singing and Spanish for talking to God, or however the joke goes. That kind of level, where you have the haves and the have-nots, and the haves have access to technology that is so vastly beyond a singularity point that we can't even cope anymore.
Amanda (20:30): Well, that is almost reality. The whole exclusion thing. Gosh, it'll probably surprise you, and I wonder if either of you would agree with me: I would happily be chipped.
Andy (20:42): I would strongly consider it.
Marc (20:46): I don't. I have film cameras and vinyl records. And I just got another cassette deck. I'm completely losing my technology credibility. Yeah, I might be one of those that goes the other direction. I have the ability to live off grid if I choose.
Amanda (21:05): All three of us, I just lose everything. It'd be so much easier if it was --
Andy (21:13): Yeah, but I was just telling Marc earlier this week that during the weekend, I spent some time with ChatGPT. I had this idea - I wanted to write this Python program. And I know it's something I could do myself, because I've done similar before, but I was like, "Hey, let's just try this out." I specified everything, and what ChatGPT gave me back was absolute garbage. I threw it away and started again. This time I said, give me just a very, very basic version that does this. Great, now add this, now add this, now add this. And in the end, I got exactly what I wanted. I thought it was really interesting that I was basically able to replace a junior developer by specifying step by step what I wanted. It was really intriguing, but at the same time, I was now training ChatGPT - teaching it to evolve this program into what I'd asked for in the first place - instead of training a junior developer. When I retire, where is the senior developer with that experience to train the junior developer, if we outsource everything to ChatGPT?
Amanda (22:34): Yeah, it's interesting. Long before I went to Canonical, I had ended up as a head of legal, probably five or six years before. What I was doing was implementing a lot of commoditization into legal. I always felt that nobody really wanted to do the boring, repetitive stuff. If you could modularize the documents, and then use interviews - yes/no questions, drop-down boxes, picking from lists - you could spit out contracts quite easily. It speeds the whole process up, and it also removes the repetition. But there's still a need to do some of that repetition as a junior to learn your skill set. You have to go through a period of time in any profession, or any role, I think, so that you develop those skills. And I see that missing in people just now. I had somebody work for me who had been a lawyer for five years when he joined me, and I took him to a meeting in Europe - he must have been about seven years into his role - and he was really excited about going to it. And it wasn't because of the travel; it was because it was his first face-to-face meeting. Now, I had no idea of that, or I would have fixed it - I tend to let people be quite autonomous - but he had been going through life doing calls. There is a thing that you get from being with somebody else and seeing them do a job. You can learn by being taught, you can learn by observing from a distance, and you can learn by making your mistakes. But actually being with somebody and seeing how different people do the same thing - you just can't replace that. I don't know how you fix it. And with ChatGPT - I'm pretty good at searches, pretty good at finding things that other people can't find. But I've been taught to be very analytical as a lawyer, and that has a lot to do with how I can find different connections and ways of getting to what I want in a search function.
And again, without those basic skills from other things, I'm not sure people will be able to do that with AI either.
Marc (24:39): Are you in Scandinavia this autumn? Well, if you're not, you ought to be, because the world's greatest DevOps conference is coming to Stockholm and Copenhagen. I'll leave a link for you in the show notes. Now back to the show.
Marc (24:55): At the conference, somebody brought up that deepfakes are getting so good that they will have no glitches - and the glitches are how you tell that something is a fake. And I said, no, it's not. Millennials today are inundated with so much bullshit that they have the finest bullshit detectors in the entire world. The way you tell if something is fake or not is by the context - that it would not happen in reality. And this is one of the skills, Amanda, that I think you're talking about may erode. As we lose the ability to understand what's human and what's not, and what's real and what's not - based on culture, personality, expectation, all of the things that are around the thing we're trying to discern and understand and decide upon - I think that's where it's going to get really interesting as well. There are just going to be so many realistic things that are no longer real.
Amanda (25:58): Yeah. And with words like discernment - quite a lot of the time, I intuitively know. But then it's not necessarily just intuition. I'm 53. It's a mixture of experience and intuition.
Andy (26:10): If I go back to my example of making a Python app during the weekend - I've written enough of those that when I looked at the first version, I was like, this is utter garbage, and it cannot work because of this, this, and this over there. But if you don't have the experience of having done that for years, are you able to spot that? And if you throw it into a computer, and it compiles and does mostly what you think it might do, are you ever going to spot the problems? You definitely need this somehow - to train the juniors and set your BS detector at the right level, so that you start to be able to understand it and use this for what it is, and not have expectations of what it's not.
Amanda (26:58): And I think people also have to expect that training. It has to happen universally - everybody has to have it, and they have to know that it's a norm, that they're not being picked on. That's the way you go from having a bit of knowledge to being good at something, and most people want to be good at what they do.
Andy (27:15): Hopefully. It's been my experience.
Amanda (27:19): Yeah, that's pretty miserable if you're not good at what you do.
Marc (27:22): I just went in a completely haywire direction, which is: what happens without the human review process that comes with writing code, maintenance, and usage? When we talk about software security, so often it's, has the compiler been compromised? That was the first time I went: oh no, our tools could be compromised and putting in backdoors and ingress points. But when the AI is writing the code, it could sneak in all kinds of little backdoors that would allow people to come in, and we would never really know until they come, because we're not going through the software craftsmanship of actually building the software with humans - reviewing it, understanding what's there, sharing it as open source, and having eyes on it from people all over the world. That's a really interesting further complication.
Amanda (28:14): Yeah. And I think there's this ability to interrogate a problem or a situation - and it applies whatever your role is, whether you're a lawyer, a coder, whatever it is you do. You have to be able to look at something and work out why it's not working, or why it's going wrong. And I think that's the piece you lose if you don't have the experience and the knowledge. The AI might spit out something that's clearly wrong, something everybody knows is wrong, but you won't be able to work out why if you don't have that skill.
Marc (28:42): Another interesting topic that's come up recently is copyright in AI, and copyleft. It seems like copyright has a human attached to it. Have you looked at this at all, Amanda? Do you have any opinions on how this may go?
Amanda (28:59): I have opinions; I haven't looked at it enough. It's what I've been avoiding for a long time, because I think it's really, really hard, and it gets into ethics and stuff over time. One of the things I've always loved about open source is that it's non-judgmental - it's for everybody, to do anything with. And we don't have this layer of ethics on top; people have tried to apply it, and it doesn't work - it just destroys the collaborative model. This is a personal belief; I'm not representing anybody else. But my personal belief is that it's an accident of history that there's copyright on code. If there were no copyright on code, and we had carried on the way coding started in the 50s and 60s, with people sharing what they were doing in universities and working together, then we wouldn't have needed free software or open source, because everybody would just have collaborated, and the code would have been in the hands of the many. It would have been societal and very, very different. But that didn't happen. We applied copyright to code, and it was a choice - a human choice. It doesn't apply 100% in every place, but you can work on the default that anything that is coded has copyright and needs a license for anyone else to use it. Now, when you get to the next stage, where you have AI generating code - I think there was a case in the US maybe three or four weeks ago where they said there was no copyright in art created by AI. And it's problematic, because you have code being recycled and reused that's already on a license, and you've got modifications and adaptations and new works created from it. What happens to that copyright? Does it follow through? This is the discussion that's come up over the last year or so, and it has knock-on effects. This decision about art has an effect on code, because it's about whether or not an AI can generate something that merits copyright.
And I don't think the people making the decisions necessarily fully understand the landscape. The landscape is huge, because it's about everything an AI will do, whether that's art or something else, code or something else. And the reason it scares me more than almost anything I've seen in my lifetime is that I think it needs a global decision. I think all countries need to be acting in the same way with respect to AI, or it's going to be incredibly destructive. If you say AI can't be weaponized, can't be used in the military in the UK, and we say that in France, and we say it in Germany, but Switzerland decides that they're going to militarize AI, then it doesn't work that the rest of us made that decision. You need this harmony, this unity, and I don't see how we get to it. You almost need a United Nations of AI policy, and a supranational power around it, in my mind, for it to work. Maybe I've watched too many sci-fi movies, who knows, but it's just such a difficult thing. I worked in the dot-coms in the 90s, and we said that was different, that it was the most difficult thing. And then you see it happen again with cloud - that's a different thing. And now it's AI. But I suspect, with singularity and the difference between what AI is and humanity, that this is the one thing in our lifetimes that we really have to worry about. I didn't feel intellectually equipped for it, but now I'm having to try.
Andy (32:30): Well, I'm not sure how to respond. I've heard a lot of discussions, mostly around art: can you copy stuff, and is the AI really copying or not? If you have a human who looks at a piece of art and then recreates it, it's not copying, because a human recreated it. So what's the difference if an AI does the same thing? There's a lot of discussion, but I think what's really interesting is the huge difference here: a human is not going to go look at the Mona Lisa and then, within 30 seconds, create 875 copies of it. But an AI could do that, and even a lot more than that. My interpretation is that it's the same idea, but when you can massively scale it so much, it fundamentally changes the way we look at it.
Amanda (33:27): For me, the straight copying isn't so interesting, because you can do that. It's the adaptation. And it's the same with code. It's when it takes the Linux kernel, and all those 6,000 people that have contributed, and modifies it, and you don't necessarily give them credit, you don't have your attributions, and you don't have your licensing pass through. And the kernel, if I'm right, is on GPL, which is copyleft, which requires you to give back. If the AI is not then required to follow the licenses through, it's not required to give back its modifications. Whoever controls the AI can potentially take and use and abuse. Abuse, I think, is the right word. It's quite extreme, but I think it is right.
Andy (34:08): The people who want to do it will call it use, and the people who are really going to do it the most are abusing it in many cases.
Amanda (34:17): The thing is that the people making these decisions just won't understand; they couldn't possibly. Even using AI, with all that knowledge in your head, you wouldn't be able to. The impact of it is so vast.
Marc (34:29): Unfortunately, when I think of human nature, the power of your words doesn't require someone to really understand what it means; they understand that it is power, and they will try to grab that and hold it. That's what generates my fear.
Amanda (34:46): And that's where somehow making open source work with the code is the bit that we can engage with and that we can help with. If we can somehow keep that as something that everybody owns, that's owned by many, many people, I think the importance of it will be greater than ever.
Marc (35:03): I think that's beautifully put. Yeah, enjoy the chip in your head.
Amanda (35:10): We were having this conversation the other day, and I probably don't seem like the type that would just be chipped, but it just seems so sensible.
Marc (35:18): All right. Well, thanks, Amanda, this has been really, really fantastic.
Andy (35:24): One of the things you said when we met that triggered me that I really want to have you on the podcast was you said, saving costs is not a reason to use open source. And then our discussion has been fantastic, but that was the one thing that really caught my attention. I would like to dig into that at least a little bit.
Amanda (35:45): It is given as a reason to use open source, and it is the single highest reason in every survey I have ever seen about open source. And it frustrates the hell out of me, because I don't think it's a good reason. There is a cost to everything. If you look at open source, and you're doing it well, implementing it well, managing it in your organization, and curating it so that you have good technical hygiene and good governance, you're still going to have a cost. I think often, when people say it's a cost saving, they mean that it's a royalty cost saving, and that's the way it's accounted for in that company. Actually, the reasons to use it are the quality of innovation, the ability to collaborate, the lock-in that's avoided over time because you are able to maintain it yourself. Potentially sustainability, because you can use devices for longer, because you can maintain the software on them rather than working to the manufacturer's cycle. There are so many different reasons: the quality of the code, the innovation you're getting. Those are the important things, and it just frustrates me so much that people focus on the money saving.
Andy (36:53): As you were describing, it's really an artificial focus on money saving. Quite often, we do a lot of cloud-native work and moving-to-the-cloud type of consulting. And it's like, okay, well, this AWS bill is this high, this GCP bill is that high, and we need to change things and shift things over here. Yes, these are big prices, but it's less than the cost of one of your developers per month. Is that really where we need to focus? The idea is that the software royalty costs this much, and if we can cut that away... but that doesn't necessarily mean you save any costs. It means you've shifted it to something which is a bit more obscure and invisible. How do you make it visible?
Amanda (37:38): There's another level of understanding around what you've just said, though. You've talked about things like AWS having a cost, and that cost being big. But that cost is less than you yourself putting the resource in place and developing the skills to create and maintain that. This SaaS model, this cloud model, this platform model works better; it's easier, because you're all sharing that overhead with other companies. Except there's another layer to that, and it's very hard to demonstrate the value. It's something I really want us to work on across the industry: to get better at demonstrating the economics of open source. That AWS piece is, of course, sitting on open source, and AWS or anybody else couldn't have gone out and created that and sold it at the price they are if it hadn't been permeated by open source. The value that creates and generates is critical, but it's been very difficult for us to show that to businesses and enterprises so that they understand. One of the reasons I don't like the conversation about the money piece is that people will come back shocked and say, but I had to spend on open source; it's not free after all. You're disappointed, because that's not what it's about.
Andy (38:53): Excellent.
Marc (38:54): Okay, I'm going to try again. Are you satisfied, Andy?
Andy (38:57): I am satisfied.
Marc (38:59): We could seriously do this all day, Amanda. I absolutely adore this conversation. And thank you for being here. But I've got two questions that we have been asking everybody that comes on the podcast for season four. The first one is, Amanda, when is the last time you tried something new? And what was it?
Amanda (39:17): God, that's a question. I suppose organizing a conference is my answer to that one.
Marc (39:23): I think you deserve it.
Amanda (39:25): But I hadn't organized something like a big conference ever, and that was new. There was an awful lot of learning around it, and an awful lot of people did massively hard work to support me. It was funny. Maybe a week after I'd signed the contract, I was speaking at, what was it, a DevOps London? No, Kubernetes Community Days London. I was writing my talk and trying to adjust the thing I was going to talk about, and I woke up at three in the morning panicking, thinking, what the F have I done? I'd told these conferences. And it was incredible, because people from the audience came and helped me. They actually came and participated and became major parts of the conference.
Marc (40:07): I think in the course of this conversation, you've shown a very brave level of vulnerability, and you've shown that it pays off as well. It's like, if you are able to be comfortable enough with yourself to say, I'm in deep. I have a deep problem, help. I did this to myself, and now I need you. That's beautiful.
Amanda (40:29): Yeah. I don't think I quite understood the scale. It worked out fine.
Andy (40:33): All right. And then our last question, what's the last thing you've done or what's the last thing which has really, really excited you?
Amanda (40:42): It's funny I'm quite an enthusiastic person, typical ADHD. And I get very excited by lots of things as they come along. And it's quite hard for me not to spend the whole of my life being excited by everything around me. Gosh, I saw a good movie this week, I'm excited about this prospect of doing AI. I had a dinner last night with people who weren't from open source, but from policy. And we were talking about the election that's likely to happen in the next year to 18 months in the UK. And it suddenly occurred to the group of people involved around Open UK that we have an opportunity to try and get into political manifestos because this is going to be for the UK a really important election. It's a big election and it's likely whoever wins it will be in government for quite a long time. And we want to influence the tech policy that they have. I think the thing that's really exciting me now is trying to work out the two or three requests that we have of them that we might be able to drip feed through all this policy work into their manifestos and into future tech policy. And for me, it's very definitely about working out how open source will survive as this mainstream thing and continue to be the bulk of our infrastructure so that it belongs to all of us as opposed to a few. And something that I'm quite excited by is the possibility that we could influence that.
Andy (42:17): I'm excited recently that the snow is melting and I'm able to go walk with my dog, and you're talking about UK infrastructure and UK policy around the future of technology. I absolutely love it.
Amanda (42:33): I saw this movie this week that I really enjoyed. It was a preview of a movie called Air, about Michael Jordan and Nike, and I thought it was really, really good. I'm excited about all sorts of things.
Marc (42:46): Something occurred to me in this very serious and important conversation, but still: copyleft sounds like a UK invention.
Amanda (42:54): Why?
Marc (42:55): Right, left. The rest of us, hm. Okay. Just me. Okay, I got the laugh. Wonderful. Thank you again, Amanda. I think we learned an awful lot today. And I think this whole thing around copyright and AI is going to be one of the things that I'm going to, I hate to say, lose the most sleep over, or look at the most. I think that's been a really, really fantastic thing to bring up. Thank you once again for being in the sauna with us today.
Amanda (43:27): No, thank you very much for having me along.
Andy (43:29): Thank you so much.
Marc (43:34): Before we go, let's give our guest an opportunity to introduce herself, and tell you a little bit about who we are.
Amanda (43:41): Hi, I'm Amanda Brock. I'm the CEO of Open UK. We are the organization in the UK for the business of open technology. I am a former lawyer and got into open source through Canonical.
Marc (43:55): My name is Marc Dillon. I'm a Lead Consultant in the transformation business at Eficode.
Andy (44:00): My name is Andy Allred. And I'm doing platform engineering at Eficode.
Marc (44:04): Thank you for listening. If you enjoyed what you heard, please like and subscribe; it means the world to us. Also check out our other interesting talks and tune in for our next episode. Take care of yourself, and remember: what really matters is that everything we do with machines is to help humans.