

Panel discussion: Responsible AI in an era of transformation

At The Future of Software conference in London, an exclusive panel discussion on Responsible AI in an era of transformation dove into the biggest challenges and opportunities in Responsible AI. A panel of experts who work with AI on a daily basis shared fascinating insight into how it is being applied today and what organizations are doing to keep it compliant.

About the speakers:

Peter Gostev is the Head of AI at Moonpig, leading generative AI initiatives and integrating them across the organization. Prior to this, he was the AI Strategy Lead at NatWest Group, where he drove AI innovation for over five years. Earlier in his career, he worked in strategy and analytics across financial services, including at Accenture.

Anne Currie is a tech veteran who has been part of the industry as a developer, senior manager, and startup founder for thirty years, working on everything from high-performance C software in the 90s to e-commerce in the 00s to modern operations in the 10s. She is currently a campaigner for future-proof, sustainable systems and regularly writes and speaks on the subject. She is the founder of Strategically Green, a green tech consultancy.

Lofred Madzou is a leading global expert on AI governance, with experience across startups, international organizations, and government. He was the Director of Strategy and Business Development at Truera, a leading startup in AI observability, until its acquisition by Snowflake.

Transcript

Welcome back, everyone. Now, really exciting. We're mixing things up a bit. We've had speakers, we've had talks. Now we're going to have a panel discussion. I've got next to me three experts in their fields. I'm not going to include myself in that at all. I'm actually going to ask them to introduce themselves, because they can do it much better than I can. I'm going to jump over and ask Anne to begin.

Well, you've picked the weirdest person, because I'm the weirdest person. I've got multiple hats. I'm CEO of a learning and development company called Strategically Green. I'm the author of O'Reilly's book Building Green Software. It's about how the tech industry will align with the future of renewable power. The weird thing that I'll hopefully be adding some thoughts to this panel on is that I'm also the author of a science fiction series about AI and the future of technology called the Panopticon series. That's my slightly weird take on this panel today.

Hello, everyone. My name is Lofred Madzou. I'm a global expert on AI governance. I've been in this field for the last nine years now. I started off working for the French government. I'm one of the co-drafters of the French AI National Strategy. Then I worked at the World Economic Forum and advised many governments and businesses on responsible AI. More recently, I was working at a startup called TruEra in the AI observability space, for those of you who are familiar. We were acquired by Snowflake last year. I was the director of strategy, and now I'm an independent AI governance consultant.

Awesome. And yes, Peter.

Hi, everyone. My name is Peter. I work for Moonpig and I lead our AI function there. It's basically on the applied side, introducing AI in various different ways. You might have seen, I don't know if you're a customer, I'm curious, but we launched an AI sticker generator recently. That was my team working on that. It's really fun. There were some interesting details we had to work through. Apart from that, before that, I was in financial services, working for a bank, so closer to the City, also working on AI applications.

Amazing. Thank you for joining us. The reason we've gathered these wonderful people together with us today is to have a discussion on responsibility when we're using AI and transforming our futures. I'm going to kick things off and ask the question. I'll start with Anne again. What is responsibility to you in this world now? What really comes to your mind and what are you passionate about?

I think in terms of responsible AI, there are kind of two strands. There's: how do we make it so that it isn't just broken? How do we make it so that it does the right thing, that it does what we're expecting it to do? That's governance, testing, all that kind of stuff. That's really important. That's about changing the technology. What is equally important, in fact even more important, is changing us as consumers so that when we interact with technology, and this is true not just of AI, but AI is an extreme case, we interact with it mindfully, carefully, and we engage our brains. The best example of the potential problems here, I think, is something that happened in the UK that every single person in this room will know all about, which is the Horizon scandal. Let's face it, it's an old-fashioned piece of technology. Basically, it's like a spreadsheet that says this person should go to prison, and everybody goes, well, the spreadsheet said so. Let's put them in prison.
Horizon wasn't even a convincing piece of software. Everyone knew it was full of bugs. Yet people were still happy to say, well, Horizon says we should put them in prison. Let's put them in prison. AI will be so much more convincing, but it is still error-prone. It will still always contain errors. There's no way that you can take all the errors out of everything. We, as users, as humanity, have to be much, much better at not just going along with it. There's a famous psychology professor, Solomon Asch, whose work was about the dangers of just going along with stuff. The banality of evil is Hannah Arendt rather than Solomon Asch, but it's the same kind of thing. We cannot afford to just go along with it. I have to go, could I apply my own instincts to this? That's my take on this.

I would very much echo your points, actually. I think there are two levels to this. There's one level, which is about how to make AI work, because, newsflash, AI doesn't work most of the time, especially in enterprise environments. That has been my line of work for the last decade. For me, trustworthy AI means ensuring that the behaviour of a model is consistent with a set of quality requirements. Obviously core performance, depending on the task, but also biases, fairness, robustness. You can come up with a lot of quality requirements. I've been thinking about the policies, the tools, the processes that you should put in place so that AI does what it's supposed to be doing. That's one level. There's another one, which is more philosophical, which is to say, where should we be using AI to the benefit of human flourishing? I'm looking at this more on the use of AI in education, for instance. When we're talking about AI tutors, to what extent can they complement, replace, or support teachers? How does it affect teaching? That's a second layer. First, we need to make sure that this thing works before we can profoundly discuss how we should be using it.

Peter, you're one of the people trying to make it work, right? In an applied way. How do you do that?

For me, that's really the main point: making it work. On responsible AI, I think generally people have pretty good judgment about what they want to put into production. If it's generating some bad images, we don't want to put an image generator into the world, or at least not from our brand perspective. And the same if you work for a bank. You don't really want to put some crazy mortgage advisor into production. I think people understand that. For me, in the direction of responsible AI, the main thing is to make sure that it is ready for production, that we can make it work. Part of it is just that models need to get better. I think sometimes people tried GPT-3.5 and it was very flaky, which it was, and they said, okay, AI is bad. But now if you try later models, they're so much better. It's worth just granularly understanding what the capabilities of the models are. You can only do that by playing with them. And beyond that, also understanding where they fail and what extra guardrails we maybe need to put in place. Whenever you actually have to deploy these models into some product, you have to understand them in detail, and then you know how far you can push them, where you shouldn't go, and what extra capabilities you need to put in place.
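To make the kind of application-level guardrail Peter describes concrete, here is a minimal, hypothetical sketch of a check sitting in front of an image-generation feature. It is not Moonpig's implementation; the blocklist, length limit, and function names are illustrative assumptions only.

```python
# Hypothetical sketch of an application-level guardrail in front of an
# image-generation feature, following Peter's point about finding where
# models fail and gating them before production. All names, rules, and
# thresholds here are illustrative assumptions.

BLOCKED_TERMS = {"violence", "weapon", "hate"}   # assumed policy list
MAX_PROMPT_CHARS = 300                           # assumed product limit


def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a user-supplied sticker prompt."""
    text = prompt.lower().strip()
    if not text:
        return False, "empty prompt"
    if len(text) > MAX_PROMPT_CHARS:
        return False, "prompt too long"
    hits = [term for term in BLOCKED_TERMS if term in text]
    if hits:
        return False, f"blocked terms: {', '.join(hits)}"
    return True, "ok"


def generate_sticker(prompt: str, model_call) -> str:
    """Run the guardrail, then call the (injected) image model."""
    allowed, reason = check_prompt(prompt)
    if not allowed:
        # Fail closed: return a safe fallback instead of calling the model.
        return f"Request rejected ({reason})"
    return model_call(prompt)


if __name__ == "__main__":
    fake_model = lambda p: f"<image generated for: {p}>"  # stand-in for a real API
    print(generate_sticker("a birthday cat wearing a party hat", fake_model))
    print(generate_sticker("a weapon-themed sticker", fake_model))
```

In practice such a check would sit alongside a moderation model and human review rather than a keyword list, but the shape is the same: test the model, find its failure modes, and gate them before production.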
I think what you mentioned there is having no adverse impacts. I'm going to throw a cat amongst the pigeons here. We've got different angles. Lofred, you've worked a lot on regulation and the public side of things, and adverse impacts mean something totally different when you're protecting the public. And Peter, I'm not going to paint you as a fat cat capitalist, but still, you have an agenda, you have a business that you need to work on. Who should be responsible for what? What is the balance of power there?

This one is interesting. I think, ultimately, we always have an intuition about who's responsible for what, because you never operate in a vacuum. If you are, let's say, a bank in the UK, you work under a specific jurisdiction, you have some regulation, so the different layers of responsibility are actually usually quite well known. One thing I would like to say about AI is that it doesn't fundamentally change the chain of responsibility. That's really important. Ultimately, it's just a new model, and you shouldn't be confused about that. There's a temptation sometimes in AI to think that we can, again, reboot the whole world, that everything has changed and been turned upside down. That's not the case, especially in highly regulated industries. I've been working a lot with financial services and banks. The question is never about how to change everything so that it fits AI. It's much more about how to make sure that AI is under control within our existing framework and our existing responsibilities. That's the trick. Otherwise, you get carried away. Responsibilities have not changed fundamentally. I think the processes to hold people accountable have changed a lot, and the tools as well. That's my take at least.

In terms of capitalism, and I'll play that part, I think it is a good thing that when we build new features we've got a grounding point where we align: is someone actually going to buy this stuff, are people interested? If we release a feature no one cares about, and it's bad quality and generates some weird stuff, no one wants to use it. It's a nice way to align. It's not ideal, but it's much better than just philosophizing about it. For us, whenever we deploy things like that, we always look at metrics. There would maybe be softer metrics of complaints or things like that. Then if we do something really bad that damages our brand, that's also impacting the profit line. I think it is a nice way to align. I'm definitely not against that.

I think it's an interesting question. I'll take a slightly different angle on it, which is: who's going to hold you accountable if it goes wrong? It feels like there are lots of different groups who can and will hold companies accountable for AI. You've got governments. That's almost the European way: to say, look, don't do these things, and we're going to hold you accountable if you do. You've got courts, the American way. The court holds people accountable, holds companies accountable. You get the pants sued off you. That also can be a very effective way of doing it. There are also people, individuals, groups within society who hold folk accountable. They complain and say, well, actually, no, we're not going to buy from you anymore, or we're going to set your cars on fire. We are going to vote with our feet. We are going to do that. The thing I was talking about earlier applies to every single person: this is going to be ubiquitous and it's going to be pumping out a whole load of information, a whole load of instructions, a whole load of ideas to us.
As individuals, we also need to hold ourselves accountable, and hold it accountable, and say, well, do I want to comply with this? Do I want to challenge it? The answer is that everybody's going to have to hold it accountable. I don't think there's any avoiding that, I would say. What do you think?

Yeah, there's an interesting point about accountability, again at a very practical level. My view is that to be properly accountable, or to hold someone to account, you really need to understand at a granular level what the technology is doing and what the potential fault lines are. For example, in image generation, if you're a general, I don't know, court or government oversight body, you don't really understand the detail. You can look at it and say it's fine, or maybe you can identify some issues. Because I've been playing with these models every day, and we test them in many different ways, I would have a much better view of where the problems are. I can choose not to highlight them and so on, but I think the accountability should still sit with the people who are closest to the models, or else the people who do the oversight really have to go down and test this stuff. It's really hard to guess where a model would go wrong. Without actually playing with them, I don't see how you can provide meaningful oversight.

Just to add to this point, you used a key word for me, which is testing. I've been in the testing space for a while, and in financial services, I won't name any clients, there's sometimes a temptation, because we're testing software, to say: okay, you help me test this model, and ultimately what I want is a seal of approval, that somehow it checks all the boxes and I can deploy it. We cannot give you the seal of approval. We're not a certification body. We're not working on standards. There are a lot of people doing great work in that space, by the way, in the EU context. Ultimately, we have business stakeholders who are responsible for the models they use to push business products out there. This doesn't change because AI comes into the room. You're still accountable for your business practices. I think what these people need more is guidance on how to maintain this level of accountability with this very new piece of software. That's where we can help them profitably.

The thing that comes to my mind when I think of accountability: over on the blue stage, Johan mentioned incentives and drive. Why would you need to do this? It reminds me of carrots and sticks. What I've noticed in the news recently is that there are some approaches like, okay, here's the carrot for why you should invest in AI and do this stuff. Very British. The government will throw billions at it. Then there's the stick, there's the regulations. Nothing against regulations. I'm sure the ones you wrote are fantastic. Is there something to discuss here with incentives? Because what we've seen is that most of the incentives are actually much better in the private sector, and it's easier to disrupt and get paid for it and become a billionaire, perhaps, than it is to go public sector and take people to court for doing bad, naughty things. Have you come across that?

Yeah. Often when I work with clients, I tell them that there are probably two lines of argument. One is the defensive argument: why should you make sure that your AI is working? To your point, you might get fined. You work in highly regulated environments. You may put people's health at risk, for instance, in pharma and healthcare.
This thing has to work, right? Or in financial services, you might undermine people's access to credit. It might be really consequential, and you need to be accountable for this. This works in highly regulated environments. Frankly, with the current context, the geopolitics, I won't get into the details, but there are some parts of the world where it doesn't work as much anymore. But the other one, which is more of the offensive argument, is to say: you can add a lot of value using AI, but you have to do it right. You have to get it right. That requires investment and expertise. Often the tricky part is not so much the software, oh, I want to play with AI. It's the expertise that you put around the software. Do I have the right people? Do you offer the right training within your organization? Sometimes you don't need to hire new people. I think the thoughtful companies understand that you obviously want to comply with regulation in these matters, but you also want to do good business. You cannot ride on that kind of business for long. You can make a quick buck, but ultimately, once trust is broken with your clients and customer base, and broader trust with your stakeholders, at some point you will get kicked out of the business. That's why I'd say you need to do both. You need to be defensive and offensive to make it work.

What you're saying is, how can regulation or anything else ever keep up? Because there are huge amounts of money, power, speed and energy behind changing, moving faster and faster and developing new things. How do we slow it up?

It's a very European approach to say, slow it up. The American approach is: let it rip and then take it to court. In some ways, that's a way of keeping up, because, especially in America, there's money to be made from suing people who've really screwed up. Regulation alone is probably, I think you're right, too slow. We are going to need to throw all the guns at it... you know, you obviously don't throw guns. That's irresponsible. Yeah, we've learned that lesson. Don't throw guns at things. We're going to have to use more than one lever here. I think the lever that we are underestimating and undervaluing, and which would be enormously valuable for more than just AI, is educating people to bring their own minds to the question and not just go, computer says X. That can be done quite quickly. As we learned with Mr Bates and the Post Office, one good TV show can change minds very quickly indeed.

I think the challenge with regulating AI is that it's such a horizontal technology that it's a little bit like regulating spreadsheets, where you can write terrible things in a spreadsheet, or you can write great things in a spreadsheet, and there's no one way to regulate it. Another complexity is that things change all the time. There's no meaningful way for us to... we don't have a stable base from which to think about certain problems. For example, you define a certain threshold for how much compute you can spend before certain regulation applies. I think we blew past that last year. Now, what was the point of that exactly? It's not very clear. Then we've got, like, Gipsy, open-source models. Okay, you can regulate these ones, and others come in. I think it's too unstable. It's too horizontal. There's too much complexity to actually get it passed.

One thing on this, on the different regulators, I've been thinking about this as well. Ultimately, we do know that you have different regulations at different levels of intervention. It's really important to be clear about this. Ultimately, you can outright ban some bad practices: I don't want you to use AI to, basically, abuse people's emotions, or really dodgy things like that, right? That's really the bottom line. We have the same approach to testing software. I always make the parallel. If you think about these large language models, there's some level of testing done at the foundation level by these companies. Ultimately, if you're an insurer or a bank, and you're using AI for, I don't know, credit allocation or claims management, whatever, you want to do some testing at this level, at the application level, right? Across your stack, you're going to do different levels of testing. The same thing goes for regulation. You have some level of horizontal regulation and some level of vertical regulation, depending on the industry, depending on the risks that you are facing, because risks are really context-dependent. Going back to regulation: you have vertical regulators that are aware of some level of risk in their own industry and want to make sure that these bad outcomes don't occur in that industry. That's the kind of bottom line that you should respect if you work in this industry. Again, things haven't been changing dramatically. I think the real trick, what has changed, is that software has been built from the beginning to be deterministic. Now we have probabilistic models. That's the tricky part now: how to make sure that these probabilistic models still have the same level of control as we had for the old software. That's the tricky bit. Because the rest, frankly, conceptually and legally, has been thought out.
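To ground Lofred's contrast between deterministic software and probabilistic models, here is a minimal, hypothetical sketch (not from the talk) of how testing changes: an exact assertion for classic code versus a pass-rate threshold for a model-like component. The stand-in classifier, test cases, and 85% threshold are illustrative assumptions only.

```python
# Hypothetical sketch contrasting a deterministic test with the kind of
# statistical acceptance test a probabilistic model needs. The threshold,
# test cases, and stand-in classifier are illustrative assumptions.
import random


def add(a: int, b: int) -> int:
    return a + b


def test_deterministic() -> None:
    # Classic software: one input, one expected output, exact assertion.
    assert add(2, 2) == 4


def flaky_classifier(text: str) -> str:
    # Stand-in for a probabilistic model: right most of the time, not always.
    if random.random() < 0.9:
        return "refund" if "money back" in text else "other"
    return "other"


def test_probabilistic(n_runs: int = 200, min_pass_rate: float = 0.85) -> None:
    # Probabilistic software: sample many runs and accept a pass *rate*,
    # not a guarantee, then gate deployment on that rate.
    cases = [("I want my money back", "refund"), ("Where is my order?", "other")]
    passes = sum(
        flaky_classifier(prompt) == expected
        for _ in range(n_runs)
        for prompt, expected in cases
    )
    pass_rate = passes / (n_runs * len(cases))
    assert pass_rate >= min_pass_rate, f"pass rate {pass_rate:.2%} below threshold"


if __name__ == "__main__":
    test_deterministic()
    test_probabilistic()
    print("both checks passed")
```

The point is the shape of the control: for probabilistic systems, acceptance becomes statistical, pass rates, thresholds and ongoing monitoring, rather than a one-off exact check.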
This debate reminds me a little bit of a talk I gave nearly ten years ago at a conference in Munich. It was about AI ethics and what's going to happen. This was very speculative. This is coming at some point. We're going to have to make all these decisions. How do we know that we won't do terrible things because the AI tells us to, or the spreadsheet does? Somebody came up to me afterwards and said, I think that Germany is in a pretty good position globally to not make these kinds of bad decisions. The reason is that we're very aware of Hannah Arendt and all of those rules. At school, there's a huge concentration on the philosophy of human behaviour. We learn about Kant, we learn about what's right and wrong, whatever. We are taught to not just follow the crowd. Just because everybody else is