Marc and Darren discuss the EU AI Act, the first-ever regulation on AI, which aims to put guardrails on AI development and address risks like bias. Concerns include the impact on innovation, enforcement challenges, and implications for marketing practices. Join the conversation at The DEVOPS Conference in Copenhagen and Stockholm and experience a fantastic group of speakers.

Darren (0:00:06): It's the world's first attempt to really control and put guardrails on artificial intelligence development. 

Marc (0:00:21): Welcome to DevOps Sauna Season 4, the podcast where technology meets culture and security is the bridge that connects them. We are back in the sauna. Hello, Darren, how are you today? 

Darren (0:00:44): I'm doing pretty good. It’s quite late, could do with the day being over, but just this one recording to go. How are things on your side? 

Marc (0:00:53): Well, speaking of the day being over, the days are getting so long here in Finland. I think we're at about 16 hours already in Helsinki, so there's plenty of day left to have a conversation about the EU AI Act. 

Darren (0:01:10): Yeah, so in March of this year, the EU passed the world's first regulation on artificial intelligence. It's 272 pages long, so it's quite comprehensive, and you can find the exact act online. But given how we're seeing all these new startups come out of the woodwork and start working in new areas of business alongside AI, it's going to be interesting to see how this act affects everyone. 

Marc (0:01:44): In its spirit, I like the idea that someone is at least paying attention to the science fiction dystopia that I think AI can easily make possible, when we talk about things like emotion recognition, social scoring, exploiting vulnerable people, and the natural biases of these systems. It's interesting that people are paying attention to this, but judging by governments' ability everywhere to serve their populations, it also scares me a bit that we're regulating an area that's changing so fast. I like to see some level of protections and regulations, but I also know that most of this is not going to be enforceable outside of Europe anyway, unless you're serving an EU country as a customer. The rest of the world is going to continue to do all the horrifically scary things that are pointed out in the EU AI Act. 

Darren (0:02:46): You're quite right, and there's an idea that has been floated a few times, first with encryption. There have been several attempts by governments to regulate encryption, and now there's an idea of regulating AI, and all regulating AI means is that good actors will face regulated uses of AI. People who want to misuse AI will absolutely continue to do so. But maybe we should dive into some of the specifics of the actual act. We can start by looking at some of the prohibited items, because there's quite a long list of things that are just outright banned by the EU AI Act. 

Marc (0:03:28): Right off the top, to me, it's almost a double facepalm. Prohibited use of AI includes subliminal, manipulative, or deceitful messaging. Does that mean we can basically outlaw politics now that we have AI? 

Darren (0:03:44): At least marketing, because marketing exists in a kind of semi-manipulative state most of the time. So can AI not be used to advertise? That would be an interesting development, and based on the wording, it might be possible to just completely outlaw it. But there are some important things that go through, like the exploitation of vulnerable people. That, and bias, are hugely critical in AI, because it would be so easy to do things like risk assessments of criminal offenses, which is another outright banned application, to prevent any kind of thought policing. So that can only continue to happen in the UK, where it's become an occupational hazard. But at least the EU will have some protection against that. 

Marc (0:04:35): Yeah, this act feels like a recipe for a future dystopia. The first thing I would do if I were a dictator is make sure we have all of these things: I would make sure all the surveillance cameras are hooked up, online, and using facial recognition to track everyone, and I would make sure that people who have committed certain criminal offenses, or challenged the authority of the existing regime, get more attention. And it absolutely terrifies me to think that the ability to do these things at scale has never been greater. We live in a society where information integration and availability is the highest it has ever been in humankind. And now we are talking about limiting the ability to do a lot of downright evil things, which basically means that the bad actors will have a lot of fun being able to utilize all these things. 

Darren (0:05:41): It does. But as you mentioned before, I think the correct approach to take to this act is to look at the spirit of it. We've both seen examples where an act has been put into place. We can use GDPR as an example, which was designed to protect people's privacy. And it basically triggered a load of irritating pop-ups on the Internet saying, we acknowledge the law and we're going to collect as much data on you as possible anyway. And so obviously the spirit of the law is there. The actual letter of it may not be yet. 

Marc (0:06:16): Yeah. And this always reminds me of when I moved to California in the mid-90s, and everywhere you looked was a sign that said: this area contains a chemical known to the state of California to cause cancer, birth defects and other reproductive harm. That was Prop 65, a voter initiative on the ballot, and the idea was to warn people when they may be around something that's toxic. It didn't actually reduce the toxicity of anything anywhere. But what it did was put a sign everywhere you went telling you you were constantly at risk, which basically numbed you to the idea. And then industry marches on. 

Darren (0:06:51): Yep. So we are in a situation where we might end up with nothing actually changing, just these warning signs, “Caution: may contain traces of AI,” on every website we click into. But I think it's important that they have outright banned a lot of the more dystopian features of potential AI. They've also defined a category of high-risk applications. Anything involving critical infrastructure, for example, is considered high risk in the AI Act. And what that means for people trying to leverage AI in those industries is that they're basically going to be given a checklist covering a load of different types of data, asking what they're doing, to find out whether they need to comply. And the checklist is actually quite strict. 

Marc (0:07:46): The EU has led the way many times in terms of regulation. Chemicals will be banned here before they're banned in the US or other countries, things like that. So I hope there is some kind of leadership coming from here to say that a lot of things are moving really fast, and these capabilities, when used badly, can cause a great deal of harm. But then I look at some of the details and think, OK, predictive policing is a banned application when it's based solely on profiling a person or assessing their characteristics. We do hear about this one in society: racial profiling, for example, is already outlawed, and we know these things still happen a great deal. But when I think about this in the context of AI labeling, what I really want to know, in a concise and humane way, is what types of AI are being used in the application I'm in or the area I'm entering. Compare that to the cookie pop-ups we get that say, we've got 3,000 vendors and we may collect data about how you use different devices and whatnot. It's going to be really interesting to understand the full context you are facing when you are essentially arguing with a machine that doesn't want to recognize your ability to do something, because it has been trained in a certain way, and you have zero visibility into what that training was, what that data was, what its biases are, or why it may be excluding you from something. 

Darren (0:09:28): I do think you raise a key point. As you were mentioning, the EU has had some success in legislating these things before, but they've mostly done so in physical space. This is an attempt to manage these things in digital space, which might be a bit more complicated. But I think you're right, and a lot of the wording around the new act is about requiring companies to be more transparent about what they're doing with AI. So we come to this idea that AI-generated content needs to be explicitly labeled. People need to understand when they are interacting with anything created by an AI, and when their data is being processed by an AI. It's really all about transparency. And when it comes to AI, we most commonly talk about these large language models; I think that's most people's exposure to AI at this point. Obviously there are other things, like image generation, and all kinds of mathematically interesting things you can do with AI, but everyone's familiar with large language models. So I think it's good to take those as a template and say that what we're going to see going forward is this idea that we need to keep track of how models are built, what kind of data is going into models, and what kind of output is coming from models compared to expected output. And again, we're back to the letter versus the spirit of the law. This is what we expect to see. Whether we actually see it is another question, because we expected to be tracked less with cookies, and instead we're tracked just as much as before, as you were saying. It's going to be coming into effect in October, I think, though I'm not 100 percent sure of that. So we'll see how people start interpreting it. 
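The bookkeeping described here, knowing what data went into a model and comparing its observed output against expected output, can be sketched as a simple structured record. This is a hypothetical illustration: the `ModelCard` class, its fields, and the 0.05 tolerance are invented for the example, not anything the act prescribes.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal, hypothetical 'model card' for transparency bookkeeping."""
    name: str
    version: str
    training_datasets: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    eval_results: dict = field(default_factory=dict)

    def record_eval(self, metric: str, expected: float, observed: float):
        # Store expected vs. observed output so drift is auditable later.
        self.eval_results[metric] = {
            "expected": expected,
            "observed": observed,
            "within_tolerance": abs(expected - observed) <= 0.05,
        }

card = ModelCard(name="support-bot", version="1.2.0",
                 training_datasets=["internal-kb-2024-03"],
                 known_limitations=["English only", "no data after 2024-03"])
card.record_eval("answer_accuracy", expected=0.90, observed=0.82)
print(asdict(card)["eval_results"]["answer_accuracy"]["within_tolerance"])  # False: drifted past tolerance
```

Even a record this small answers the auditor's first questions: what was this model trained on, what are its known limits, and is it still behaving as expected.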

Marc (0:11:28): You know, there's a use case that comes to mind that we were talking about maybe five, seven years ago, and it didn't even involve AI. The idea was this: cellular operators basically know where you are without GPS, down to a certain number of meters. So when you walk down the street past a video billboard, that screen could take the operator data that knows essentially who you are and where you're walking, average it with, say, your supermarket loyalty card data, and present ads that appeal to the average of the people around at any given time. If you're the only person walking down that street in that neighborhood that day, it could very well be an ad perfectly targeted at a thing you usually buy, but now it's on sale, so you'd better rush over to that grocery store and buy it now. And when I look at the way AI is described in this act, it essentially bans ideas like this, which I think is good, because I don't want to be coerced into buying something based on just walking in the wrong place at the wrong time. But then I do wonder: how is this going to lead to bad acting from others? In places that haven't yet legislated these things like we have in the EU, is this also writing a playbook for some of them? 

Darren (0:13:08): It could well be. And I think not only does it open the door for bad actors, it closes the door on a lot of actors we don't want to close it on, because we have quite a few interesting interactions with startups. I don't know if you know the statistic that small and medium businesses make up 99 percent of companies in Finland. They're small companies, often personally owned. But in the AI space we're seeing a lot of startups running between 10 and 40 people, and I'm worried these are actually going to get legislated out of the market by the weight of the compliance required for this EU AI Act, because they're not running compliance teams that can deal with it. And this is going to have implications in a number of different areas. If I think of some of the cool applications of AI, we're thinking about AI-powered vehicles, self-driving cars, and that then becomes critical infrastructure and puts them immediately in a high-risk category, which is a valid place for them to be. But it kills the innovation. Earlier today I was reading how Google put out this new Gemini AI designed specifically for medical purposes. And I think medicine is one of the most important areas we have to consider, because it's the area that affects human lives the most. Doctors are fallible; doctors make mistakes. If we can generate lists of symptoms, that's an extremely valid use case for AI. And this EU AI Act is going to keep that innovation out of the hands of everyone who isn't Google, everyone who doesn't have that level of compliance team ready. 

Marc (0:15:05): Yeah, there's a word that I learned in the US, tort, T-O-R-T. The US is a tort society, which means it has a lot of litigation: you don't like something, you sue someone. So if I think of prohibited use including exploiting vulnerable people, I could imagine that an AI doctor looks at your test results and hypothesizes that you may have a specific illness. On that basis you take whatever medication is necessary, and it could place you in a high-risk group, because some races and family lines may be more susceptible to certain diseases than others. Then, if you're found not to have that disease later, were you exploited as a vulnerable person because you were a member of a high-risk class, and is that therefore subject to litigation? I can imagine all these funny things: even when the act is on the positive side, trying to protect people, the ability to litigate against some of these could be really interesting. Use of biometric data outside of law enforcement, for example. Do people with certain physical characteristics have more inclination towards certain diseases? 

Darren (0:16:22): Yeah, very specifically. And all of this is built around one of the prohibited use cases, which is biometric data scraping: the gathering of all the data required to build these things. So again, we're seeing something that won't even be possible outside of huge companies. But yeah, I feel like the litigation aspect is one that hasn't really been considered. And I think that side, and the act in general, is kind of the EU taking a swing at something they don't really understand. They're trying to legislate something they haven't fully understood, and I sympathize with that. AI is an extremely complex subject, but to actually rule on something, to have any kind of control over it, you need to fully understand it. Frankly, it feels like a swing and a miss, but we'll see how it's implemented. I think the important thing we should be talking about here is how it's going to be implemented at a practical level, because one of the things it requires is transparency. And based on my understanding, transparency in AI is extremely difficult, because transparency is about knowing something, and AI is built around these black boxes of randomness: you put information in and they give information out. You can sometimes tell how, by having a good chain of implementation where you know what data has gone in and you're controlling the data that comes out. But if there's one question AI scientists would dislike, it would be: why? Why did the AI do that? And I feel like a lot of the time the answer is, we don't really know. 

Marc (0:18:17): The why question is really interesting. As I studied LLMs, one of the things that came to mind was that every time I run the same set of prompts, I get different answers. And when I started to understand more about how an LLM works, I learned there's a certain random element, because if you didn't have the random element, everything would always converge to the same answer. What is it, every seven Wikipedia links, or something like that? If you follow seven Wikipedia links, doesn't it always lead somewhere? 

Darren (0:18:50): I think there are various games of getting to a specific link by doing the minimum number of Wikipedia clicks. I'm not sure what you're referring to. 

Marc (0:19:01): Something like that. Like every seven Wikipedia clicks leads to Kevin Bacon, or something like this. But the point being that there's a certain random element. It's not always pure statistical probability; there's a random element in how LLMs in particular are calculated to work. And that little bit of randomness can sometimes lead to unwanted behavior. We can get an awful lot by scraping every face in the world that's available on the internet, along with whatever words describe a typical emotion: joy, anger, sadness, grief, and then using that data. But oftentimes the label will say joy when it wasn't necessarily a picture of a person experiencing joy; it may have been something random. And then all of a sudden the machine asks, why are you so upset? When you're not upset at all. 
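The random element described here is, in most LLMs, temperature sampling: the model scores every candidate next token, and instead of always taking the top score, it samples from the scaled distribution. A minimal sketch, with toy token scores standing in for a real model's logits:

```python
import math
import random

def sample_next_token(logits: dict, temperature: float, rng: random.Random):
    """Sample a token from softmax(logits / temperature).

    Temperature 0 is treated as greedy decoding (always the top token);
    higher temperatures flatten the distribution and add variety.
    """
    if temperature <= 0:
        return max(logits, key=logits.get)          # greedy: deterministic
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())                        # subtract max for numerical stability
    weights = {t: math.exp(s - m) for t, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token

logits = {"sauna": 2.5, "office": 1.2, "moon": 0.1}   # toy scores, not a real model
rng = random.Random(42)
print(sample_next_token(logits, 0.0, rng))            # "sauna", every single time
print({sample_next_token(logits, 1.5, rng) for _ in range(50)})  # usually more than one distinct token
```

This is why the same prompt gives different answers: the sampling step deliberately trades determinism for variety, and that small dose of randomness is exactly where the occasional unwanted output sneaks in.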

Darren (0:19:56): But actually, there's this interesting idea. OWASP puts out these Top 10 vulnerability lists for various things, and they've actually released one that I didn't realize they'd done, for large language models. One of the threats is this idea of poisoned models, which is essentially feeding a model data you know to be false or problematic in some way, and ending up with models that take that false data and build on those assumptions. And yeah, the idea of manipulating emotional data to trick the AI into believing whatever about the person it's interacting with. Again, this is one of the things the AI Act actually goes against, because one of the forbidden things was emotion recognition, particularly in workplaces and educational institutes. That's a good start, but I feel like there are lots of public places where I wouldn't want emotion recognition to occur either. I wouldn't want my emotional state to be used against me the next time I'm shopping, for example. So in a way the act doesn't quite go far enough, and in another way it goes too far. 
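The poisoned-model threat can be shown with a deliberately tiny toy: a word-counting "sentiment model" whose training set an attacker salts with mislabeled examples. The scorer and the data are invented for illustration, and real poisoning attacks on LLMs are far subtler, but the mechanism is the same: bad labels in, bad predictions out.

```python
from collections import Counter

def train_sentiment(examples):
    """Toy 'training': count which label each word appears under.
    This stands in for real model training purely to show how
    corrupted labels propagate into predictions."""
    word_labels = {}
    for text, label in examples:
        for word in text.lower().split():
            word_labels.setdefault(word, Counter())[label] += 1
    return word_labels

def predict(model, text):
    votes = Counter()
    for word in text.lower().split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

clean = [("great product love it", "positive"),
         ("terrible broken waste", "negative"),
         ("love the design", "positive")]
# An attacker injects mislabeled samples targeting the word "love".
poisoned = clean + [("love love love", "negative")] * 3

print(predict(train_sentiment(clean), "love it"))      # positive
print(predict(train_sentiment(poisoned), "love it"))   # negative, after poisoning
```

Three planted examples were enough to flip the prediction, which is why provenance of training data keeps coming up in both the OWASP list and the act's transparency requirements.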

Marc (0:21:12): Yeah. Rereading it while we're talking, the prohibited uses and the banned applications, you know, it really does feel like it wants to put marketing out of business with AI that manipulates human behavior or exploits people's vulnerabilities. You know, we are all vulnerable to dopamine and shiny things and the new idea and things like this. So it's going to be really interesting to see AI versus human nature, how this goes down. 

Darren (0:21:39): And one of the banned applications is just outright AI that manipulates human behavior, and marketing is, in a way, about manipulating human behavior. It's about getting people to act in the way you need them to, to sell your product. I will admit to having a complicated relationship with marketing. At some point I made a presentation arguing that, according to Finnish law, cookies were stalking, because of the way the Finnish law is worded: it's about someone who repeatedly follows, threatens, or makes another person feel uncomfortable. And I was just like, yeah, this new cookie approach has ticked all those boxes for me, so maybe the marketers are stalking me. So I would actually not hate more regulation directed towards marketing. My big fear is that we lose a lot of the core AI innovation happening inside the EU. 

Marc (0:22:40): One of the things we see a lot right now is companies looking, first off, to just get AI in the house. This is the biggest hype cycle, I think, since web 1.0 came around in the nineties. And the companies helping them do this work are going to be impacted by this EU AI Act. Many companies have suffered through all the requirements necessary to be GDPR compliant, and I think this AI Act takes that to a new level, because it's not just about access to the data and storage of the data; it's about your ability to act upon the data. Think about a CRM system, a customer relationship management system. You open up the system, look around, and say: ooh, there's a customer looking for something very much like what I have sold before. I'm going to go look at those materials and the sales arguments and think, will those work for this new customer? Maybe they will. So then I reply to the new inbound, or I cold call a similar potential customer and say, “Hey, did you know that you might have this problem? And it just so happens that we have a solution for that.” That's how selling products often works, especially B2B. Now, if you have your AI doing that exact thing I just described, is that a banned application of AI? 

Darren (0:24:18): That's a good question, and it's difficult to answer. I feel it might trigger the manipulation clauses in the prohibited systems. I hasten to add that I am not a lawyer and shouldn't be taken as a legal expert on this matter, but yeah, I feel like it would trigger those. But looking through the list, because all we've talked about so far is the outright ban section: on one website there is an assessment you can do to find out whether you will be obligated to do anything under this act. One of the final questions is, does your system perform any of these functions? One, interacting with people, or two, generating synthetic audio, image, video, or text content. Well, do you think the AI is going to be interacting with people? That alone triggers the transparency obligations toward natural persons, and that one's simple to fulfill. But once you start getting into the high-risk categories alongside it, the requirements become much more complicated. 
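That self-assessment can be thought of as a decision over the act's risk tiers: prohibited, high-risk, transparency obligations, minimal risk. A rough sketch, where the practice names are simplified placeholders I've invented for the example, and which is in no way legal advice:

```python
def classify_obligation(practices: set) -> str:
    """Illustrative sketch of the EU AI Act's risk tiers.
    The real assessment is far more detailed; not legal advice."""
    prohibited = {"social_scoring", "subliminal_manipulation",
                  "emotion_recognition_at_work", "biometric_scraping"}
    high_risk = {"critical_infrastructure", "medical_device",
                 "law_enforcement", "employment_screening"}
    transparency = {"interacts_with_people", "generates_synthetic_content"}

    # Tiers are checked strictest-first: one prohibited practice
    # outweighs everything else the system does.
    if practices & prohibited:
        return "prohibited"
    if practices & high_risk:
        return "high-risk: conformity assessment and documentation required"
    if practices & transparency:
        return "transparency obligations: disclose AI involvement"
    return "minimal risk"

print(classify_obligation({"interacts_with_people"}))
# transparency obligations: disclose AI involvement
print(classify_obligation({"critical_infrastructure", "interacts_with_people"}))
# high-risk: conformity assessment and documentation required
```

Note how the second call shows the point Darren makes: a chatbot alone only owes disclosure, but the moment it sits alongside critical infrastructure the heavier tier takes over.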

Marc (0:25:34): The thing that comes to mind, one of my favorite early use cases for LLMs, is: let's take our company knowledge base, put it into a database, and use it to front-end an LLM, so that if a customer comes to a chatbot and says, “I have problem XYZ,” the database is queried for potential solutions to X, Y, and Z, and the LLM synthesizes a response to that customer's problem. And that is exactly the type of case you described, Darren. This is valuable; it might even be much better than calling and waiting on the phone for a human to give you the same type of experience, or trying to do this by searching forums and online sources. It could be really, really valuable to the consumer. However, we're interacting with humans and synthesizing output. 
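The knowledge-base chatbot pattern outlined here is usually called retrieval-augmented generation: retrieve the relevant articles, then let the LLM synthesize an answer from them. A minimal sketch, with naive keyword overlap standing in for real vector search, the final LLM call reduced to assembling the prompt, and a made-up two-article knowledge base:

```python
def retrieve(kb: dict, question: str, top_k: int = 2):
    """Rank knowledge-base articles by keyword overlap with the question.
    A production system would use embeddings and vector search;
    this stands in for that retrieval step."""
    q_words = set(question.lower().split())
    scored = sorted(kb.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return [title for title, _ in scored[:top_k]]

def build_prompt(kb: dict, question: str) -> str:
    """Assemble the grounded prompt an LLM would answer from."""
    context = "\n".join(kb[t] for t in retrieve(kb, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

kb = {  # toy knowledge base; real content would come from company docs
    "vpn": "to reset the vpn restart the client and re-enter your token",
    "printer": "the printer needs the latest driver from the intranet",
}
print(retrieve(kb, "how do I reset the vpn client"))  # ['vpn', 'printer']
```

Both functions Marc names, interacting with people and synthesizing output, are right there in the final prompt, which is why even this benign pattern lands squarely in the act's transparency tier.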

Darren (0:26:37): And that is going to be the use case for 95% of AI systems. The interesting thing happens, though, not just when it's about synthesizing outputs, but when you fall into the high-risk systems, and the high-risk systems are actually quite broad. Critical infrastructure is obvious, for example, but I believe suppliers of telecoms equipment and devices, and their service providers, are also listed. So it's the kind of thing that will have considerably further reach than I think people realize. 

Marc (0:27:18): It's going to be really interesting to see how this one pans out. If you are interested, I urge you to tune into our next episode as well, where we're going to look at DevOps as the safety net for AI. We'll go a little deeper into what companies are doing with AI at the moment and what some of the use cases are, which I believe are all compliant with the EU AI Act, though not completely certain sometimes, but lots of good ideas anyway. So tune into our next episode to hear about those. But Darren, let's wrap up with a few questions. How do you summarize the EU AI Act? 

Darren (0:28:01): It's the world's first attempt to really control and put guardrails on artificial intelligence development. It's in an immature phase at the moment, but the spirit of it is something I can get behind and I hope they refine it further. 

Marc (0:28:17): All right. Do you think this is going to have a big effect on corporations or SMEs or everyone? 

Darren (0:28:25): I feel like it's going to damage SMEs working in AI space. Large corporations have the capacity to deal with this while SMEs just don't in a lot of cases. 

Marc (0:28:37): Will we essentially lose that innovation within the EU, while leaving others outside it more innovation space? 

Darren (0:28:46): I actually think we will. I'm genuinely worried about a few companies that I know are doing cool stuff with AI that will probably be hit with the AI Act, so I'm not sure how they're going to deal with it. It will be interesting to see, and I'm hoping we don't lose innovation, but we might. 

Marc (0:29:06): So how do you think this is going to play out? Is it going to be tested? How do you think that it's going to work in practice? 

Darren (0:29:13): The time for these things to move through the EU is usually several years. So I think what's going to happen is they will put it in place, and then the people responsible for interpreting it will interpret it in the easiest way possible. We'll just see lots of notifications: “Caution: may contain AI. You are interacting with an artificial intelligence. You are seeing generated content.” And then, hopefully, we'll go through some iteration phases where it improves over time. 

Marc (0:29:50): And I think there are going to be some really interesting example lawsuits that come out of this. That's going to be the early evidence, once we've become numb to the pop-ups and notifications. Do you think this is going to outlaw marketing once and for all? 

Darren (0:30:06): I hope so. No, I think a lot of the high-risk and prohibited systems are very specifically aimed at preventing this data from being used for marketing, and I think that's a good thing. So marketing will continue much as it has. The act might limit its use of AI, but it won't kill it; it will just make it less convenient for marketers and more private for everyone else involved. 

Marc (0:30:36): Brilliant. Thank you, Darren. 

Darren (0:30:38): Thanks a lot. It's been a pleasure as always. 

Marc (0:30:40): As always. Okay, once again, a reminder that we're going to be talking about this again in the next episode, so tune in to hear about the DevOps safety net for AI. Signing off for this time. Thank you, and we'll see you next time in the sauna. Now we'll tell you a little bit about who we are. Hi, I'm Marc Dillon, lead consultant at Eficode in the advisory and coaching team, and I specialize in enterprise transformations. 

Darren (0:31:08): Hey, I'm Darren Richardson, security architect at Eficode, and I work to ensure the security of our managed services offerings. 

Marc (0:31:15): If you like what you hear, please like, rate and subscribe on your favorite podcast platform. It means the world to us.