

Foundations for the Future - AI | Donna-Marie Patten

At The Future of Software conference in London, Donna-Marie shared insights from her experience driving AI delivery aligned to customer outcomes, with tips on how to identify and enable your largest value opportunities, along with the pitfalls and unexpected gotchas to watch for. In her talk, Donna-Marie highlights AI-related topics all organizations should be aware of, including AI in automation, AI compliance, and bias. About the speaker: Donna-Marie Patten is an outcomes-driven Programme Director with 22+ years of strategic and operational leadership experience. She excels in delivering large-scale global managed services and transformation programs for Tier 1 Financial Services organizations.

Foundations for the Future - AI | Donna-Marie Patten
Transcript

Good afternoon, everyone. It's really a pleasure to be here today. My name is Donna-Marie Patten. And yes, it's an honour to talk to such a forward-thinking audience like yourselves today. And really, what I share today is from my own personal insights and experience.

So, a little bit about myself. I have 22-plus years of operational and leadership experience within the financial services sector. My first 10 years were actually within application management, so supporting front office, middle office, trading platforms. And when I think back to then, 2002, it was scripting, heavy manual processes, monitoring tools. AI really wasn't that prevalent. And I wish it had been, because it would have made my support role a lot easier than it was. But what I would say is, years on, as a technical programme manager, I am fortunate to run wide-impact initiatives across organisations. And that does include AI and how to transform enterprise technology operations. More specifically, use cases to enhance infrastructure stability and resiliency; that might be reducing the number of tickets, it may be to do with optimising resource utilisation, or reducing recovery time, as key examples. So, yeah, a real pleasure to be here and share some of my insights.

So, let's just set the scene then. We're clearly living in a world where software is rapidly growing. It's evolving, and the pace at which technology is moving is mind-blowing, I think we can all agree. And as technology professionals, we are faced with increasingly complex, challenging scenarios, multi-cloud environments that we have to cater for within large-scale environments supporting global operations, right? So, AI really is no longer a buzzword. It's actively transforming the way in which we're doing things, whether that's knowledge management, the way we automate things, or the way we're making decisions; it is changing what we're fundamentally doing. But in order to unlock AI's full potential, we do need to be strategic in how we go about implementing it, right? And it's about understanding what the right use cases are to introduce, and when to leverage AI versus the traditional methods that we've all come to know, automation.

So, for me, over the next 25 minutes, my aim is to take you through some of my personal insights on how to think about incorporating AI into your organisation, informed by my own experiences. The intent is really to help you think about how you identify high-value opportunities, and then think about some of those pitfalls and challenges that you may need to consider when looking to implement within your organisation.

So, I wanted to firstly start off with, and I know Kelsey's going to kill me because I just said "Legacy" here, so maybe I should say Hall of Fame automation versus AI. I think we are familiar with, and have come to rely on, tools like Ansible, Puppet, and different scripting languages. They have really been fundamental to our technology operations. They've helped us automate many manual, repetitive processes, improving efficiency and, ultimately, reducing the chance of human error. For years, I would say that these tools have been the backbone of infrastructure management. They've allowed IT teams to streamline operations, whether that's configuring servers, deploying software, or monitoring systems, without much human oversight or intervention. So, the goal has been simple, right?
It's to reduce manual work, standardise processes, and make things more predictable. And I can totally, completely relate to that when I think back to my former support days.

However, we must be aware of the limitations of our traditional ways of working, particularly within large, distributed organisations: environments that span multiple cloud types, whether that's internal cloud, external cloud, et cetera. Legacy tools, or should I say automation tools, can only get you so far in navigating across those multiple types of environments. So, for example, there may be thousands of systems across different environments, and multiple incoming tickets going to IT teams, and that can quickly become overwhelming, not for the systems but for the teams themselves, taking resources away into navigating those tickets rather than focusing on how to improve the environment itself. If we take incident management as an example, ultimately tickets will get raised and teams will need to resolve them; someone needs to pick them up, may need to manually route those tickets, investigate them, and maybe apply some form of remediation. So, ultimately, we need to do better in improving our operations landscape.

So, then we need to think about AI. AI can definitely, and is starting to, change the game. What I would say is that when we think of things like large language models, we can automate decision-making, making real-time decisions and enabling intelligent, context-aware responses. Instead of just using static scripts or automation tools, we can analyse user queries, understand intent, and offer personalised responses. Now, if we think about virtual AI assistants or agents, however you want to refer to them, they don't just resolve IT tickets; they could potentially predict system failures before they occur and help with more preventative, proactive, predictive measures. So, the shift is that we're not just talking about automating tasks; we're improving decision-making and enhancing the overall experience, not only for IT teams but also for our end users. And then, rather than simply addressing issues reactively, AI can help us respond to things more proactively.

Now, you're probably going to wonder why I've decided to [chuckles] call this out in terms of defining the terms. I've done this because, in my experience, I've seen a lot of confusion around the varying terms and how people understand them. So we've just talked about Hall of Fame automation, legacy automation. But I wanted to call out the difference, particularly around the AI terms. So, when we think about AI, that term is the broad idea of creating machines or software that can do tasks that would normally require human intelligence, like problem solving, decision making, or understanding language. If we take a simple example like Siri or Alexa, they can recognise voice commands and they'll respond to that, okay?

Then, you have machine learning, okay? Machine learning is a type of AI where machines learn from data and get better over time without being programmed with exact instructions. We could take the spam filter in our email as an example. As time goes on, it gets better at filtering out the crap, right?
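As a rough illustration of that "learning from data" idea, a toy spam filter in Python might look something like the sketch below. This assumes scikit-learn is available, and the example emails and labels are made up for illustration only.

```python
# Toy sketch of "machines learning from data": a tiny spam filter.
# Assumes scikit-learn is installed; the emails and labels are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",
    "Cheap loans, limited time offer",
    "Meeting moved to 3pm, see updated agenda",
    "Please review the attached incident report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# The model picks up which word patterns correlate with spam from the
# labelled examples, rather than being given explicit filtering rules.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Click here for a free offer"]))           # likely [1] (spam)
print(model.predict(["Updated agenda for tomorrow's meeting"]))  # likely [0] (not spam)
```

The point is simply that nobody writes "if the subject contains X" rules; the model infers them from labelled data and can keep improving as more examples arrive.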
Then, we have generative AI, right? Generative AI is the type of AI that creates new things, whether that's text, images, or music, based on patterns it has learned from data. So, we know ChatGPT, that's my best friend, or DALL-E, which creates pictures from text descriptions. So, I just wanted to call out up front what these different terms are, because I've seen in my experience that they're sometimes used incorrectly, or people don't have a base level of understanding. So, in short: AI is machines doing smart things, ML is machines learning from data, and generative AI is machines creating new content.

So, the thing that I then wanted to move on to is: how do we identify value opportunities to leverage AI? AI does present endless opportunities. It really, really does. But it also presents a lot of challenges. How do you sift through all the potential use cases that you could go and deliver against? It is very easy, and I've seen it, to get swept up in the hype: AI, okay, I must implement the next cool, snazzy thing within the organisation, but not necessarily thinking about the value it will bring back to your business. Something else that I've seen in my experience, I mean, how many chatbots is too many chatbots within an organisation? I'm forever hearing myself asking folks: what is the problem that you are trying to solve for? And then, how does AI/ML solve for that thing? So, I think depending on the size of an organisation, you can experience a lot of duplication. I've seen it, I live it, I breathe it. [chuckles] But it's important to have a mechanism to filter through your ideas and options, and then carefully evaluate which ones are worth pursuing.

So, before you dive into AI, it really is essential to establish governance frameworks. Without clear guidelines, AI can quickly become chaotic and produce very unintended results. Governance should define what success looks like, how AI models are trained, but also how they then interact with human decision makers.

So, I just wanted to highlight a couple of things to think about. The first consideration would be: look for your pain points. Start by identifying where the most significant pain points are within your organisation. What are the areas in your operations that consume the most time or resources? This could be anything from repetitive tasks to high-volume support tickets to slow decision making. Okay?

Consideration two: once you've identified the potential area you want to focus on, will solving this pain point through AI reduce cost? Will it improve efficiency? Will it enhance the overall quality of somebody's experience? Now, if we take an example like automating network monitoring with AI, this could reduce time spent on manual incident response, and/or it could simply improve the detection of anomalies in your network, right?

Consideration three: feasibility and return on investment. So, things to think about: do you have the necessary data to train your models on? Do you have the right infrastructure in place to support what you're trying to do? Actually, one of the first questions, I should say: does your organisation already have that solution somewhere? Particularly when you start to get into larger organisations, there could be something that exists over there, but you're not necessarily aware of it.
What is the total addressable market you're trying to solve for here? And obviously, do not forget metrics. How are you going to measure success? How are you going to know that what you've implemented has actually done the thing that you want? This is no longer about, oh yes, I'm using an LLM, something really cool and funky. What are your OKRs? What are those key results that you're actually working towards?

So, I guess the thing that I've learned over time is that AI isn't always the best solution. Controversial to where we are today, but it may not always be the solution. And I think for certain processes, traditional automation still remains valid and the right thing to do, and maybe the more cost-effective thing to do. If we think about the provisioning of servers or managing configuration files, maybe traditional automation is still the thing you continue to leverage. It's really about finding that balance. So, the key is to ask yourself: where will AI add the most value? Where could AI be the game changer? And where can we stick to traditional solutions?

So, the next thing I wanted to touch on, which maybe folks don't necessarily think about, and it does matter if you've got large organisations with multiple, disparate solutions and all that wonderful stuff, is where you can leverage shared use cases versus specialised ones, versus what's the best of both, okay? So, when we're thinking about scale, that's when we start to think about shared use cases. A key principle of AI adoption is driving convergence on shared use cases across your organisation. If you could imagine, if you had an incident management system, and you've got one over there, one over there, one over there, that really is not the most effective thing to do. You would want a consistent look and feel, regardless of user type, in that one incident management system. So, try to think about common use cases where a single AI solution can be leveraged to offer a similar experience.

Conversely, there is the power of specialisation. Where shared use cases can be effective, there will still be times where specialisation is necessary, is required, makes sense. Okay? Now, I'm going to go back to that network monitoring example again. We're talking about something that's highly specialised, very nuanced to your environment. A specialised solution is going to make a lot of sense in that regard. So, then the trick again is finding the right balance. It's not dissimilar to automation versus AI: finding the balance where you have the most impact. Shared use cases should focus on tasks that are standardised across the organisation, while niche use cases can be tailored to your company's unique needs.

Now, I do want to just touch on agentic workflow operations. This is where, I guess, AI steps in not just as a tool, but as a powerful autonomous agent, right? Capable of managing entire workflows without requiring human oversight. So, instead of having people manually handle varying parts, or having to step in at various steps of whatever that end-to-end process might look like, we're now talking about AI systems that are autonomous, that can manage something end to end. They can triage, look to knowledge bases, make decisions, maybe triage and resolve those tickets, having some form of context of what it is that they are solving for, without someone stepping in at every step of that.
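To make that slightly more concrete, a minimal Python sketch of such an agentic loop might look like the following. The functions classify_ticket, search_knowledge_base, run_remediation, and escalate_to_human are hypothetical stand-ins for an LLM call, a knowledge-base lookup, existing automation, and a hand-off to an engineer; none of them refer to a real product.

```python
# Illustrative sketch of an agentic incident-handling loop, not a real product.
# classify_ticket, search_knowledge_base, run_remediation and escalate_to_human
# are hypothetical stand-ins for an LLM call, a knowledge-base lookup,
# existing automation, and a hand-off to an on-call engineer.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ticket:
    ticket_id: str
    description: str

def classify_ticket(ticket: Ticket) -> str:
    """Stand-in for asking a language model to categorise the incident."""
    return "disk_space" if "disk" in ticket.description.lower() else "unknown"

def search_knowledge_base(category: str) -> Optional[str]:
    """Stand-in for looking up a runbook in a knowledge base."""
    runbooks = {"disk_space": "clear_tmp_and_rotate_logs"}
    return runbooks.get(category)

def run_remediation(runbook: str, ticket: Ticket) -> bool:
    """Stand-in for executing existing automation and reporting success."""
    print(f"Running {runbook} for {ticket.ticket_id}")
    return True

def escalate_to_human(ticket: Ticket) -> None:
    print(f"Escalating {ticket.ticket_id} to the on-call engineer")

def handle(ticket: Ticket) -> None:
    category = classify_ticket(ticket)          # understand intent and context
    runbook = search_knowledge_base(category)   # consult knowledge sources
    if runbook is not None and run_remediation(runbook, ticket):
        print(f"{ticket.ticket_id} resolved without manual intervention")
    else:
        escalate_to_human(ticket)               # keep a human escalation path

handle(Ticket("INC-1234", "Disk usage at 95% on prod-db-01"))
```

The design choice in a sketch like this is that the agent only acts when it finds a known runbook and otherwise escalates, so a human stays in the loop for anything it cannot confidently handle.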
So, this agentic capability does allow organisations to scale operations at speed, reduce human error, and most certainly free up resources to focus on more complex, strategic tasks.

So, the question that I see a lot is: do we build our own solution, or do we buy a solution? You could not imagine the number of times when folks have come up with this really wonderful idea, and then I have someone go, but doesn't something like that exist in the market? Right? So, I think that's definitely one of the most critical decisions that organisations face: build versus buy. What I would say, again my personal opinion, is that the time to buy is when you have general-purpose needs. It might be coding assistance, natural language processing, chatbot automation. These are areas where leading vendors are very mature. Why would you want to reinvent the wheel when you've got something very mature on the market? And it will also come with the necessary support, so as your business scales, you know you've got continuous improvement of the product you're leveraging, and it grows with your business too. Conversely, when do you build? If we relate it back to those specialised use cases, the nuanced, environmental things that may exist in your organisation, that's potentially where building something is going to make more sense. Take the networking example again: it's going to make sense for you to build something there, because it's going to be highly niche, nuanced to your environment. My opinion. People may counter that.

So, I think it would definitely be remiss of me if I didn't touch on [chuckles] things to consider in highly regulated environments. Actually, a lot of these things don't necessarily have to be solely applicable to highly regulated environments. I think these are just good things to be thinking about when you are leveraging AI within your organisation.

So, compliance with regulations. We've got data privacy laws, if we think of things like GDPR in Europe or HIPAA in the US. They play an absolutely crucial role in how AI can be used. And we need to ensure that AI is compliant with these laws, especially when we are handling sensitive data, and if we're going to use that data to go on to make decisions that impact an individual, a customer, or whatever the case may be. So, data privacy.

Then, of course, you have transparency and explainability. Many regulators do require AI decisions to be explainable. So, it's really important that AI systems can actually provide clear reasons for how they've arrived at a particular decision.

Auditability goes without saying. We need to have audit trails; having auditability of what your AI systems are doing should just be something you do, and that shouldn't matter whether you're in a highly regulated environment or not. So, auditability.

So, then here I've mentioned bias and fairness. There's heightened scrutiny on bias in AI models. And what we don't want to do is build models that discriminate against protected characteristics like age, race, or gender. We don't want to proliferate that, right?
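As one rough illustration of what a regular check for this could look like in practice, a simple recurring job might compare outcome rates across a protected attribute and flag large gaps. The sketch below assumes pandas is available; the column names, data, and the 20% tolerance are illustrative assumptions, not a recommended standard.

```python
# Rough sketch of a periodic bias check: compare approval rates across a
# protected attribute and flag large gaps. The column names, data, and the
# 20% tolerance are illustrative assumptions, not a recommended standard.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
if gap > 0.2:
    print(f"Warning: approval-rate gap of {gap:.0%} across groups; review the model")
```

A real programme would go much further, with agreed fairness metrics and legal and ethical review, but even a simple recurring check like this can surface drift early.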
So, we need to ensure we're not perpetuating these harmful biases where you might be making decisions, I don't know, like hiring, lending, or insurance decisions; it really is important to think about. So, ensure that you're implementing strategies to regularly check and correct AI models for bias. I can't say that enough. And then ensure compliance with both ethical and legal standards. We know that the landscape is constantly changing, so it's important to stay in touch with that and up to speed.

Risk management and AI reliability. AI systems must be thoroughly tested for reliability. Again, I don't think this is actually specific to a highly regulated environment. You want to know that your system is reliable, any system for that matter. And it's important that there's thorough, frequent testing happening with your systems, because failure to meet operational or safety standards could lead to legal or reputational consequences, which you wouldn't want. So, you want to continuously, continuously assess the risks associated with AI and the decisions it's making, particularly when those decisions impact our customers or the folks that decisions are being made about. And let's not forget, I talked about reliability testing, but I think stress testing is also important to do. I'm sure we do that with our normal applications, depending on what industry you are in, but stress testing is also very, very important.

We have data security and protection mentioned here. Where we are dealing with sensitive data, actually, wherever we're dealing with data, private data, sensitive data, it's crucial that you implement data security to protect that data from threats, whether that's cyber threats or breaches. And so, I would say, consider where you can use strong encryption techniques and role-based access control. I have personally seen, and I mean, I'm not at liberty to talk about what it is that my current organisation does, again, I'm just talking about personal experience, but it's something that I've seen where there has been a breach or non-compliance. So, it is real. It definitely is real and must be considered.

Whilst I didn't put it on here, I did just want to talk about continuous monitoring and oversight. So, ensure that you are continually monitoring and providing oversight of what it is that you're leveraging AI for. You want to make sure that that's existing within an established c