
Building trustworthy AI — with Lofred Madzou

What does it take to make AI truly trustworthy? Pinja and Stefan talk with Lofred Madzou, CEO of Skildmind, about responsible AI, governance, and strategy, plus why organizational culture matters more than hype. Lofred also shares his inspiring work with the African Olympiad Academy, nurturing the next generation of science innovators across Africa.

[Lofred] (0:03 - 0:10)

The biggest challenge is always when you have a shallow, poor understanding of what AI can really do. It's going to get better, so I'm really hopeful about the future.

 

[Pinja] (0:14 - 0:23)

Welcome to the DevOps Sauna, the podcast where we deep dive into the world of DevOps, platform engineering, security, and more as we explore the future of development.

 

[Stefan] (0:23 - 0:33)

Join us as we dive into the heart of DevOps, one story at a time. Whether you're a seasoned practitioner or only starting your DevOps journey, we're happy to welcome you into the DevOps Sauna.

 

[Pinja] (0:38 - 0:48)

Welcome back to the DevOps Sauna, and we have a really special episode here today. I'm, of course, joined by my co-host, Stefan. Welcome, Stefan.

 

[Stefan] (0:48 - 0:49)

Thank you, Pinja. Nice to see you again.

 

[Pinja] (0:50 - 1:01)

As always. And then we have a very special guest today. We're so happy to welcome Lofred Madzou, who is the CEO at Skildmind and an expert in AI governance and strategy.

 

So welcome, Lofred.

 

[Lofred] (1:01 - 1:02)

Thank you very much for having me.

 

[Pinja] (1:02 - 1:12)

We're really excited about this. And you're working for Skildmind. You're the CEO of the company and part of the founding group of it.

 

And could you please tell us a little bit about what Skildmind is doing?

 

[Lofred] (1:12 - 1:45)

Yes. In a nutshell, we help enterprises, medium-sized enterprises, in their AI transformation journeys. All of us have read the reports about the low adoption of AI in enterprises, all the challenges, and I have a lot of experience in this space.

 

I was working before in a startup named TruEra that was offering tools to test, debug, and monitor AI models to enterprise clients. So I've been in this space for a few years and I've learned quite a lot. So I'm helping them now across the AI transformation journey.

 

So we have a full set of capabilities to help them be successful in their journeys.

 

[Pinja] (1:46 - 2:06)

Also one tidbit that we're going to talk about a little bit later in the episode, because you're also part of the founding team of the African Olympiad Academy. We're very interested in this, but let's get back to that. But AI governance and strategy, what was it that got you into this path?

 

And how did you end up where you are right now with Skildmind and the African Olympiad Academy?

 

[Lofred] (2:06 - 3:21)

I guess part of it is luck and part of it is passion. 10 years ago, exactly 10 years ago, I was a student at the Oxford Internet Institute. I was doing data science and philosophy back then.

 

And one of my professors, his name was Luciano Floridi, was the leading philosopher in the space. He's quite well known in the philosophy of information, the ethics of AI, and so on. He was my professor and he's the one who introduced me to the AI governance space.

 

Back then, it was really, really early on. And what he made me realize is that as fantastic and as powerful as AI is, it's creating very important governance challenges, regulatory challenges. It comes with some level of risk around privacy, around bias, and so forth.

 

That's when I realized that this space was going to grow. This was back in 2015, 2016, so quite early on. And during this class, but also the other classes at the OII, I think I owe much of my AI career to the OII, to be frank.

 

I found the defining question of my career, how to make AI more trustworthy. And for the last 10 years, I've been exploring basically the same question from different angles at different organizations, but it really comes down to this. And obviously, governance is a very important component of trust, right?

 

That's how I got into the space.

 

[Pinja] (3:21 - 3:23)

That's a very interesting take.

 

[Stefan] (3:23 - 3:45)

Super interesting, because usually when I hear people talking about philosophy, they end up in something completely different. The more human side, not so much about governance. But it's really interesting to see philosophy go into governance all of a sudden, because governance is usually lawyers, or somebody who studied law, or security people who go that far. So I really love the approach from philosophy instead.

 

[Lofred] (3:45 - 4:42)

No, that's a good point. I think something is often overlooked. People ask me, what's the relationship between AI, philosophy, and governance? It's actually deeply, deeply related.

 

If you think about AI as a field, literally, it's the science of making machines perform human tasks, right? And that requires some level of intelligence. It turns out that philosophers have spent thousands of years discussing what is intelligence, what is a skill, how do you acquire a skill and so forth.

 

And many of the AI paradigms, AI, symbolic AI back then, and now with deep learning and so on, are built on some deep-rooted philosophical assumptions about what it means to be a human being. So philosophy actually gives you a very powerful, I'll say, angle and toolbox to really address these questions and think deeply about what these systems can really do, and then what should we do with these systems, right? And that's where the ethical part comes in.

 

That's where governance comes in. So it's deeply related.

 

[Stefan] (4:42 - 5:04)

And I guess a lot of people are sort of, it might sound harsh, but piggybacking on this at the moment, talking about ethics and responsible AI and so on. I think I had a colleague who said, we cannot talk about ethical AI because it's uncontrollable. We can talk about responsible AI, and it has this taint of philosophy as well.

 

I think it's super interesting that we're getting there instead of just adding AI to everything and all is good.

 

[Lofred] (5:04 - 5:12)

Really interesting. Agreed. We have to be a bit more thoughtful about where we should use AI, for what purpose, and for what benefits.

 

Yes, exactly.

 

[Pinja] (5:13 - 5:26)

We're talking about trust here. And if we think of something that we cannot quite define, as you say, it's hard to define what humanity is. How can we define trust in a system that we might not understand fully?

 

[Lofred] (5:27 - 6:52)

It's a very good question. So let's start with a very basic definition. What I mean here by building a trustworthy AI system is simply building a system whose behavior is consistent with a set of requirements or expectations.

 

So it's much more a method than a specific philosophical definition or school of thought. Because again, ethical expectations will vary depending on the culture, the context, the industries, and so on. What we need to agree upon is that the system is never fully autonomous, and we need to decide what we expect from that system.

 

Not only the designers of these systems, but also people who are going to be affected. And obviously, when you're thinking about regulating any piece of technology, you're thinking about policy and law and regulation. And AI is arguably one of the most powerful technologies ever invented.

 

So there's a pressing need to make sure that its behavior is consistent with our expectations. And again, the second question is whose expectations, right? And as I said, that's a collective discussion.

 

And often when things go wrong in governance, it's because one specific set of stakeholders was not involved in the design, or we didn't take their interests into consideration in deploying the system. And then we have potential adverse effects. So governance is there to make sure that we are comfortable with the outcomes of these AI systems in our society.

 

[Stefan] (6:53 - 7:16)

Because we always hear, oh, I don't trust this AI, or how can we do all of this? Can we rely on it? I might be over-interpreting your words, but we still need to have the human in the loop somewhere.

 

But we might not stop the AI and do double checks all of the time. We just need to be aware of where we can intertwine or intercept or pull the big handle and stop everything.

 

[Lofred] (7:16 - 9:32)

Yeah, that's a good point. Now we're getting to the how. Now that we have a definition, how do you make sure that, okay, let's assume that we have these requirements.

 

And first of all, what's the mandate to make these requirements in the first place? Well, in democratic societies, we have elected officials. And I started my career working for the French government.

 

So I'm going to address your point, but I want to give you an articulated answer. Working for the French government, I was one of the co-authors of the French national AI strategy, advised French MEPs, and later on other governments, on AI regulation, horizontal but also vertical in specific industries. For instance, I've been involved in looking at facial recognition in airports and other industries and so on.

 

So you always need a mandate first to set these expectations. And depending on where the question is being asked, sometimes it's really at the national level and you have all the policymakers involved. Sometimes, as I said on facial recognition, it's much more at a vertical, industry-specific level.

 

So you often have industry standards involved, you have business actors, you have civil society. And my job, initially at the French government and then at the World Economic Forum, was to bring together a community of stakeholders that have various interests around a specific AI system in their environment, and to make sure that we come to some level of agreement about what the requirements should be, what the guardrails should be, and then how we're going to ensure that AI is behaving properly.

 

And then we have a lot of things in our toolbox. Obviously, we have policies, we have processes, we have standards. I'm thinking about ISO, for instance.

 

We can also have corporate guidelines directly at the level of an organization. We also have some level of tooling. So I moved from high-level recommendations from governments, then advised other governments at WEF.

 

I worked with private sector companies, leading big tech, on establishing their responsible AI departments. And there it was really about, okay, now we have some level of definition, how do you make sure that our systems are consistent with them? So we need processes, we need tooling.

 

And then I worked at TruEra. What TruEra was doing was literally building tools to make sure that AI systems were consistent with expectations and that they were not biased. Okay, what do I mean here? Because again, that's a piece of software.

 

So how do you basically close the gap between the ethical or the legal definition and the technical underpinnings? How do you make that translation? That's the work we've been doing for the last 10 years.

 

[Pinja] (9:33 - 10:14)

I'm still thinking about bridging the gap from having a social contract as the basis of building trust in somebody or something, where we basically need transparency, rules, and accountability, and now moving that, as you say, into organizations, into enterprises, and we're talking about scaling it big. So is this about building these kinds of trust and governance structures into an organization? How is that being interpreted by the industry itself at the moment?

 

When you think about enterprises that come to you and your company and your associates, do they feel that this is really a need, or is there still some education needed in the industry on this?

 

[Lofred] (10:15 - 12:00)

It's a very good question. I will say that now they've come to an agreement that it's a pressing need. So if we look back at what happened the last few years, there has been fantastic progress in AI.

 

And we experienced the ChatGPT moment where GenAI became public. We knew about GenAI as insiders, but the broader public found out, including many business leaders who are not AI experts. And to your point, initially, GenAI gave this impression of easy access, easy use.

 

All we had to do, basically, was build an enterprise version of ChatGPT within my company. Let's say I'm an insurer and I want to improve my underwriting process, or I'm a bank and I want to improve my fraud detection, whatever the use case is. But once you try to replicate such technology within your organization, then you have to take into account the regulation.

 

You have to take into account the reputational risk. It's no longer a single user using ChatGPT who doesn't care much about the output or the outcome. Here, it's really highly consequential.

 

And they realized quickly that getting access to the technology or a model is not the hardest part. It's building the governance framework around all this. That's one level, but the biggest challenge is the organizational shift that you should put in place to maximize the benefit of AI.

 

And this is really challenging because there's no really clear playbook for this. It's one thing to hire a few data scientists; it's another to embed them within your organization in relation to your risk functions, for instance in financial services, working closely with business functions and making sure that AI adoption is growing and creating value.

 

We'll get to this later, because there's been a lot of frustration in the industry as well: we invested so much in AI and got so little in return. And when I look closer, well, you invested a lot in some tech and some provider and some data scientists, but I'm not sure you invested a lot in your organization to leverage AI.

 

[Stefan] (12:00 - 12:25)

I guess that sort of relates to the whole strategy aspect, because we see a lot of customers asking about AI, but when we start asking about their specific AI strategy, it sort of trails off into, yeah, we want... And they can't really give you a clear response on their strategy. That for us is super important when we talk to customers: you need to think about your strategy.

 

And it's hard to build AI into everything if you don't have a strategy because you don't know where exactly you're going.

 

[Lofred] (12:26 - 13:55)

That's a super important point. And again, I'm seeing this again and again with my clients and prospective clients. That's why I recently put together an article, I'm going to reference it later, basically on the five levels of AI maturity, based on my experience over the last five to seven years, especially with GenAI.

 

And often, when I get into a new company, a new client, they have some level of AI experimentation going on, some POCs here and there, but rarely do I come across an organization that started with a strategy. What's the game plan? AI is just a tool.

 

What is the outcome that you expect? What are your KPIs? What is the incremental value you're trying to achieve?

 

What are these high priority use cases? What is the governance framework in place to make sure that things don't go sideways? How do you prevent data leakage?

 

How do you ensure that you have the right security involved and so on? In many instances, these questions, which are really important, are somehow treated as afterthoughts, because AI right now is being led by champions within specific departments, often people who are more technically inclined, and they go for the shiny use case that can get the attention of the managers or the executives. And there's pressure from the board and pressure from the media to do something. So people do stuff, right?

 

But often it's not thoughtful enough. And that's where Skildmind comes into play. We're trying to help them create real value with AI systems.

 

And I often say, we can build GenAI solutions with you, and we have the expertise to do so, we have the team to do so, but to what objectives? What do we want to achieve with this? That's, for me, the most important thing.

 

[Stefan] (13:55 - 14:32)

And I actually caught, like you said, a governance framework, and I think that's super important as well, because as you said, a lot of people are sort of jumping into some sort of GenAI. And I remember when GitHub Copilot came out, everybody wanted it, but it always ended up with legal, like, we don't know how to handle this. You need a sort of governance framework for accepting AI in, because you will get a new provider tomorrow and the day after and the day after.

 

Exactly. You need to be able to quickly go through that setup and say, all right, looks good, we can go, instead of legal coming back to you with a response three months later. We cannot sit around and wait for that.

 

[Pinja] (14:33 - 15:21)

And that's one of the problems, because if we think of a development organization, we think of the developers and whether they're following what's going on. A couple of years ago, we were at this breakfast event, and I asked the participants, how many of you have an AI strategy? And out of the 30 people we had in the room, maybe five raised their hands.

 

And then I asked the follow-up question: for how many of you is the AI strategy to not use it? And I think it was three out of those five in this group of 30. And I don't want to make it sound like the developers don't know what they're doing, but they're going to start using the tools that are available.

 

And if you don't harness their internal need and motivation to start using these tools toward what you actually want to build, is that trustworthy? And how do we build trust in an organization like this?

 

[Lofred] (15:21 - 17:02)

That's a very good example, because you mentioned developers. Making sure that you have alignment between the AI initiatives in your organization and the executive objectives is really, really critical and often missing initially, to your point. Because you have users that get exposed to AI in their private life or at work conferences and are eager to test AI, but your organization is not designed to create that experimentation space, let alone to go into production later.

 

So it creates frustration on both sides. Frustration for the champions: they want to use it, they want to create value, they want to be part of this movement. And also for the executives, because they want to create value, but these experimentations are not delivering, despite them sometimes enabling some of these champions initially.

 

And part of our work is always trying to reconcile these specific initiatives and make sure that they're highly connected and endorsed by the executives in collaboration with other key stakeholders, including the developers. And one thing I'll always say is, the biggest risk for me is, yes, wrong use of AI that might have some adverse impact on your brand or your revenue and so on. But the worst risk is not using AI at all.

 

Despite the hype, AI is creating real value. We keep speaking about the laggards, but the real leaders in that space are already creating value. And I can tell you from experience that organizations that do it well are already outperforming their peers on key metrics. I'm talking about revenue here.

 

I'm talking about employee augmentation, productivity, and so forth. Not like vanity or shiny metrics. But it requires huge transformational efforts in the organization.

 

I'm not sure that everyone is ready for this discussion sometimes.

 

[Stefan] (17:02 - 17:23)

And taking the whole risk perspective, if we don't have the strategy and the governance and everything, we go back to the same issue as we have with everything in IT. Then you have shadow IT. Well, shadow AI is a thing as well.

 

If you cannot get your account, you're going to get your personal account, and then you're going to copy and paste all of the code from the company into that personal account. And who knows what happens then? You need to be ready for this.

 

[Pinja] (17:25 - 17:49)

I'm thinking of what you and your company, Skildmind, are doing at the moment. Looking at the scaling, you said that you often start with a maturity assessment. Where does the company stand at the moment?

 

How do you make all this trust building scalable for a company like this? We're talking about a big enterprise. We might start from the management, but how do you build that trustworthiness at a scalable level?

 

[Lofred] (17:51 - 20:34)

Let's build on the framework that we put together based on our key insights and engagements with our clients. So we have five levels, going really fast. We have the experimental level that everyone knows.

 

Basically, there's no direct leadership. I mean, there's some leadership endorsement, but to your point, it's either shadow AI or limited projects or ongoing fragmentation. Once we get to strategic, to address your point, what really matters for us is to create alignment on where we think AI can create value in the organization.

 

If we feel that there is no alignment even on this, so what is the hypothesis here? Let's say in your bank, right? What are these high-impact use cases?

 

Executives have some thoughts. The teams involved in these divisions, are they part of a discussion or not? We need to bring them in.

 

Look at the hard numbers, how your business is doing on X, Y, and Z. Often, we're trying to build this level of task force together. There's some resistance depending on the organization, depending on the culture.

 

Some organizations are really hierarchical, but AI is a different beast. It requires much more collaboration. It's not just something to plug and play.

 

That's not true. If someone comes to you and says, I'm going to build enterprise AI for you, and that's plug and play, he's lying. This is not true.

 

This won't work. A lot of the time, this won't work. You're likely to have security coming in a few weeks later, rambling, being really upset because something went terribly wrong, or risk and compliance putting on the brakes.

 

It won't work, right? You need to gain the trust of key stakeholders and put that journey together. The first step is this.

 

We make sure that we have the right stakeholders and alignment on where we think AI can create value, which comes down to specific high-impact use cases, and then an incremental strategy. Often, what we recommend to large enterprises is to create a dedicated structure within the organization to handle AI initiatives, called the AI Center of Excellence. To your point, that's what we have been doing for a little while: helping companies establish such centers, with a head, with different key functions within the center, connected to the other key departments.

 

And that's the point. The center is not an innovation center. It's there to do two things, drive adoption and drive education.

 

Because another thing I haven't touched upon yet is the cultural resistance to AI. Not only, obviously, from some employees that are not really technical, but also from the most technical ones: okay, that might replace me, or it was not designed with my involvement. Someone made a decision and said, now you have to use this tool, and so on.

 

This is the thing, and really, it comes down to many workshops. So it's really low tech. It's like workshops, engagement.

 

The tech is there at some point, but I always try to push back a bit. So, well, these are the use cases. That's how we should do it.

 

Let's go. I want to make sure that there's some level of strong endorsement from key stakeholders on these assumptions before running with them. It's a bit of a long answer, but I wanted to give you the full picture.

 

[Stefan] (20:34 - 21:21)

I think it's interesting that you mentioned the perspective of people thinking their job is going to be taken away by AI. I've yet to see it really, really happen. There have been experiments where Klarna fired 500 people in their support department, and one year later, they had to bring at least half of them back because the human touch was completely gone from their support.

 

AI will take us so far, but we're not yet at the level where it's actually going to fully replace people. There might be some jobs that will be replaced, but they would probably have been replaced sooner or later by some other automation. But I think, yes, AI will.

 

It might speed up or be a catalyst to those things happening. But if I worked as a software developer, I wouldn't be afraid that it would replace me tomorrow, or in five years, or ten years. There are so many things that still need manual intervention.

 

[Lofred] (21:21 - 24:26)

Agreed. And also, I think part of the confusion, and it goes back to philosophy, is that we're so impressed, and for good reasons, people are not crazy, by the power of AI on the consumer side. As individuals, in the comfort of our homes or at work, we're playing with whatever apps, engaging with these chatbots or others, and getting amazing responses. But that's really an individual interaction with a really low-stakes outcome, basically.

 

And they're like, yes, AI has been doing wonders. But when you work in an enterprise, your core challenge is often organizational. You have a lot of people to coordinate, and collective action requires what is central here: the human element.

 

It's self-evident when you really think about this and step back and say, well, obviously, if I have thousands of people in the loop to do X, Y, and Z, I need to have some level of collective coordination, engagement, and buy-in to deliver value. Somehow, people often miss this. So you have a typical situation where things go wrong.

 

You have a provider that comes in and claims, hey, I can do this thing, you won't need any other provider. And it turns out that, well, actually, even just to test, even to run the experiment, you need to have the buy-in of various stakeholders.

 

And then scaling in an enterprise is really challenging because of the governance, because of the overhead, because of the expectations when you're an established business. You care about the outcomes of your systems, and the systems are still deeply probabilistic. So one thing I'm always trying to do as well is set clear expectations on what we expect from these AI systems.

 

It's only once we have an agreement about what they can really do, what the core capabilities are, that we can decide what we should do with them. What are the most profitable use cases? And often, what I'm seeing is that the biggest challenge is always when you have a shallow, poor understanding of what AI can really do, or what AI systems can do specifically in one industry.

 

The domain expert knows his business really well, but he doesn't fully understand how this software is going to help him. And vice versa, some very technically talented folks within an organization have a shallow or poor understanding of what the day job of this underwriter really is, and so on. And the job is not just a set of tasks.

 

That's a confusion as well. It's like, if I compress the job description, what I have is just a set of tasks that I can then automate. The job is much more than this.

 

That's often the challenge that organizations are facing. They are rushing to solutions rather than starting with a problem and saying, well, from this problem, what solution can help us address it? They already have a toolbox.

 

They're not in a startup environment, yet it's a solution in search of a problem. I see many, well, not many, but a significant number of executives that feel pressure to use AI.

 

It's even a red flag for me when they speak too much about AI. I'm like, what is your challenge? What is your problem?

 

Try to stick to this. What model should we be using? Please, let's talk more about the problem.

 

And sometimes I leave meetings, one-hour meetings, where I haven't talked about any solutions. It's like, yeah, we'll come back to you. I'm just taking notes.

 

I just want to understand clearly what the problem is.

 

[Stefan] (24:27 - 25:09)

I think that sort of ties in with some of the things we see. When ChatGPT came out, all of the products in the market, you could just see an AI sticker all over the place. Now, when we get introduced to potential partners or people who want to present a product to us, you can see they've actually had some time, and now they actually know what AI will bring to the table here.

 

It could be looking at the logs, and it will bring up things that are connected to it, that tie in and show some sort of reasoning behind why this thing failed at some point. So it builds the context for you instead of, well, AI can do X, Y, and Z. Well, you could do that with a fuzzy search or something like that as well.

 

Now we see the actual good use of AI, I would say.

 

[Lofred] (25:09 - 25:22)

It's going to get better. I still think that we're early in the process, but I'm really hopeful about the future. I think, again, there's a growing frustration and now there are some people who want basically a real return on their money.

 

So, you know, the market should sort that out.

 

[Pinja] (25:22 - 25:47)

And let's speak about the future as well. And we already gave this kind of a sneak peek in the beginning of this episode on another initiative of yours at the moment. So let's talk about the African Olympiad Academy.

 

And you're part of the founding team. And when we were preparing for this episode, Lofred, you mentioned that you were going back to school. And this is a really interesting initiative of yours.

 

Could you please tell us about what this African Olympiad Academy is about?

 

[Lofred] (25:47 - 27:40)

Oh, yes. So I'm really excited. I'm really proud about this initiative.

 

So it's basically a residential high school located in Kigali. It's a Pan-African high school for top African math and science students from across the continent. And the curriculum is based on Math Olympiad pedagogy.

 

For the listeners who are not familiar with the Math Olympiad, it's arguably the most prestigious science competition in high school. It's like the Olympics of mathematics. And it turns out that performance at the Science Olympiads is the best predictor of future exceptional impact in science.

 

You have more Math Olympiad alumni that have, you know, made scientific discoveries, got Nobel Prizes, became executives at top AI companies than MIT graduates. It's not even comparable. So what we're trying to do with the school is simple.

 

Africa is the largest pool of untapped Science Olympiad talent, exceptional talent. And we're creating this environment to identify them first, through Math Olympiad style competitions, and then nurture them and bring them to the school. And for them, it's fully paid for.

 

It's a bit like a football or sports academy. They come to the school because they're exceptionally gifted, and we put together an environment where the talent can shine. And we help them take it to the next level, to the benefit of their communities and the continent at large, because we really think that the smartest minds should work on the hardest problems.

 

In Africa specifically, we have many key challenges, and we want to make sure that we are supporting the next generation of innovators. So that's what the school is about. And the last thing I want to say is that I'm learning so much. I was a science major in high school, but nowhere close to this level, just to give you the numbers for perspective.

 

To get 30 students, we tested thousands of them. To make it into a national team, you know, a Math Olympiad national team, you need to be among the top six math students in your country. I don't know how good you were at math back in high school.

 

Not good.

 

[Pinja] (27:41 - 27:43)

No, not like that. No.

 

[Lofred] (27:43 - 28:17)

I was nowhere close to the top six. Are you kidding me? I mean, I'm not sure if I was in the top six in my school or even my city, like, you know, let alone the country.

 

And yeah, so it's a school for exceptionally gifted African science students. And my role as director of partnerships and co-founder is to build a network of mentors, supporters, and funders that are going to be instrumental in helping them realize their full potential across their journey, because high school is really the first step. As you know, most of life happens after high school.

 

So even though we want to give them a strong foundation, I'm there to make sure that they have the support they need for their journey after.

 

[Stefan] (28:17 - 28:46)

It's really interesting, because I guess they're super driven by knowing more about their field. They're not driven to win a prize or anything. They just want to know more, because when you reach, let's say, the top six in the country, you are so geeky that you're so far into the field.

 

You don't care about the prizes. And long term, that is way more sustainable for you. There's a lot of science behind that in psychology as well.

 

It's like finding the right people that are driven by knowing more. I think that's the key to bringing success to that.

 

[Lofred] (28:46 - 30:03)

That's a really good point. And actually, I was asking myself, why is it such a strong predictor? Think about this.

 

It's high school. How come a high school competition can basically identify future Nobel Prize winners? Just to give you a perspective, half of the founding team of OpenAI, half of it went through Science Olympiad, Mathematics or Informatics.

 

Mira Murati, ex-CTO of OpenAI, Math Olympiad talent. Think about Alexandr Wang, CEO of Scale AI, Math Olympiad talent. Andrew Ng, superstar, Coursera, DeepLearning.AI, gold medalist at the Math Olympiad.

 

Perplexity, arguably one of the biggest competitors to Google. The co-founders, Math and Informatics Olympiad talent. And I can go on and on and on and on.

 

I was asking myself, what makes the Science Olympiad so different? Two things I think are really important to your point. The first one is they always deal with new problems.

 

It's not like the math that you and I experienced, where you have a theory and then you apply it to a problem. They deal with new problems every day.

 

Well, that is what research at its very best is: basically solving an unsolved problem. So if you do it when you're 15 or 16, you get into good habits, right? And the second thing is that passion for learning.

 

You know, don't get discouraged. You keep going with the grind. I don't know, but I'm going to find a way.

 

That, I think, makes them very, very special. And me being around them, frankly, I'm just impressed. I'm being lectured by a bunch of 16-year-olds on mathematics.

 

[Stefan] (30:03 - 30:05)

You're happy to feel stupid in the crowd.

 

[Lofred] (30:06 - 30:16)

It's okay. It's okay. I don't mind.

 

I knew I wasn't a genius, but now it's pretty clear. I think it ties well.

 

[Stefan] (30:16 - 30:45)

I do Olympic weightlifting as a hobby on the side, and when I talk to the different coaches we have, what they're looking for is people being coachable. They need to have the passion for doing this, knowing that it will take ages of their time, but you need to be coachable and be able to receive feedback, understand the feedback and the process, and improve upon that. And I guess if you're at that high level and you want to be able to solve these unsolved mysteries, to put some sprinkles on top, then of course you're interested in getting this feedback and improving all of the time.



[Lofred] (30:46 - 32:15)

Definitely. Definitely. And one thing also, just to finish on this: it's still related to AI for me because, again, I'm passionate about AI, but I'm also slightly concerned about its impact on society in the long run. Not so much the future of work, as I said, like jobs being replaced.

 

I think part of it is just storytelling, but it fundamentally changes education for me. K-12, but also continuous learning. And one thing also, tying this back to what I'm doing with my clients, because these things are related: in building this AI Center of Excellence, I really want to drive continuous education.

 

And I think the way we thought of education before is basically that it is the territory of academic institutions: you've been trained in the early part of your life, and then you go into work, you still kind of learn, but that won't work anymore. The pace of innovation and the nature of these systems will require that we continuously learn and grow and engage. And for me, this school is really a blueprint.

 

I'm learning what the future of education is. What's the future of K-12? Because the classes they have, the way it is structured, are very different from any classic high school.

 

The school will be Cambridge accredited soon, so they will still get a high school diploma and everything. But fundamentally, the approaches that we are pioneering now at the school are very different from classic education. I hope that we can not only share these learnings, but also scale this across the continent, because we're the continent of young people.

 

I don't know the numbers anymore, but I think 50% of the population in Africa is below 25 or 20 years old or something like this. So it's really important what we do there.

 

[Stefan] (32:15 - 33:05)

It's slowly turning into a philosophy session here, because when I look at my parents... We'll go back to enterprise, sir. Yeah, exactly.

 

When I look back at my parents, they had to do everything by memory when they were in school. When I went to school, yes, you could look things up. I think the key thing I learned in school was how to look things up.

 

Remember where to go and look. And even in schools in Denmark now, we have the discussion of, are you allowed to use GenAI during your exams or when writing your thesis or whatever? We do have these progressions in our society as a whole.

 

We have so much information available, but how do we find the right stuff? How do we sort of analyze if it's correct or if it's fake or so on? I think that's the point we need to find out about AI.

 

When can we fully trust the responses and when is it giving this good creative energy to us?

 

[Lofred] (33:05 - 34:36)

Can I just quickly react to this? Because you made a very important point. Again, linked to the school, but linked to SkildMind as well.

 

What I'm trying to do there is help... no, basically we help our students acquire these key meta skills: problem solving and critical thinking. Because to make the most of AI, you need to have a clear understanding of what its capabilities are, where it fails, what the failure cases are, how to spot them, when to step in, when the human should be on the loop, how to correct.

 

All of that expertise, you know, comes with critical thinking, problem solving, and so on. We need to make sure that we do more of this. The kind of learning by heart, I think, will die soon, if it hasn't already.

 

And what I'm trying to do, again, going back to Skildmind, is say, yes, we can build a ton of GenAI and do many shiny things. Let's pause for a minute and think deeply about the problem and build the right solution for that problem. And frankly, I rely more on philosophy than even data science for this.

 

It goes back to what I studied 10 years ago. And the most important thing I've learned is to ask the right questions. What are the assumptions?

 

What is this hypothesis? What are the conditions under which these things hold? Well, logically, this doesn't make sense.

 

And so on. And how do you look for evidence? How do you build this argument?

 

I've obviously never expressed this fully when I'm engaging in a business discussion. But in the back of my head, this is really what's going on. I'm taking as many notes as I can, because when I go back home and read them, it's like, wait a minute, they claim that AI should do this and this and this vis-a-vis KPIs.

 

That's what they tried. Well, I'm not sure that they really did the right experimentation to show this. Or at least, if that was the goal initially, well, they drifted.

 

That's the kind of thing that surfaces when you really think critically.

 

[Pinja] (34:37 - 34:45)

That's amazing. And I think that's all the time we have for this. Lofred, it was amazing to have you with us and for you to bring your insights and your passion.

 

So thank you for joining us today.

 

[Lofred] (34:45 - 34:48)

Thank you for having me. It was a great discussion. I really enjoyed it.

 

Yes.

 

[Pinja] (34:48 - 34:48)

And thank you, Stefan.

 

[Lofred] (34:49 - 34:49)

Thank you.

 

[Pinja] (34:49 - 35:02)

All right. Thank you, everybody, for tuning in and welcome back to the sauna for the next time. We'll now give our guest a chance to introduce himself and tell you a little bit about who we are.

 

[Lofred] (35:03 - 35:07)

Hello, everyone. My name is Lofred Madzou. I'm the CEO of Skildmind, your enterprise AI partner.

 

[Stefan] (35:08 - 35:13)

I'm Stefan Poulsen. I work as a solution architect with focus on DevOps, platform engineering and AI.

 

[Pinja] (35:13 - 35:18)

I'm Pinja Kujala. I specialize in agile and portfolio management topics at Eficode.

 

[Stefan] (35:18 - 35:20)

Thanks for tuning in. We'll catch you next time.

 

[Pinja] (35:20 - 35:28)

And remember, if you like what you hear, please like, rate and subscribe on your favorite podcast platform. It means the world to us.

