

Driving real impact with AI Agents: A blueprint from Högskolan Väst | Högskolan Väst & Eficode

Discover how Högskolan Väst, in collaboration with Eficode, leveraged Microsoft Azure to implement enterprise-grade AI Agent architecture for real-world impact in higher education. The session covers the design, deployment, and scaling of AI solutions that streamline campus operations, personalize support, and transform digital interactions for students and faculty. Learn from the project's key decisions, challenges, and practical lessons to start building your own secure, responsible AI Agent ecosystem.

Transcript

[intro jingle]

[Tobias:] Thank you. So, great to be here in Copenhagen. Before we get started, I'd really like to get to know my audience just a little bit before we kick off with the implementation and all the technical stuff. So, a quick show of hands: how many of you have ever studied at, or taken a course at, a university? That's excellent. Keep them up. Don't be afraid. Just one more. And it's okay to lie on this question if you're feeling... but how many of you have ever failed an exam? Keep your hand up. Yeah. Okay. So, this solution is for all of you. This is for our students.

[Peter:] Let's talk about who we are.

[Tobias:] Who we are, yeah. My name is Tobias Ekenstam. I've been with University West, Högskolan Väst in Swedish, for almost 30 years soon. I do a lot of work with collaboration and innovation, always with a focus on efficiency. I love the Agile values, which means that I really like working with our students and our teachers and our researchers. And right now, of course, a lot of focus is on AI and what to do with this technology. And I have with me my friend Peter.

[Peter:] My name is Peter Drougge. It's not Druggie, although every time I go to the US, that is unfortunately what they do. "Mr Druggie!" Like, no! [audience chuckles] I've been a development architect for the better part of 27, going on 28, years. Still learning every day, because we are in the best business there is: free learning, and you get paid for it. I have the benefit and fortune of leading a great team at Eficode that does Azure and AI. And they are phenomenal.

[Tobias:] So, a little bit about us. This is kind of mandatory, we have to tell you who we are. We're actually quite a small university in Sweden, the fifth smallest. And like it says here, we have about 700 employees, 14,000 students, four institutions, et cetera. One of the things that we are really, really proud of is that last point there: 86% of our students have a job within about one to one and a half years after graduating, which is actually quite a high number. So, we're really proud of that. It's a sign telling us that our programmes are appreciated by the industry, that our students are prepared for working life when they come out. So, that's something that we really like.

We are situated in Trollhättan, about 100 kilometres north of Gothenburg, so we're on the west, or the best, side of Sweden.

[Peter:] I live in Stockholm.

[Tobias:] There's a friendly rivalry, if you didn't know it, between the west side and the east side, and the west is the best side.

[Peter:] I actually agree with him, so that's a problem.

[Tobias:] [chuckles] That's good. Thank you. Our motto at University West is that together we change. And that's kind of our philosophy: working together with the surrounding community, and of course within ourselves, collaborating between teachers, students, technical staff, et cetera. That's the key to success. So, we are very much focused on learning together with everyone else.

When we started looking at AI and what's happening in the industry, of course it's a rapidly evolving technology, but looking back, it's nothing new for us. We have adapted to new technology all our lives: the calculator, the computer, the modem, the internet, and now it's AI. It's just another tool, right? But with AI, it kind of feels different.
It used to be that with all of these other technologies, we watched the landscape, we watched what was happening around us, and we thought: okay, that's new, that's hot, that's shiny, let's wait a bit. Let's wait until it's mature, until it's secure and stable, and maybe a bit affordable as well. And then we pick the cherries from the cake. We just don't have the money to jump on every new shiny toy, right? I guess that's the same for all of us, unless you're in a really big company somewhere with lots of money to spend. But that's not us.

So, with AI, it feels different. With AI, there's this huge hype about it, but there's also some kind of reality in that it's actually a very, very useful tool. But we have to understand how to use it efficiently and properly, and do it smart. So, it feels like this time we can't just sit back and wait until it's mature, because then we will be left behind, right? That's kind of a big challenge. How do we start looking at this, even with our very small resources?

I said that we have 700-something employees. Out of those, six are actually developers. Two of them work with AI, and they're only doing it part-time, and they're sitting over here. Later on, you buy them a drink, they'll give you the source code. [audience laughs] Sharing is caring, and sharing is learning. So, it's quite all right.

Like I said, we have to be smart about our resources. We have to do it the right way. And that's why we want to team up with partners. So, when building this solution, we've done it together with scientists and researchers, teachers, and of course with our students as well. They're really, really important to us. And of course, we do it together with our friend Peter and his colleagues at Eficode.

It's all about learning for us. So, when we build our solution, we don't want them to build it for us. We want them to guide us and teach us and show us how we can use our resources efficiently. We don't want to spend our time or money on making all the mistakes ourselves. They can help us navigate past the mistakes and help us do it the right way from the beginning. So, this is really, really important for us.

The final question that we are looking at with the solution we've piloted now is: can we help our students study smarter, not harder, and maybe learn more in the process as well? And in that process, we will learn things too. So, continuous learning is kind of the key for us.

[Peter:] So, I'll add a bit to the challenge and background, because you guys had a vision from day one. You've done your homework. When we started talking, it was not about what AI is. And it was not about doing just another RAG chat thing. You didn't want that. You wanted something real: you started talking about multi-agents directly, not as a challenge, but as an opportunity. And that is kind of exciting and inspiring.

So, we needed to build an architecture, a partnership, that could enable real workloads. Something that was secure and compliant. It should definitely be governable. It should have telemetry and be observable. And it should serve as a foundation for multi-agent systems. Here's the problem, though: it's super easy to build RAG chats. People do that all the time. What we set out to do is something that actually works in the real world, something that is deployed and in production, not just a proof of concept.
And in the real world, shit is messy. Shit is way more complex than marketing slides. Just to give you a rough estimate of what I'm talking about in terms of complexity: at Eficode we have an agent orchestration blueprint. It's a guiding framework where you can sort of tick the boxes to make sure that everything is deployed and covered, that everything is there for your real production needs. Just as a side note, that little black thing called agent logic? That's the small part of all this. When you go to production, you will need the mandatory stuff: guardrails, and observability, because otherwise you are blind and these things will fail. You will need model repositories and model management, because these models get deprecated. If you don't plan for that straight up, you'll be... let's just say you won't be in a good state. The true value, I think, lies in a combination of architecture, governance, and reliability. This is the framework we use to guide our customers and our partners when doing this.

With that said, it's not a build-everything-yourself solution. You went with Azure. We went with Azure. Trusted, secure, reliable. Yes, it's a big corporation, so it moves slowly, and it's kind of boring. But it moves fast now. And just to be fully transparent, I did spend 10 years at Microsoft, so I'm a wee bit biased in terms of cloud vendors. Although, to be fair, every big cloud vendor does the same thing, and they do it great. AWS does a fantastic job with Bedrock. Google does a fantastic job with Agentspace. But this was Azure and AI Foundry.

So, AI Foundry was chosen as the enterprise-grade AI platform. It does deliver the model management. There are something like 12,000 models in Azure right now; in reality, you might need five, or two. It's a very good combination with the Microsoft Agent Framework. The Microsoft Agent Framework is the evolution of two parts that Microsoft delivered a couple of years back. The first part was Semantic Kernel, which was their deployment-ready framework for AI and for agents. The latter part was AutoGen, which stemmed from Microsoft Research. They've done a good job of harmonising these and unifying them into one. So, if you're building on components right now, definitely go with the Agent Framework. If you've done Semantic Kernel, it's kind of easy to just upgrade to the Agent Framework. This is the stuff that we chose, together with you, to build your agents on. So, let's talk a bit about some of the agents that you built. Sounds fair?
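To make the model-management point concrete: one lightweight pattern is to keep the mapping from agent to model deployment in configuration rather than in the agents themselves, so a deprecated model can be retired by changing a setting instead of shipping a new release. The sketch below is a minimal, dependency-free illustration of that idea; the agent names, deployment names, and environment variables are hypothetical and not taken from the talk.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelConfig:
    """One entry in a small model 'repository' kept outside the agent code."""
    deployment: str      # e.g. an Azure OpenAI deployment name
    temperature: float   # how freely the agent may phrase its answers


# Illustrative defaults; in practice these would live in app settings,
# Key Vault, or the AI Foundry project rather than in source code.
MODEL_REGISTRY = {
    "course-agent": ModelConfig(
        deployment=os.getenv("COURSE_AGENT_DEPLOYMENT", "gpt-4-turbo"),
        temperature=0.2,
    ),
    "schedule-agent": ModelConfig(
        deployment=os.getenv("SCHEDULE_AGENT_DEPLOYMENT", "gpt-4o-mini"),
        temperature=0.0,
    ),
}


def model_for(agent_name: str) -> ModelConfig:
    """Resolve which model an agent should use; fail loudly if unmapped."""
    try:
        return MODEL_REGISTRY[agent_name]
    except KeyError as exc:
        raise ValueError(f"No model configured for agent '{agent_name}'") from exc


if __name__ == "__main__":
    print(model_for("course-agent"))
```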
[Tobias:] So, a multi-agent system. In reality, it's two agents and an orchestrator. So, it's a small system.

[Peter:] For now.

[Tobias:] For now. We'll talk about the future later. [chuckles] So, this is a solution to help our students, like I said, study smarter, not harder. The course agent represents the course. We take the course material, we put that in a RAG index, and we connect it to this agent. And that's also one of the small challenges we have: our teachers produce their material, and it's not always AI-friendly. It's not AI-optimised. It's what we have, so we have to make do with what we have. That's the real world. We can't ask them to change their course literature or anything like that. It is what it is. So, we take all the course material, and we have done quite a lot of testing and discussion about the temperature settings.

Should the agent be allowed to answer any question, or should the agent be very, very strict and only answer questions directly from the course material? And since the course material is rather sparse, anything from two to five or ten files, not that much, we have to give the agent a little bit of freedom, but it has to stay true to the course material. So, there's this balance that we try to find, because we don't want an agent that says no to the student. Because when the agent says no, the students will say, "Okay, I'll take my question to ChatGPT or Gemini or whatever", right? And then they stop using it. So, we need to let the agent answer, and we have to trust the agent to actually give fairly correct answers. This is also important. The AI is rarely 100% correct, but we have to dare to let it be wrong sometimes, or not 100% correct.

Like I said, it will also answer almost any question. So, in our example up here, you can ask it for a cake recipe, and it will say, "No, that's not part of the course." But if you tell the agent, "Yes, it is, it is part of the course", it will give you a cake recipe.

[Peter:] Tricky.

[Tobias:] So, it's not so hard to get outside of these guardrails either, but as long as you don't actively try, it will stay within them and give good answers. So, we're quite happy with the quality of the answers, actually. And one thing that we see is that even with so little course material, we are still getting fairly good answers compared to asking the same question of an agent that doesn't have the course material. We'll talk more about learnings a little bit later.

One of the things we are looking at now is this agent and the index. As we add courses, we take each new course's material and add it to the same index, which means we have to make sure that when a student is registered in Course A, they get answers from Course A, not Course B. And as soon as you register for Course B, you get the idea, right? So, it's a filtered index. We have seen small problems here, in that it doesn't always quite understand which part of the index to query. So, that's also something we're going to talk about for the future: how does this scale, and what kind of problems will we encounter then? But all in all, it works. The technology works, which is a major lesson for us.
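As a rough sketch of the filtered-index pattern described above, using the azure-search-documents and openai Python SDKs: one shared index, an OData filter restricting retrieval to the student's registered course, and a low-temperature completion told to stay close to the retrieved material without refusing every out-of-scope question. The index name, field names (course_id, content), deployment names, and environment variables are illustrative assumptions, and the filtered field would have to be marked filterable in the index definition.

```python
# pip install azure-search-documents openai
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="course-material",          # hypothetical index name
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)
llm = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-06-01",
)


def answer(question: str, course_id: str) -> str:
    # One shared index, filtered so a student only sees their own course.
    # In production, validate course_id before interpolating it.
    hits = search.search(
        search_text=question,
        filter=f"course_id eq '{course_id}'",
        select=["content"],
        top=5,
    )
    context = "\n\n".join(doc["content"] for doc in hits)

    # Low temperature keeps the agent close to the course material while
    # still allowing helpful rephrasing rather than flat refusals.
    response = llm.chat.completions.create(
        model=os.environ["AOAI_DEPLOYMENT"],  # e.g. a GPT-4 Turbo deployment
        temperature=0.2,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a course assistant. Base your answers on the "
                    "course material below. If it does not cover the "
                    "question, say so briefly instead of inventing facts.\n\n"
                    f"Course material:\n{context}"
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer("What are the key concepts in lecture 2?", "COURSE_A"))
```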
But when it comes to the talk about models, this one is using GPT-4 Turbo. And, you know, that's why I have the technology experts with me. Why did you tell us to use that old one? It's an old model, it's small, it's got no shiny bells and whistles.

[Peter:] Well, the thing is that smaller models are better. Who here agrees? Who here disagrees? At least no one disagrees, because it's actually true. Smaller models are way better for agents. I mean, if you even go fine-tuning a small language model, it becomes even better. Especially in the agent space, these models are not supposed to know everything. So, trying to use the latest GPT or Gemini or whatnot defeats the purpose. Agents are meant to be sort of isolated, confined, like when we do software architecture. It's supposed to be modular, it's supposed to be adaptable, and it's supposed to do exactly what it's meant to be doing. So, using small language models makes for smarter models and smarter systems. Easy. And they're purpose-built too, so at least there's that. Do you want to talk about the other course?

[Tobias:] Yeah, and just a comment as well: smaller models are cheaper.

[Peter:] Yes.

[Tobias:] Let's not forget.

[Peter:] Okay, well, let's stay there a bit. They are definitely cheaper, right? And they are faster. They are more reliable, they are less prone to hallucinations. And in the long run, you can even run them in a hybrid scenario, or locally. I'm kind of leaning towards telling you that I really like local models way too much right now. But cloud models are good too, so that's okay.

[Tobias:] Yeah, so the other agent is actually a schedule agent. The student can ask questions like: what classes do I have next week? What's the next class going to be about? And stuff like that. And if the teacher has done their job, they can also ask: okay, what's it going to be about? How can I prepare for it? Can you explain the key concepts? Can you give me a pop quiz on the topic so I can learn before I come to the lecture? Et cetera, et cetera. But of course, the student already has the calendar. They have that in their learning management system. So, it's not extremely useful yet. This is something we built partly because we need to learn how to build agents that connect directly to a system. This one is connected directly to our scheduling system, and we wanted to learn that technology. But also because it's a really fun agent to have if we continue developing it, because this is the one that enables us to push towards the student. Instead of having the student come and ask their questions, we can have the agent or the client tell the student: "Have you studied for tomorrow? Shouldn't you do it? It would be good for you if you did."

[Peter:] Wake up.

[Tobias:] Even though it's very, very small and limited, it also has great potential. So, that's why we thought it was important to build it.

[Peter:] So, this is a very, very simple flow of how it's stitched together. There's a UI; that's not a shocker for anyone. And this time, it's a conversational UI, something that's been around since, I don't know, since fire came up, or dinosaurs, or something. So, it's something that we are very accustomed to as individuals. There is a front end, or an API, in front of the agents, where the orchestrator and the agent group chat then do a good job of delegating, hopefully, the right thing to the right agent, because it is a non-deterministic system. All language models are. And then tools get implemented as well, stuff like MCP or direct APIs.

So, just to tick a couple of boxes in terms of technology: what was selected for the UI was Teams. Technology-wise, it makes a good fit. You already have that in place. But the main value is probably on the business side. I mean, everything is already vetted, security is scrutinised, and it's approved for use. It is a canvas, a UI that is available to everyone, both students and teachers.

[Tobias:] And it didn't cost anything.

[Peter:] Oh, that's a very good point. And then that is surfaced using a bot application connection to get that channel in place. There is Azure AI Foundry. There is the Microsoft Agent Framework again. MCP and direct APIs. I'm sort of looking at the two people that delivered. It's pretty close, right? Yeah. Okay. At least you're not shaking your heads going, "No, dude, you're way off." Taking a look under the hood and figuring out how all this relates in Azure, we can see the same thing. So, Teams as the UI. Stuff happens, and an orchestrator takes charge of delegating everything.
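The flow just described, a conversational UI in front, an orchestrator and agent group chat delegating to the course agent or the schedule agent, and tools such as MCP servers or direct APIs behind them, boils down to a small routing shape. The sketch below is a deliberately framework-free illustration of that shape in plain Python; the real solution uses the Microsoft Agent Framework, where a model rather than a keyword rule decides which agent owns the question, and the intent rules and agent names here are made up for the example.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]  # in the real system: an LLM-backed agent


def course_agent(question: str) -> str:
    # Would call the RAG-grounded model shown in the earlier sketch.
    return f"[course agent] grounded answer to: {question}"


def schedule_agent(question: str) -> str:
    # Would call the scheduling system via MCP or a direct API.
    return f"[schedule agent] timetable lookup for: {question}"


AGENTS = {
    "course": Agent("course", course_agent),
    "schedule": Agent("schedule", schedule_agent),
}


def orchestrate(question: str) -> str:
    """Delegate to the agent most likely to own the question.

    A real orchestrator lets a model make this choice, so it is
    non-deterministic; this keyword rule only makes the shape visible.
    """
    schedule_words = ("schedule", "class", "lecture", "next week")
    wants_schedule = any(w in question.lower() for w in schedule_words)
    agent = AGENTS["schedule"] if wants_schedule else AGENTS["course"]
    return agent.handle(question)


if __name__ == "__main__":
    print(orchestrate("What classes do I have next week?"))
    print(orchestrate("Explain the key concepts of chapter 3."))
```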
So, the orchestrator runs on an App Service plan, connects with Foundry, hooks up the agents, and the agents do their job. Everything is secured in Azure using private VNets, private endpoints, and gateways. Yeah, you've done a good job. Or maybe we've done a good job.

[Tobias:] I think we do it together.

[Peter:] Yes. That's the right attitude. Azure AI Search is the knowledge store, the index store. So, what Tobias was talking about in terms of making sure that the index surfaces the right material: you've done work there with filtering as well, to try and filter the index. I will come to multi-indexes or not in a slide or two. And then there's stuff like App Insights and Log Analytics. Again, if you can't see it, if you can't observe and monitor, you are really up shit creek without a paddle. That's an English saying. So, we have some learnings. What are your learnings?

[Tobias:] Prompting matters. I mean, the students access the agent, and they're supposed to ask questions to help them learn things. And one of the things that is really, really great about having our own solution here is that we can see everything that they write, and we can see everything that the AI answers, which means that we can check the quality of both the question and the answer.
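Since the team reviews questions and answers to judge quality, and the stack already ships telemetry to Application Insights and Log Analytics, a small helper along these lines could record each exchange. This is a hedged sketch using the azure-monitor-opentelemetry distro; the logger name and attribute names are assumptions, and exactly how extra fields surface as custom dimensions should be verified against the exporter version in use.

```python
# pip install azure-monitor-opentelemetry
import logging
import os

from azure.monitor.opentelemetry import configure_azure_monitor

# Routes Python logging (plus traces and metrics) to Application Insights,
# which is backed by a Log Analytics workspace.
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
)

logger = logging.getLogger("agent.conversations")
logger.setLevel(logging.INFO)


def log_exchange(agent: str, course_id: str, question: str, answer: str) -> None:
    """Record each question/answer pair so quality can be reviewed later."""
    logger.info(
        "agent exchange: %s", agent,
        extra={
            # Illustrative field names; pick ones that fit your own
            # Log Analytics queries, and mind any PII/retention rules.
            "course_id": course_id,
            "question": question,
            "answer_preview": answer[:200],
        },
    )
```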
[Tobias:] Our students need to be better at asking things. Prompting really, really matters, and we need to encourage them here. They use it quite a lot as a fact finder: they ask a question, they get an answer, and they're done. We would have liked them to be a bit more creative. Quiz me on key concepts, explain it to me, make examples for me, explain it in a different way. There are a thousand ways to encourage learning and help them understand things, and they don't really have that knowledge yet, which is also one of our challenges. How do we make our students better at actually using it? I mean, the AI is a tool, and our primary business is not actually teaching tool use. We want to teach them critical thinking, which they can apply to using the tool and looking at what the tool does for them. So, we want to encourage reflection. It's not just a fact finder. But knowing our users here is really, really important for us. Doing this together with our teachers, together with our students, that's gold, right? Because we can learn from them, and we get new perspectives from them as well. So, this is really, really important. And yeah, if your prompt doesn't give you what you want, ask your AI to make it better. That's a personal reflection. I'm not good at writing prompts either. I just tell the AI what I want, and then I ask it to rewrite it into a better prompt. And it's much better than me.

[Peter:] More learnings from you?

[Tobias:] Yeah, so like I said, we've done this with very, very few resources. And for all of you who haven't gotten started yet: don't be afraid to test things. You can do a lot with very little effort. You can educate yourself and your users together. I think that's a key to success here as well, because it's new to them. We see a lot of our students actually being a little bit fearful about using AI, because there's so much talk about cheating. If I ask this, would that be considered cheating? Or if I send in my text to help me correct it, would that be cheating? And so on, and so on. So, I think it's really important that we work really, really closely together with them to bridge this gap.

And like we said, know the strengths and weaknesses of your solution; the AI models are not perfect. Like I said, if you put the same question to ChatGPT, it might be 95% correct with regard to the course material. In our agent, maybe it's 98%, which means it's much better. But we need to teach our students how to spot those remaining 2%. So, that's also one of the keys here. And of course, like Peter said, in this case we are happy with this model, because we're only using text, at least for now. So, use the right tool for the right job, obviously. So, what did you learn, Peter?

[Peter:] So, three additional learnings from the technology side. Everything's bloody preview, and as such, it's kind of hard. There are frequent changes all over the place. When you upgrade, say that you're building with Semantic Kernel and you upgrade to the Agent Framework, just a minor revision, a minor version, not a major version, shit breaks. It really does. So, you have to stitch it back together. That's the preview world; everyone knows that. The second learning is that predictability is also a very hard thing to get, both in the cloud and locally. There is no SLA. There's no service level agreement for throughput, unless you go the really enterprise way and start paying much, much more money for provisioned throughput units. So, it's a bit of a trade-off. And the third one is that compliance is not a game just for the governance and legal teams, the GRC teams. It's for everyone. The EU AI Act is already halfway implemented, and it does require everyone to be transparent about AI-generated answers. Build this in from day one instead of trying to adjust for it later on. There's no...

[Tobias:] I want to talk.

[Peter:] He really does want to talk. He does.

[Tobias:] Keep quiet for a minute.

[Peter:] I'll step back for you.

[Tobias:] No, but the future is so great here. There are so many possibilities. But I took something away from Camille today as well: just because we build it, they won't come. So, even though we can build it, and we are really having fun building it, maybe we shouldn't, because we have to make sure our users actually will want to use it. But I think our main challenge right now is looking at this. It's very promising, but I think we have some challenges going forward in how to scale it. So, that's the really big thing for us. How do we scale it efficiently, without getting lost in creating thousands of agents for each course and having millions of kronor every month in costs?

[Peter:] That sounds great.

[Tobias:] Yeah, I know, for you. But not for us. So, yeah, the future is very, very interesting. We'll talk about it more later. And of course, we are happy if you want to contact us. Like I said, sharing is caring. We are more than happy to show you more and talk more about this. Feel free to reach out. Thank you very much.

[applause] [outro music] [music stops]