Top AI tools: How to benefit from AWS multiagent AI - Demo and discussions
Discover the most innovative AI tools revolutionizing DevOps and software development in this webinar series. In episode 2, dive into a demo-focused session showcasing AWS multiagent AI technologies such as Bedrock, Q, and SageMaker. Learn how multiagents transform infrastructure automation through self-healing systems, cost optimization, and scalability. We'll also explore how multiagent AI is reshaping software development, helping teams manage complex systems and build more adaptive, intelligent workflows.

Key Highlights:
- Live Demo: See AWS multiagent AI in action with real-world Bedrock and SageMaker integrations.
- Reshaping Development: Learn how multiagent AI is redefining software development practices.
- Future-Ready Insights: Understand the impact of multiagent AI on the next generation of software innovation.
Transcript
Welcome to this second part of the Eficode webinar series Top AI Tools for DevOps and Software Development. Today's topic revolves around AWS, their AI portfolio and AI agents, and how they can fundamentally change software development and our ways of working. Here is a quick agenda for today: AI transforming software development, then Jagdeep is going to show some AWS demos, and then we close up with future-forward insights — and hopefully we have some time for questions, and hopefully also answers.

Just briefly introducing the presenters: my name is Yuri Hokas, located in surprisingly sunny Helsinki. I have a long operational history, and I'm currently working as the Eficode AWS lead and as a DevOps and AI consultant. I see myself as an enabler and an AWS enthusiast. I have been working in IT for a long time, so I have seen the emergence of Linux, VMware, cloud adoption, infrastructure, Docker and so on, but AI is mind-blowing and the speed is unprecedented, so I'm happy to talk about this today. And here is my co-presenter — Jagdeep, go ahead.

Thank you for having me, Yuri. My name is Jagdeep, and I'm based in sunny Amsterdam today, which is also not that regular. I'm a partner solutions architect with AWS; I work closely with our partners like Eficode, and I'm also one of the GenAI ambassadors for AWS in the region, and in that capacity I also work with our service teams. So I'm very happy to talk about agents — it's an exciting space and it's evolving rapidly, as Yuri mentioned. I'll also be giving some demos, and if the demo gods are with me, I hope the demos are successful. We have a packed agenda for today.

Cool, so let's go to the first topic of the day: AI transforming software development — how it reshapes development practices and what the role of multi-agents is in that. I think we should first start with definitions and make a clear distinction between AI assistants and AI agents. AI assistants are the ones that require human interaction; they augment productivity and creativity. A coding assistant writes the code you want or checks the code you want, or it helps project management or marketing do their work, summarizes data, and so on. AI agents, on the other hand, automate tasks and work autonomously to streamline processes, perform tasks and work together with an orchestrator. They are really good at repetitive tasks, and you can scale up processes with agents: for example, agents can run continuous integration testing, monitor your security anomalies, and so on. You can think of a single agent as having a narrow focus — it solves one problem, for example analyzing and scoring submitted code — while multi-agent cooperation is about solving a bigger task at hand with different agents working together: for example, you can have a research agent and then an executor agent, and data flows back and forth between the agents.
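To make that research-agent/executor-agent hand-off a bit more concrete, here is a minimal, purely illustrative Python sketch. The `call_model` function is a hypothetical stand-in for whatever model invocation you would actually use (for example a Bedrock call); the only point is to show data flowing from one narrowly scoped agent to the next.

```python
# Minimal sketch of two cooperating agents: a researcher and an executor.
# call_model is a hypothetical placeholder for a real LLM call (e.g. Amazon Bedrock).

def call_model(prompt: str) -> str:
    # Stand-in for an actual model invocation; returns canned text so the sketch runs.
    return f"[model output for: {prompt[:60]}...]"

def research_agent(task: str) -> str:
    """Narrow focus: gather the context and facts needed for the task."""
    return call_model(f"Collect the relevant facts and constraints for this task: {task}")

def executor_agent(task: str, research_notes: str) -> str:
    """Narrow focus: act on the task using the researcher's notes."""
    return call_model(f"Using these notes:\n{research_notes}\nComplete the task: {task}")

if __name__ == "__main__":
    task = "Analyze and score the submitted code change for review readiness"
    notes = research_agent(task)          # agent 1: research
    result = executor_agent(task, notes)  # agent 2: execution, fed by agent 1's output
    print(result)
```

In a real system each agent would wrap its own prompts, tools and guardrails; the explicit hand-off of `notes` from one agent to the other is the essential multi-agent ingredient.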
Here is a picture. On the left side — "before" is maybe the wrong word, "current situation" is probably more accurate — there are a lot of teams working together on a certain product. They have different features or versions they are working on, and basically the only way to scale the speed of development is by adding people to the teams or adding totally new teams. Of course there can be friction between teams: when you bring in a new feature it might depend on other teams' work, and there might even be heated discussion about who is wrong and who should change their implementation in order to get the new feature working with the product.

Maybe in the future — hopefully in the near future — we can have this kind of platform-engineering type of approach where developers can really focus on the new feature, think from a slightly higher level and have a better view of the actual feature, instead of writing tests or writing documentation, because those tasks will most likely be performed by your virtual team workers, the agents. I would also expect the friction between teams to go down, because the agents are the ones tracking the dependencies all the time: when you push your new code, there will be a test that says, okay, this will collide with some feature or with the way another team is working on their feature, so you are immediately on top of it, and it might even suggest how to change the code so there won't be a collision. On the right side — most likely everyone has seen the Amazon warehouse where robots work autonomously, bringing packages back and forth to the correct locations — I see agents working in software development in a similar kind of way in the future.

AI agents are not only going to affect software development itself; there will be new roles in CI/CD, service desk, operations, marketing, sales, HR, legal — for example keeping track of legal regulations — and so on. But today we are going to focus a little bit more on software development and what agents are really good at: customer requirements, specification, technical design, of course the assistant part with code generation, and then testing and deployment. I would say that adopting generative AI is not a scientific problem at the moment; it's more of an IT or engineering problem, and we have solved these kinds of problems before with microservices, with Docker, with the cloud-native culture. We know we can solve this, so I see it more as a cultural adoption problem than a scientific one.

And when we are talking about agents — today specifically about multi-agent collaboration — here is an example workflow, one that could loop round and round, but this is just one example. You have input data which comes to the orchestrator. The orchestrator is aware of the agents it has at its disposal, so it makes an educated decision about which agent is going to perform the task, or maybe rejects the task — which of course needs a feedback loop explaining why.
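As a rough illustration of that routing step — not the demo code, just a minimal Python sketch under the assumption that the orchestrator is a simple rule-based router over a registry of agents (in a real system a model would usually make this decision) — it could look something like this:

```python
# Minimal sketch: an orchestrator holding a "deck" of agents and routing incoming
# tasks to one of them, or rejecting the task with a reason for the feedback loop.
from typing import Callable, Dict

def ci_test_agent(task: str) -> str:
    return f"Ran integration tests for: {task}"

def security_agent(task: str) -> str:
    return f"Scanned for security anomalies in: {task}"

class Orchestrator:
    def __init__(self, agents: Dict[str, Callable[[str], str]]):
        self.agents = agents                 # the "deck of cards" the orchestrator can play
        self.feedback_log: list[dict] = []   # store decisions early for later improvement

    def route(self, task: str) -> str:
        # Naive keyword routing; a real orchestrator would ask an LLM to pick the agent.
        if "test" in task.lower():
            name = "ci_tests"
        elif "security" in task.lower():
            name = "security"
        else:
            self.feedback_log.append({"task": task, "decision": "rejected",
                                      "reason": "no suitable agent"})
            return "Rejected: no suitable agent for this task"
        result = self.agents[name](task)
        self.feedback_log.append({"task": task, "decision": name, "result": result})
        return result

orchestrator = Orchestrator({"ci_tests": ci_test_agent, "security": security_agent})
print(orchestrator.route("Run tests for the new checkout feature"))
print(orchestrator.route("Review security of the new API gateway config"))
```

The `feedback_log` is there to echo the point made right after this: store the data early so the routing decisions themselves can be measured and improved.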
Once the selection is made, the agent process is triggered and it gives some response — and the target of that response will most likely be another agent. One thing to emphasize here: we really need to store all the data from early on and have a feedback loop in this kind of operation, so that we see the operational metrics and the financial metrics. There needs to be continuous, incremental improvement of the agents, and maybe also decisions like whether a task should actually be split across two or more agents working in parallel. So I see it like this: the orchestrator has a deck of cards in its hand, and those cards are the agents it is going to play based on the input coming through.

Of course, with a new field come challenges and considerations for operations. I would say it's really important to have parallel development of new agentic functionality — new agents, or improvements to existing agents — and of the orchestration capability, so that it can make smarter decisions based on the data it has. Security, of course: when there are new connections and agents need data from new locations, the attack surface expands, so you need to be really careful — that's where governance, guardrails and rules come into play — and keep in mind that data also needs to be secure in transit. Scalability is something you should really think about before launching big, because if the happy situation happens that the business is booming and you get a lot of new customers, or if it's an internal service and everybody starts to use it, it needs to be able to scale up. There also needs to be financial stability: if the operation scales up, does the cost of running it scale linearly, and if it is linear, is that viable from a business point of view? And, like I said, it's a big cultural shift: developers need to be able to trust these new team members, to be sure that what the agents produce is current, correct and secure, and they also need to start thinking in a new way — could this task I'm working on, the thing I have to do every Monday morning, be agentic work, so I don't need to take care of it manually and ruin my day?

Then some key takeaways for this first part. First, the modular approach: you really should break the task into concrete, small parts to increase agility and fault tolerance — the clearer the context and the clearer the task, the better the result the agent is going to provide. I would say the main benefit from AI is reduced human toil, so try to encourage people to think in a new way about automating routine work: evaluate your daily work, see where the time and effort go, and improve from there. And, like I mentioned before, continuous improvement: the agents and the orchestrator need to learn from the historical data, and you need to keep finding new ways to solve problems more cost-efficiently and in a more timely manner.
And of course, as an action step for everyone: evaluate your CI/CD and DevOps pipelines for possibilities and opportunities for multi-agent automation. At this point I'll hand it over to Jagdeep, who is going to show some amazing stuff for us.

Yeah, so before I go into the demo I just want to briefly talk about the generative AI stack of AWS and how we categorize the different applications and services in it. The bottom layer you see here is the foundation layer, where we have the most comprehensive infrastructure services to build and train AI models. Here you see Amazon SageMaker, and you also see the purpose-built chips, AWS Trainium and Inferentia, plus multiple GPU options. The middle layer is Amazon Bedrock, where customers can directly start using these models via a single API and start building applications; there is a lot of tooling available with Bedrock to build these applications, and in a subsequent slide I will talk about some of those capabilities. The top layer is the applications that boost productivity, and here the two main services we have are Amazon Q Business and Q Developer. For this session we are going to focus mainly on Q Developer, to see how it can help developers across their overall SDLC life cycle — it's not just about completing the code or writing a function, it's much more than that — and we'll see how the agentic capability of Q Developer can help customers.

When it comes to the different capabilities that Bedrock has, you can build applications on top of it and customize with your own data, and there are multiple options for that: you can use knowledge bases, which are a kind of managed RAG workflow, or you can fine-tune a model to create a private copy of the model based on your own data. It also has orchestration capabilities, and agents and flows are two important items here — we will look in a bit more detail at how we create agents in Amazon Bedrock and what the different components involved are. From the developer-experience point of view, Bedrock also has an IDE and capabilities like prompt optimization and prompt management.

Now, one of the core things with Bedrock is model choice and evaluation, because what we believe is that there is no one model that fits all requirements. Based on the use case, customers need to select which model they want to use, and they may have different types of requirements. Related to intelligence: if you have a simple use case such as summarization or some text generation, you may not need a super-intelligent model. If you have requirements related to latency, you may need a model that gives a quick response. Based on these kinds of requirements you select a model, and that's where Bedrock gives you the capability to select a model via a single API: if you want to switch or try out different models, you don't really have to change your code, you just change the model ID, and all the complexity of changing the model is taken care of by Bedrock. It also has the evaluation piece, where you can evaluate the responses using programmatic approaches, using human-in-the-loop, using the LLM-as-a-judge approach, and so on.
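As a small illustration of that single-API point — a minimal sketch using the Bedrock Converse API via boto3, where the model IDs and the region are only examples and depend on which models your account has enabled — switching models really is mostly a matter of changing the `modelId` string:

```python
# Minimal sketch: calling two different foundation models through the same
# Bedrock Converse API by changing only the model ID. Model IDs and region
# are examples; use ones enabled in your own account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

prompt = "Summarize the benefits of multi-agent automation in CI/CD in two sentences."
print(ask("anthropic.claude-3-5-sonnet-20240620-v1:0", prompt))  # one model...
print(ask("amazon.titan-text-express-v1", prompt))               # ...same code, different model
```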
And finally, inference at scale: there are multiple options for how you get the inference, whether with on-demand pricing or batch, and you can further optimize the cost using prompt caching or intelligent prompt routing — some of the features I'm mentioning are still in preview — and there are multiple regions available.

When it comes to security and responsible AI, these are also fundamentals with Bedrock. It has capabilities like VPC PrivateLink, so you don't have to route your traffic over the internet and you can have private connectivity with Bedrock. You can also use Bedrock Guardrails to secure your generative AI application — if you have heard about prompt injection or toxicity, those are the things you can avoid by using Guardrails. And then it also has open-source integration: integration with tools like LangChain, LangGraph and LlamaIndex. For LangChain, LangGraph and LlamaIndex we have the SDK, the Python package, which helps you connect with Bedrock and the different capabilities Bedrock offers — and in fact, for the multi-agent capability demo I will show the LangGraph view of it and how it is done in LangGraph.

Okay, so let's briefly talk about agents, because it is important to understand some fundamentals about them. When all these foundation models, these large language models, evolved, they became really great at having conversations or generating content — but how can we get them to take some actions, or do more things, to solve complex problems or to connect with the enterprise data of an organization? That's where agents come into play. What Bedrock Agents do is plan and execute multi-step tasks using your company systems and data sources, and they can answer questions related to product availability or orders and other, more complex scenarios. As you see, the first step is decomposing the task; second, it executes the actions — you define which actions are available, it identifies which actions it has to execute, it goes through this loop until it has the final answer, and then finally it gives the answer.

When it comes to Bedrock Agents, they have the capabilities required to build agentic applications. For example, choosing the foundation model: you can decide which model you want, whether that's Claude 3.7, which has the thinking capability, or a Llama model. There is also tool configuration: for example, if you want to connect with your own APIs, you can define a Lambda function that calls your APIs, and so on. You can also create your own knowledge base containing your enterprise documentation and connect the agent with that. There are options for memory management — or we could say session and state management — there is a multi-agent collaboration capability, and finally there is tracing and debugging, and integration with Guardrails.
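To give a feel for what calling such an agent looks like from code — a minimal, hedged sketch using the `bedrock-agent-runtime` client in boto3, where the agent ID, alias ID and region are placeholders for an agent you would have created and prepared in your own account — the agent runtime streams the final answer back in chunks:

```python
# Minimal sketch: invoking an existing Bedrock Agent. AGENT_ID, ALIAS_ID and the
# region are placeholders; the agent itself (model, action groups, knowledge base)
# is assumed to have been created and prepared beforehand.
import uuid
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.invoke_agent(
    agentId="AGENT_ID",                    # placeholder
    agentAliasId="ALIAS_ID",               # placeholder
    sessionId=str(uuid.uuid4()),           # ties multi-turn state to one session
    inputText="Is product X in stock, and when could an order ship to Helsinki?",
)

# The agent plans, calls its action groups / knowledge base as needed, and then
# streams the final answer back as chunk events.
answer = ""
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        answer += chunk["bytes"].decode("utf-8")
print(answer)
```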
Now, the multi-agent collaboration capability in Bedrock is still a preview feature. Why is it so important? Because when we create an agent application, what we are doing is giving the foundation model access to different tools, but over a period of time the number of those tools may increase, or the tasks could get a lot more complex. That's why multi-agent collaboration becomes very important: you can create different agents, each with a specific purpose. For example, as an enterprise you may want to create an agent designed to handle tasks related to financial documents; there could be another agent designed to handle tasks related to, let's say, HR policies; and at some point you want to use both of these agents together. Then you need to define how these agents should talk to each other — whether you want a supervisor agent that hands tasks to the different sub-agents, or the agents should talk to each other directly. Bedrock's multi-agent collaboration capability basically takes that development overhead away from developers and users: it takes care of the orchestration, so you just have to focus on defining the individual agents, and you simply configure the orchestration strategy.

Now let's look into Q Developer, because Q Developer is super relevant for today's topic. Based on a recent Gartner study, what we have seen is that 73% of development time is spent on running and maintaining applications, and only 27% of the time on innovation — writing something new, writing new code or developing an innovative feature. That's because a lot of the time developers have to spend time fixing, let's say, an environment issue — we have all seen that if we are using Node or Python there can be package version mismatches and dependency errors, or something runs on my local machine but does not run when I deploy it to production — these kinds of issues. And that's where Q Developer tries to optimize the whole process for developers. As I mentioned in the initial introduction, Q Developer not only focuses on the code-generation capability; it tries to help across the overall development life cycle, the whole SDLC journey. It also gives very accurate coding recommendations — there are some benchmarks around that — and then it also has the agent capability, which can