In this DevOps Sauna episode, we talk about AI in the DevOps toolchain. Henri Terho from Copado and Lauri Huhta from Eficode join the conversation. We bring forward the most common definitions of AI and discuss how the concept has evolved throughout the years.

Lauri P (00:00):

Thank you for taking the time, Henri and Lauri. Good to have you in the DevOps Sauna podcast. And I have to start by telling a little story about how we got here, which was that I stumbled upon your podcast, Henri. And what was it called again?

Henri (00:18):

Well, me and Nick Asharma have a podcast called Software Sauna, and now we are in the DevOps Sauna at this moment. So I think that was what you, Lauri, were looking for.

Lauri P (00:29):

Absolutely. I wanted to call you out on the sauna name of the podcast, because it's actually sheer coincidence that there are two sauna podcasts. But then somebody told me that F-Secure also has a Sauna podcast.

Henri (00:46):

Do they?

Lauri P (00:47):

I haven't validated it, but maybe I should have.

Henri (00:50):

Every single Finnish software company makes a Sauna podcast or something. Or the first idea that comes to every Finnish company's mind is: let's link this to sauna, it's a Finnish thing. So that might be-

Lauri P (01:00):

Yeah, absolutely.

Henri (01:01):

the original idea there.

Lauri P (01:02):

It's not only the saunas in relation to the podcasts, but also the offices. A long, long time ago, when Hewlett Packard opened their office in Finland, Hewlett Packard had a rule that their offices shall only have up to three floors. And in Espoo, where their office is located, the fourth floor is their sauna, and that is on the roof. According to the story, they actually had to go through the burden of persuading their headquarters to accept that they have four floors in the office. But luckily the only thing located on the fourth floor is the sauna.

Henri (01:52):

It seems like that F-Secure has a cybersecurity sauna also.

Lauri P (01:57):

There we go. So-

Henri (02:00):

Validated.

Lauri P (02:01):

... it is really proliferating. Today we are talking about a very, very famous topic, and I would say that some people really know a thing or two about it; I'm referring to the people on the line now. But this has also been a popular topic, because it has been easy to gain popularity and attention by talking about it, and that topic is AI. And as we were discussing this subject, we settled on talking about the role of artificial intelligence in the DevOps toolchain. So why don't we start with the first question and try to get to a definition: how should we define AI in DevOps?

Lauri H (02:46):

Maybe I can take the floor, because I usually get a headache when people talk about AI. It became a hype word so quickly that for the people who are actually working technically with it, it takes the power out of the term when it's just thrown around as a hype word. So for this, I actually Googled three definitions of how people see AI, and I'm going to just quickly go through them. As you will see, they vary a lot.

Lauri H (03:22):

So the first definition I found was that "AI is the engineering of making intelligent machines and programs". And I would say, if you work in tech and you make some programs, you can give them a large set of rules and they are intelligent, but I wouldn't say that they fulfill the definition of AI just yet.

Lauri H (03:46):

And the second one that I found was maybe getting a bit closer. It said that "AI is a technique that enables machines to mimic human behavior". Well, on the decision-making front, that might be right. But can't we already accomplish something like that with scripting and automation? We can make a program that acts like a human. But I think the third one that I found is the one that resonates with me the most, and it's how I would aim to talk about AI in this format.

Lauri H (04:28):

The third definition was "a program that can sense, reason, act, and adapt". So with AI, it's important that we've taught it with data, and it can sense the environment, sense the data, reason, act, and then adapt to the situation. When we are building something with AI, we want it to accomplish something that we haven't set a tight set of rules for. We want it to see new situations and know how to react and adapt to those situations. So I think when we talk about AI, we should understand that we could do a lot of things which resemble human actions, but we want more than that. We want it to learn by itself, hence the machine learning.

Lauri P (05:27):

What do you say, Henri?

Henri (05:29):

I think those are three very good definitions. Now that you've laid them out, I can go through and debunk them a little bit, and maybe we can end up with a slightly better definition through that. If you're thinking about the first one, "engineering of making intelligent machines and programs", that, as you said, is super vague. What is intelligence? What is an intelligent machine? We can build many machines.

Henri (05:52):

A basic computer is a lot more intelligent than a human in, for example, doing maths, so the definition is not really there. And the second one was "mimicking human behavior". Typically when we talk about AI, we don't just want it to mimic human behavior. We want it to do something better than humans. Again, computers are still better at maths than we are in the general sense. But typically it's some kind of amalgamation of these two: that there's a way for us to interact with the system, or the AI, in a much more human-relatable way.

Henri (06:28):

And I think the last one, "sense, reason, act and adapt", is quite an intelligent definition of AI that doesn't limit it to mimicking humans, for example, in the way that it performs. It can be something that we cannot fathom or understand but that certainly is intelligent. Do we want to limit AI to something that we understand? That's a really good question.

Henri (06:54):

Of course, these go way into the future and not to the definition of what we talk about as AI now; currently AI is mostly just a PowerPoint. But I think one major thing is also deterministic versus non-deterministic systems. In deterministic systems, we can see what's happening, we can always go back through all of the reasoning chains, check the math on what's actually going on, and we know where the whole system ends up.

Henri (07:24):

But typically AI systems are non-deterministic, so it's really difficult to know what's actually happening next and what the processing in there is, because it's so complex. Of course, processors in themselves, if you go down to the architectural level, are deterministic, but you get non-deterministic behavior on top of that if you use a lot of self-learning and other kinds of algorithms. That's still a really interesting discussion, even regarding how the neurons work in our brains, but that's going into a totally different level. So I like the last definition best here.

Lauri P (07:58):

I was thinking about this mimicking of human behavior, now in the context of DevOps. It's hard to talk about DevOps without talking about automation, and in the context of automation what you said about explicitly not mimicking humans is more true than anywhere else. Take test automation: why on earth would anyone mimic human behavior, considering that the whole role of test automation is to do things in such a way that everything is repeatable and can be done over and over again without exhausting the resource, i.e., the human?

Lauri P (08:33):

And then the other interesting thing is what you said at the very last about the systems being non-deterministic. I really have to look that up. I remember evolutionary experiments from the late '90s where they took FPGA circuits, field-programmable gate arrays. It's basically a programmable hardware circuit that can simulate a certain configuration of gates.

Lauri P (09:03):

And then they applied evolutionary behavior to that. So basically you spawn a few of these configurations, and then you pick the best of them, and then again use them as input and spawn another configuration of gate arrays. And you do that for 10,000 generations or something. But remember that the starting point is very, very deterministic: they're basically Boolean gates, and you build a configuration of Boolean gates. But when you're doing 10,000 evolutions or generations and you look at the output, it's inexplicable, in that the engineers who designed the original gate arrays and the configuration cannot explain how the outcome works.
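The evolutionary loop Lauri describes can be sketched in a few lines. This is only a toy illustration (evolving a 4-bit truth table toward XOR), not the actual FPGA experiment; all the names and parameters here are invented for the example.

```python
import random

random.seed(0)

TARGET = [0, 1, 1, 0]  # desired outputs for inputs (0,0),(0,1),(1,0),(1,1): XOR

def evaluate(genome):
    """Fitness: how many outputs of this 'gate configuration' match the target."""
    return sum(1 for got, want in zip(genome, TARGET) if got == want)

def mutate(genome):
    """Flip one random bit, like a small random change to the gate wiring."""
    child = genome[:]
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

# Spawn random configurations, keep the best, mutate it into the next generation.
population = [[random.randint(0, 1) for _ in range(4)] for _ in range(8)]
best = max(population, key=evaluate)
for generation in range(100):
    if evaluate(best) == len(TARGET):
        break
    # Elitism: carry the best forward alongside its mutated offspring.
    population = [best] + [mutate(best) for _ in range(7)]
    best = max(population, key=evaluate)

print(generation, best)
```

The point of the story survives even at this scale: the loop only ever selects on the fitness score, so nothing in the process records *why* the winning configuration works.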

Lauri H (09:51):

Yeah, I would agree. The human brain and the machine both learn to live in the environment they've been working in. They get that initial data, but then of course, when you put it in production and actually use it, it can sense different kinds of things and it will adapt to those. And at that point, the engineers wouldn't really know how to explain how it makes those decisions, at least not very easily.

Lauri H (10:31):

But I think when we talk about AI, we quite quickly go to the explanation that it's this black box that just gives us answers: data scientists just build this black box, fill it with data, and it gives us answers. Sometimes that takes the power out of the work that the data scientists do, because in the beginning there are really understandable machine learning algorithms that are based on algebra and mathematics, and then there's deep learning.

Lauri H (11:11):

So if you just say that this field is a black box that you fill with data and no one understands, it's understandable that very technical people get annoyed by that definition, because then it seems like they are not the ones who build it and who actually have the knowledge to make something that can produce these answers and solutions that we wouldn't initially come up with.

Henri (11:39):

That's maybe one of the points also: typically these AI systems can reach states that we didn't think they would end up in, and figure out things that we as humans didn't, for example along a different kind of optimization path. And that's also the value in the system. You're basically saying that AI should behave like humans, but really most of the power comes from a different kind of end result, that it ends up in a different situation. For example, it can read data much better, or check for correlations much better in the mathematical sense, and then create models from there, based on the same kind of neural thinking; neural networks are now in fashion in the AI field.

Lauri P (12:26):

What comes to your mind when you first think about available applications of AI in DevOps? Where should companies start, or where should teams start, when they are looking to adopt AI in DevOps?

Lauri H (12:45):

Well, first I would say that before you start doing something with AI, you should have established a good data infrastructure. As machine learning models, neural networks, and deep learning have gone further and further, it started with just building the models, and the data scientists were focusing on that. Now people are realizing the importance of data: the amount of it, and how good its quality is, or how it's processed into good-quality data, is a huge factor. So before your company thinks about doing AI, I think it's wise to ask: do we actually have the data? Do we have the data processes already running?

Henri (13:46):

And people have started to figure out that it's not the model; as you said, the input is really important, as the AI pretty much generates the model itself. So the models have become of secondary importance compared to the data and getting it into such a shape that we can feed it in somewhere. And there's a whole other field of AI researchers now looking into how to make these data pipelines in such a way that we don't have to massage the data or clean it up.

Henri (14:12):

If you talk to any of the people who are involved with AI, they're going to say that most of their working time goes into getting the data into a consistent format. There are always errors, and data comes in different formats from different places, so they use 90% or even over 95% of their time doing data janitorial tasks. And then they just click "run model" and it generates a model from that data. So most of the time is going into that anyway, and it's super important to get that figured out.
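A tiny, hypothetical example of that janitorial work: two systems report the same kind of event in different shapes, and the cleanup step maps both into one schema. The field names and formats here are invented for illustration.

```python
from datetime import datetime

# Hypothetical raw records from two systems that report the same event
# in different shapes; the "data janitorial" work is mapping both to one schema.
raw = [
    {"ts": "2021-03-01 14:30", "build": "ok"},          # system A
    {"time": "01/03/2021 14:31:05", "status": "PASS"},  # system B
]

def normalize(record):
    """Map a record from either source into one unified format."""
    if "ts" in record:  # system A's shape
        when = datetime.strptime(record["ts"], "%Y-%m-%d %H:%M")
        passed = record["build"] == "ok"
    else:               # system B's shape
        when = datetime.strptime(record["time"], "%d/%m/%Y %H:%M:%S")
        passed = record["status"] == "PASS"
    return {"timestamp": when.isoformat(), "passed": passed}

clean = [normalize(r) for r in raw]
print(clean)
```

Real pipelines of course face far messier inputs, but the shape of the task is the same: per-source parsing rules funneling into one agreed schema before any model sees the data.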

Lauri P (14:41):

How comparable are the models between vendors, if you think about the data model? And if we say that the data model and data labeling are critical. Henri, if I understood what you said and try to paraphrase it: it is okay if the data model is not complete and perfect, because we can apply learning to the data model itself. But intuitively, when the data model is cleaner and more aligned between different business systems, it would be more powerful. How important is that after all?

Henri (15:28):

Really good question. I think it comes down to the value of the data and how compatible the data from the two sources are. Even if you have two vendors, well, I'm guessing you're talking about having your software running on two different platforms, for example Amazon and Microsoft. They might not report the same things, or think that the same stuff is important, when you are running on them.

Henri (15:49):

So if the data is not compatible in that way, then it's of course difficult to integrate. But if the data is the same, just shipped in a different format, then it's a matter of normalizing that data into a unified format, be it by humans or by self-learning systems. So there's a lot of stuff going on there; you can take basically any data, look at the correlations in there, and learn something new.

Henri (16:15):

But it's quite important for companies nowadays to really think about how they are storing their data, to make this step as easy as possible, as typically this step takes a lot of effort. If you've thought about it even a little bit before you start saving your data, then when you start doing AI on top of that data, you're doing your future self a service.

Lauri P (16:41):

I was thinking of something as simple as different schema configurations within the same product. Take JIRA and observe how different companies use JIRA; it's like night and day between two companies. And then the question follows: how can anyone build any remotely functional AI implementation on top of two such data schemas? But I wish AI could help us there as well, before we get into the actual topic.

Henri (17:13):

Personally, I've been doing a lot of digging around the JIRA data model, and some of its competitors as well. And as you said, typically to a human it looks like every single project is different. But when you feed it to some of the normalizing algorithms that can be used there, generated by AI and not by a human, they behave quite nicely actually.

Henri (17:33):

So that's also one of the powers of AI. To humans it looks totally incomprehensible; how are these even comparable when all the software projects are different? But when you really start massaging the data and giving it to self-learning systems, you suddenly realize that most software projects are actually surprisingly similar.

Lauri P (17:50):

Wow.

Henri (17:51):

Maybe not with the parameters that we understand.

Lauri P (17:54):

Exactly. Lauri, any thoughts around that?

Lauri H (17:59):

Yeah. I'll come at that from a bit of a different angle. If you think about the data and the model: even with a bad machine learning model, if you have excellent data, even that can fit a decent line and give some kind of prediction, some kind of understanding. But if you have really bad data, it doesn't help much that you have a really good model, because it's just so messy; the values are all over the place. So if we talk about a vendor like JIRA and the data that it produces, they have pretty well streamlined projects in there and ways to fill everything in.

Lauri H (18:54):

But then again, there's a lot that you can configure yourself. So you can always help it by standardizing how you use JIRA, so it produces better data for you. And then of course, as Henri said, if you run it through some AI model or some neural network, it kind of starts to understand those anomalies. Even though it's weird, it can understand the weirdness. But of course, if the data is systematic, it always helps.

Henri (19:32):

And of course, if you have a chaotic process, AI cannot fix it; it can only tell you it's a chaotic process. If you give an AI system your data and say, hey, please interpret for us what the heck is going on in our software project, the end result may be: "hey, it's chaos, you don't have a formal process, you're just doing things here and there". That might be the truth behind it. There are those kinds of software projects sometimes.

Lauri H (19:58):

Exactly. Exactly.

Lauri P (20:00):

We have avoided one specific term in this conversation; I don't know if it's for a reason or inadvertent: RPA, Robotic Process Automation.

Lauri H (20:10):

When we try to think about solutions that we could build with AI, I think the road meets RPA, in a way. If you simply think of some kind of automation, the solution could be achieved both ways, and actually the paths of AI and RPA are coming together. Nowadays, if you have some RPA tool that, say, tests your UI system, some of the vendors are already utilizing AI: they are using computer vision to build these tools.

Lauri H (20:55):

And if the vendor is already using AI, it's really beneficial for the people using the product, because it has a lot of data; it has seen so many instances. So instead of telling the operation: go here, this is a login window, do this and that, it doesn't see the operations as coordinates. It can actually understand, with computer vision, what's on the screen. So maybe it can't automatically do everything, but at least it can recommend some workflows for you.

Henri (21:35):

Yeah, that's typically it. But it's also what you do when you ask experts for help with your systems: you want them to tell you some best practices, for example, hey, how do I solve this chaos and what can I do? An older term for this, basically for AI, was expert systems: systems that can replicate the way experts tell you the best ways to do things. And this is also something that's really resurfacing now with AI technology.

Henri (22:00):

And going back to RPA: my thoughts on the field are mixed. I think RPA is interesting in the way that it mimics human behavior. It takes that other definition of AI: we want to make a piece of software that mimics human behavior in software as easily as possible, and make it easy for developers to use and guide that AI to the right places.

Henri (22:25):

So RPA is currently kind of the hands on the computer system: we want it to behave the same way that humans do in different software systems. And I think it's an interesting field, as it will probably lead to some convergence in, for example, user interfaces, so that humans and machines can use the same interfaces. Quite interesting times.

Lauri P (22:51):

There are already physical robots that are taught by taking a fresh robot out of the box and then moving the limb of the robot with your own hand. You teach it the sequence of movements and the limits of the physical space: assisted training. Maybe there's something similar for RPA, where you simply show it the places and then it somehow figures things out. Going back to DevOps, and looking at the different definitions of AI that we have related so far: where would you be most likely to already see AI in action in the realm of DevOps?

Henri (23:40):

Well, one thing is that a lot of stuff, for example in linters and editors, is already using AI. There are a lot of examples in editors where you don't even know they're using AI, for example code completion tools that tell you: hey, you're probably trying to type something like this. There hasn't been a clear milestone in DevOps like, hey, now we are using AI, try it out. No.

Henri (24:02):

Before, they just used statistical maps of the most probable thing you're doing. And in the background, without even telling you, many companies have built quite complex predictive models of what code should look like, and enhanced that experience without you even knowing. So there are a lot of tools. And it's really not only about code: think about PowerPoint or Word. They also have quite powerful predictive tooling, predicting the way that you're writing text. So those are also nowadays powered by, well, maybe I don't know the specific techniques, but some kind of AI or ML models.
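As a rough illustration of the statistical approach Henri mentions, predating modern ML-based completion, here is a minimal bigram-frequency completer. The corpus is invented, and real tools are far more sophisticated.

```python
from collections import Counter, defaultdict

# A toy corpus of previously seen token sequences (e.g. lines of code).
corpus = [
    "for i in range",
    "for line in file",
    "for i in range",
    "if i in seen",
]

# Build a bigram model: for each token, count which token followed it.
follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1

def suggest(token):
    """Suggest the statistically most likely next token, like a simple completer."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

print(suggest("for"))  # "i" followed "for" more often than "line" did
print(suggest("in"))
```

Modern completion engines add context (the project, the file, other users' histories) on top of exactly this kind of frequency signal.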

Lauri H (24:38):

And I think, to explain it for someone who in the morning opens a terminal, just uses Vim in there, and closes the terminal after eight hours, spending the whole day in there: for that kind of person it sounds gimmicky if you say, just use this code completion tool that uses AI. But what is good to understand is that the AI can suggest, based on the past experiences of other users too, what you probably want to do.

Lauri H (25:16):

You could accomplish some of that without AI too, but AI gives a lot more context to it, making it more powerful. It understands what kind of project you are working on and can, based on that, give you completions and recommendations on what to do.

Henri (25:39):

For example, in my daily job I work for a company called Qentinel. What we do with AI there is not just the typical looking at your test cases to see what's going on, but actually digging through your company's test cases and showing you how similar your test cases are compared to what everybody else in the company has written. Or, hey, you've written something that nobody else has ever written in test code; are you sure that this is right, are you sure that this is what you want to do? Giving you these kinds of hints and pointers about what's actually going on there is quite a powerful use of AI.

Lauri H (26:13):

And I think, for the tools that we usually rely on vendors to give us, like code analytics and all that, it's really, really important that those vendors adopt AI in their systems. Because if you try to do some kind of code analysis only inside your company with some in-house tool, it only knows the situation of your teams and your company. But a vendor's tool applies to a lot of different companies; code analysis tools are a good example of that. It learns from different kinds of organizations, well-performing and poorly performing ones. So it's important that the tools that we use actually utilize AI and data when they are built.

Henri (27:16):

I think you have a really good point there: you want to have data from all the different companies to do the comparisons on what the best practices are, but then you want to have the context of your own company applied on top of that. And I think that's where AI tools are now taking over, and that's why you see them everywhere. They are not just like, hey, we as a company have these best practices for JavaScript, we implement them as a file, and everybody should use it.

Henri (27:39):

As it's been, for example, in JavaScript, where the Google and Facebook standards have become the de facto standards as everybody just took those linters, since they were available as open source. But now we can take that as the basis, then take your context on top of that, and let the system make recommendations based on what you've actually been doing.

Lauri P (27:57):

I write quite a bit of text as a marketeer and I use Grammarly. Grammarly has an editor where, when you start writing, it asks a few attributes about the text. Is your audience public or general or expert or scientific? And what are you trying to achieve: are you trying to inform them, persuade them, entertain them, or whatever? And then when you start writing, or you paste your first draft into the editor, it will suggest what you should do with the text.

Lauri P (28:34):

And it can do relatively complicated things, like refactor entire long sentences that span two lines, and say: you shouldn't say it this way, you should say it that way. And then it asks, "Shall I correct this for you?" So effectively they are refactoring English into achieving what you want in a better way. What about refactoring code using AI?

Henri (29:06):

I think that's already happening through editors, pretty much, doing the same kind of thing, and it's just becoming more and more context-sensitive. In refactoring code it's been almost the norm; since the '90s there have been a lot of linters, and even they have been using some of these methods, whether you call them AI methods, ML methods, or statistical methods. In coding it's been, I think, even more accepted than in typical public writing.

Henri (29:36):

And all of that historical data has led to the fact that now we have VS Code and all the others, which have super powerful plugins for that. And of course, we are still training them in the same way that you are training Grammarly: you're telling them what you're trying to achieve with this text, and giving them that as learning material too.

Lauri H (29:54):

And actually, to draw some kind of bridge from Grammarly to code linters using AI: when Grammarly tells you, maybe you wanted to write it this way, they also want to have some data on the people reading it. They want to know what people want to read; what keeps people reading this on the site, or engaged with this information.

Lauri H (30:28):

And that end-to-end process is important when you think about AI in linters and code completion tools, because different contexts require different use cases. You want to know: is this a test you're writing here, or is this just a part of your CI/CD pipeline? So even though linters and code completion tools are a small part, they can benefit from the end-to-end process, from a lot of different data.

Lauri P (31:10):

Again, it's my obligation to bring up the terms that you guys are approaching but not stating out loud. And I know that we're probably alienating some of the audience by using the ops suffix for something, but there is this MLops. I'll stop there and let you continue, because I don't know your personal points of view on this star-ops approach. What about MLops?

Lauri H (31:49):

I think two years ago, when machine learning started to be really, really popular, I saw some terms that... Because we usually say DevSecOps, which is security in DevOps operations, I saw a term like dev-MLops flying around, but I think that died quite quickly, because from the data science side there started to come this machine learning ops, MLops, term that is gaining popularity nowadays.

Lauri H (32:27):

In Finland, there are actually maybe one or two companies, really few, that are doing this. To explain it: it's trying to bring together the technical and the cultural sides, and all the silos that work in the data science field. Until this point, if you look at it from a data scientist's point of view, they want to get a clean set of data that they can build a model on. But to think where that data comes from, that requires data engineers.

Lauri H (33:08):

Well, there has been a lot of talk; it actually started as a joke that when you work on a machine learning project, 80% of it is going to be data preparation, which includes data acquisition and then cleaning and all that, and only 20% is focused on the actual model building. And even though it started a bit as a joke, when you build these big systems that productionalize some ML models, it became quite true. So I think what MLops is trying to do is to build smarter systems here: good systems that acquire the data automatically, then clean the data and actually make sure that it's good-quality data.

Lauri H (34:02):

And then there's good versioning in it, so the data scientists can use the data continuously and rely on it always being clean when it comes in, and on there being a lot of data. This is because lately we have noticed that a really big factor in the accuracy of these models is the amount and quality of this data. As for how we see this in action: I think at first, companies were quick to hire data scientists when they heard about machine learning and AI.

Lauri H (34:43):

Then they realized: we need the data, we need to hire data engineers. But now, alongside the data engineers, I'm seeing that they also want to hire machine learning engineers, who kind of tie this together like DevOps does: bringing DevOps processes into the machine learning world, with the knowledge of how to productionalize the models and the data, and how to feed the data that comes from the models in production back through the feedback loop into operations. So I think MLops has similarities with DevOps, with all this continuity; everything has to be continuous, but brought into the data world.
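One small piece of the data versioning Lauri mentions can be sketched by deriving a version id from the dataset's content, so a trained model can always be traced back to exactly the data it saw. This is a minimal illustration, not any specific MLops tool, and the record fields are invented.

```python
import hashlib
import json

def dataset_version(rows):
    """Derive a stable version id from the dataset's content, so a model
    can always be traced back to the exact data it was trained on."""
    payload = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]

v1 = dataset_version([{"build": 1, "passed": True}])
v2 = dataset_version([{"build": 1, "passed": True}, {"build": 2, "passed": False}])

print(v1, v2)  # any change to the rows changes the version id
print(v1 == dataset_version([{"passed": True, "build": 1}]))  # key order doesn't matter
```

Content-addressed versions like this are what let a data scientist say "this model was trained on dataset `a1b2…`" and reproduce the run later, which is the continuous, auditable loop MLops is after.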

Henri (35:37):

MLops, AIops, DevOps: all of these are, in my opinion, talking about the process of making something. DevOps is about the process of actually making software in an efficient way, and the culture and how people interact. MLops is the same thing: you're building the tools for the actual process to make it better. How do we smoothly make AI and machine learning models, how does this interact with data, and how do we address all of those pain points with automation, so the humans don't have to do the same things over and over again?

Henri (36:09):

And the same thing is happening in these multiple fields, and now it's time for AI to get the automation treatment. That was something we also talked a little bit about, for example the Finnish company Valohai, who are doing a lot of work on automating all of the infrastructure around AI and getting the data in there. As we said, in these new fields the data engineering is still the thing that takes a lot of time in the actual development cycle: how to get the data in, and how to get all of that stuff in.

Henri (36:40):

There are a lot of great solutions out there, and we are slowly building on top of them. So the role of AI is evolving in all the companies. It's not just a gimmick anymore, and it's much easier to use because there's a lot of tooling around it.

Lauri H (36:52):

And I think, well, the traditional data scientists that I've met don't often come from a software engineering background. A lot of data scientists that I meet might be chemical engineers, bioengineers, or just pure mathematicians. They know all the math behind it; they just need to learn enough code to actually make a machine do these things.

Lauri H (37:28):

So if you have worked in the software field, you understand that there's a lot that comes with it. Nowadays everyone is working in the cloud, you have to understand how the system works, and you have to have monitoring systems in place. Being a data scientist already requires a lot of you; you need to understand a lot that is not software engineering. So the point of this kind of MLOps thinking is that there are other engineers around the data scientist who builds the model, helping to build the system, so that what the data scientist works on can actually have the biggest impact.

Lauri P (38:24):

It sounds to me like the roles begin to diverge from each other, so you end up having these more hardcore machine learning and artificial intelligence specialists who are not required to know the business domain that deeply. And conversely, you end up having the business analysts who know the particular domain particularly well, but they don't have to understand the technology that much, because they have the technologists.

Lauri P (39:00):

So it's really, really interesting how these different roles together try to get the job done, because none of them can comprehend the whole domain independently. And somehow it feels to me, I think either of you had a note in preparation, that if we can automate something with any technique, then the technique is not relevant; the fact that we achieve our task is the relevant part.

Lauri H (39:28):

Yeah, exactly. I think that comes back to the fact that AI became so popular so quickly, and there's a huge market. Of course, many people aim to have knowledge in AI and machine learning; they want a slice of that cake in the market, they want to have expertise. And from that, it's tempting to think that any problem could be solved by utilizing AI. But that's a bit of an unhealthy way of thinking, because, as we've talked about in this podcast, AI comes with a lot of work and automation tasks around it.

Lauri H (40:19):

So if you can do it without AI, then you should be happy with that; it's probably way less work. And the important thing is that if you don't need AI to solve something, then you probably don't need that much data to solve it, so you can cut one layer out of the solution.

Lauri P (40:44):

So eventually it will all boil down to putting up different solutions that solve the same problem, letting them compete, and seeing which solves it best, irrespective of how it's built.

Henri (40:57):

Yeah. It could be that the people using pen and paper might be the best solution.

Lauri P (41:01):

I was about to say, we have been quite dev-heavy, but thinking back on our conversation, oh boy, we have also talked about data labeling and the importance of data, which goes back to operations as well. But maybe to round up and start getting to the end of the session, let's talk a little bit more about AI on the DevOps ops side.

Henri (41:27):

Looking at the operations side, there's of course a huge amount that AI can now do, and many of the best companies use AI to optimize their ops side as well. But there aren't too many generic frameworks for it. For example, how do you manage really difficult ops environments where you have thousands of containers? Which should be killed and which should not?

Henri (41:53):

Actually, now that I think about it, there's quite a lot of tooling for that. For example, AI is used to find containers that have hung in your ops environment and should probably be killed. And there are all these analytics tools for complex environments, like the internal networks between your servers; a lot of tooling uses AI to analyze what's actually going on there and point you in the right direction.
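[Editor's note: a toy version of the hung-container detection Henri mentions. Real tooling would look at richer signals; here a container is simply "hung" if its last heartbeat is older than a timeout. The container names and the timeout are illustrative assumptions.]

```python
# Flag containers with stale heartbeats as candidates to kill.
# Timestamps are seconds; a real system would use monotonic clocks
# and many more health signals than a single heartbeat.

def hung_containers(last_heartbeat, now, timeout=30.0):
    """Return names of containers whose last heartbeat is older than `timeout` seconds."""
    return sorted(name for name, ts in last_heartbeat.items()
                  if now - ts > timeout)

heartbeats = {"web-1": 100.0, "web-2": 58.0, "worker-1": 99.0}
print(hung_containers(heartbeats, now=100.0))  # ['web-2']
```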

Henri (42:17):

Or maybe even do those operations by themselves, like optimizing the actual running of your business. And I think most of these new data center providers, for example, use a lot of this to optimize where you're actually getting your resources from and where all of your processes live in the rack. There's a lot of AI going on in there too, but I think they mostly call it ML or statistics.

Lauri H (42:41):

On the ops side of AI, well, mostly you see it in decision-making: when you need to lead anything, or manage your teams or your clusters with data, that's where the utilization comes in, because it can actually understand the environment. And the biggest part of it is predictions and predictive maintenance, which is really important and cost-saving: you can actually predict something bad happening before it happens. That's a key factor you want to know about.

Lauri H (43:25):

You want to know if your systems can handle the workload, or if they are at risk of not handling it. And even for your software development team, or any team, data could help with that. It could tell you beforehand that something is not really a smart thing to do.
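[Editor's note: a toy predictive-maintenance check in the spirit of what Lauri describes, flagging a machine as at-risk before it fails. A rolling mean over a threshold stands in for a real learned model; the metric, window, and limit are invented.]

```python
# Flag a machine whose recent temperature trend crosses a risk threshold.
# A real system would use a trained model, not a fixed rolling-mean rule.

def at_risk(temps, window=3, limit=80.0):
    """Return True if the rolling mean of the last `window` readings exceeds `limit`."""
    if len(temps) < window:
        return False  # not enough history to judge
    recent = temps[-window:]
    return sum(recent) / window > limit

healthy = [70, 71, 69, 72]
degrading = [70, 78, 84, 88]
print(at_risk(healthy), at_risk(degrading))  # False True
```

The point of the rule is exactly the culture shift discussed next: the machine still works today, but the trend says act now.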

Henri (43:48):

That's actually an interesting thing I've noticed with predictive maintenance, for example: it also requires a big culture shift. Basically, the algorithm tells you that this machine hasn't broken yet, but you should change it now to avoid disaster later. Trusting the machine that this is actually saving you money, that I now go in and switch out a perfectly good working computer from a rack before it fails, requires a big shift on the human side of the culture: we have to trust that the algorithm is actually right.

Lauri P (44:25):

In aviation, there are two terms: one is safe-life and one is fail-safe. And probably on the ops side, fail-safe is more ingrained a way of thinking than safe-life. But what you're describing is effectively safe-life thinking.

Henri (44:45):

And I'm guessing that's something that's going to change there too: you start focusing on different things, keeping systems in a certain working order and fail-safing, or safe-lifing, things before anything happens, making sure they're always healthy. Of course, there's the other side: some companies might go the other route and take the maximum out of their resources before they fail, for example.

Henri (45:09):

There might be two tiers of operations: some use AI to really, really push their systems hard, squeeze out every single bit, then break them and swap in new components, for example, or something else. There are a lot of places where we could optimize with AI, and it's not always the first thought that comes to mind; there might be some really counterintuitive ways to do it.

Henri (45:32):

And for example, Netflix is doing a lot of chaos engineering: they just randomly shut down machines in their infrastructure and see what happens. And because of the way they've engineered their whole infrastructure, it's totally resilient to that: some random AI system goes in and shuts something down, and the system still works, because they've totally ingrained that into their daily philosophy and everything they have there. So they've managed it in such a way that it doesn't even matter, and they want to verify that it doesn't matter. There are a lot of different ways of going about this.
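[Editor's note: the chaos-engineering idea can be sketched in a few lines. This is not Netflix's actual tooling; the `Cluster` class is a hypothetical stand-in for real infrastructure, and the experiment is just "kill one replica, assert the service still answers".]

```python
# Minimal chaos experiment: terminate a random replica, then verify
# the service survives. Real chaos tools act on real infrastructure.
import random

class Cluster:
    def __init__(self, replicas):
        self.alive = set(range(replicas))

    def kill_random(self, rng):
        """Terminate one randomly chosen live replica (the chaos step)."""
        victim = rng.choice(sorted(self.alive))
        self.alive.discard(victim)
        return victim

    def serves_traffic(self):
        # Resilient design: any surviving replica can serve requests.
        return len(self.alive) > 0

rng = random.Random(42)
cluster = Cluster(replicas=5)
cluster.kill_random(rng)
assert cluster.serves_traffic()  # losing one machine must not matter
```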

Lauri H (46:06):

And I think if you want complete automation, or want to give full control to the AI, we would in a way also want to fail-safe it: the machine should know how it's probably going to fail. Then, when a failure actually happens, it could hopefully have some solutions or recommendations for how to fix it. And if it's something the machine could possibly do itself, it could actually do the self-maintenance there.

Henri (46:41):

And for example, say we take every single ounce of power out of our computers, and when they fail, we have the data: okay, this probably failed because the power supply failed; there's a 90% chance the power supply was the part that failed. Then we could go in, switch that, and start again, if that was the most typical thing.
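[Editor's note: the "90% chance it was the power supply" reasoning is just empirical failure frequencies. A sketch, with invented component names and counts chosen so the power supply comes out at 90%.]

```python
# Estimate the most likely failed component from historical failure data.
from collections import Counter

# Hypothetical history: 9 power-supply failures, 1 disk failure.
failure_history = ["psu"] * 9 + ["disk"]

def most_likely_cause(history):
    """Return the most frequent failure cause and its empirical probability."""
    counts = Counter(history)
    part, n = counts.most_common(1)[0]
    return part, n / len(history)

part, prob = most_likely_cause(failure_history)
print(part, round(prob, 2))  # psu 0.9
```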

Lauri P (46:59):

Not the same, but it reminds me of a car. Probably 20 years ago there was a new car model (I don't want to say the brand here), and one of its safety systems, the airbag, ESP, or ABS, had dedicated hardware controlling it. But the problem was that as soon as you initialized the software on that hardware, the software crashed at about 0.9 seconds of uptime. And they really didn't have time to fix it.

Lauri P (47:29):

And the good thing was that the subsystem was able to rebuild and initialize itself in 0.01 seconds. So if you automatically made it reboot before every 0.9 seconds of uptime, everything was fine. And the likelihood of getting into an accident during that fraction of a second of a reboot was so minuscule that it wasn't a big issue. I want to leave some time for our rapid-fire questions for both of you. But before going there, I'd like to give you the floor one more time to wrap up this topic with your last word, and then we'll take the rapid-fire questions with each of you.

Lauri H (48:09):

I think what we've talked about a lot is that if you are thinking about anything AI, or if you are interested in it or already doing it, it's good to think about everything around it too. First of all: do you need AI to solve these problems? And to actually solve problems with AI, you need to have good data solutions and data infrastructure in place. So even though it's really shiny at the moment, and the field of AI and data science is so hot right now that it would be fun to jump into it, it's never, or almost never, the first solution. There are so many things that need to happen before you utilize it.

Lauri P (49:07):

Thank you. Henri.

Henri (49:08):

I think we've talked a lot about AI, and in the end, in a sense, it's nothing special. It's still just a little piece of software that runs in your infrastructure. And the problem points of implementing AI will typically not be the actual AI algorithms. Of course, you might have to buy better PCs or more memory or something, but those are pretty trivial problems to solve. The main problems come from your culture and how you handle data. How have you formatted the data? Have you done it in a certain way? Is there a standardized way of handling things that can be understood with a reasonable amount of effort when implementing the AI?

Henri (49:42):

And the same on the other side that we've been talking about: how do you then actually utilize the data that the AI produces? Do you believe the numbers that come out of it, and can you verify that that's actually what's happening? So I think AI in itself might just be a function call in your program code, but getting that function call to work requires a lot of cultural work and a lot of changes in your organization, which you need to be mindful of. And when you get the function call working, it might be super powerful for your business.

Lauri P (50:10):

Awesome. Thank you. And we have a habit of asking the same questions of every visitor who comes to our podcast. I'll start with Henri; try to answer without thinking too long. Fill in the following sentence: DevOps is?

Henri (50:29):

DevOps is quantifying culture into automation for humans.

Lauri P (50:32):

What three questions do you ask to tell if a company needs your help?

Henri (50:36):

How reliable is your deployment? Do you trust your software? Do you have any problems getting your software to work reliably when your customer is actually using it? I'm in testing here, so I have a lot of reliability and trust questions.

Lauri P (50:52):

You are called to help your customer. What's the first thing you do?

Henri (50:56):

First things first, we want to understand what you do as a customer: what's the current process, what's the culture. As we said, the thing you're building is built by humans; I want to understand how you do it.

Lauri P (51:06):

What is something people often get wrong about, in your case, test automation?

Henri (51:10):

One thing is that it's kind of like, hey, I want to buy one DevOps, please, and it will solve my problems. It's not that. It's culture change, it's technology, it's automation. It's changing the habits you have.

Lauri P (51:24):

What trends or new things would you like to see become mainstream?

Henri (51:28):

People are talking about data-driven organizations. That's still super difficult to actually do; there are really few data-driven organizations. I'd like to see that become more mainstream.

Lauri P (51:38):

What is your secret personal productivity tool?

Henri (51:41):

I actually still use Post-its, a lot of them.

Lauri P (51:44):

Super. What book have you completed most recently?

Henri (51:48):

Most recently I completed Echopraxia, a sci-fi book about consciousness. Really good.

Lauri P (51:57):

Cool. What is something that brings great joy in your life?

Henri (52:00):

Going outdoors. Right now it's super nice sitting outside.

Lauri P (52:03):

What is something that you are grateful for right now?

Henri (52:06):

Hey, being on this podcast and talking to you guys, talking to professionals about topics that I'm interested in.

Lauri P (52:13):

Cool. Lauri, the same questions for you. Fill in the following sentence. DevOps is?

Lauri H (52:17):

DevOps is... Well, the boring answer is always bringing dev and ops together. But nowadays, what I think is important in DevOps is to bring the ops inside it, and we need to talk about the cultural side of DevOps.

Lauri P (52:36):

What three questions do you ask to tell if a company needs your help?

Lauri H (52:40):

I'm going to adjust this a bit, because I mainly work as a data engineer, though of course in DevOps environments. The three things I usually ask are: from where and how do you collect your data? Where is it, and is it available for everyone? And is it always up to date?

Lauri P (53:02):

You are called to help your customer with DevOps. What's the first thing you do?

Lauri H (53:07):

The first thing I do is actually see how they are using their systems. I can ask them what tools they have in use, but there are thousands of ways of using those; I want to know how they are using them.

Lauri P (53:25):

What is something people often get wrong about DevOps?

Lauri H (53:28):

Often, people silo DevOps into small things: they think it's only building a CI/CD pipeline, or that it's just test automation. But I think a big part of it is the cultural change. Well, it is in the name, but it's not emphasized as much as the technical side.

Lauri P (53:49):

What trends or new things would you like to see become mainstream?

Lauri H (53:53):

Well, it probably became clear in this podcast, but I like the rise of MLOps, because I think with all the machine learning and data science, growth has been so slow because there are huge silos. So I love to see people, as Henri mentioned with Valohai, doing MLOps in Finland. I'm excited for that.

Lauri P (54:15):

What is your secret personal productivity tool?

Lauri H (54:18):

This must be quite unusual, but I think it's my guilty-pleasure playlist that gets me going in the morning. I think the best code I've written has been while listening to Taylor Swift or Ariana Grande or something like that. And working in Finland, where the music is a bit heavy and all that, this is quite an unusual answer.

Lauri P (54:41):

What book have you completed most recently?

Lauri H (54:43):

I just recently went on a binge on Patrick Lencioni, and the latest book I finished was The Ideal Team Player.

Lauri P (54:57):

What is something that brings great joy in your life?

Lauri H (54:59):

I can find joy in basically anything: in day-to-day life, in work, and in the people around me. That's just what I need.

Lauri P (55:13):

And lastly, what is something you're grateful for right now?

Lauri H (55:16):

Right now I'm grateful to see a lot of communities in tech coming together. During these times when people have worked remotely, a lot of new meetups have sprung up, even in niche fields, because the barrier to entry for organizing a meetup is so low at the moment. So different kinds of experts are coming together, and it's super cool to see.

Lauri P (55:50):

Thank you. And thank you, Henri, and thank you, Lauri, for joining. A very, very pleasant conversation, and I'm convinced the audience will find something to think about, whatever role they're coming from, and whatever level of their organization or area of responsibility. So thanks so much for joining, and I'm pretty sure we will hear from each other in the future as well.

Henri Terho
Chief R&D Evangelist at Qentinel
LinkedIn: linkedin.com/in/henriterho

Lauri Huhta
Data Engineer / DevOps Consultant at Eficode 
LinkedIn: linkedin.com/in/lauri-huhta