
The latest insights about AI/ML in DevOps

Written by:
Heidi Aho

AI is not a silver bullet that will solve all your DevOps woes, though it is silver-plating parts of the software lifecycle as heavy-hitters around the world fold more AI/ML into their pipelines. All of this makes for an interesting conversation about AI/ML in DevOps.

Last month, Europe’s leading DevOps conference saw a cohort of AI experts descend on Helsinki. We’ve taken insights from four of them to explore uses of AI/ML in DevOps.

“What is human learning in the context of software testing? Getting people to test without being explicitly told to do so. That’s an even harder problem than putting machine learning into our programs, guys.” Ingo Philipp, Tricentis, DEVOPS 2018

As the AI industry matures, it will create fresh new dilemmas for DevOps to solve

SingularityNET is a decentralized AI network that lays the foundation for agent-based computing. Its founder, Dr Ben Goertzel, says agent-based computing makes testing interesting, to say the least. An AI might one day be getting information from other AI agents, and you don’t know how those interactions will play out at the time you write the code. Distributed AI thus creates a very fluid environment: if large swathes of code can change dynamically, automating test cases, for example, becomes hugely more difficult.

Data science teams building AIs will need to adopt DevOps methodologies and technologies, but exactly how DevOps should be executed there is unknown territory. This is the brave new world of DevOps. As Goertzel said at DEVOPS 2018: “I'm more like the kind of guy who creates new kinds of problems for DevOps people to solve!"

DevOps for AI

Forrester’s Diego Lo Giudice points out that AI will be used to test AI systems. AI itself needs to be tested and built using the AI-enhanced methodologies that have entered the DevOps playing field. More on those below.

Over 37% of companies are using AI/ML to improve quality

According to Forrester research from 2016, just over a third of companies were already using AI in software testing. AI can be used to optimize the testing process, improve mean time to repair defects, optimize defect fixing, and test AI systems themselves.

Ingo Philipp from Tricentis drilled further into the details of how AI/ML can take work off the hands of humans when it comes to testing: test strategy optimization, automated test design, automated exploratory testing, automated defect diagnosis, and user experience analysis. As for the best uses of AI in software testing according to Tricentis, those are outlined below. 

The best uses of AI in software testing, according to Tricentis

  • Redundancy prevention: AI eliminates and prevents redundancies in test case portfolios, achieving the same business risk coverage with less effort
  • Risk coverage optimization: AI highlights the risk contribution of each test case so that you can maximize risk coverage with the minimum number of test cases (see the sketch after this list)
  • False positive detection: AI reduces the effort required for results analysis by indicating whether a failed test case actually detected a defect in the application, or just broke due to technical issues with the test case itself
  • Resilient automation: Tricentis’ RPA uses model-based automation to build resilient bots that can adapt to change in minutes rather than days; there’s a walkthrough of this use of ML in Ingo’s presentation
  • Portfolio inspection: AI tracks flaky test cases, unused test cases, test cases not linked to requirements, untested requirements, and so on to pinpoint weak spots in test portfolios
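To make the first two bullets concrete, here’s a minimal sketch of the greedy idea behind redundancy prevention and risk coverage optimization: keep picking the test case that adds the most not-yet-covered business risk, and treat whatever remains as redundant. The test names and risk weights are invented for illustration, and this is a generic weighted set cover heuristic, not Tricentis’ actual algorithm.

```python
# Each test case covers a set of risk items; each risk item has a weight.
# All names and numbers below are invented for illustration.
risk_weights = {"checkout": 10, "login": 8, "search": 5, "profile": 2}
test_coverage = {
    "test_checkout_happy_path": {"checkout", "login"},
    "test_login_only": {"login"},             # redundant: covered above
    "test_search_and_profile": {"search", "profile"},
    "test_profile_only": {"profile"},         # redundant: covered above
}

def select_tests(coverage, weights):
    """Greedy weighted set cover: keep adding the test case that
    contributes the most not-yet-covered risk until nothing is gained."""
    selected, covered = [], set()
    while True:
        best, best_gain = None, 0
        for test, risks in coverage.items():
            gain = sum(weights[r] for r in risks - covered)
            if gain > best_gain:
                best, best_gain = test, gain
        if best is None:  # no test adds new risk coverage: the rest are redundant
            break
        selected.append(best)
        covered |= coverage[best]
    return selected, covered

selected, covered = select_tests(test_coverage, risk_weights)
print(selected)  # ['test_checkout_happy_path', 'test_search_and_profile']
print(sum(risk_weights[r] for r in covered), "/", sum(risk_weights.values()))
```

Here two of the four test cases give full risk coverage: the same result in terms of business risk, with half the effort.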

But don’t forget that AI/ML requires a lot of human effort: War stories

AI isn’t a silver bullet that eliminates human error, and the activation energy required of humans to get AI/ML off the ground can be significant.

Using AI isn't easy

When it comes to getting started with automation, Elisa’s Jere Nieminen highlights that if you don’t have a firm grip, from a QA perspective, on the data you’re asking ML to analyze, there’s no use moving the project forward until you do. What’s more, automation can be a huge burden for testers: you still need to write the framework, which is a tall order especially for non-technical testers, a point made by Forrester’s Diego Lo Giudice.

Even when your AI/ML is ticking along, understanding the events you’re looking at can be difficult. Nieminen recommends creating a process for understanding anomalies and enriching the data set so you gain a view of the root cause, too.
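As a rough illustration of that kind of process (not Elisa’s actual pipeline), the sketch below flags an anomaly in a log-derived metric and then attaches the context a human would need to chase the root cause. The metric and all of the field names are invented:

```python
from statistics import mean, stdev

# Invented metric: error rows per minute from log aggregation.
history = [120, 118, 131, 125, 122, 119, 127, 124]  # recent normal readings
observed = 410                                      # the new data point

mu, sigma = mean(history), stdev(history)
z_score = (observed - mu) / sigma

if abs(z_score) > 3:  # a common, if crude, anomaly threshold
    # Don't just alert on the number: enrich the anomaly record with the
    # context needed to understand it, so triage can reach a root cause.
    anomaly = {
        "metric": "error_rows_per_min",
        "value": observed,
        "z_score": round(z_score, 1),
        "top_error_codes": ["HTTP 503", "HTTP 504"],  # hypothetical enrichment
        "affected_component": "cdn-edge-eu-north",    # hypothetical label
        "suspected_root_cause": None,                 # filled in during triage
    }
    print("anomaly flagged:", anomaly)
```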

Human judgement and perspective needed

Humans also need to provide sound judgement and perspective, because ML can’t see the big picture. In Elisa’s case, ML was tracking up to 800 million log rows from the CDN. This still didn’t give a holistic account of how Elisa’s streaming service was working, due to survivorship bias: some sessions may never have made it to the CDN at all, cases the ML knows nothing about, examples of poor user experience that fell through the cracks.

Here’s an example of how human error can slip into an automated system. Jere Nieminen recounts how his team missed an anomaly during the Christmas period, so the ML absorbed it as the new normal and stopped notifying them of later instances. As a result, it took them longer to notice that one of their storage systems was having performance issues.
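A toy illustration of that failure mode, assuming a simple rolling-baseline detector rather than Elisa’s actual system: once an unlabeled spike is absorbed into the baseline, the same degraded performance stops registering as an anomaly at all.

```python
from statistics import mean, stdev

def is_anomaly(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) > threshold * sigma

baseline = [100, 104, 98, 101, 103, 99, 102, 100]  # normal read latency, ms
christmas_spike = 400                              # storage slows down, unnoticed

print(is_anomaly(christmas_spike, baseline))       # True: this should alert

# Nobody labels or excludes the spike, so it flows into the training window...
contaminated = baseline + [christmas_spike] * 5

# ...and the same degraded performance now looks like the new normal.
print(is_anomaly(400, contaminated))               # False: the alert is silenced
```

The fix is the human step Nieminen describes: review flagged anomalies and keep confirmed incidents out of the data the model learns “normal” from.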

Augmentation: the holy grail of AI-infused DevOps

We’ve established that humans are still needed to get AI and ML right. For the foreseeable future, AI/ML in DevOps will consist of augmentation: humans improving their performance thanks to AI/ML.

This holds tremendous potential. According to Forrester's 2017 Q3 Global Agile Software Online Survey, 46% of tests were still done manually in 2017, a time sink which hampers the ability to deploy code frequently.

Set AI/ML to work saving you time by doing what it does best. Diego gives the example of a company that produces servers. Every time a new server comes to market, they have a trillion configurations to test, an astounding number that would take 17 years to work through. The company built a model to recommend 500 business-critical configurations that they could test while still getting adequate coverage. Testing now takes only a handful of days.
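The talk doesn’t detail how that model picked its 500 configurations, but one common way to tame a combinatorial space like this is pairwise testing: cover every pair of parameter values using far fewer configurations than the full cross product. Below is a minimal greedy sketch with invented server parameters:

```python
from itertools import combinations, product

# Invented server parameters; a real server has vastly more of these.
parameters = {
    "cpu":    ["xeon", "epyc"],
    "memory": ["64GB", "256GB", "1TB"],
    "nic":    ["10GbE", "100GbE"],
    "os":     ["rhel", "ubuntu", "windows"],
}

def uncovered_pairs(config, covered):
    """Pairs of (parameter, value) settings in this config not yet covered."""
    items = sorted(config.items())
    return {frozenset(p) for p in combinations(items, 2)} - covered

# The full cross product of all parameter values (36 configs here).
all_configs = [dict(zip(parameters, values))
               for values in product(*parameters.values())]

# Greedily pick the config covering the most new pairs until all are covered.
selected, covered = [], set()
while True:
    best = max(all_configs, key=lambda c: len(uncovered_pairs(c, covered)))
    gain = uncovered_pairs(best, covered)
    if not gain:
        break
    selected.append(best)
    covered |= gain

print(f"{len(selected)} configs cover all value pairs, out of {len(all_configs)} total")
```

Even on this tiny example, around nine configurations cover every pairwise interaction out of thirty-six; the same idea is what lets a trillion-configuration space shrink to a few hundred well-chosen tests.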

The best DevOps/AI hack is to deal with good old human stupidity first

Tricentis’ Ingo Philipp said it best: “The number one testing tool is not the computer, it is still the human brain. So my advice to you is: don’t expect artificial intelligence to solve all your problems in testing now, do something about your natural stupidity first!”

You can watch all four talks on DevOps and AI that this piece was based on, for free.
