Launchable Draws on DevOps to Deliver AI-Powered Software Test Automation

In AI, Technology News by James Kobielus

The News: Demand is growing for AI-powered software test automation, as evidenced by research focused on key trends in quality assurance and testing, such as the World Quality Report 2018-2019.

Startup Launchable has emerged from stealth to introduce its forthcoming AI-driven software test automation solution. Launchable was co-founded by Kohsuke Kawaguchi, creator of the open-source Jenkins continuous integration/continuous delivery (CI/CD) automation server, and Harpreet Singh, former head of the Bitbucket group at Atlassian.

Analyst Take: Currently in beta and due for general availability later this year, Launchable’s SaaS solution uses machine learning (ML) to speed the software testing process. For each incremental code change in the CI/CD pipeline, the solution uses ML to identify which tests are most likely to fail. Launchable’s ML engine learns this by studying data on past code changes and test results drawn from Git repositories and CI systems. The data is refined, stripped of sensitive information, and then used to train Launchable’s ML model.

This enables the Launchable SaaS offering to predict, for each new code change, which tests are most likely to fail. Quick feedback of this sort lets developers run only a meaningful subset of tests, rather than retest every module and integration after each incremental change. That shortens the time needed to test and refine code before it goes into production, makes testing of larger codebases more efficient with less manual effort, and helps testers adaptively identify the most relevant tests so they can better focus ongoing efforts to eliminate bugs from deployed code.
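To ground the mechanics, here is a minimal sketch of the general predictive-test-selection technique, not Launchable’s actual implementation: the feature names, the sample data, and the use of scikit-learn’s GradientBoostingClassifier are all illustrative assumptions. The idea is simply to learn from historical (code change, test outcome) records and rank the tests most likely to fail for a new change.

```python
# Hypothetical sketch of ML-based predictive test selection (not Launchable's code).
# Assumes a history of (code change, test) records mined from Git and CI logs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative features for each (change, test) pair:
#   files_changed       - size of the code change
#   overlap_score       - how often this test has exercised the changed files historically
#   recent_failure_rate - fraction of the test's recent runs that failed
X_train = np.array([
    [12, 0.80, 0.30],
    [ 3, 0.05, 0.01],
    [25, 0.60, 0.20],
    [ 7, 0.10, 0.02],
])
y_train = np.array([1, 0, 1, 0])  # 1 = the test failed for that change

model = GradientBoostingClassifier().fit(X_train, y_train)

def select_tests(candidate_tests, threshold=0.5):
    """Return (test, probability) pairs whose predicted failure probability clears the threshold."""
    feats = np.array([[t["files_changed"], t["overlap_score"], t["recent_failure_rate"]]
                      for t in candidate_tests])
    probs = model.predict_proba(feats)[:, 1]
    ranked = sorted(zip([t["name"] for t in candidate_tests], probs),
                    key=lambda pair: pair[1], reverse=True)
    return [(name, p) for name, p in ranked if p >= threshold]

# For a new code change, run the tests most likely to fail first.
new_change_tests = [
    {"name": "test_checkout_flow", "files_changed": 9, "overlap_score": 0.70, "recent_failure_rate": 0.25},
    {"name": "test_static_pages",  "files_changed": 9, "overlap_score": 0.02, "recent_failure_rate": 0.00},
]
print(select_tests(new_change_tests))
```

In practice the features would be mined automatically from Git and CI metadata, and the model would be retrained continuously as new test results arrive.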

What’s most noteworthy about the company’s emergence from stealth is the pedigree of its co-founders, Kawaguchi and Singh. Prominent DevOps industry figures, most notably the creator of the Jenkins CI/CD automation server, have essentially validated that AI-driven test automation is coming to every software development shop in a big way.

Launchable Automates Decision Support for Agile Coding

In a CI/CD context, the Launchable adaptive AI can drive automated testing of source code changes upon check-in as well as notification of development and operations personnel when the tests fail. It can ensure that developers never have to wait more than a few minutes for feedback on their latest code changes. It can also help testers to keep pace with the growing volume, velocity, and variety of code changes, so that the most relevant changes can be tested 24×7.

AI-based test automation will become a standard feature of most cloud DevOps tools by the middle of this decade, if not sooner. That’s because this methodology supports the agile, incremental nature of modern code development. Using this new generation of tooling, developers can:

  • Run a high-confidence subset of the most critical integration tests more often, so that more bugs can be caught the same day rather than overnight (see the sketch after this list).
  • Start code reviews partway through a continuous-integration validation run once the confidence level of those tests rises above a minimum threshold.
  • Do tests on incremental builds while development of other code is still in progress, eliminating the need to delay testing until a complete build is ready.
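The first two points can be made concrete with a hedged sketch of how a CI job might act on such predictions. The prediction function, test names, and thresholds below are hypothetical placeholders, not any vendor’s API.

```python
# Hypothetical CI gating logic built on predicted failure probabilities (illustrative only).

def predict_failure_probability(test_name: str, change_id: str) -> float:
    # Stand-in heuristic; a real system would query a trained model
    # (for example, the ranking sketch shown earlier).
    return 0.9 if "api" in test_name else 0.05

def plan_ci_run(all_tests, change_id, subset_threshold=0.2, review_threshold=0.9):
    scored = [(t, predict_failure_probability(t, change_id)) for t in all_tests]

    # Run only the tests whose predicted failure probability exceeds the subset threshold;
    # a real CI job would hand this list to its test runner (pytest, JUnit, etc.).
    selected = [t for t, p in scored if p >= subset_threshold]

    # Confidence that the skipped tests would have passed anyway. If it clears the
    # review threshold, code review can begin before the full validation run finishes.
    skipped_risk = max((p for t, p in scored if t not in selected), default=0.0)
    review_can_start = (1.0 - skipped_risk) >= review_threshold
    return selected, review_can_start

if __name__ == "__main__":
    tests = ["tests/test_api.py", "tests/test_ui.py", "tests/test_docs.py"]
    selected, review_ok = plan_ci_run(tests, change_id="abc123")
    print("Tests to run now:", selected)
    print("Start code review early:", review_ok)
```

A real pipeline would invoke the test runner on the selected list and post the review-gate signal back to the pull request.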

Low-Code Tools Are Adding AI-Driven Software Test Automation

When Launchable makes its SaaS offering available later this year, it will join a growing range of low-code tool vendors that support AI-accelerated software test automation. Incumbents in this segment include AI Testbot, Appdiff, Applitools, Appvance, Autify, Functionize, Infostretch, Mabl, ReTest, Selenic, Test.ai, and Testim.io.

What these vendors all offer, to varying degrees, are the following AI-powered software testing capabilities:

  • Test optimization: AI-driven automation solutions can leverage historical quality-assurance data to identify appropriate test scenarios. They can weed out pointless, resource-intensive tests. They can optimize test orchestration plans for each code release. They can prioritize tests by automatically identifying failures that don’t indicate a real problem in the application under test (see the flaky-test sketch after this list). And they can assess pass/fail outcomes for complex and subjective tests.
  • Quality monitoring: The tooling can identify software quality issues, apply test inputs, validate outputs, and emulate users or other conditions. It can improve the accuracy, transparency, repeatability, and efficiency of software tests. It can find and fix broken tests and verify that the user interface renders correctly for the user. It can leverage supervised, unsupervised, and reinforcement learning to detect defects proactively, predict failure points, and optimize testing.
  • Deployment acceleration: These tools can automate the scripting, execution, and analysis of tests as fast as code gets deployed or changed. They can accelerate detection of software defects, speed the feedback loop on defects from operations back to development, and expand the range of test cases that can be executed in parallel on every run.
  • Coverage assurance: AI-driven tooling can automate assurance of continued full coverage of all testing scenarios. It can update tests as code changes, tweak tests for statistical outlier cases, leverage optical recognition of pattern-based user-interface controls to make test automation more resilient to changes, and track anomalous, unused, and unnecessary test cases to indicate coverage gaps in test case portfolios.
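As one hedged illustration of the test-optimization point above, the sketch below flags likely flaky tests from historical pass/fail records; a test that both passes and fails on the same commit is failing for reasons other than the application under test. The data layout and threshold are assumptions made for illustration only.

```python
# Illustrative flaky-test detection from historical CI results (assumed data layout).
from collections import defaultdict

# Each record: (test_name, commit_sha, passed). A test that both passes and fails
# on the same commit is a strong flakiness signal.
history = [
    ("test_login",    "a1b2c3", True),
    ("test_login",    "a1b2c3", False),
    ("test_login",    "d4e5f6", True),
    ("test_checkout", "a1b2c3", False),
    ("test_checkout", "d4e5f6", False),
]

def find_flaky_tests(records, min_runs=2):
    outcomes = defaultdict(lambda: defaultdict(set))
    for test, commit, passed in records:
        outcomes[test][commit].add(passed)
    flaky = []
    for test, per_commit in outcomes.items():
        runs = sum(len(results) for results in per_commit.values())
        # Flaky if any single commit produced both a pass and a fail.
        if runs >= min_runs and any(results == {True, False} for results in per_commit.values()):
            flaky.append(test)
    return flaky

print(find_flaky_tests(history))  # ['test_login'] -> deprioritize or quarantine these tests
```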

More vendors are sure to enter this space over the next one to two years. Nevertheless, there remains a clear window of opportunity for a new startup such as Launchable to gain traction with its AI-automated software test offering. The reasons are several:

  • Few of the incumbent AI-driven software test automation tools have gained broad adoption.
  • Most incumbents focus on testing code changes in web and mobile applications, rather than in containerized cloud-native apps deployed across the world of distributed enterprise microservices where Jenkins is ubiquitous.
  • Leading low-code integrated development tool vendors have not yet entered this space in a major way.

Nevertheless, many of those vendors already offer varying degrees of ML-augmented coding for rapid application development, so it would not be surprising to see them extend this capability to ML-driven software test automation.

Futurum Recommendations for Launchable

What’s ahead? Based on our analysis of this market, Futurum recommends that Launchable wrap up its ongoing beta testing by mid-summer 2020 so that it can make its SaaS offering generally available by Q4 at the latest.

We also recommend that the vendor:

  • Invest its recently closed seed funding of $3.2 million in building up B2B marketing and sales channels that target highly technical customers.
  • Recruit low-code tool vendor partners for joint marketing and sales to customers who need a strong solution for automated code testing in GitOps workflows.
  • Build out the visual, declarative, and self-service capabilities of the SaaS offering in order to make it suitable for test automation by coders of any skill level.
  • Create a tiered subscription pricing structure that encourages customers to sign up for free trials in order to evaluate and use the SaaS solution for a limited volume of basic web/mobile software testing by a single user. The company can then charge extra for enabling greater numbers of users to source test data from a wider range of repositories and to automate testing of larger volumes of more complex, distributed, containerized cloud-native code builds in diverse application domains.
  • Develop case studies, whitepapers, and other thought-leadership assets that document the requirements and ROI from implementing AI for software test automation in various vertical use cases.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Other insights from the Futurum Research team:

Semiconductors Are Hot, But Was Q4 EPYC for AMD?

Huawei Finds Its Way Into Britain’s 5G Plans, Albeit Partially

Epic Systems Moves Away From Google Cloud Citing Security Concerns — The Healthcare Industry Should Take Note

Image Credit: Launchable

 

The original version of this article was first published on Futurum Research.

James has held analyst and consulting positions at SiliconANGLE/Wikibon, Forrester Research, Current Analysis and the Burton Group. He is an industry veteran, having held marketing and product management positions at IBM, Exostar, and LCC. He is a widely published business technology author, has published several books on enterprise technology, and contributes regularly to InformationWeek, InfoWorld, Datanami, Dataversity, and other publications.
