How AI will fully automate QA
Thoughts on how AI will impact the QA industry
Selenium will be twenty years old this year. When it was born, there was no Chrome, no cloud, no smartphones, and no DevOps. AI existed, but the state of the art was Deep Fritz, a chess engine.
But though the tools of QA have changed in those twenty years, with the arrival of Puppeteer and Playwright and the emergence of dev tools baked directly into browsers, the fundamental techniques and challenges of QA remain essentially unchanged.
QA remains a largely manual chore: loading every page, clicking every element, filling every input, and trying every state. This manual approach is time-consuming and error-prone, and every error erodes trust in the tooling as it fails to meet expectations. Existing tools also struggle to keep pace with the rapid development cycles and frequent changes in modern software development.
The current state of QA
The current answers to this problem are:
- Outsource: The outsourced QA market was worth $36.4 billion in 2022. Companies outsource their QA efforts to providers or offshore teams to reduce costs and access a larger pool of resources. However, outsourcing can lead to communication challenges and a lack of domain knowledge, impacting the quality and effectiveness of testing.
- Build it yourself: Most organizations end up writing and maintaining tests themselves. This slows development velocity and hinders the ability to deliver new features and bug fixes quickly, putting the company at a disadvantage.
- Don't test: Some orgs may forego testing altogether, relying on end-users to identify and report issues. While this approach may seem cost-effective in the short term, it obviously leads to poor user experiences, increased support costs, and damage to the company's reputation in the long run.
There are a few ways QA teams can improve their testing. Automation frameworks like Playwright have gained popularity recently, enabling testers to write and execute tests programmatically. This reduces the manual effort and allows faster and more reliable test execution. However, creating and maintaining automated tests requires deep engineering expertise and can be complex and costly.
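To make that maintenance burden concrete, here is a minimal sketch of a hand-written Playwright test; the URL, selectors, and credentials are placeholders, and every one of them has to be updated by hand whenever the UI changes.

```typescript
import { test, expect } from '@playwright/test';

// A hand-written E2E test: every selector and assertion is authored and
// maintained by an engineer, and breaks whenever the markup changes.
test('user can log in', async ({ page }) => {
  await page.goto('https://app.example.com/login');   // placeholder URL
  await page.fill('#email', 'user@example.com');      // brittle CSS selectors
  await page.fill('#password', 'correct-horse');      // placeholder credentials
  await page.click('button[type="submit"]');
  await expect(page.getByText('Welcome back')).toBeVisible();
});
```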
Shift-left testing integrates testing earlier in the development process by moving testing further down the stack into the integration layers and relying on unit tests. But then you need a lot more unit tests to cover the work of a single later-stage E2E test.
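As an illustration of that trade-off, the unit tests below (the `cart` module and its functions are hypothetical) each cover one narrow slice of what a single end-to-end checkout test exercises in one pass.

```typescript
import { describe, it, expect } from 'vitest';
import { addToCart, cartTotal } from './cart'; // hypothetical module under test

// Several narrow unit tests are needed to approximate the coverage of
// one end-to-end "add to cart and check out" test.
describe('cart logic', () => {
  it('adds an item to the cart', () => {
    expect(addToCart([], { sku: 'A1', price: 10 })).toHaveLength(1);
  });

  it('sums item prices into a total', () => {
    const items = [{ sku: 'A1', price: 10 }, { sku: 'B2', price: 5 }];
    expect(cartTotal(items)).toBe(15);
  });

  // ...plus separate tests for checkout routing, payment validation, etc.
});
```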
But no current option answers the core issue of QA: it is a lot of work.
This is where AI is going to step in.
AI. First a stepping stone, then a leap forward
How is AI going to step into the QA role?
Initially, this will mean offloading the "chore" component of QA testing. AI can assume the manual, repetitive, time-consuming tasks of testing. This might take the form of:
- Finding elements: AI can automatically identify and locate elements on the page, such as buttons, input fields, and menu items. This can save QA teams significant time and effort, as they no longer need to manually track and identify elements when creating test cases or executing tests.
- Writing assertions: AI can automatically generate and reason about assertions, statements that check whether a particular condition or expected outcome is true. Complex assertions are extremely difficult to write and reason about in existing testing frameworks. AI can dramatically reduce the manual effort required by QA teams and improve the accuracy and coverage of the tests (see the sketch after this list).
- Generating test cases: AI can automatically generate test cases based on design documents, user stories, or existing test cases. AI can identify the key scenarios and edge cases that need to be tested and create corresponding test cases, again saving QA teams time and effort.
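A rough sketch of what the first two items could look like in practice is below; the `aiLocate` helper is hypothetical and stands in for any AI-backed element resolver, not a specific library's API.

```typescript
import { test, expect, type Page, type Locator } from '@playwright/test';

// Hypothetical helper: resolves a natural-language description of an
// element into a locator instead of a hand-maintained CSS selector.
declare function aiLocate(page: Page, description: string): Promise<Locator>;

test('add to cart via described elements', async ({ page }) => {
  await page.goto('https://shop.example.com');  // placeholder URL
  const addButton = await aiLocate(page, 'the "Add to Cart" button on the product card');
  await addButton.click();
  const cartBadge = await aiLocate(page, 'the cart item counter in the header');
  await expect(cartBadge).toHaveText('1');      // assertion on the resolved element
});
```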
How can AI take over these manual components of testing? At Momentic, we do this with two key features.
Firstly, test generation. Test generation allows users to define high-level goals in natural language. AI can generate tests from user stories, documentation, or PRDs. The AI agent then translates those goals into on-page actions.
This approach lets users focus on the desired outcome rather than the required steps. For instance, take the user story: "As a user, I want to be able to add items to my shopping cart and proceed to checkout, so that I can easily purchase products online."
Based on this user story, an AI system could generate the following tests:
- Test 1:
- Navigate to the online store homepage
- Search for a specific product
- Click on the product to view its details
- Click the "Add to Cart" button
- Assert that the product is added to the shopping cart
- Test 2:
- Navigate to the shopping cart page
- Verify that the previously added product is present in the cart
- Click the "Proceed to Checkout" button
- Assert that the user is redirected to the checkout page
- Test 3:
- On the checkout page, fill in the required information (e.g., shipping address, payment details)
- Click the "Place Order" button
- Assert that the order confirmation page is displayed
- Verify that the order details are correct (e.g., product, quantity, total price)
By automating the generation of these tests, AI can save QA teams significant time and effort while improving the coverage and accuracy of the testing process.
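For illustration only, Test 1 above might be realized as something like the following Playwright code; the store URL, product name, and test id are placeholders rather than output from any particular generator.

```typescript
import { test, expect } from '@playwright/test';

// One possible concrete form of "Test 1": navigate, search, open the
// product, add it to the cart, and assert the cart was updated.
test('add a product to the shopping cart', async ({ page }) => {
  await page.goto('https://shop.example.com');                     // placeholder store URL
  await page.getByRole('searchbox').fill('wireless mouse');        // placeholder product
  await page.keyboard.press('Enter');
  await page.getByRole('link', { name: 'Wireless Mouse' }).click();
  await page.getByRole('button', { name: 'Add to Cart' }).click();
  await expect(page.getByTestId('cart-count')).toHaveText('1');    // placeholder test id
});
```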
Secondly, AI Checks. AI Checks enable users to create human-readable assertions to verify various page elements. These checks are more versatile and reliable than traditional expect or assert statements in common testing libraries. They work in two ways (a brief sketch follows the list):
- They can make assertions about the page's text content. For example, users can verify the presence or absence of specific text, evaluate logical statements (e.g., checking if an article's publish date is older than 30 days), or ensure no error messages are displayed on the page.
- They can make assertions based on the page's visual elements. This can include verifying the presence or absence of specific images, evaluating logical statements related to visual elements (e.g., confirming that the most expensive item is displayed at the top of a list), or checking the general state of the page (e.g., ensuring that the login page is visible).
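As a sketch only, and assuming a hypothetical `aiCheck` helper rather than Momentic's actual API, such human-readable assertions might read like this:

```typescript
import { test, type Page } from '@playwright/test';

// Hypothetical helper: evaluates a natural-language assertion against the
// current page and fails the test if the statement does not hold.
declare function aiCheck(page: Page, assertion: string): Promise<void>;

test('article page sanity checks', async ({ page }) => {
  await page.goto('https://blog.example.com/latest');                        // placeholder URL
  await aiCheck(page, 'no error messages are displayed on the page');        // text content
  await aiCheck(page, 'the publish date shown is within the last 30 days');  // logical statement
  await aiCheck(page, 'the article hero image appears above the title');     // visual layout
});
```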
Relatedly, AI can detect regressions and help engineering teams triage issues so they know which ones are important to fix. This allows QA teams to prioritize their efforts on critical issues that directly impact user experience and functionality. By leveraging AI's ability to analyze vast amounts of data and identify patterns, teams can quickly pinpoint and address the most pressing problems, resulting in faster issue resolution and improved software quality.
This is now. Developers can already author E2E tests with Momentic to improve their testing velocity and product quality.
AI agents embedded in the testing cycle
What about the future? What is QA going to look like as AI gets smarter?
We are rapidly moving towards a future where AI will be so easy to use that a single person can incorporate it as a small part of their workflow rather than requiring a whole separate QA cycle. AI agents will automatically check if the code satisfies requirements docs, matches the design, and follows best practices.
Imagine a developer providing a high-level concept or requirement to an AI agent. The AI agent then takes this concept and autonomously builds the software, tests it, analyzes the results, and iterates on the design and implementation. Upon completion, the AI provides a green check with all the audits it ran and surfaces any warnings or errors that need to be fixed.
AI could even go further by automatically fixing issues and merging the changes, further streamlining the development process.
In this future, the developer's role shifts from writing code and conducting manual testing to guiding and overseeing the AI agent. The developer becomes more of an architect, ensuring that the AI agent understands the requirements and is aligned with the overall vision for the product.
Of course, this vision of the future is not without its challenges. Ensuring that AI agents can understand and interpret human requirements accurately will be a key hurdle to overcome. There will also be concerns about the interpretability and explainability of the AI agent's decisions and actions. But the potential benefits of AI-driven software development, with its ease of use and automated quality assurance, are too significant to ignore.
Trust, and building for the AI we have
How do we get to these futures?
There is a trust hurdle. QA is an integral part of development, and organizations must be comfortable handing it wholesale to AI. If a bug slips through to production for users to find, it is hard to recover trust in the machine.
The way around this is to build a better process, surface metrics, and explain how AI makes decisions in detail. Organizations can gradually build trust in AI's capabilities by providing transparency into the AI's decision-making process and the factors it considers when testing software. This may involve displaying detailed logs, highlighting key metrics, and explaining the AI's actions.
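One hedged sketch of what that transparency could look like is a structured record attached to every decision the AI makes; the field names below are illustrative, not an existing schema.

```typescript
// Illustrative shape for an explainable AI test step: what the agent did,
// why it chose that action, and the evidence a human reviewer can audit.
interface ExplainedStep {
  action: string;            // e.g. 'click the "Place Order" button'
  rationale: string;         // why the agent believes this advances the goal
  confidence: number;        // score in [0, 1] the agent assigned to the decision
  evidence: {
    screenshotUrl?: string;  // captured page state before and after the action
    domSnippet?: string;     // the element the agent resolved and acted on
  };
}
```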
It will be great when AI can take this 100% off our plate, but we're not there yet. We're not building for the AI we wish we had; we're building for the AI we have right now. That means a human-in-the-loop approach is necessary to ensure the AI's reliability and accuracy: while the AI handles most of the testing tasks, human QA professionals will still review and validate its work.
Over time, as the AI becomes more reliable and proves its effectiveness, the need for human intervention will decrease, allowing the AI to take on more responsibilities independently.
What fully automated QA looks like
Faster testing cycles, improved test coverage, and reduced manual effort. AI will enable QA to scale by making an individual engineer vastly more efficient, cutting down on manual testing and maintenance time. Instead of having separate teams for development and QA, a single team of developers could oversee multiple AI agents working on different projects simultaneously. This could lead to significant cost savings and increased productivity.
The future of QA is inextricably linked to the future of AI. As AI agents become more sophisticated and integrated into the development process, we will shift from manual testing to autonomous, AI-driven development cycles.
This shift has the potential to bring about a new era of software development, one that is faster, more efficient, and more reliable than ever before. The QA professional's role will evolve from a manual tester to an AI overseer and guide.
If you want to get started on the road to this future, reach out to us at sales@momentic.ai.
Published Jun 2, 2024 · Jeff An