Selenium will be twenty years old this year. When it was born, there was no Chrome, no cloud, no smartphones, and no DevOps. While there was AI, the state of the art was Deep Fritz.
The tools of QA have changed in those twenty years: Puppeteer and Playwright arrived, and dev tools are now baked directly into browsers. But the fundamental techniques and challenges of QA remain essentially unchanged.
QA remains a largely manual chore: loading every page, clicking every element, filling every input, and trying every state. This manual approach is time-consuming and error-prone, and every error erodes trust in the tooling as it fails to meet expectations. Existing tools also struggle to keep pace with the rapid development cycles and frequent changes in modern software development.
There are a few current answers to this problem. Automation frameworks like Playwright have gained popularity recently, enabling testers to write and execute tests programmatically. This reduces manual effort and allows faster, more reliable test execution. However, creating and maintaining automated tests requires deep engineering expertise and can be complex and costly.
Shift-left testing moves testing earlier in the development process, pushing it down the stack into the integration layers and relying on unit tests. But it takes many unit tests to cover the ground of a single later-stage E2E test.
But no current option addresses the core issue of QA: it is a lot of work.
This is where AI is going to step in.
How is AI going to step into the QA role?
Initially, this will come through offloading the "chore" component of QA testing: AI can assume the manual, repetitive, time-consuming tasks of testing.
How can AI take over these manual components of testing? At Momentic, we use two key features.
Firstly, test generation. Test generation allows users to define high-level goals in natural language. AI can generate tests from user stories, documentation, or PRDs. The AI agent then translates those goals into on-page actions.
This approach lets users focus on the desired outcome rather than the required steps. For instance, take the user story: "As a user, I want to be able to add items to my shopping cart and proceed to checkout, so that I can easily purchase products online."
Based on this user story, an AI system could generate the following tests:
Test 1: Verify that adding an item to the shopping cart updates the cart with the correct item.
Test 2: Verify that the user can proceed from the cart to the checkout page.
Test 3: Verify that the user can complete checkout and purchase the product.
By automating the generation of these tests, AI can save QA teams significant time and effort while improving the coverage and accuracy of the testing process.
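Momentic's internal representation is not public, so the names below (`Step`, `runSteps`, the `PageLike` interface) are illustrative only. But the core idea of test generation, compiling a high-level goal into structured on-page actions, can be sketched like this:

```typescript
// Illustrative sketch only: a high-level goal decomposed into structured
// steps, executed against a page-like interface. A real agent would emit
// these steps from natural language; here they are written out by hand.
type Step =
  | { action: "goto"; url: string }
  | { action: "click"; target: string }
  | { action: "expectVisible"; target: string };

// Minimal page interface; a real runner would wrap Playwright or Puppeteer.
interface PageLike {
  goto(url: string): void;
  click(target: string): void;
  isVisible(target: string): boolean;
}

// Execute each step in order, returning a log of what was done.
function runSteps(page: PageLike, steps: Step[]): string[] {
  const trace: string[] = [];
  for (const step of steps) {
    switch (step.action) {
      case "goto":
        page.goto(step.url);
        trace.push(`goto ${step.url}`);
        break;
      case "click":
        page.click(step.target);
        trace.push(`click ${step.target}`);
        break;
      case "expectVisible":
        if (!page.isVisible(step.target)) {
          throw new Error(`expected "${step.target}" to be visible`);
        }
        trace.push(`verified ${step.target}`);
        break;
    }
  }
  return trace;
}

// Steps an agent might derive from the shopping-cart user story above.
const addToCartTest: Step[] = [
  { action: "goto", url: "https://shop.example.com" },
  { action: "click", target: "Add to cart" },
  { action: "expectVisible", target: "Cart (1 item)" },
  { action: "click", target: "Checkout" },
  { action: "expectVisible", target: "Payment details" },
];
```

Because the steps are data rather than code, the agent can regenerate them when the UI changes instead of forcing a human to rewrite brittle scripts.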
Secondly, AI Checks. AI Checks enable users to create human-readable assertions to verify various page elements. These checks are more versatile and reliable than traditional expect or assert statements in common testing libraries.
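Momentic's actual checks are evaluated by a model; the `evaluateCheck` function and its naive keyword heuristic below are a stand-in for that model call, not the real API. The contrast with a brittle selector-based assertion looks roughly like this:

```typescript
// A traditional assertion pins an exact selector and text, and breaks when
// either changes:
//   expect(page.locator("#cart-badge > span.count")).toHaveText("1");
//
// An AI check states the intent in plain language and lets a model judge
// the rendered page. The model call is stubbed here with a naive keyword
// heuristic purely for illustration.
function evaluateCheck(assertion: string, visibleText: string): boolean {
  // Real systems send (assertion, page snapshot) to an LLM; this stand-in
  // just requires every significant word of the assertion to appear on page.
  const words = assertion.toLowerCase().match(/[a-z0-9]+/g) ?? [];
  const page = visibleText.toLowerCase();
  const stopWords = new Set(["the", "a", "an", "is", "that", "shows"]);
  return words.filter((w) => !stopWords.has(w)).every((w) => page.includes(w));
}

const pageText = "Your cart: 1 item. Subtotal $19.99. Proceed to checkout.";
evaluateCheck("the cart shows 1 item", pageText);         // true
evaluateCheck("the cart shows an empty state", pageText); // false
```

The check survives a redesign that renames every CSS class, because it asserts on meaning rather than markup.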
Relatedly, AI can detect regressions and help engineering teams triage issues so they know which ones are important to fix. This allows QA teams to prioritize their efforts on critical issues that directly impact user experience and functionality. By leveraging AI's ability to analyze vast amounts of data and identify patterns, teams can quickly pinpoint and address the most pressing problems, resulting in faster issue resolution and improved software quality.
This is now. Developers can already author E2E tests with Momentic to improve their testing velocity and product quality.
What about the future? What is QA going to look like as AI gets smarter?
We are rapidly moving towards a future where AI will be so easy to use that a single person can incorporate it as a small part of their workflow rather than requiring a whole separate QA cycle. AI agents will automatically check if the code satisfies requirements docs, matches the design, and follows best practices.
Imagine a developer providing a high-level concept or requirement to an AI agent. The AI agent then takes this concept and autonomously builds the software, tests it, analyzes the results, and iterates on the design and implementation. Upon completion, the AI provides a green check with all the audits it ran and surfaces any warnings or errors that need to be fixed.
AI could even go further by automatically fixing issues and merging the changes, further streamlining the development process.
In this future, the developer's role shifts from writing code and conducting manual testing to guiding and overseeing the AI agent. The developer becomes more of an architect, ensuring that the AI agent understands the requirements and is aligned with the overall vision for the product.
Of course, this vision of the future is not without its challenges. Ensuring that AI agents can understand and interpret human requirements accurately will be a key hurdle to overcome. There will also be concerns about the interpretability and explainability of the AI agent's decisions and actions.
But the potential benefits of AI-driven software development, with its ease of use and automated quality assurance, are too significant to ignore.
How do we get to these futures?
There is a trust hurdle. QA is an integral part of development; organizations must be comfortable handing it wholesale to AI. If a bug slips through to production for users to find, it's hard to recover trust in the machine again.
The way around this is to build a better process, surface metrics, and explain the AI's decisions in detail. Organizations can gradually build trust in AI's capabilities when they can see the AI's decision-making process and the factors it considers when testing software. This may involve displaying detailed logs, highlighting key metrics, and explaining the AI's actions.
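What such transparency might look like in practice: a structured record of every decision the agent made, so a reviewer can audit why each action was taken. The field names and sample entries below are invented for illustration, not any tool's actual output format.

```typescript
// One auditable record per agent decision. Field names are illustrative.
interface DecisionLogEntry {
  step: string;       // what the agent was trying to do
  evidence: string;   // what on the page led to the choice
  action: string;     // the concrete action taken
  confidence: number; // model confidence, 0..1
}

// Surface low-confidence decisions so humans review the riskiest calls first.
function summarize(entries: DecisionLogEntry[]): string {
  const lowConfidence = entries.filter((e) => e.confidence < 0.8);
  return `${entries.length} decisions, ${lowConfidence.length} flagged for review`;
}

const decisionLog: DecisionLogEntry[] = [
  {
    step: "Open product page",
    evidence: 'link labelled "Blue T-Shirt" in product grid',
    action: 'click "Blue T-Shirt"',
    confidence: 0.97,
  },
  {
    step: "Add item to cart",
    evidence: 'button text matched "Add to cart" (two similar buttons found)',
    action: 'click first "Add to cart"',
    confidence: 0.62,
  },
];

console.log(summarize(decisionLog)); // "2 decisions, 1 flagged for review"
```

Flagging ambiguous decisions, like the two similar buttons above, is exactly the kind of signal that lets humans trust the rest.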
It will be great when AI can take this 100% off our plate. But we're not there yet. We're not building for the AI we wish we had; we're building for the AI we have right now. That means a human-in-the-loop approach: while the AI handles most of the testing tasks, human QA professionals review and validate its work to ensure reliability and accuracy.
Over time, as the AI becomes more reliable and proves its effectiveness, the need for human intervention will decrease, allowing the AI to take on more responsibilities independently.
The payoff is faster testing cycles, improved test coverage, and reduced manual effort. AI will enable QA to scale by making an individual engineer vastly more efficient, cutting down on manual testing and maintenance time. Instead of separate teams for development and QA, a single team of developers could oversee multiple AI agents working on different projects simultaneously, leading to significant cost savings and increased productivity.
The future of QA is inextricably linked to the future of AI. As AI agents become more sophisticated and integrated into the development process, we will shift from manual testing to autonomous, AI-driven development cycles.
This shift has the potential to bring about a new era of software development, one that is faster, more efficient, and more reliable than ever before. The QA professional's role will evolve from a manual tester to an AI overseer and guide.
If you want to get started on the road to this future, reach out to us at [email protected].