Three years ago, OpenAI's GPT-3.5 was text-only and offered fairly basic conversational and reasoning ability. OpenAI's current multimodal model, o1, manages a top-10% score on the US bar exam.
It's a technological leap like no other in history. And with a recent McKinsey study suggesting that 92% of businesses plan to increase their generative AI investment over the next three years, we're set for a surge in the number of organizations using AI agents for autonomous decision-making.
Will this have a significant impact on software testing? You bet - software is one of the areas where AI agents have the potential to really shake things up.
In fact, it's already happening, with a host of innovative new AI testing tools on the market that leverage natural language processing and other technologies to offer some serious cost and efficiency benefits for software teams of all sizes.
Here's everything you need to know about the most transformational testing trend yet.
That image in your head of androids coding away in little cubicles (and, perhaps, clamoring round the free office pizza and chasing deadlines with coffee, like your human engineers) isn't actually that far from the truth.
There aren't any physical robots (sorry), but what this image does hit on is the idea of AI agents as sort of 'digital coworkers', rather than rules-based automation apps or software tools.
AI testing agents don't just automate more tests - they use machine learning and vast datasets to optimize your entire testing cycle.
Think of them as a step beyond the more traditional automation you might carry out using tools like Selenium. Rather than following a predetermined set of actions, AI agents carry out tests, suggest improvements of their own accord, adapt tests to changing scenarios, and optimize their testing approach over time.
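For contrast, here's the kind of fully scripted check you'd write by hand in a tool like Selenium - a minimal Python sketch against a hypothetical login page, where every locator and step is hard-coded in advance:

```python
# A traditional, fully scripted Selenium check: every locator and step
# is fixed in advance. (URL and element IDs are hypothetical.)
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "email").send_keys("test@example.com")
    driver.find_element(By.ID, "password").send_keys("correct-horse")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    # Hard-coded assertion: breaks the moment the page title changes
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```

Any change to the page - a renamed field, a moved button - breaks a script like this until someone updates it. Removing that brittleness is a big part of what AI agents bring to the table.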
The characteristics that set AI testing agents apart from other forms of automation are:

- Autonomy: they carry out tests and suggest improvements of their own accord
- Adaptability: they adjust tests as your UI, requirements, and scenarios change
- Continuous learning: they use machine learning and your testing data to optimize their approach over time
All too often, AI agents get lumped in with AI-optimized automated testing workflows. They are not the same.
Both can be extremely helpful in the right circumstances - but if you're looking into AI as a way to improve your testing processes, it's important to understand the differences.
Take a look at the table below for comparison:
As you can see, AI agents are the most flexible, dynamic option out of the three. Does this mean you should use an AI agent for every single aspect of testing automation?
It depends.
You probably wouldn't implement an AI agent just to automate low-level, repetitive unit testing. AI agents are powerful things, capable of building, running, and optimizing extremely complex tests - you'd struggle to get a sensible ROI using one for work that simpler automation handles just fine.
Ultimately, the number one reason to implement an AI testing agent is to save huge amounts of time on your more complex tests. And, if you were implementing an AI testing agent for those complex tests, you might as well max out the bang for your buck and roll out AI across your testing workflows.
AI agents can be embedded throughout your testing workflows for a range of tasks, including:
Manual test case development, particularly for complex cases like end-to-end tests, can be a huge time drain for software and QA teams. AI testing agents can build these in just a few minutes, using plain-English prompts from your team.
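To make that concrete, here's a deliberately stubbed Python sketch - the function and its behavior are hypothetical, since every tool exposes its own interface - showing the shape of the workflow: the entire test is specified as a single plain-English sentence.

```python
# Purely illustrative stub: real AI testing tools each expose their own
# interface, but the workflow is the same - one plain-English prompt in,
# a runnable end-to-end test out.
def create_test_from_prompt(prompt: str) -> list[str]:
    """Hypothetical agent call; here it just echoes planned steps."""
    return [f"[agent would plan and execute steps for: {prompt!r}]"]

steps = create_test_from_prompt(
    "Sign up as a new user, verify the confirmation email, and check "
    "that the onboarding checklist appears on first login."
)
print("\n".join(steps))
```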
Fed up with fragile CSS/XPath locators breaking your test scripts? AI testing agents can automatically adapt test cases to changes in your UI or requirements - say goodbye to all those hours spent manually updating test scripts whenever your codebase changes.
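One common technique behind this is "self-healing" locators: rather than pinning an element to a single brittle selector, the agent keeps several candidate strategies and falls back when one stops matching. A simplified Python sketch of the idea, assuming Selenium and hypothetical selectors:

```python
# Simplified 'self-healing' locator: several candidate strategies for the
# same element, tried in order of preference. Selectors are hypothetical.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

SUBMIT_LOCATORS = [
    (By.CSS_SELECTOR, "button[data-testid='submit']"),   # preferred: stable test ID
    (By.XPATH, "//button[normalize-space()='Submit']"),  # fallback: visible text
    (By.CSS_SELECTOR, "form button[type='submit']"),     # last resort: structure
]

def find_with_healing(driver, locators):
    """Return the first element any candidate locator matches."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this selector broke; try the next strategy
    raise NoSuchElementException(f"No locator matched: {locators}")
```

A real agent goes further - generating new candidates from context when every stored strategy fails - but the fallback principle is the same.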
Need more test data? Your AI agents can pick up on patterns in your existing user data to create large volumes of artificial but realistic test data that will return useful results. You can generate personal data (names, emails, addresses, phone numbers), authenticator codes, TOTP secrets for login flows, and more.
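As a rough sketch of what that generation looks like under the hood - here using the open-source Faker and pyotp libraries, which an agent might wrap or approximate:

```python
# Artificial-but-realistic test data, using the open-source Faker and
# pyotp libraries (pip install faker pyotp).
import pyotp
from faker import Faker

fake = Faker()

# Personal data for a synthetic test account
user = {
    "name": fake.name(),
    "email": fake.email(),
    "address": fake.address(),
    "phone": fake.phone_number(),
}

# A TOTP secret plus the current one-time code, for exercising 2FA logins
secret = pyotp.random_base32()
code = pyotp.TOTP(secret).now()

print(user, code)
```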
AI agents go beyond simple find-and-fix procedures. The more you test using an AI agent, the smarter it becomes, as it uses data analytics and machine learning to identify where bugs are most likely to occur in your codebase. This improves the efficiency of your testing processes, and allows developers to focus their efforts on potentially problematic or high-risk areas.
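To illustrate the principle (not any particular vendor's model), a toy defect-risk classifier might score files on features like recent churn, past bug fixes, and test coverage - a minimal scikit-learn sketch with made-up numbers:

```python
# Toy defect-risk model: each row is a source file, with made-up features
# (lines changed recently, past bug fixes, test coverage). Label is
# whether a bug was found in that file last release.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.array([
    [120, 5, 0.40],   # high churn, buggy history, low coverage
    [10,  0, 0.95],
    [300, 9, 0.20],
    [45,  1, 0.80],
])
y = np.array([1, 0, 1, 0])

model = RandomForestClassifier(random_state=0).fit(X, y)

# Score an unseen file so testing effort can focus on the riskiest areas
risk = model.predict_proba([[200, 4, 0.30]])[0, 1]
print(f"Predicted bug risk: {risk:.0%}")
```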
AI testing agents aren't limited to one or two types of testing. You can use them to test a significant portion of your codebase in various ways, including:
AI testing agents get smarter and adapt to your software the more they test - but you want meaningful insights from day one. Training the agent with realistic, accurate data means you can hit the ground running and start realizing the benefits of AI software testing as soon as possible.
This training data could include logs, past test results, code coverage reports, and more - carry out a data audit pre-implementation to guarantee quality. It's best practice anyway, and it removes the risk that you'll need to retrain your AI agent because of poor data quality.
Throwing all your existing processes out and starting from scratch with AI will set you back more than it will benefit you.
Whilst you'll definitely see long-term benefits and your investment will pay off, the disruption caused by sudden, wholesale change will slow you down - particularly as you'll need time to implement and train your AI testing agent.
Instead, start by implementing AI into a few of your testing processes, and work up from there. It's important to pick the right use cases here, so that your business can start seeing immediate benefits from the investment and minimize setbacks. Look for high-effort, high-maintenance processes - complex end-to-end tests are a classic example - where the time savings will show up fastest.
"Momentic makes it 3x faster for our team to write and maintain end to end tests."
Alex Cui, CTO, GPTZero
Momentic is an AI testing agent designed to supercharge your testing processes - maximize speed and coverage, minimize time and expense sunk into external QA, and free your engineers to focus on valuable project work.
If, like Alex and his team, you're keen to win back two-thirds of the time you spend on key testing processes, why not schedule a conversation with our founder?