A glossary of different types of software testing. Functional testing, performance testing, smoke testing, UI testing – you name it, it’s in here.


Software testing isn’t just one activity; it’s a collection of approaches, each designed to answer a different question about how your software behaves.
As a result, there are many different types of software testing. In fact, we’d say that number is more or less equal to the number of coders “working on something internal for Google” you’ll meet at the average San Fran house party.
By which we mean n+1. There is always one more than you think, even when you’re certain you’ve encountered every last one.
This glossary will help you keep track of the various types of software testing you’ll come across, sorted into useful categories.
Manual testing means getting a human tester to interact with your app without scripts or automation tools. It’s less common nowadays, but still useful for exploratory work, usability insights, and edge cases that require human intuition rather than clinical machine-led logic.
Automated testing uses scripts and frameworks to execute tests repeatedly. It’s ideal for regression testing and CI/CD pipelines, where speed and consistency matter.
AI testing is like automated testing, but better. Rather than wasting engineers’ time writing test scripts and running routine maintenance, you get an AI to do it. AI testing tools can generate tests, adapt to UI changes automatically, and even predict where bugs are most likely to occur.
Unit testing checks that the building blocks of an application, usually individual functions or methods, are working as needed. Engineers often run unit tests on their own code during development (you can use AI natural language tools to make this process faster).
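As a quick sketch of what that looks like in practice, here’s a unit test written with Python’s built-in unittest module (the `slugify` function is a made-up example, not from any particular codebase):

```python
import unittest

def slugify(title: str) -> str:
    """Hypothetical unit under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Software   Testing "), "software-testing")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each test exercises one small behavior of one small unit, which is exactly what makes failures easy to diagnose.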
Integration testing focuses on how components interact. Even if units work individually, integration testing ensures they function correctly together, so that the way one component works doesn’t negatively affect another.
System testing validates the entire system as a whole, ensuring all components work together in a production-like environment.
The most complex test in this section, end-to-end testing, simulates real user workflows from start to finish, such as signing up, making a purchase, or completing a transaction. Previously considered too complex to automate, end-to-end testing is ripe for AI-led automation given the time and cost savings involved.
Acceptance testing determines whether the system meets user or business requirements. This includes both user acceptance testing (UAT) and business acceptance testing (BAT).
Functional tests check whether features behave according to requirements: does the software do what it should?
You’re answering a yes/no question here. Either the requirement is met, or it is not. The way in which it meets that requirement is a matter for non-functional testing.
Non-functional testing looks at how well the software meets a particular requirement. For example: how fast is it, how secure is it, how easy is it to use?
Regression testing ensures new updates haven’t broken existing functionality. This is often heavily automated.
Smoke testing is simply a quick, high-level check to confirm that critical features work before deeper testing begins.
Sanity testing is a narrower check performed after small changes to make sure that specific fixes or updates work as expected.
‘Performance testing’ is a pretty broad testing category. Ultimately, it’s all about how the system behaves under different conditions. This might include load testing, stress testing, spike testing, and scalability testing.
Recovery testing assesses how well the software recovers from crashes or failures.
Failover testing ensures backup systems take over seamlessly when the primary system fails.
Security testing identifies vulnerabilities and protects against threats. It’s absolutely essential in keeping your users safe and maintaining their trust.
Security testing comes in different forms: it may include penetration testing, authentication checks, and data protection validation.
Mutation testing introduces small changes (bugs) into the code to verify whether existing tests can detect them. In other words, it’s a test for your tests. Meta.
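Tools like mutmut (Python) or PIT (Java) automate the mutating; here’s a hand-rolled toy illustration of the idea, with a made-up `is_adult` function. If the suite passes on the mutant, your tests have a blind spot:

```python
# Mutation testing in miniature: "mutate" a function by swapping an
# operator, then check whether the test suite notices the change.
def is_adult(age: int) -> bool:
    return age >= 18          # original code

def is_adult_mutant(age: int) -> bool:
    return age > 18           # mutant: >= swapped for >

def run_suite(fn) -> bool:
    """Return True if every test case passes for the given implementation."""
    cases = [(17, False), (18, True), (30, True)]
    return all(fn(age) == expected for age, expected in cases)

assert run_suite(is_adult)             # original passes the suite
assert not run_suite(is_adult_mutant)  # mutant is "killed": good coverage
```

Note that the boundary case `(18, True)` is what kills the mutant; without it, both versions would pass and the mutant would "survive".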
Usability testing focuses on how intuitive your app is to use. This often involves real users providing qualitative feedback; your app will be used by humans, so human input here is valuable.
Accessibility testing ensures your software is usable by people with disabilities. This could include screen reader compatibility, keyboard navigation for a variety of different setups, and how visually accessible your interface is.
Compatibility testing verifies that your app works across different devices, operating systems, and environments.
Configuration testing checks how the system performs under different hardware or software configurations.
Installation testing ensures the software installs, updates, and uninstalls correctly. You don’t want everything to go south every time you ship an update.
Exploratory testing is a creative, unscripted approach where testers explore the system, learn as they go, and design tests in real time. Short on human time/resources? An AI agent can do this too!
Ad-hoc testing is completely informal testing with no documentation or predefined structure. It’s often used to find and fix glaringly obvious issues as a sort of quick win before more systematic testing begins.
Black box testing tests the system without any knowledge of internal code; the focus is exclusively on inputs and outputs.
White box testing involves full knowledge of the internal logic and structure. This is often used for code-level validation and coverage.
Gray box testing offers the best of both worlds: a hybrid approach where the tester has partial knowledge of the system internals.
API testing validates how different systems communicate via APIs, including data exchange, response times, and error handling.
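API testing usually runs against a live or mocked service, but as a minimal self-contained sketch, you can at least unit-test the response-handling side of a client. Both `parse_user_response` and the payload shape here are hypothetical:

```python
import json

def parse_user_response(status: int, body: str) -> dict:
    """Hypothetical handler for an API response: validate status and shape."""
    if status != 200:
        raise ValueError(f"unexpected status: {status}")
    data = json.loads(body)
    if "id" not in data:
        raise ValueError("response missing 'id' field")
    return data

# Happy path: a well-formed payload parses cleanly.
user = parse_user_response(200, '{"id": 1, "name": "Ada"}')
assert user["name"] == "Ada"

# Error handling: a 500 raises instead of returning garbage.
try:
    parse_user_response(500, "oops")
    raise AssertionError("expected a ValueError")
except ValueError:
    pass
```

A fuller API test suite would also cover response times and malformed JSON, typically against a mock server.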
UI testing ensures the user interface behaves correctly and renders as expected. This may be via ‘visual testing’, in which a testing tool takes screenshots of an interface for analysis.
Data-driven testing uses multiple datasets to validate functionality, often within automated frameworks.
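For instance, with unittest’s `subTest` you can drive one test through a whole table of datasets (the `apply_discount` function is an invented example):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

class TestDiscountDataDriven(unittest.TestCase):
    # One table of (price, percent, expected) datasets drives every check.
    CASES = [
        (100.0, 10, 90.0),
        (59.99, 0, 59.99),
        (20.0, 50, 10.0),
    ]

    def test_discount_cases(self):
        for price, percent, expected in self.CASES:
            with self.subTest(price=price, percent=percent):
                self.assertEqual(apply_discount(price, percent), expected)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Adding coverage then becomes a matter of adding rows, not writing new test methods.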
Test-driven development (TDD) is a methodology where tests are written before the code, so development follows the cycle: failing test → code → refactor. It’s a strong pairing with shift-left development.
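The cycle, compressed into one file (using the classic FizzBuzz exercise as a stand-in problem):

```python
# Step 1 (red): write the test first. Before fizzbuzz exists, this fails.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): write just enough code to make the test pass.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3 (refactor): clean up freely, with the passing test as a safety net.
test_fizzbuzz()
```

The point isn’t the FizzBuzz logic; it’s that the test existed first and defined what "done" means.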
BDD focuses on writing tests in human-readable language that align with business requirements, improving collaboration across teams. It’s a strong approach to pair with AI-led natural language test creation.
Alpha testing is conducted internally before releasing the product to external users.
Products ‘in beta’ are released to a limited external audience to gather real-world feedback and uncover issues missed internally.
Static testing involves reviewing code or documentation without executing it. Examples include code reviews and static analysis.
Dynamic testing involves running the software and observing its behavior. This includes most testing types outlined above.
Localization testing ensures the software works correctly in specific regions, languages, and cultural contexts. You might want to test translations, date and time formats, currencies, and text direction.
Internationalization testing ensures the system can support multiple regions and languages without major changes.
There are quite a lot of different types of software tests.
In case it isn’t clear: you won’t be running all of these, all of the time. Some are inherently more applicable to different types of software than others.
When you’re thinking about how to test, it’s always useful to consider what you’re testing.
"It’s like giving someone your QA checklist and watching them execute it for you."
Sriram Sundarraj (Engineering Lead, Retool)
Whichever type of software testing you carry out, your chosen testing tool should save your engineers time, empower your team to release faster, and be intuitive enough to use on Day 1.
Our intuitive AI features, including natural language test creation, self-healing tests, and autonomous agentic AI, helped our customer Retool 8x their release cadence and save over 40 engineering hours per month with Momentic.
Want numbers like Retool’s? Book a demo today