How to Fix Flaky Tests in Selenium: A Definitive Guide

July 18, 2025

The CI/CD pipeline glows red. A critical end-to-end test has failed, blocking a deployment. Yet, when you run the exact same test on your local machine, it passes flawlessly. You run it again in the pipeline; it passes. A third time? It fails. This maddening cycle is the hallmark of a flaky test, a pervasive issue that erodes trust in automation, slows down development velocity, and drains engineering resources. Dealing with flaky Selenium tests isn't just a minor annoyance; it's a significant threat to the efficiency and reliability of a modern software development lifecycle. According to a study by Google engineers, flaky tests are a serious problem even at scale, causing significant overhead. This guide provides a definitive, in-depth approach to systematically diagnosing and eliminating, rather than temporarily patching, the root causes of flakiness in your Selenium test suite, transforming your automation from a source of frustration into a pillar of confidence.

Understanding the Anatomy of a Selenium Flaky Test

Before we can fix a problem, we must understand its nature. A flaky test is a test that exhibits both passing and failing outcomes over time for the same, unchanged code and test environment. This non-deterministic behavior is the antithesis of what automated testing aims to achieve: reliable, repeatable verification of application functionality. Ignoring them is not an option. The cumulative cost of flaky Selenium tests manifests in several damaging ways: developer distrust, wasted engineering hours re-running and debugging, and a slower time-to-market as teams become hesitant to trust their CI/CD pipelines. As Martin Fowler notes, non-deterministic tests can completely undermine the value of a test suite.

To effectively combat this issue, it's crucial to categorize the primary sources of flakiness. While they can seem random, these failures almost always stem from a handful of common anti-patterns and environmental factors. Understanding these root causes is the first step toward building a robust and reliable testing strategy.

The Core Culprits of Test Flakiness

  • Asynchronous Operations (The #1 Cause): Modern web applications are not static pages. They are dynamic, constantly fetching data, rendering content, and running animations in the background. AJAX/XHR requests, JavaScript frameworks like React or Angular updating the DOM, and CSS transitions all happen asynchronously. A test script that doesn't wait for these operations to complete before interacting with an element will inevitably fail intermittently. It's a race condition: sometimes the test is faster, sometimes the application is. This is the single most common reason for flaky Selenium tests.

  • Environment and Infrastructure Instability: The environment where tests run is a significant variable. Discrepancies between a developer's local machine and the CI/CD environment can introduce flakiness. This includes issues like network latency causing slow API responses, overloaded test runners leading to timeouts, or subtle differences in browser versions or WebDriver binaries. A Forrester report on containerization highlights how technologies like Docker can mitigate these issues by creating consistent, reproducible environments, a key strategy in stabilizing test execution.

  • Brittle Locators and Poor Test Code: How you tell Selenium to find an element is paramount. Relying on auto-generated, dynamic IDs (e.g., id="gwt-uid-123") or long, fragile XPath expressions (e.g., /html/body/div[1]/div/div[2]/div/div[3]/button) creates tests that break with the slightest UI change. Furthermore, poor test design practices, such as tests that depend on the state left by a previous test (test dependency) or hard-coded Thread.sleep() calls, are guaranteed recipes for flakiness.

  • Application-Side Issues: Sometimes, the problem isn't the test; it's the application itself. Race conditions within the application's code, third-party scripts (like analytics or A/B testing tools) that alter the DOM unexpectedly, or random pop-ups (e.g., cookie consents, promotional modals) can interfere with test execution. A robust testing strategy must account for these external factors. Identifying these often requires close collaboration between QA and development, as documented in IEEE research on collaborative software quality practices. A defensive pattern for dismissing such pop-ups is sketched just after this list.
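
As promised, a defensive helper can absorb that last kind of uncertainty. Here is a minimal Python sketch; the cookie banner and its data-testid='cookie-accept' button are hypothetical stand-ins for whatever overlay your application shows:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

def dismiss_cookie_banner_if_present(driver, timeout=3):
    # The banner may or may not appear; treat its absence as the normal case.
    try:
        accept = WebDriverWait(driver, timeout).until(
            EC.element_to_be_clickable((By.CSS_SELECTOR, "[data-testid='cookie-accept']"))
        )
        accept.click()
    except TimeoutException:
        pass  # No banner this run; continue with the test.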

The Cornerstone of Stability: Mastering Waits in Selenium

If asynchronous operations are the primary cause of flaky Selenium tests, then intelligent waiting is the primary solution. The fundamental mistake many automation engineers make is not giving the application enough time to reach the state the test expects. However, the solution is not to litter your code with arbitrary pauses. This section dives deep into the correct waiting strategies that form the bedrock of a stable Selenium suite.

The Anti-Pattern: Thread.sleep()

First, let's address the most common but incorrect approach: the static sleep or pause. In Java, this is Thread.sleep(5000), and in Python, time.sleep(5). This command tells your test to halt execution for a fixed duration, regardless of the application's state. This is a major anti-pattern for two reasons:

  1. It's Inefficient: If the element appears in 500 milliseconds, your test still waits for the full 5 seconds, needlessly slowing down your entire test suite.
  2. It's Unreliable: If, due to network lag, the element takes 5.1 seconds to appear, your test will fail. You are constantly guessing the right amount of time, a guess that will eventually be wrong.

Using static sleeps is a direct path to tests that are both slow and flaky. The professional approach is to use dynamic, or 'smart', waits.

Implicit Waits: The Global Safety Net

An implicit wait tells WebDriver to poll the DOM for a certain amount of time when trying to find any element. It is set once per driver session and applies globally. If an element is not immediately available, WebDriver will wait up to the specified duration before throwing a NoSuchElementException.

Java Example:

driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(10));
driver.findElement(By.id("someElement")); // Will wait up to 10s for this to appear

Python Example:

driver.implicitly_wait(10) # seconds
driver.find_element(By.ID, "someElement") # Will wait up to 10s

Pros & Cons: Implicit waits are simple to implement and can catch many basic timing issues. However, their global nature can mask performance problems, and they only apply to finding elements. They do not wait for an element to become clickable or visible, or to contain specific text. For this reason, many experts, including those who contribute to the official Selenium documentation, recommend against mixing implicit and explicit waits and favor the latter for its precision.

Explicit Waits: The Precision Tool

An explicit wait is the gold standard for handling asynchronicity. It allows you to define a specific condition to wait for, with a maximum timeout. The driver will poll the condition at a regular interval until it returns true or the timeout is reached. This is the most effective way to fix flaky Selenium tests caused by dynamic content.

This is achieved using a combination of WebDriverWait and ExpectedConditions.

Java Example:

WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
WebElement element = wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("dynamic-element")));
element.click();

Python Example:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
element = wait.until(EC.visibility_of_element_located((By.ID, "dynamic-element")))
element.click()

Here, the test will wait up to 10 seconds specifically for the element to become visible. If it's visible in 1 second, the test proceeds immediately. This is both efficient and robust. There are dozens of ExpectedConditions available, such as:

  • elementToBeClickable(): Waits for an element to be visible and enabled.
  • presenceOfElementLocated(): Waits only for the element to be in the DOM, not necessarily visible.
  • textToBePresentInElement(): Waits for an element to contain specific text.

Using explicit waits correctly, as detailed in tutorials from sources like Baeldung, is a non-negotiable skill for any serious automation engineer.
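
These conditions also compose with custom logic: in Python, until() accepts any callable, so you can wait on arbitrary application state. A short sketch, using a hypothetical submit button for the built-in condition and document.readyState as the custom signal:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)

# Built-in condition: proceed only once the element is visible and enabled.
wait.until(EC.element_to_be_clickable((By.ID, "submit"))).click()

# Custom condition: until() accepts any callable that returns a truthy value.
wait.until(lambda d: d.execute_script("return document.readyState") == "complete")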

Fluent Waits: The Power User's Choice

A Fluent Wait is a more advanced type of explicit wait. It allows you to configure the polling frequency and to ignore specific types of exceptions during the polling period. This is useful for complex scenarios where an element might briefly be in an invalid state (e.g., a StaleElementReferenceException) before stabilizing.

Java Example:

Wait<WebDriver> wait = new FluentWait<WebDriver>(driver)
       .withTimeout(Duration.ofSeconds(30))
       .pollingEvery(Duration.ofSeconds(5))
       .ignoring(NoSuchElementException.class);

WebElement foo = wait.until(d -> d.findElement(By.id("foo")));

While more verbose, a Fluent Wait provides maximum control for the most challenging asynchronous situations, a technique often explored in advanced testing discussions on platforms like the Sauce Labs blog.
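
In Python there is no separate FluentWait class; WebDriverWait itself accepts poll_frequency and ignored_exceptions parameters. A minimal sketch equivalent to the Java example above:

from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

wait = WebDriverWait(
    driver,
    timeout=30,
    poll_frequency=5,  # poll every 5 seconds instead of the 0.5-second default
    ignored_exceptions=(NoSuchElementException, StaleElementReferenceException),
)
foo = wait.until(lambda d: d.find_element(By.ID, "foo"))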

Bulletproof Locators: Strategies for Reliable Element Selection

After timing, the second most common source of flaky Selenium tests is the use of brittle locators. A locator is the query string Selenium uses to find an element on a web page. If this query is weak, it will break easily when developers make even minor changes to the UI, leading to a constant cycle of test failures and maintenance.

Building a stable automation framework requires a deliberate and hierarchical approach to selecting element locators. The goal is to choose locators that are tied to the business function of an element, not its incidental presentation or position on the page.

The Hierarchy of Locator Stability

Not all locators are created equal. A best practice is to choose the most stable and unique option available, following this general order of preference:

  1. Dedicated Test Attributes (data-testid): This is the gold standard for modern test automation. It involves developers adding a specific attribute to the HTML solely for testing purposes, such as data-testid="login-button". This decouples the test from implementation details like CSS classes or DOM structure. It creates a formal contract between the application and the test suite. This approach is heavily advocated by testing experts and front-end communities, as it makes tests resilient to refactoring and styling changes. Testing educator Kent C. Dodds popularized it in his guidance on making UI tests resilient to change.

    <button data-testid="submit-order-button">Submit</button>

    # The most robust locator (Python)
    driver.find_element(By.CSS_SELECTOR, "[data-testid='submit-order-button']")

  2. Unique id and name Attributes: When dedicated test attributes aren't available, a unique and static id is the next best choice. These are intended to be unique within a page, making them highly reliable. The name attribute is also a strong candidate, especially for form elements like <input>, <select>, etc.

  3. CSS Selectors: CSS selectors are powerful, performant, and generally more readable than XPath. They are excellent for finding elements based on their class, attributes, or relationship to other elements. However, they can become brittle if they rely on complex chains or auto-generated class names from frameworks. A good CSS selector is concise and descriptive.

    • Good: form.login-form input.password (descriptive and based on function)
    • Bad: div.row > div:nth-child(2) > div.generated-class-xyz (relies on structure and generated styles)

    The MDN Web Docs provide an excellent reference for mastering CSS selectors.

  4. XPath: XPath is incredibly powerful but should often be a last resort. Its ability to traverse the entire DOM tree (e.g., finding a parent or sibling) is useful in specific, complex situations where other locators fail. However, it is also the easiest to abuse. Absolute XPath (e.g., /html/body/div[2]/div[1]/...) is extremely brittle and should never be used in a production test suite. Relative XPath, especially one that locates an element by its visible text (//button[text()='Log In']), can be a valid strategy, but it can be slow and may fail with internationalization. The official W3C XPath specification demonstrates its power, but also its complexity, underscoring the need for careful use.

Advanced Strategies and Best Practices for Eliminating Flakiness

Once you have mastered waits and locators, you can move on to higher-level strategies that address the systemic causes of flaky Selenium tests. These practices involve test architecture, data management, and CI/CD integration, creating a resilient ecosystem around your tests.

Isolate Tests with Proper Data Management

Tests should be atomic and independent. A common source of flakiness is test dependency, where TestB fails because TestA did not run or left the application in an unexpected state. To prevent this, each test must be responsible for its own setup and teardown.

  • State Setup via API: Instead of using the UI to log in, navigate through multiple pages, and create the data needed for a test, use direct API calls. For example, to test a user's profile page, use an API to create the user and log them in, then have Selenium navigate directly to the profile URL. This is drastically faster and bypasses numerous potential points of UI failure. This approach is a core tenet of efficient test automation described in Martin Fowler's Test Pyramid concept. A minimal sketch of this pattern follows the list.
  • Clean Slate Principle: Every test run should start from a known, clean state. This often involves running database seeding scripts before a suite runs and ensuring that tests clean up after themselves (e.g., deleting any records they create). This prevents pollution between test runs.
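
To make the API-setup idea concrete, here is a minimal sketch using the requests library; the base URL, endpoint, and response shape are hypothetical placeholders for your own application's API:

import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment

def create_test_user(username, password):
    # Seed state through the API instead of clicking through sign-up screens.
    response = requests.post(
        f"{BASE_URL}/api/users",  # hypothetical endpoint
        json={"username": username, "password": password},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

user = create_test_user("profile-test-user", "s3cret!")
driver.get(f"{BASE_URL}/profile/{user['id']}")  # navigate straight to the page under test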

Implement the Page Object Model (POM)

The Page Object Model is a design pattern that is essential for creating maintainable and scalable Selenium test suites. It reduces code duplication and centralizes locators, which indirectly but significantly reduces flakiness. In POM, each page of your web application is represented by a corresponding class. This class contains all the locators for elements on that page and methods that represent the user interactions.

Example Page Object (Python):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class LoginPage:
    # Locators are centralized here
    USERNAME_INPUT = (By.ID, "username")
    PASSWORD_INPUT = (By.ID, "password")
    LOGIN_BUTTON = (By.CSS_SELECTOR, "[data-testid='login-button']")

    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(self.driver, 10)

    def enter_username(self, username):
        self.wait.until(EC.visibility_of_element_located(self.USERNAME_INPUT)).send_keys(username)

    def enter_password(self, password):
        self.wait.until(EC.visibility_of_element_located(self.PASSWORD_INPUT)).send_keys(password)

    def click_login(self):
        self.wait.until(EC.element_to_be_clickable(self.LOGIN_BUTTON)).click()

    def login(self, username, password):
        self.enter_username(username)
        self.enter_password(password)
        self.click_login()

When a locator changes, you only need to update it in one place: the page object. This makes maintenance vastly simpler and more reliable. The benefits of POM are well-documented across the testing community, including in the Selenium project's own wiki on GitHub.
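
The payoff shows up in the tests themselves, which read as user intent rather than DOM plumbing. A sketch of a test consuming the page object above, assuming a pytest-style driver fixture and a hypothetical login URL:

def test_valid_login(driver):
    driver.get("https://staging.example.com/login")  # hypothetical URL
    login_page = LoginPage(driver)
    login_page.login("standard_user", "s3cret!")
    # Assertions on the post-login state would follow here.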

Implement a Smart Retry Mechanism

Sometimes, despite all best efforts, a test might fail due to a transient network blip or a momentary infrastructure hiccup. In these cases, a retry mechanism can be a pragmatic solution. Most modern testing frameworks support automatically re-running a failed test, either natively (such as TestNG's IRetryAnalyzer) or through well-known plugins (such as pytest-rerunfailures for pytest).
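
As one concrete option, the pytest-rerunfailures plugin exposes retries as a marker. A sketch, with a hypothetical checkout test and URL:

import pytest

# Requires the pytest-rerunfailures plugin: pip install pytest-rerunfailures
@pytest.mark.flaky(reruns=2, reruns_delay=1)  # retry up to twice, pausing 1 second between attempts
def test_checkout_flow(driver):
    driver.get("https://staging.example.com/checkout")  # hypothetical URL
    ...

The same behavior is available suite-wide via pytest --reruns 2.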

Important Caveat: A retry mechanism should be a safety net, not a primary strategy. If a test consistently passes only on the second or third try, it is still a flaky test. The retry is hiding a deeper problem that needs to be fixed. Use retries sparingly and monitor your test results to identify tests that frequently rely on them.

Ensure Environment Consistency with Containers

As mentioned earlier, environment drift is a major cause of flakiness. The best way to combat this is to run your tests in a consistent, controlled environment. Docker and other containerization technologies are perfect for this. You can define a Dockerfile that specifies the exact operating system, browser version, WebDriver version, and any other dependencies. This ensures that the test environment in your CI/CD pipeline (e.g., Jenkins, GitLab CI, CircleCI) is identical to the one on your local machine, eliminating an entire class of "it works on my machine" problems.
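
One low-friction way to adopt this is the official selenium/standalone-chrome image, driven through a remote WebDriver session. A sketch, assuming the container is reachable on localhost:

# A Selenium browser container must be running first, e.g.:
#   docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome:latest
from selenium import webdriver

options = webdriver.ChromeOptions()
driver = webdriver.Remote(command_executor="http://localhost:4444/wd/hub", options=options)
driver.get("https://staging.example.com")  # hypothetical URL
driver.quit()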

Triage and Analyze with Better Tooling

Finally, you can't fix what you can't see. Enhance your test framework to provide better diagnostics on failure.

  • Automatic Screenshots: Configure your framework to take a screenshot of the browser at the moment of failure (see the pytest sketch after this list).
  • Video Recordings: Tools like Selenium Grid 4 or third-party services can record a video of the entire test session, which is invaluable for debugging race conditions.
  • Flakiness Tracking: Implement a system to track which tests fail intermittently. A test that fails once might be a fluke; a test that fails 5% of the time over 100 runs is clearly a flaky test that needs immediate attention.
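
For the screenshots, most frameworks expose a failure hook. A pytest sketch using the standard hookwrapper pattern, assuming your tests receive the browser through a fixture named driver:

# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Runs around report creation for each test phase; act only on real failures.
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")  # assumes a fixture named "driver"
        if driver is not None:
            driver.save_screenshot(f"failure_{item.name}.png")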

Eradicating flaky Selenium tests is not a one-time project but a continuous commitment to quality and discipline. It requires a cultural shift where flaky tests are treated with the same severity as bugs in production code. By moving away from brittle solutions like Thread.sleep() and embracing robust practices (mastering dynamic waits, enforcing stable locator strategies, architecting tests for independence using the Page Object Model, and standardizing environments), you can transform your test automation suite from an unreliable gatekeeper into a trusted partner in your development process.

The journey begins with a single test. Analyze its failure, apply the principles of waiting and locating, and build from there. The reward is not just a green pipeline, but faster feedback loops, higher developer morale, and ultimately, a more confident and rapid delivery of high-quality software to your users. The fight against flakiness is a fight for a more efficient and reliable engineering culture.
