Cypress Test Retries: A Band-Aid for Flakiness or a Strategic Tool?

September 1, 2025

A failing CI/CD pipeline is the modern developer's equivalent of a roadblock. When an end-to-end test suite, meticulously crafted in Cypress, suddenly fails due to a seemingly random, non-reproducible error, the pressure mounts. The immediate temptation is to reach for a quick fix, and Cypress offers a particularly alluring one: test retries. With a simple configuration change, a failing test can be given a second or third chance to pass, potentially turning a red pipeline green. But this convenience raises a critical question that separates disciplined engineering teams from those accumulating technical debt: are Cypress test retries a pragmatic tool for managing transient instability, or are they merely a band-aid masking deeper, more sinister issues within your application or test code? This comprehensive guide explores the dual nature of Cypress test retries, providing a framework for leveraging them strategically while avoiding the pitfalls of masking true test flakiness.

Understanding the Root of the Problem: What Causes Test Flakiness?

Before we can properly evaluate the role of Cypress test retries, we must first dissect the problem they claim to solve: test flakiness. A flaky test is one that can both pass and fail when run multiple times against the same code without any changes. This non-determinism is the bane of continuous integration, eroding trust in the test suite and slowing down development cycles. According to research conducted at Google, flaky tests are a significant and costly problem even in mature engineering organizations.

The causes of flakiness are often multifaceted, but they typically fall into several common categories:

  • Asynchronous Operations: Modern web applications are a symphony of asynchronous events. Tests often fail because they attempt to interact with an element before it has appeared, become interactive, or finished an animation. A test might be faster than a background API call that populates a data grid, leading to a race condition where the test fails intermittently (a minimal sketch of this failure mode follows this list).

  • Environment and Infrastructure Instability: End-to-end tests do not run in a vacuum. They depend on a complex stack of services, databases, and network infrastructure. A momentary spike in server load, a brief network hiccup, or a dependency on an unstable third-party API can cause a test to fail. As noted by experts on software development practices, such non-determinism is a key challenge in creating reliable automated tests.

  • Brittle Selectors: Tests that rely on volatile selectors, such as auto-generated CSS classes or element text that changes, are prime candidates for flakiness. A minor, unrelated change by a developer can break a selector and cause a test to fail, even if the underlying functionality is intact.

  • Test Data and State Pollution: Tests that are not properly isolated can interfere with one another. A test that creates a user but doesn't clean up after itself can cause a subsequent test, which assumes a clean state, to fail. This is particularly common in tests that share a database or rely on a specific application state. The principle of test isolation is a cornerstone of reliable automation, as emphasized in numerous software engineering publications.
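
To make the asynchronous-operations case concrete, here is a minimal sketch (the route, selector, and row count are illustrative) of a test that fails intermittently whenever a background API call outruns Cypress's default four-second command timeout:

// Illustrative flaky test: the grid is populated by a background API call
it('shows the ten most recent orders', () => {
  cy.visit('/dashboard');
  // Flaky: if /api/orders responds slower than the default command
  // timeout, this assertion exhausts its built-in retries and fails,
  // even though the application is working correctly
  cy.get('[data-cy=order-row]').should('have.length', 10);
});

The cy.intercept() pattern covered later in this guide removes this race entirely by waiting on the network call itself.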

Ignoring these root causes and immediately applying a retry mechanism is akin to taking a painkiller for a broken bone—it might temporarily alleviate the symptom, but the underlying injury remains and can worsen over time.

How Cypress Test Retries Work: A Technical Deep Dive

Cypress provides a built-in mechanism to automatically re-run a failing test a specified number of times. This feature, known as Cypress test retries, is a powerful tool for handling the transient failures discussed above. Understanding its mechanics is crucial for using it effectively.

The configuration is straightforward and can be applied at different levels of granularity, offering flexibility for various scenarios. The official Cypress documentation provides a comprehensive overview of these options.

Global Configuration

You can set a default retry policy for your entire project within the cypress.config.js file. This is useful for establishing a baseline behavior for your test suite. You can specify different retry counts for when running via cypress run (typically in CI) and cypress open (for local development).

// cypress.config.js
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  // other configurations
  retries: {
    // Configure retries for `cypress run`
    // Default is 0
    runMode: 2,
    // Configure retries for `cypress open`
    // Default is 0
    openMode: 0
  }
});

In this example, any test that fails during a cypress run command will be retried up to two more times. If any of the three attempts (the initial run plus two retries) passes, the test is marked as successful. It will only be marked as failed if all three attempts fail.

Per-Test or Per-Suite Configuration

Sometimes, a global retry policy is too broad. You may have a specific test or suite of tests that is known to be susceptible to a particular transient issue (e.g., interacting with a slow, third-party service). In these cases, you can override the global configuration at the test or suite level.

// Overriding at the suite level
describe('Dashboard Analytics Suite', { retries: 1 }, () => {
  it('loads the main chart with data from a third-party API', () => {
    // Test code that might face transient network issues
  });

  it('exports a report to PDF', () => {
    // Another test in the suite
  });
});

// Overriding at the individual test level
it('should process a large uploaded file', { retries: { runMode: 3, openMode: 1 } }, () => {
  // This specific test is known to be flaky due to long processing times
});

The Mechanics of a Retry

It is critically important to understand what happens when Cypress retries a test. Cypress does not simply re-run the failed command. Instead, it re-runs the entire test from the very beginning. This includes re-running all beforeEach and afterEach hooks for that test. This behavior is designed to ensure the test is re-attempted from a clean, isolated state, which is a fundamental principle of good test design discussed in many developer-focused publications. This clean-slate approach is vital, but it also means that retries can be time-consuming, as the entire setup and execution logic is repeated.
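
As an illustration, consider the following sketch (the db:seed task and the selectors are hypothetical). If the test fails and is retried, the beforeEach hook runs again, so the database is reseeded and the page revisited before every attempt:

// Hypothetical example: hooks are re-executed on every retry attempt
describe('Checkout', { retries: 2 }, () => {
  beforeEach(() => {
    // Runs before the initial attempt AND before each retry,
    // so every attempt starts from a freshly seeded, isolated state
    cy.task('db:seed'); // hypothetical task registered in cypress.config.js
    cy.visit('/checkout');
  });

  it('applies a discount code', () => {
    cy.get('[data-cy=discount-input]').type('SAVE10');
    cy.get('[data-cy=apply-discount]').click();
    cy.contains('Discount applied').should('be.visible');
  });
});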

The 'Band-Aid' Argument: Dangers of Over-relying on Retries

While the mechanism is simple, the decision to use Cypress test retries is complex. Applying them indiscriminately can introduce significant risks and mask serious problems, effectively acting as a band-aid over a festering wound.

1. Masking Root Causes and Accumulating Technical Debt

The most significant danger is that retries can make a flaky test pass without ever addressing the underlying cause. A race condition in your application's JavaScript, a poorly optimized database query, or an inefficient component rendering cycle might be the real culprit. By using a retry, you silence the warning signal. This creates a false sense of security and allows technical debt to accumulate. As highlighted in industry reports from McKinsey, unaddressed technical debt can severely hamper a company's ability to innovate and respond to market changes.

2. Inflating CI/CD Execution Time and Costs

Every retry adds to the total execution time of your test suite. Imagine a suite of 500 tests where 5% (25 tests) are flaky and require two retries to pass. If each test takes 45 seconds, the retries alone add over 37 minutes to your pipeline's runtime (25 tests × 2 retries × 45 seconds = 2,250 seconds). In cloud-based CI environments where you pay per minute of compute time, this directly translates to increased operational costs. The DORA State of DevOps Report consistently links elite performance to fast, reliable feedback loops—a goal that is undermined by long-running, retry-heavy test suites.

3. Eroding Confidence in the Test Suite

The primary purpose of an automated test suite is to provide a reliable signal about the health of the application. When developers know that tests frequently pass only on the second or third attempt, they begin to lose trust in the results. This can lead to a culture where failures are dismissed as "just another flaky test," and real regressions may be overlooked. This phenomenon, often called "test failure fatigue," can render the entire testing effort ineffective.

4. Complicating the Debugging Process

When a test fails even after all retries are exhausted, debugging can be more challenging. The state that caused the final failure may be different from the state that caused the initial failure. While Cypress does a great job of providing screenshots and videos for each attempt, the developer still has to sift through multiple failure contexts to diagnose the problem, which can be more time-consuming than debugging a single, deterministic failure. This added complexity works against the agile principle of rapid feedback, a concept well-documented by sources like the Atlassian Agile Coach.

The 'Strategic Tool' Argument: Using Retries Intelligently

Despite the risks, it would be a mistake to dismiss Cypress test retries entirely. When applied with discipline and clear intention, they can be a valuable and strategic part of a comprehensive testing strategy.

The key is to move from a reactive, indiscriminate application of retries to a proactive, targeted one. Here are scenarios where using retries is a justifiable and intelligent choice:

1. Mitigating True Environmental Transience

Sometimes, the source of flakiness is genuinely outside the control of your application code. This is common in complex staging or pre-production environments that integrate with dozens of other services. A brief network partition between microservices or a momentary blip from a third-party API (like a payment gateway or analytics provider) can cause a test to fail. In these cases, a single, targeted retry is a pragmatic solution to prevent a legitimate code change from being blocked by an unrelated infrastructure issue. The goal is to increase the signal-to-noise ratio of your pipeline.

2. As a Deliberate, Temporary Stop-Gap

A powerful team policy is to treat test retries as a temporary measure, not a permanent solution. The rule is simple: if you add a retry to a test, you must also create a ticket in your project management system to investigate and fix the root cause of the flakiness. This allows you to unblock the CI/CD pipeline and merge critical changes while ensuring the underlying problem is tracked and prioritized. This approach aligns with the DevOps principle of continuous improvement, as advocated by organizations like the DevOps Research and Assessment (DORA) group.
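
In practice, this policy can be made visible in the code itself. A minimal sketch (the ticket reference and selectors are hypothetical):

// TEMPORARY retry -- tracked as PROJ-1234 (hypothetical ticket).
// Suspected race condition in the upload progress indicator.
// Remove this override once the root cause is fixed.
it('uploads a profile photo', { retries: 2 }, () => {
  cy.get('[data-cy=photo-input]').selectFile('cypress/fixtures/avatar.png');
  cy.contains('Upload complete').should('be.visible');
});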

3. Isolating and Monitoring Flaky Tests

Advanced teams often implement a strategy of "flaky test quarantining." Instead of letting flaky tests with retries slow down the primary CI pipeline, they are moved to a separate, non-blocking test run. This keeps the main pipeline fast and reliable for developers. The quarantined suite can be run periodically with retries enabled, not to make them pass, but to gather data on their flakiness rate. Tools like the Cypress Dashboard provide analytics that are invaluable for this, helping teams identify which tests are most problematic and deserve engineering attention.
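
One way to implement quarantining, sketched below under the assumption that quarantined specs live in a dedicated folder (this layout is a team convention, not a built-in Cypress feature), is to exclude that folder from the main pipeline and drive the non-blocking job from a separate config file:

// --- cypress.config.js: main pipeline, fast and retry-free ---
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  e2e: {
    specPattern: 'cypress/e2e/**/*.cy.js',
    // Keep known-flaky specs out of the blocking pipeline
    excludeSpecPattern: 'cypress/e2e/quarantine/**',
    retries: { runMode: 0, openMode: 0 }
  }
});

// --- cypress.quarantine.config.js: separate, non-blocking job ---
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  e2e: {
    specPattern: 'cypress/e2e/quarantine/**/*.cy.js',
    // Retries here gather flakiness data; they do not gate merges
    retries: { runMode: 2, openMode: 0 }
  }
});

The quarantine job then runs on a schedule with npx cypress run --config-file cypress.quarantine.config.js, and its results feed your flakiness analytics rather than blocking merges.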

Beyond Retries: Proven Strategies to Fix Flaky Cypress Tests

The ultimate goal should always be to eliminate flakiness, not just manage it with retries. A robust test suite is a deterministic one. Here are actionable strategies to build resilience into your Cypress tests, reducing the need for Cypress test retries in the first place.

Master Cypress's Built-in Retry-ability

Cypress was designed to handle modern, asynchronous web apps. Most Cypress commands have built-in retry-ability and default timeouts. The key is to leverage this correctly.

  • Embrace Chained Assertions: Instead of checking for an element and then acting on it in separate steps, chain your commands. cy.get('.my-button').should('be.visible').click(); is far more robust than two separate commands because Cypress will automatically retry the query and its assertion until the element is visible before attempting the click.
  • Avoid Fixed Waits: The command cy.wait(5000) is a major anti-pattern and a leading cause of both slow and flaky tests. If the element appears in 1 second, you've wasted 4 seconds. If it takes 6 seconds, your test fails. This is a practice strongly discouraged by the official Cypress best practices guide.
  • Use cy.intercept() for Network Requests: A far better approach is to wait for specific network events. If your test depends on an API call to finish, use cy.intercept() to declare the route and cy.wait('@alias') to explicitly wait for that request to complete before proceeding.
// Good: Wait for the network call to finish
cy.intercept('POST', '/api/users').as('createUser');
cy.get('[data-cy=submit-user-form]').click();
cy.wait('@createUser').its('response.statusCode').should('eq', 201);
cy.contains('User created successfully!').should('be.visible');

Write Resilient Selectors

Stable tests require stable selectors. Relying on CSS classes meant for styling or on element text is brittle.

  • Prioritize data-* Attributes: The most robust strategy is to add dedicated test attributes to your markup, such as data-cy="submit-button". These attributes are independent of styling and content, making your tests resilient to UI refactoring. This approach is a widely accepted best practice in the testing community, often discussed on platforms like MDN Web Docs.
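
For instance (the markup and selector are illustrative):

<!-- In the application markup: a dedicated, style-independent test hook -->
<button class="btn btn-primary" data-cy="submit-button">Submit</button>

// In the test: survives class renames, restyling, and copy changes
cy.get('[data-cy=submit-button]').click();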

Ensure True Test Isolation

Each test should be able to run independently and in any order without affecting or being affected by other tests.

  • Reset State Programmatically: Use beforeEach hooks to ensure a clean slate for every test. This could involve clearing cookies (cy.clearCookies()), resetting local storage (cy.clearLocalStorage()), or making API calls to reset the database to a known state. This programmatic setup is far more reliable than relying on the UI to navigate back to a starting point.
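
A minimal sketch, assuming the application exposes a test-only reset endpoint (the /api/test/reset route is hypothetical and app-specific, not a Cypress built-in):

beforeEach(() => {
  cy.clearCookies();
  cy.clearLocalStorage();
  // Ask the backend to restore a known seed state before each test
  cy.request('POST', '/api/test/reset');
});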

The debate over Cypress test retries is not about whether the feature is good or bad, but about how it is used. When wielded as a blunt instrument to silence a noisy CI pipeline, it becomes a dangerous band-aid, hiding systemic issues and fostering a culture of low confidence in your test suite. However, when applied with surgical precision—as a temporary measure tied to a remediation plan, or as a pragmatic buffer against genuine environmental instability—it transforms into a strategic tool that maintains development velocity without sacrificing long-term quality.

The most mature engineering teams view a flaky test not as an annoyance to be silenced with a retry, but as a valuable signal. It's a bug, either in the application or the test itself, waiting to be discovered. The ultimate goal is not to have a suite that passes thanks to retries, but to have a suite so robust and deterministic that retries become unnecessary. Challenge your team to adopt this mindset: fix first, retry sparingly, and build a foundation of automated testing that truly accelerates, rather than impedes, your development process.
