Most Momentic tests are step-based: each step names a specific action (click the Sign in button, assert the dashboard loads). Step-based tests are fast, deterministic, and cacheable. Agentic testing sits at the opposite end of the spectrum: you give Momentic a goal, and an AI agent figures out the steps on the fly. Agentic steps are slower than step-based ones, but they thrive in situations where the exact flow isn't predictable ahead of time.
When to reach for agentic testing
- Dynamic flows where the UI changes based on feature flags, A/B tests, or user state
- High-level acceptance checks ("confirm a new user can sign up and reach the welcome screen") without prescribing each click
- Exploratory coverage: let the agent probe areas of your app that aren’t worth a dedicated deterministic test
- End-to-end smoke tests after deploys ("complete an order"), with the agent handling whatever the current UX looks like
AI action
AI action is the primitive that powers agentic testing. It accepts a natural language goal and lets the agent drive the browser or app until the goal is complete.
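As an illustration only, an agentic step inside a test file might look like the following sketch. The field names (`steps`, `type`, `goal`) and values are assumptions for readability, not the documented Momentic schema:

```yaml
# Hypothetical sketch of a test containing one agentic step.
# Field names here are illustrative assumptions, not the
# documented Momentic YAML schema.
name: Signup smoke test
steps:
  - type: ai-action
    goal: Sign up a new user with a fresh email and reach the welcome screen
```

The key point is that the step carries a goal, not a sequence of clicks; the agent decides how to reach it at run time.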
Version: V2 vs V3
AI action ships in two versions, selectable per step from the Version dropdown in the editor.
- V3 (alpha, recommended): a planner-style agent that drafts the full flow up front, caches the resolved steps after the first successful run, and self-heals when a cached step misses. Reruns are faster and more deterministic than V2. On web, V3 also supports optional Pre-condition and Post-condition checks that run as protected guards around the generated flow and cannot be modified by the agent.
- V2: the previous generation. A fully dynamic agent that decides each step on the fly and does not cache the generated trajectory. Kept available as a fallback for flows where V3 does not yet work well.
Web: V3 is available for browser tests. Mobile: V3 is currently Android-only; iOS AI action steps continue to run on V2.
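To make the V3 guard behavior concrete, here is a hypothetical sketch of a V3 step wrapped in pre- and post-condition checks. Every field name (`version`, `preCondition`, `postCondition`) is an assumption used for illustration, not the documented schema:

```yaml
# Hypothetical sketch: pinning a step to V3 with protected guards.
# Field names are illustrative assumptions, not the documented schema.
steps:
  - type: ai-action
    version: v3
    preCondition: The cart page shows exactly one item
    goal: Complete checkout for the item in the cart
    postCondition: An order confirmation number is displayed
```

Because the guards run outside the generated flow and cannot be modified by the agent, they act as fixed checkpoints even when V3 self-heals or replans the steps in between.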
Pairing with assertions
Wrap agentic steps with explicit assertions (AI check, Page check, Element check) so you always verify the outcome, not just that the agent "finished". Agentic steps are powerful but non-deterministic; assertions keep your tests honest.
Reliability tips
- Keep goals short and specific. “Sign up a new user with a fresh email” is better than “Test the onboarding flow thoroughly.”
- Provide context the agent can’t infer. If there’s an invite code, pass it in via variables.
- Add a fallback assertion right after the agentic step so failures surface with a meaningful message.
- Combine with Auto-heal and Step cache: the agent's successful traces are cached and replayed deterministically on subsequent runs.
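Putting these tips together, a goal-plus-assertion pairing might look like the sketch below. The field names and the `{{inviteCode}}` variable syntax are illustrative assumptions, not the documented schema:

```yaml
# Hypothetical sketch: a short, specific goal with injected context,
# followed by a fallback assertion so failures surface with a clear
# message. Field names are illustrative assumptions.
steps:
  - type: ai-action
    goal: Sign up a new user with a fresh email using invite code {{inviteCode}}
  - type: ai-check
    assertion: The welcome screen greets the newly created user
```

The assertion step means a run only passes when the outcome is verified, not merely when the agent reports that it finished.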