Documentation Index

Fetch the complete documentation index at: https://momentic.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

Momentic is a managed testing platform for iOS and Android. Tests are YAML, executed on managed remote emulators and simulators. A multi-modal step cache stores locator metadata per step and auto-heals in place when the UI drifts. AI primitives cover action, assertion, visual diff, and typed extraction. AI providers route with cross-provider failover. A dashboard captures run videos, view hierarchies, heal events, and AI reasoning.

Maestro is an open-source mobile UI automation framework from mobile.dev. Tests are YAML flows that drive the device through its accessibility layer. It's well-suited to fully scripted suites where the app team owns stable resource IDs and there's a hard requirement for an OSS CLI with no SaaS dependency. AI features (assertWithAI, assertNoDefectsWithAI, extractTextWithAI) are experimental commands that wrap a customer-supplied LLM.

Speed and caching

| | Momentic | Maestro |
| --- | --- | --- |
| What's cached | Multi-modal locator data per step (docs). | Nothing. Selectors re-resolve every run. |
| Heal on miss | Re-resolves and updates the entry in place. Heal event on the run. | Not supported. A miss is a test failure. |
| Storage | Managed, git-aware. | N/A. |
| Cost of a UI change | Auto-heal absorbs renamed IDs, localized strings, reordered hierarchies. | One YAML edit per broken selector. |
| Smart waiting | Built-in: navigation, load, screenshots, DOM / view-hierarchy mutations, same-origin requests. 3s default, configurable. | Animation tolerance + extendedWaitUntil. No XHR tracking on WebViews. |
| Per-test execution cap | None. | 15-minute soft limit on Maestro Cloud. |

How the multi-modal cache works

A cached step stores more than one way to find the target: where it sits on screen, what it looks like, what text it contains, and the accessibility and structural attributes around it. Which of those signals matters for a given step is inferred from the natural-language description. “The red Cancel button below the Order Summary header” leans on visual and positional signals; “the Sign in button” leans on accessibility and text. When a step replays, the runner checks the stored signals against the live UI and runs the action without invoking the LLM when there’s a match.
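As a sketch, a cache entry could be pictured like this. The field names and weighting scheme here are hypothetical, chosen to illustrate the multi-modal idea described above; this is not Momentic's actual on-disk schema:

```yaml
# Illustrative sketch only — field names are hypothetical,
# not Momentic's real cache format.
step: "Tap the Sign in button"
signals:
  text: "Sign in"
  accessibility:
    role: button
    label: "Sign in"
  position:                 # normalized screen coordinates
    x: 0.5
    y: 0.82
  structure:
    parentId: "auth_footer"
    siblingText: ["Forgot password?"]
weights:                    # inferred from the description:
  accessibility: high       # "the Sign in button" leans on a11y + text,
  text: high                # not visual or positional cues
  visual: low
  position: low
```

On replay, a runner would score the live UI against the stored signals, weighted per the description, and only fall back to the LLM when no candidate clears the match threshold.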

What happens on a UI change

Here's a practical sequence that exercises the headline difference. Take a mobile sign-in screen with an email_input resource ID, after a passing baseline run that left the cache warm. Then a refactor: the app team renames email_input to email_field and changes the surrounding container hierarchy. Maestro replay:
  1. tapOn: { id: "email_input" } retries against the device for ~5s.
  2. The retry window elapses with no match. The step fails.
  3. The test stops; the CI job is red. Someone edits the YAML to use the new ID or switches the selector to a text match, opens a PR, gets it merged, and re-runs CI.
Momentic replay:
  1. The cached locator for the Email step misses on the live UI.
  2. The locator agent re-resolves from the original natural-language description, "Email".
  3. The new locator binds, the step runs, the test passes.
  4. The cache entry is updated in place. A heal event is attached to the run for review. Subsequent runs hit the cache normally.
This is the multiplier in suite-level wall-clock time: a renamed ID, a localized string, or a reordered hierarchy is a no-op for Momentic and a manual PR for Maestro. At suite scale, that's the difference between a quiet morning and an on-call rotation.
On the Maestro side, selector commands have a default ~5s retry window with animation tolerance. WebViews require app-side debug enablement, and XHR is not tracked. Long flows reach for extendedWaitUntil { visible: ..., timeout: ... }.
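For reference, extending the wait in a Maestro flow is done per step rather than globally; a step that waits up to 15 seconds for a slow screen looks like this (the text and timeout value are illustrative):

```yaml
# Maestro: wait up to 15s for an element instead of relying on
# the default ~5s selector retry window. timeout is in milliseconds.
- extendedWaitUntil:
    visible: "Order confirmed"
    timeout: 15000
```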

Locators and AI primitives

| | Momentic | Maestro |
| --- | --- | --- |
| Locator model | Natural-language descriptions resolved by an AI agent against a11y tree + view hierarchy + screenshot. Cached, auto-healed. | Static selectors: text, id, index, point, relational. |
| Visual cues | Color, icon, relative size, position are part of the locator. | Not supported. |
| Agentic step | act accepts a multi-step goal; the agent plans and executes. | Not supported. |
| WebView | Bundled Chromium; works inside hybrid apps without setup. | Debugging must be enabled by the app team. |
| AI assert default | First-class step type, fails the test by default. | Experimental. Defaults to optional: true, so a failed AI assertion silently passes unless every call site sets optional: false (Maestro docs). |
| Visual diff | assertVisual, agent-scored against a golden. | assertScreenshot (pixel / hash diff). |
| AI provider | Managed; cross-provider failover handled by the platform. | Bring-your-own LLM, credentials, fallback. |
Momentic mobile step types
  • Action: act, tap, doubleTap, longPress, type, swipe, scroll, back, dismissKeyboard, launchApp, terminateApp
  • Assert: assert, assertVisual, checkElement<...>
  • Extract: extract (typed via JSON schema)
  • Control flow: if/then/else, modules, parameter inputs
Maestro assertWithAI default, for contrast (source)
Since assertWithAI is an experimental feature, optional is set to true by default to prevent unstable AI responses from breaking your CI/CD pipelines. If you want a failed AI assertion to stop the test, you must explicitly set optional: false.
In practice this means a team that wants AI assertions to actually fail the test has to remember optional: false on every call site, with no project-level default.

Recovery, quarantine, and CI

| | Momentic | Maestro |
| --- | --- | --- |
| Failure recovery | LLM agent proposes test edits in the dashboard. | Not supported. |
| Quarantine | First-class: tests run, results report, exit code unaffected unless --only-quarantined. | Not supported. |
| Sharding | --shard-index <i> / --shard-count <n>, 1-indexed. Deterministic alphabetical partition. | Parallel on Cloud; shard via CI config. |
| Reporters | junit, allure, playwright-json, buildkite-json. | junit, HTML, HTML-detailed; Cloud dashboard. |
| Device fleet | Remote Android 14/15 emulators and iOS 26 simulators with sub-1s provisioning, multi-region. Local AVDs / simulators supported. | Hosted virtual devices on Cloud. Physical hardware via CLI. |
Quarantine modes in detail:
  • Default: quarantined tests run, results report, exit code unaffected.
  • --skip-quarantined: skipped entirely.
  • --only-quarantined: only quarantined tests run; statuses affect exit code.
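As a sketch of how the 1-indexed shard flags and quarantine modes slot into CI, here is a hypothetical GitHub Actions matrix. The job layout and the `momentic run` invocation are assumptions for illustration; only the flags themselves come from the table above:

```yaml
# Hypothetical CI config — job names and the CLI invocation are
# illustrative; the --shard-* and quarantine flags are from the docs.
jobs:
  e2e:
    strategy:
      matrix:
        shard: [1, 2, 3, 4]        # --shard-index is 1-indexed
    steps:
      - run: >
          momentic run
          --shard-index ${{ matrix.shard }}
          --shard-count 4
  quarantine-audit:
    steps:
      # Runs only quarantined tests; their statuses DO affect the
      # exit code here, so keep this job non-required on the branch.
      - run: momentic run --only-quarantined
```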

Authoring side-by-side

# Maestro
- launchApp
- tapOn: { id: "email_input" }
- inputText: "[email protected]"
- tapOn: { id: "password_input" }
- inputText: "secret"
- tapOn: "Sign in"
- assertVisible: "Welcome back"
- assertWithAI:
    assertion: "The dashboard chart is visible and not cut off."
    optional: false # required, or a failed assertion silently passes
If email_input is renamed or "Sign in" is localized, the YAML breaks until someone edits it. The equivalent Momentic agentic v2 test:
fileType: momentic/test/v2
id: sign-in-and-verify
steps:
  - act: Sign in with [email protected] / secret
  - assert: The dashboard chart is visible and not cut off
Explicit v2 (same flow, step-by-step):
fileType: momentic/test/v2
id: sign-in-and-verify
steps:
  - type:
      text: [email protected]
      into: Email
  - type:
      text: secret
      into: Password
  - click: Sign in
  - assert: The dashboard chart is visible and not cut off

A more realistic test

The hello-world above doesn't show much of the v2 surface. A representative onboarding regression with module reuse, parameter inputs, typed extraction, and a conditional looks like this:
onboarding.test.yaml
fileType: momentic/test/v2
id: onboarding-with-promo
steps:
  - launchApp
  - module:
      path: ../modules/sign-in.module.yaml
      inputs:
        email: "{{ env.QA_EMAIL }}"
        password: "{{ env.QA_PASSWORD }}"
  - act: Skip the onboarding tour and land on Home
  - tap: Account
  - type:
      text: "{{ env.PROMO_CODE }}"
      into: Promo code field
  - tap: Apply
  - if:
      condition:
        assert: A success banner saying the promo was applied is visible
      then:
        - extract:
            goal: The discounted monthly total shown on the plan card
            schema:
              type: object
              properties:
                amount:
                  type: number
              required: [amount]
      else:
        - assert: An invalid-promo error is visible
  - assertVisual:
      that: The plan card is fully visible and not cut off
The matching module:
../modules/sign-in.module.yaml
fileType: momentic/module/v2
moduleId: sign-in
name: Sign in
steps:
  - type:
      text: "{{ inputs.email }}"
      into: Email
  - type:
      text: "{{ inputs.password }}"
      into: Password
  - tap: Sign in
  - assert: The Home tab is visible

When to pick which

Maestro is the right call if you have a small, fully scripted suite, your app team maintains the stable resource IDs the tests rely on, you have a hard requirement for physical hardware in the CLI, or you need an OSS framework with no SaaS dependency. Momentic is the right call if wall-clock run time matters at scale, your suite is large enough that selector maintenance is a real recurring cost, you want AI assertions that fail the test by default, and you expect healing, recovery, quarantine, sub-second emulator boots, and run videos out of the box.