At Momentic, we're all about the vibes.
So vibe coding resonates. And leads to the obvious question: If you can vibe code, can you vibe test?
Absolutely. We'd argue that vibe testing has been around longer than vibe coding. Vibes are essential to robust testing. Testing has to include a certain amount of intuition: users aren't QA automatons, they're humans who interact with software based on feeling and instinct. They use your product on vibes, so you have to bring some vibes to your testing process.
But vibe coding is obviously taking vibes to the max. So, if we were to follow the Karpathy framework, what would vibe testing look like?
Here's Karpathy's concept of vibe coding: "There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists."
There are three core elements to vibe coding in that first sentence that we can translate to testing.
The vibes are calling, and in testing, they beckon you away from rigid methodologies toward organic exploration. This principle transforms testing from a mechanical checkbox exercise into an intuitive journey guided by your instincts and observations.
Instead of writing detailed test plans, you explore the application organically, following your instincts about where issues might lurk.
Exponentials represent the explosive scaling potential when AI amplifies your testing capabilities beyond human limitations. This approach leverages automation to generate and execute test scenarios at a scale no manual tester could achieve, embracing quantity as a pathway to quality.
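To make the exponential concrete: scenario counts multiply while your effort stays linear. Here is a minimal sketch in plain Python, with hypothetical form fields and edge values; in practice an LLM would propose far more values than the handful hard-coded here.

```python
from itertools import product

# Hypothetical edge values for a signup form; an LLM could propose many more.
emails = ["a@b.co", "", "no-at-sign", "very+long." * 10 + "@example.com"]
names = ["Ada", "", " ", "Robert'); DROP TABLE users;--"]
ages = [-1, 0, 18, 10**9]

# Every combination becomes a scenario: 4 * 4 * 4 = 64 cases from 12 values.
scenarios = [
    {"email": e, "name": n, "age": a}
    for e, n, a in product(emails, names, ages)
]

print(len(scenarios))  # 64: exponential growth in scenarios, linear effort in values
```

Add one more field with four values and you're at 256 scenarios, which is exactly the kind of volume no manual tester would write by hand.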
The third element, forgetting the code exists, invites you to liberate yourself from implementation details and experience the product with fresh, user-focused eyes. By maintaining deliberate ignorance about how the system works internally, you position yourself to discover the same surprises and frustrations your users might encounter.
Don't get bogged down in how features were implemented; focus on whether they work for users. Test the application as if you had no idea how it was built, because that's how your users will experience it.
There are two ways to look at vibe testing.
The first is the strong version. Strong vibe testing is testing at its most chaotic and intuitive extreme: the pure embodiment of Karpathy's original vision. You completely abandon test plans, documentation, and methodology. You simply open the application and follow whatever draws your attention, with zero structure.
When you encounter issues, you copy the error message directly into an LLM and implement whatever solution it suggests, without questioning the underlying causes. You don't maintain test cases; why bother, when AI can regenerate scenarios on demand? You don't track bugs systematically, because your AI assistant will remember them (it won't).
Testing becomes a continuous stream of consciousness: click here, try that, ask AI, implement fix, repeat. You might have the AI generate hundreds of test variations and run them without reviewing them first. When stakeholders ask about test coverage, you vaguely gesture toward the AI and assure them, "the vibes are solid." The product ships when it feels right, not when specific criteria are met.
Strong vibes might look like this:
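As a runnable caricature of that loop: every function below is a stub invented for illustration (`follow_the_vibes`, `poke_at`, the "ship when it feels right" check), not a real tool.

```python
import random

random.seed(0)  # reproducible vibes, for demonstration only

def follow_the_vibes():
    """Stub for 'click whatever draws your attention'."""
    return random.choice(["login", "checkout", "profile", "search"])

def poke_at(feature):
    """Stub: pretend one feature feels broken (hypothetical)."""
    return feature != "checkout"

def ask_ai_for_fix(feature):
    """Stub for pasting the error into an LLM and shipping its answer."""
    return f"patch for {feature} (unreviewed)"

# No test cases, no bug tracker, no exit criteria: loop until it feels right.
vibes_are_solid = False
while not vibes_are_solid:
    feature = follow_the_vibes()
    if not poke_at(feature):
        print(ask_ai_for_fix(feature))  # apply whatever the AI suggests
    vibes_are_solid = random.random() > 0.5  # ship when it feels right

print("shipped on vibes")
```

Note what's missing: nothing is recorded, nothing is re-run, and the exit condition is literally a coin flip.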
But there is also the weak version-perhaps better called the practical version-that integrates vibes into a structured framework. You start with traditional test planning but leave space for intuitive exploration. Your test documentation exists but is augmented by AI rather than replaced by it.
You still create formal test cases for critical paths, but use AI to expand them with edge cases you might have missed. When exploring the application, you document insights from your intuitive sessions and incorporate them into your test suite. Error messages are analyzed with AI assistance, but you validate the suggestions against your understanding of the system.
The AI becomes an intelligent collaborator rather than a replacement for your judgment. It helps you scale your testing through intelligent generation of test scenarios, priority suggestions based on risk analysis, and automated test maintenance. But you remain the curator of the test strategy, incorporating vibes as an enhancement rather than surrendering to them completely.
This might be a weak vibe workflow:
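A hedged sketch of that workflow in plain Python: the paths, the `ai_suggest_edge_cases` stub, and the review step are all hypothetical stand-ins for a real LLM call and a real test suite.

```python
# Structure first, vibes layered on top.
critical_paths = ["signup", "checkout", "password_reset"]  # formal cases live here

def ai_suggest_edge_cases(path):
    """Stub for an LLM proposing extra scenarios; a real model call would go here."""
    suggestions = {
        "signup": ["unicode name", "expired invite link"],
        "checkout": ["expired card", "empty cart", "empty cart"],  # models repeat themselves
        "password_reset": ["reused token"],
    }
    return suggestions.get(path, [])

def human_review(cases):
    """You stay the curator: dedupe, and drop anything you can't explain."""
    return sorted(set(cases))

test_suite = {}
for path in critical_paths:
    test_suite[path] = human_review([f"{path} happy path"] + ai_suggest_edge_cases(path))

print(test_suite["checkout"])
```

The AI widens coverage; the human pass is what keeps the suite comprehensible.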
This balanced approach recognizes that both structure and intuition have their place, using AI to bridge the gap between methodical coverage and the unpredictable ways users will interact with your product in the real world.
There is already a backlash against vibe coding. It's seen, perhaps reasonably, as an option only for non-serious coders and non-serious products. You can't build robustly on vibes alone.
Instead, the AI answer for serious programming is Cursor tab coding. Cursor tab keeps coders in control of what the AI model is doing while still letting the model do the work: the AI proposes the next edit, and you review and accept each suggestion with a keystroke, so every change stays visible.
Unlike vibe coding, tab coding provides full visibility and understanding of the generated code. A version of this for testing would be ideal: guided, intentional automation while keeping the developer in control. Tab testing would likely look something like this:
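As a concrete sketch: the `accept` function below stands in for pressing Tab, and the endpoint, drafted assertions, and acceptance rule are all invented for illustration.

```python
# Hypothetical "tab testing": the AI drafts each check, and nothing
# enters the suite until you explicitly accept it.

def ai_draft_assertions(feature):
    """Stub for model-suggested checks; a real tool would stream these inline."""
    return [
        f"{feature}: returns 200 for a valid request",
        f"{feature}: rejects a missing auth token",
        f"{feature}: response matches the published schema",  # wrong for this API
    ]

def accept(suggestion):
    """Stand-in for pressing Tab: you read the suggestion and keep it or not."""
    return "schema" not in suggestion  # hypothetical: this API publishes no schema

suite = []
for suggestion in ai_draft_assertions("GET /orders"):
    if accept(suggestion):
        suite.append(suggestion)  # accepted: you understand this check
    # rejected suggestions never reach the suite, so coverage stays legible

print(len(suite))
```

The important property is the gate, not the stubs: every check in the suite is one a human read and understood.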
Unlike "vibe testing," where you might just let AI generate a bunch of tests without understanding them, tab testing would keep you in the loop, preserving your agency while eliminating the tedious parts of writing test boilerplate. Tab testing, like tab coding, would be collaborative and intentional rather than surrendering control completely to the AI's "vibes." You'd maintain understanding of your test coverage while accelerating the testing process itself.
Should you be going fully with the vibes when testing? Probably not. But these extreme ideas always contain a kernel of truth. In testing, the balance has shifted too much towards frameworks, structure, and rigidity.
Even as just a thought experiment, vibe testing throws up some interesting ideas: embracing intuitive exploration, leveraging AI to generate test scenarios at extreme scale, and approaching software with fresh eyes, unbiased by implementation knowledge. Doing this alongside strong frameworks and testing knowledge creates more comprehensive, user-focused test strategies that catch the bugs that matter most.
The future of testing isn't abandoning structure entirely but finding the sweet spot where human vibes and AI speed amplify each other for better-quality code.