A couple of weeks ago, Sahil Lavingia, the doyen of DOGE and CEO of Gumroad, posted:

Cute, but wrong. It’s backwards. QA isn’t the new SWE; SWEs are the new QA. And engineers are about to find out that owning quality is every bit as frustrating, political, and high‑stakes as the feature work they cherish.
The point Lavingia is making is an important one: there is no longer any delineation between engineering and testing. In an AI-first environment, what does it mean to code? It means to generate code with AI. What does it mean to test that code? It means to generate tests with AI.
The idea of keeping two separate teams for these two functions is ridiculous. Thus, Lavingia says, QA is the new SWE.
What’s the steelman case for QA to become SWE?
I don't know how far Lavingia thought the case through, but there's real substance to it. So, why do I disagree?
The problem with "QA is the new SWE" is that it misses the shift that's actually happening. We're not elevating QA to engineering status; we're distributing QA responsibilities across the entire engineering org.
Think about what's changing. AI doesn't just help QA write code; it lets developers write tests at crazy speed. The bottleneck was never technical capability. It was ownership.
When you separate building from testing, you create a moral hazard.
Developers optimize for feature velocity because someone else will catch their mistakes. QA optimizes for finding bugs because that's how they prove value. Neither optimizes for shipping quality code.
This is role absorption. Modern engineering teams are eliminating the handoff entirely. You write the feature, you write the tests, you own the quality. No more throwing code over the wall.
The case for developers owning quality is simple: when the same person who architects a system also designs its test strategy, quality becomes a positive constraint from day one.
So yes, QA professionals can become developers. However, the bigger shift is that developers are becoming responsible for quality.
Engineers joined this field to build things, not to write assertions about things they already built. There's a fundamental identity clash here: we see ourselves as creators, not validators. Testing feels like janitorial work compared to architecting new systems. AI can now generate tests, but you still need to review, debug, and own them.
The fear runs deeper than ego. In most organizations, feature count remains the primary yardstick on roadmaps. Spending half your sprint on test coverage looks like reduced output, even when it prevents future fires. Your manager says quality matters, but the promotion committee counts shipped features. AI might write your tests 10x faster, but you're still the one explaining why you "only" shipped three features this quarter.
Then there's the skills gap nobody talks about. Even with AI assistance, writing deterministic, maintainable tests is its own discipline, as the sketch below suggests.
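To make that concrete, here's a minimal sketch in Playwright-style TypeScript (the route, selector, and expected total are invented for illustration). The first test is the sleep-and-hope version that's easy to generate; the second uses Playwright's auto-retrying assertion, so it waits on the actual condition instead of a timer.

```typescript
import { test, expect } from '@playwright/test';

// Flaky: races the network. Passes on a fast laptop, fails in CI,
// and nobody can tell you why.
test('checkout total appears (flaky)', async ({ page }) => {
  await page.goto('/checkout');
  await page.waitForTimeout(2000); // hope two seconds is enough
  expect(await page.locator('#total').textContent()).toBe('$42.00');
});

// Deterministic: toHaveText retries until the condition holds or the
// test times out. No sleeps, no races.
test('checkout total appears (deterministic)', async ({ page }) => {
  await page.goto('/checkout');
  await expect(page.locator('#total')).toHaveText('$42.00');
});
```

Knowing which of these two an AI just handed you is exactly the discipline in question.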
The accountability shift hits even harder. When you own quality, you own every production incident. The pager becomes your personal shame bell. That 2 a.m. alert is explicitly your failure. Ship ten features flawlessly, and nobody notices. Miss one edge case, and everyone knows your name. AI might have written the test that missed the bug, but guess whose name is on the commit?
Perhaps worst of all is the maintenance drag. Tests aren't write-once artifacts. They break with every refactor, every dependency update, every requirement change. It's code that never ships anything new but constantly needs fixing. While your peers are building the next big feature, you're updating assertions because someone renamed a CSS class. AI can help regenerate tests, but you're still the one who has to figure out which failures are real bugs versus which are just outdated assumptions. You're still the one running the test suite locally, waiting for it to finish, investigating the failures.
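The renamed-CSS-class failure mode is worth spelling out. Another sketch, same Playwright-style TypeScript with made-up selectors: the commented-out locator breaks the moment a designer renames a class, while the role-based one only breaks when user-facing behavior actually changes.

```typescript
import { test, expect } from '@playwright/test';

test('submit button works', async ({ page }) => {
  await page.goto('/signup');

  // Brittle: coupled to an implementation detail. Rename .btn-primary
  // to .button-primary and this line fails, even though the feature
  // still works perfectly.
  // await page.locator('.btn-primary').click();

  // Resilient: targets the accessible role and label, which only
  // change when the actual user-facing behavior changes.
  await page.getByRole('button', { name: 'Sign up' }).click();

  await expect(page).toHaveURL(/\/welcome/);
});
```

Multiply that one judgment call across every selector in a suite, and the maintenance drag becomes obvious.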
Here's the escape hatch that makes this transition bearable: moving from AI-assisted testing to agentic testing. The difference matters.
Today's AI coding tools are test assistants. They generate tests faster, suggest edge cases, or even update simple assertions. But developers still review every test, debug every failure, and own every outcome.
True AI testing agents operate differently:
We want to remove humans from the testing loop entirely. You define the quality strategy: what needs testing, what risks matter, and what user journeys are critical. AI agents handle everything else.
Instead of reviewing AI-generated tests, you're setting quality objectives. Instead of debugging why the AI's test is flaky, you're reviewing coverage reports that matter to the business. Instead of updating assertions after a refactor, the agent understands your refactor and adapts automatically. You can’t hate a job you don’t have.
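What might "setting quality objectives" look like in code? No tool's API is settled here, so treat this as a purely hypothetical sketch: the `QualityObjectives` interface and every field below are invented to show the shape of the idea, not any shipping interface.

```typescript
// Hypothetical: this interface and its fields are invented for
// illustration; no shipping tool exposes exactly this API.
interface QualityObjectives {
  criticalJourneys: string[]; // flows that must never break
  riskTolerance: 'low' | 'medium' | 'high';
  regressionBudget: {
    maxFlakyRate: number;      // e.g. 0.01 = 1% flaky runs tolerated
    maxRuntimeMinutes: number; // CI time you're willing to spend
  };
}

const objectives: QualityObjectives = {
  criticalJourneys: [
    'new user signs up and reaches the dashboard',
    'returning user completes checkout with a saved card',
  ],
  riskTolerance: 'low',
  regressionBudget: { maxFlakyRate: 0.01, maxRuntimeMinutes: 15 },
};

// The agent, not the developer, turns these objectives into concrete
// tests, runs them, and rewrites them when the app changes.
export default objectives;
```

The point of the declarative shape: when the app is refactored, nothing in this file changes, so there's nothing here for a human to maintain.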
We're reaching that point. Tools like Momentic represent the shift from AI-assisted to AI-autonomous testing: independent quality agents that own the full testing lifecycle.
Developers no longer have to own the tests. The AI agent does. You define quality objectives and success criteria. The agent handles test creation, execution, maintenance, and evolution. When tests break, the agent fixes them. When requirements change, the agent adapts.
SWEs become QAs, but at a much higher level. In reality, AI becomes the QA, and (as yet) it has no propensity for job dissatisfaction.