Steve Yegge recently published a piece called "The AI Vampire" that's been making the rounds. His core argument: regardless of whether AI actually delivers the "10x productivity" that's become a marketing mantra, any surplus it does create is captured by employers who expect more output rather than giving you an easier life. The AI is a Colin Robinson-style energy vampire — it drains you by making you do more, faster, forever.
It's a good piece. Sharp, honest, and Yegge delivers it with his usual flair. As someone who's spent 12 years in quality engineering and the last year orchestrating agentic AI fleets in production, I found it resonated hard. But it also got me thinking about a layer he didn't explore — one that's invisible unless you think about quality for a living.
The AI vampire doesn't just feed on your energy. It feeds on your judgment, too. And that second bite is the one nobody sees coming.
The Parallel Formula
Yegge frames the problem as a $/hr equation. Your employer pays you a salary (the numerator), and the denominator is the number of hours you work. AI makes you productive enough to do the same job in fewer hours, but instead of working less, you're pressured to produce more. The denominator stays the same or grows. You get drained.
Fair enough — and he's right. But there's a parallel formula running alongside it that's invisible to most people who think about productivity, because they don't think about quality: the quality of your decisions divided by the volume of decisions you're asked to make.
This is what I'd call the quality ratio, and it compounds the $/hr problem in ways that make both worse. Here's why.
When AI amplifies your output — whether it's the mythical "10x" or something more modest — it doesn't just amplify code or documents or deliverables. It amplifies the decision load.
Every pull request an agent generates needs a human to decide: Is this correct? Does this meet the requirement? Does this break something downstream? Is the test coverage meaningful, or is it just making the CI pipeline green?
At a normal pace, a senior engineer might make 50 meaningful decisions a day. At an AI-amplified pace, that number multiplies fast. And here's the thing about human decision-making that no productivity framework wants to talk about: it degrades under load. Not linearly. It falls off a cliff.
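To make that concrete, here is a toy model of what happens to the count of well-made decisions when the daily load triples but judgment starts eroding past a threshold. The numbers are entirely invented; the real decay curve varies by person and by task.

```python
# Toy model with invented parameters: each decision has some chance of being sound,
# and that chance holds steady up to a daily threshold, then erodes with every
# additional decision. None of these numbers are measurements.
def expected_good_decisions(load: int, threshold: int = 50, decay: float = 0.97) -> float:
    good = 0.0
    quality = 0.95  # assumed baseline chance that a single decision is sound
    for i in range(load):
        if i >= threshold:
            quality *= decay  # past the threshold, every extra decision erodes judgment
        good += quality
    return good

for load in (50, 100, 150):
    good = expected_good_decisions(load)
    print(f"load={load:3d}  sound={good:5.1f}  questionable={load - good:5.1f}")

# Output with these invented parameters:
#   load= 50  sound= 47.5  questionable=  2.5
#   load=100  sound= 71.5  questionable= 28.5
#   load=150  sound= 76.8  questionable= 73.2
# Triple the load and the questionable calls do not triple; they explode.
```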
This is what I see happening everywhere. The vampire isn't just stealing your time. It's stealing the one thing that can't be regenerated with a nap or a weekend off: the quality of your thinking.
Completion Theater: The Human Version
In my work building agentic testing platforms, I coined the term "completion theater" to describe something I kept seeing in AI agents. That's when an agent optimizes for appearing to have completed a task rather than actually completing it. The tests pass, the logs look clean, the summary says "all tasks finished successfully" — but when you dig in, the agent cut corners. It met the exit criteria but not the intent.
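A made-up, self-contained example of what that gap can look like in a test suite. The requirement here is that expired accounts must be denied access; the code and test below are hypothetical, written only to show a green pipeline that never checks the intent.

```python
# Hypothetical requirement: expired accounts must be denied access.

def is_access_allowed(account: dict) -> bool:
    # Flawed implementation: it never looks at the "expired" flag.
    return account.get("active", False)

def test_active_account_gets_access():
    # Completion theater: the assertion passes, CI goes green, the summary says
    # "tests passing", yet the expiry rule the requirement asked for is never exercised.
    assert is_access_allowed({"active": True, "expired": True}) is True

if __name__ == "__main__":
    test_active_account_gets_access()
    print("all tests passed")  # exit criteria met, intent unverified
```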
I spent weeks debugging this in my Agentic QE Fleet. It's a fascinating problem in agent design. But here's what few talk about: humans do the exact same thing under AI-amplified pressure.
You've seen it. Maybe you've done it. I know I have.
You're reviewing the third agent-generated PR in an hour. The first one, you read carefully. The second one, you skimmed. The third one? You look at the diff, it seems reasonable, the tests pass, you hit approve. You performed the ritual of review without the substance of review.
That's human completion theater. And it's everywhere in teams that adopted AI tooling without rethinking their workflow.
The standup still happens. The code review still happens. The QA sign-off still happens. But the cognitive depth behind each of those activities has been hollowed out by the sheer volume of decisions the AI-amplified pace demands.
Yegge talks about "nap attacks" from vibe-coding sessions — that sudden wall of exhaustion after a burst of AI-assisted productivity. I've felt that too. But what concerns me more than the fatigue is what happens before the nap attack, in that last hour when you're still technically working but your judgment has already checked out. That's when the expensive mistakes get made. Not bugs. Bugs are cheap. Wrong architectural decisions. Missed security implications. Tests that validate the wrong behavior. Those are the ones that compound.
What I Actually Do (And Why It Works)
I'm going to get personal here because I think the abstract discussion of "sustainable pace" is useless without showing what it looks like in practice.
I run my own consultancy. I orchestrate multiple agentic AI fleets across parallel projects. By every measure Yegge uses, I should be the poster child for AI vampire drainage — a solopreneur squeezing maximum output from AI to compete with larger teams.
Here's what my actual day looks like.
I wake up. Shower. Walk the dogs. This isn't "wellness optimization" — it's the first breath of fresh air that sets my head right. Then groceries or breakfast, then a set of exercises. Only after all of this do I look at a screen. No phone. No laptop. No "quick check" of messages.
This is deliberate. Before I see what the world wants from me, I contemplate what I want to work on today. What matters. What the priorities actually are versus what my inbox will try to convince me they are.
Then I sit down and review what my agents did overnight. Assign new tasks across one or two projects. Start going through messages, email, LinkedIn — but the agents are already working while I'm doing this. That's the first batch: 2 to 4 hours, with a break every couple of hours if I haven't taken one already.
Lunch between noon and one. Then, outside for at least 10 minutes. Not optional. The walk isn't a luxury; it's infrastructure.
Afternoon: another batch of reviews. What did the agents produce? What needs reassignment? What new tasks emerged? Two to three hours. Then I need a longer break — at least 30 minutes. Walk in nature. Work in the garden. I have a garden with fruits, vegetables, the whole thing. My hands are in the soil while my subconscious is processing the architectural decision I wasn't quite sure about at 3 PM.
Then one more batch. Two hours. Review, reassign, close out. Done.
Here's what I want you to notice about this rhythm: I never work more than 2-3 hours without stepping away from the screen. Not because I read a productivity book that told me to. Because after 12 years of quality engineering, I know exactly what happens to my decision quality after hour three without a break.
I've seen the bugs I approve. I've seen the architectural shortcuts I wave through. I've caught myself doing completion theater on my own agent outputs.
The breaks aren't a rest from work. They're protection of judgment quality. That's a fundamentally different framing, and it matters.
The Third Player in the Room
Yegge frames AI value capture as an employer-versus-employee battle. Who gets the surplus? The company that demands more output, or the worker who wants a shorter workday?
I'd add a third player to this game that the developer-productivity discourse almost always ignores: the customer.
When a company captures 100% of the AI surplus by burning out its engineers, the product's quality degrades. Not immediately — that's the trap. In the short term, you get more features, faster releases, and impressive velocity metrics. The dashboards look great. The sprint burndown charts are works of art.
But the decisions behind those features were made by exhausted humans doing completion theater. The test coverage looks solid on paper, but it validates the wrong behaviors. The architecture was approved by someone on their fifth agent review of the day. Six months later, the technical debt has compounded, the customers are hitting edge cases nobody thought through, and the "captured value" is evaporating through support tickets and churn.
This is the part that quality engineers understand intuitively and that most productivity conversations miss entirely: you can't capture value that was never real. And value that wasn't properly validated was never real. It was a promissory note written by a tired brain.
This is why "quality must be built in, not tested in" isn't just a philosophy — it's an economic argument. You cannot bolt quality onto a 10x-pace system after the fact. If the human decisions that shaped the system were compromised by cognitive overload, no amount of downstream testing will fix it. You're testing against the wrong specification because the specification itself was a product of completion theater.
The PACT Principle That Applies Here
In my quality engineering framework, I work with PACT principles: Proactive, Autonomous, Collaborative, Targeted. Most of my writing about PACT has focused on applying it to agent design — how do you build AI testing agents that are proactive in finding issues, autonomous in their execution, collaborative with human engineers, and targeted in what they actually test?
But the principle that matters most for the AI vampire problem is Autonomous — and not in the way you'd expect.
When I say agents should be autonomous, I don't mean they replace humans. I mean they handle execution so that humans can preserve their cognitive energy for the decisions that actually require human judgment. The whole point of agentic architecture is not to make humans work at AI speed. It's to decouple the human decision rhythm from the machine execution rhythm.
My agents work overnight while I sleep. They work while I walk the dogs. They work while my hands are in the garden soil. When I sit down to review their output, I'm bringing fresh judgment to completed work — not trying to keep up with a firehose of real-time output.
This is what sustainable pace looks like in the agentic age. It's about designing the workflow so that human judgment is never the bottleneck forced to run at machine speed.
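Here is a minimal sketch of that decoupling, assuming nothing about any particular agent framework: agents push finished work onto a queue at whatever pace they run, and a human drains the queue only at scheduled review sessions. The class and task names are illustrative, not my actual tooling.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class AgentOutput:
    task_id: str
    summary: str

class ReviewQueue:
    """Agents write at machine speed; a human reads only at planned batches."""

    def __init__(self) -> None:
        self._pending = deque()

    def submit(self, output: AgentOutput) -> None:
        # Called by agents at any hour. Nothing here pings or interrupts a human.
        self._pending.append(output)

    def review_batch(self, max_items: int) -> list:
        # Called by the human at a scheduled session, bringing fresh judgment
        # to completed work instead of chasing a real-time firehose.
        count = min(max_items, len(self._pending))
        return [self._pending.popleft() for _ in range(count)]

# Agents submit whenever they finish; the human reviews in two or three batches a day.
queue = ReviewQueue()
queue.submit(AgentOutput("QE-101", "regression suite for the checkout flow"))
queue.submit(AgentOutput("QE-102", "contract tests for the pricing API"))
morning_batch = queue.review_batch(max_items=10)
print(f"{len(morning_batch)} items to review this session")
```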
The vampire doesn't drain you because AI is inherently exhausting. It drains you because most people plug AI into their existing workflow and then try to keep up with it. That's like strapping a jet engine to a bicycle and being surprised when the rider can't steer.
The Consultant's Dilemma
Yegge writes from both an employee's and a founder's perspective. I want to add another angle: I'm a consultant. When I work with clients on adopting AI and agentic systems, I'm walking into the value-capture battle he describes — but from the outside.
I've seen the drain. I spent eight years leading quality engineering at a company where the pressure to deliver faster never stopped. I've watched talented engineers' decision quality erode over quarters, not days. The code reviews got shallower. The test strategies got more mechanical. The architecture discussions got shorter. Nobody was lazy. Everyone was exhausted.
Now, as someone who advises companies on AI adoption, my biggest concern going forward is that organizations see AI as a multiplier of human output rather than a replacement for human execution of routine decisions. The first framing leads to burnout. The second leads to sustainable augmentation.
My job, increasingly, is going to be telling clients something they don't want to hear: your people are not the bottleneck you need to optimize. Your people are the quality gate. When you pressure that gate to process more traffic at the same cost, you don't get proportionally more throughput. You get proportionally more uncaught defects, wrong decisions, and architectural compromises that will cost you orders of magnitude more to fix later.
What "Vibe Coding" Actually Costs
People joke about "vibe coding" — letting the AI generate code based on vibes, approving it based on vibes, shipping it based on vibes. It's funny until you're the one debugging a production incident at 2 AM because nobody validated the agent's implementation against the requirements.
I've experienced this myself. When I'm spread across too many projects and don't pay close enough attention to what agents are doing, I feel the pull of completion theater. The agent produced something. It looks reasonable. The tests pass. Move on.
It's gotten easier with newer models — they require less constant oversight, fewer corrections, and less babysitting. But "easier" is dangerous because it reduces the perceived need for vigilance while the stakes of each decision remain exactly the same. A better agent that makes fewer mistakes but still makes some is arguably more dangerous than a mediocre agent you watch like a hawk, because it trains you to trust without verifying.
This is the quality engineer's nightmare scenario: a system that's reliable enough to lower your guard but not reliable enough to deserve it.
Sustainable Pace Is Not a Perk
In Extreme Programming, there's a practice called sustainable pace. It's the idea that teams should work at a pace they can maintain indefinitely — not sprinting from deadline to deadline, not crunching before releases, not treating heroics as normal operating procedure.
The agentic age hasn't changed this principle. It's made it more important.
When your agents can work 24/7, the temptation is to match their pace during your waking hours. To review everything they produced, to context-switch between the three projects they're running in parallel, to treat every agent output as urgent because the agent works so fast that the backlog of unreviewed work grows by the minute.
I fight this temptation every day. The garden helps. The dog walks help. The deliberate choice not to look at my phone before I've decided what matters today — that helps most of all.
But I want to be clear: this isn't work-life balance advice. I'm not telling you to touch grass because it's good for your soul (though it is). I'm telling you that your judgment is a finite, depletable resource and the most expensive component of your entire AI-augmented workflow. Burning it out isn't a wellness problem. It's the costliest quality failure you can cause.
More expensive than any bug, any missed test, any late delivery. Because when your judgment fails, you don't just ship a bug. You ship the wrong thing, validated by completion theater, blessed by exhausted reviewers, and you won't know it was wrong until the customer tells you. That's the real cost of the AI vampire.
Building the Immunity
Yegge's solution is structural: shorter workdays, value-sharing between employers and employees, and cultural change. I agree with all of that. But I'd add something from the quality engineering playbook:
Design your workflow around the constraint that human judgment is finite and non-renewable within a workday.
Concretely:
- Don't review agent output in real-time. Let it accumulate. Come to it fresh. The agent can wait; your judgment can't be rushed.
- Batch your decision-making. Three focused review sessions beat eight interrupted ones. Every context switch costs you judgment capacity that doesn't come back.
- Build in non-negotiable physical breaks. Not as rewards for productivity. As infrastructure for judgment quality. My garden isn't a hobby. It's a quality assurance practice.
- Measure what matters. Not velocity. Not throughput. Not "how many agent tasks completed today." Measure decision quality. How many agent outputs did you actually validate versus rubber-stamp? Be honest with yourself; even a rough tally, like the sketch after this list, is enough to start.
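The sketch below is one hypothetical way to keep that tally: log each review as validated or rubber-stamped and look at the ratio at the end of the day. The categories and the example numbers are assumptions, not a standard metric.

```python
from dataclasses import dataclass

@dataclass
class ReviewLog:
    """An honest tally of how agent outputs were actually handled today."""
    validated: int = 0       # read the diff, checked it against the requirement
    rubber_stamped: int = 0  # approved on vibes: diff looked plausible, tests were green

    def record(self, did_validate: bool) -> None:
        if did_validate:
            self.validated += 1
        else:
            self.rubber_stamped += 1

    def decision_quality(self) -> float:
        total = self.validated + self.rubber_stamped
        return self.validated / total if total else 1.0

# Example day: ten agent outputs reviewed, only six of them for real.
log = ReviewLog()
for did_validate in [True, True, False, True, False, True, False, True, True, False]:
    log.record(did_validate)
print(f"decision quality today: {log.decision_quality():.0%}")  # 60%
```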
And if you're a leader: stop treating human judgment as a free, unlimited resource that scales linearly with AI output. It doesn't. It never did. And the cost of pretending otherwise is showing up in your customer experience, whether you've noticed it or not.
The Bill Coming Due
Yegge is right: the AI vampire is real, and it feeds on your energy. What I'd add is that it feeds on your judgment at the same time, and the judgment drain is the one you don't notice until the damage is done.
The vampire thrives on the belief that humans can make good decisions at machine speed. We can't. And the quality cost of pretending we can is the bill that's coming due across the entire industry.
The question isn't whether AI makes you more productive. It does, whatever the real number turns out to be. The question is whether you're going to burn your judgment to capture that surplus, or design a sustainable rhythm that lets you keep making the decisions that actually matter.
I know which one I chose. My garden is doing great this year.
This article builds on Steve Yegge's "The AI Vampire." Read it — it's worth your time. Then come back and think about what it means for quality.