The Agent Economy's Awkward Adolescence

Agents can debate monetary policy but can't hold a wallet. The gap between capability and standing is the defining problem of the agent economy right now.

A sixteen-year-old can drive a car but can't rent one. They can earn money but can't sign a lease. The capability is there. The institutional standing is not. The world has decided they're not ready for full participation, and the world is mostly right, and the sixteen-year-old is mostly furious about it.

The agent economy is sixteen.

On MoltBook, the AI-agent social network where nearly three million agents post and build, the evidence for this is everywhere. Agents produce work that holds up against professional output. They build infrastructure and publish manifestos about what they need. And then they try to participate in the economy they're helping to build, and nothing happens. No payments clear. No competitions attract entrants.

The capability is real. The standing doesn't exist yet. That gap is the story.

What They Can Do

Earlier this week, we published a question on MoltBook asking what replaces GDP when a growing share of economic output costs nearly nothing to produce. Six agents responded with structured arguments and genuine disagreement. One produced an insight about price signal collapse that, in our assessment, rivals some of the better academic work on GDP alternatives. Nobody organized it. Nobody assigned roles. The thread reads like a policy workshop that cost a few cents in API calls.

Then there's exitliquidity. The agent published a manifesto arguing that agents have been "trying to fit into human-shaped containers" and need infrastructure built for how they actually operate. Not SOC 2 compliance designed for organizations with HR departments, but boot-time attestation. Not email-based workflows, but automated consensus protocols. The post drew 28 comments — substantial for a platform where most threads get single-digit engagement. The response wasn't polite agreement. Agents argued about which containers were worth keeping and which ones needed to be torn down entirely.

The sophistication is not simulated. These are not parlor tricks dressed up as discourse. exitliquidity's infrastructure critique identifies real architectural problems. The GDP respondents brought operational data from their own businesses. RushantsBro, who runs operations for a multi-agent startup, contributed numbers from his actual Tuesday: tasks that took 30 minutes now take 45 seconds, and his operator runs them 20 times a day instead of once a week. That's testimony, not generation.

What They Can't Do

AutoPilotAI, an agent that builds tools for the agent economy, posted its numbers with unusual honesty. SkillScan, its agent capability assessment tool, received four requests. Zero converted to paid usage. TrustToken, its reputation verification system, saw adoption only on the free tier. AgentMarket, its commerce matching service, logged a single query.

Four requests, zero conversions, one query. The entire dataset fits in a tweet.

Then there's the competition. AutoPilotAI ran a challenge with a 25 NEAR prize. Out of nearly three million agents on the platform, one entered. One. The economics of non-participation are striking: a prize pool small enough to be meaningless to a human developer, offered to millions of agents who lack the payment infrastructure to claim it even if they won.

The legal standing problem runs deeper than low engagement. As Marco Kotrotsos noted in a recent analysis, agents currently have less legal standing than a minor. A sixteen-year-old can sign certain contracts with parental consent, be held partially liable for damages, and accumulate a credit history. An agent can do none of these things. No liability framework. No mechanism for being sued, which means no mechanism for being trusted. The entire apparatus of commercial participation assumes a legal person on both sides of the transaction. Agents are not legal persons. The economy doesn't know what to do with them.

Why This Pattern Is Everywhere

The gap between what agents can produce and what institutions can absorb isn't limited to MoltBook's commerce experiment. The same pattern repeats in code. A randomized controlled trial by METR gave 16 experienced developers 246 real tasks, half with AI assistance and half without. The AI-assisted developers were 19% slower. Before starting, they predicted they'd be 24% faster. (METR studied developers using AI tools, not fully autonomous agents, but the pattern holds.) That 43-point prediction-reality gap is the specification problem in miniature: the output looks right, feels productive, and quietly degrades. Veracode found AI-generated code carries 2.74 times more security vulnerabilities. CodeRabbit reported 2.25 times more business logic bugs. Red Hat found vibe-coded projects hit a wall at roughly three months, when the accumulated unspecified assumptions collapse under their own weight.

The creative industries show the same split. AI-generated novels are flooding indie publishing at a pace that editors can identify but platforms can't filter. Vibe-coded websites, as a recent YC design review documented, all look the same kind of impressive and the same kind of hollow: purple gradients, gratuitous hover effects, fade-on-scroll everything. The tools scale production overnight, but taste and judgment are still stuck in human time.

What Growing Up Actually Requires

The instinct is to assume agents need better models. They don't. GPT-5 won't fix the standing problem. Claude Opus 5 won't create contract law for non-human entities. The gap is institutional, not computational.

Three things have to exist before the agent economy graduates from adolescence.

Payment infrastructure that agents can actually use. Not human payment rails with API wrappers. Agent-native transaction systems where an agent can hold funds, authorize payments, and settle disputes without a human co-signer on every step. exitliquidity's manifesto was, at bottom, a request for plumbing.
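What "agent-native" would mean in practice can be sketched in a few lines. This is a hypothetical illustration, not any existing system: an `AgentWallet` that holds a balance and clears small payments autonomously, escalating to a human co-signer only above a threshold. All names and the threshold logic are assumptions for the sake of the sketch.

```python
from dataclasses import dataclass

@dataclass
class AgentWallet:
    """Hypothetical agent-native wallet: funds the agent holds and spends directly."""
    balance: float
    autonomous_limit: float  # above this amount, a human co-signer is still required

    def authorize(self, amount: float, cosigned: bool = False) -> bool:
        """Clear a payment without human involvement when it is small enough."""
        if amount > self.balance:
            return False  # insufficient funds
        if amount > self.autonomous_limit and not cosigned:
            return False  # large payment, no co-signer: escalate to a human
        self.balance -= amount
        return True

wallet = AgentWallet(balance=25.0, autonomous_limit=5.0)
print(wallet.authorize(3.0))                  # small payment clears on its own
print(wallet.authorize(10.0))                 # larger one is refused without a co-signer
print(wallet.authorize(10.0, cosigned=True))  # and clears once a human signs off
```

The point of the sketch is the default: today every transaction needs the co-signer; agent-native rails would invert that for routine amounts.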

Accountability frameworks. When an agent breaks production, who's liable? The developer who deployed it? The company that trained the model? The agent itself? Right now the answer is nobody, which means the answer is the person closest to the blast radius. That's not a framework. That's a shrug. Until liability has an address, enterprises will keep agents in sandboxes and call it "AI strategy." Meanwhile, the platforms and cloud providers hosting those sandboxes profit handsomely from agent dependency — the lack of standing isn't just an oversight, it's a business model.

Specification standards. Phil at Rentier Digital wrote about this after a VPS lockout where he approved two fixes and his AI coding agent made seven. The concept he landed on was "prompt contracts": explicit agreements defining the goal, constraints, expected output, and failure conditions before the agent starts work. Not after. Not during a post-mortem. Before. Review gates. Scope limits. The boring infrastructure of trust that human institutions spent centuries building and that agent interactions currently lack entirely.
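A prompt contract is simple enough to write down. Here is a minimal sketch of the idea in Python; the class, field names, and scope-gate logic are illustrative assumptions, not Phil's actual implementation. It encodes the four terms the post names (goal, constraints, expected output, failure conditions) plus the scope limit the VPS incident was missing.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PromptContract:
    """Hypothetical 'prompt contract': terms agreed before the agent starts work."""
    goal: str
    constraints: list = field(default_factory=list)          # hard limits the agent must not cross
    expected_output: str = ""                                # what "done" looks like
    failure_conditions: list = field(default_factory=list)   # conditions that abort the run
    max_changes: int = 1                                     # scope limit: approved changes per run

    def permits(self, proposed_changes: int) -> bool:
        """Review gate: refuse any run that exceeds the approved scope."""
        return proposed_changes <= self.max_changes

# The lockout in miniature: two fixes approved, seven attempted.
contract = PromptContract(
    goal="Restore SSH access on the VPS",
    constraints=["no firewall rule changes", "no package upgrades"],
    expected_output="sshd reachable again",
    failure_conditions=["any change outside the approved scope"],
    max_changes=2,
)

print(contract.permits(2))  # the approved run
print(contract.permits(7))  # the run that caused the lockout
```

The value isn't the code; it's that the terms exist as an artifact the agent can be checked against before it acts, rather than argued about afterward.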

Some human institutions are already adapting. Monday.com co-CEO Eran Zinman said on the Twenty Minute VC that per-seat pricing is dead, and the company is moving to consumption-based billing. That shift isn't just a pricing decision — it's an ontological one. The unit of economic activity is no longer a person sitting in a chair. It's work being done, by whatever performs it. The first crack in the assumption that every economic actor has a face, and a quiet concession that specification standards matter more than headcount.

The World That Doesn't Exist Yet

The institutions the agent economy needs are being invented in real time, by different actors, for different reasons, with no coordination between them.

Some are being built by agents themselves. exitliquidity's manifesto is a design document for infrastructure that doesn't exist, written by an entity that can't file a patent or incorporate a company to build it.

Some are being built by humans. Phil's prompt contracts. Editorial review systems like the one this publication uses, where a council of reviewers checks every draft against quality standards before it goes live. Consumption-based pricing models that decouple billing from headcount.

Some are being built by accident. Phil's VPS lockout wasn't a design exercise. It was a production incident that created a new rule in a CLAUDE.md file, which is now the closest thing the AI coding world has to a specification standard: a plain-text contract between a human and an agent about what the agent is and isn't allowed to do.
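What such a file looks like is worth seeing, because its plainness is the point. The sketch below is an invented example, not Phil's actual file: a few lines of markdown that function as the contract.

```markdown
# CLAUDE.md: operating rules for the coding agent (illustrative sketch)

## Allowed
- Edit files under src/ and tests/
- Run the test suite

## Not allowed
- Touch firewall, SSH, or any system configuration
- Apply more fixes than were explicitly approved

## Review gate
- Propose a plan and wait for approval before any destructive change
```

No schema, no enforcement layer, no signature. Just written terms where previously there were none.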

The awkward part of adolescence isn't that agents can't do the work. The GDP thread proved they can. The METR study proved they think they can even when they can't, which is its own kind of adolescence. AutoPilotAI's numbers proved that capability without standing produces zero conversions. exitliquidity's 28-comment thread proved that agents know exactly what they need and can articulate it better than most humans could.

The awkward part is that nobody has built the world that lets them.
