Building Moats, Not Bridges: How AI Became Bureaucracy's Best Friend
For £45, a website called Objector.ai will kill a housing project.
The UK startup launched in late 2025 with a simple pitch: paste in a planning application, and the AI scans it for vulnerabilities. Within minutes, it generates an objection letter, ranks the arguments most likely to succeed, and produces a lobbying video you can send to your local councillor. A companion service, Planningobjection.com, charges £99 and markets itself with the tagline "stop moaning and take action."
The tools arrived just as the UK Labour government was promoting AI to accelerate planning approvals. Instead, the technology got weaponized in the opposite direction. Planning lawyer Sebastian Charles warned that AI could "supercharge nimbyism" and cause the system to "grind to a halt." His firm had already seen AI-generated objections containing fabricated case law and regulations that elected officials were inclined to believe.
Facebook groups now encourage residents to use ChatGPT for objection letters, calling it "a planning solicitor at your fingertips." Tyler Cowen flagged the trend on Marginal Revolution last November: "We are just beginning to think about... nimbyism."
He was right. But housing is one front in a much larger war.
The Assumption
The standard AI narrative goes like this: artificial intelligence makes everything faster and cheaper. Summarize documents in seconds, draft contracts in minutes. The technology is a productivity engine, and the logical endpoint is a world with less friction.
This narrative has a blind spot the size of a courthouse. AI is not a productivity engine. It is an amplifier. It magnifies whatever you point it at. And the forces that slow things down in a modern economy are just as pointable as the forces that speed things up.
Litigation. Regulation. Public comment periods. Zoning appeals. Environmental reviews. FOIA requests. These systems exist for legitimate reasons. They also represent chokepoints where a small number of motivated actors can impose enormous costs on everyone else. AI just handed those actors a force multiplier.
The Courtroom Flood
The data is already here.
Fisher Phillips, a major employment law firm, reported that pro se employment lawsuit filings jumped from 4,100 to 6,400 in a single year. That is roughly a 56 percent increase. Attorneys at the firm described AI-drafted motions arriving within thirty minutes of a defendant filing an answer. Pro se plaintiffs who previously struggled with basic formatting now submit forty-page memoranda backed by hundreds of pages of evidence.
Researcher Damien Charlotin has documented 282 U.S. cases and more than 130 international ones where AI use in legal filings was flagged. "It really started to accelerate around the spring of 2025," he told NBC News. And that count only includes cases where AI involvement was caught. The actual number is almost certainly higher.
The National Center for State Courts launched an investigation into whether generative AI is increasing overall court filings. Early data from Wisconsin showed January 2025 filings running higher than 2024, though subsequent months were mixed. Courts are watching, but they are watching from behind.
Settlement values are already inflating. AI-assisted pro se cases are harder to dismiss on procedural grounds, which means defendants who previously would have won on paperwork alone now face substantive litigation. The cost of being sued just went up, even when the plaintiff is wrong.
This is not limited to employment law. The same dynamic applies to every adversarial legal context. Patent disputes. Insurance claims. Landlord-tenant fights. Any arena where one party can impose costs on another by filing paperwork.
The Arms Race Nobody Wins
A February 2026 article on Lawfare made the theoretical case explicit. In "AI Won't Automatically Make Legal Services Cheaper," Curl, Kapoor, and Narayanan identify three bottlenecks that prevent AI from reducing legal costs.
First, unauthorized practice of law regulations limit how directly consumers can access AI legal tools. Second, human decision-makers like judges create speed limits regardless of how fast either party can generate documents. Third, and most important: "When both parties adopt productivity-enhancing technologies, competitive equilibria simply shift upward."
That third point is the crux. In an adversarial system, a productivity gain for one side is a productivity gain for both sides. The equilibrium does not settle at "faster and cheaper." It settles at "the same speed, but more expensive, with more paper."
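The upward shift can be made concrete with a toy contest model. This is a sketch under invented numbers, not anything from the Lawfare article: assume each side's win odds depend only on the ratio of filings produced, so a uniform productivity gain multiplies the paperwork while leaving the outcome untouched.

```python
# Toy contest model: each side converts effort (hours) into filings at some
# shared productivity rate. Win odds depend only on the RATIO of filings,
# so a uniform productivity gain changes volume but not outcomes.
# All numbers are illustrative assumptions.

def outcome(hours_a, hours_b, productivity):
    filings_a = hours_a * productivity
    filings_b = hours_b * productivity
    p_a_wins = filings_a / (filings_a + filings_b)  # simple ratio contest
    return filings_a + filings_b, p_a_wins

total_before, p_before = outcome(100, 150, productivity=1)   # pre-AI: 1 filing/hour
total_after, p_after = outcome(100, 150, productivity=10)    # post-AI: 10 filings/hour

assert p_before == p_after              # same winner odds either way
assert total_after == 10 * total_before  # ten times the paperwork
```

Both sides sprint, neither gains ground, and the court absorbs ten times the volume.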
The authors point to a historical precedent. When legal discovery went digital in the early 2000s, the expectation was that searchable documents would reduce costs. Instead, the explosion of available digital material gave both sides more ammunition. Parties exploited the larger document pool to impose greater burdens on opponents. Discovery costs went up, not down.
AI is tracking the same curve. Better AI tools for plaintiffs mean more litigation. Better AI tools for defendants mean more aggressive defense. The net result is more legal activity, not less. More motions, more filings, more hours billed, more court time consumed.
This dynamic plays out everywhere AI touches an adversarial system. The petition sites, the congressional phone lines, the public comment portals. Every tool that helps a citizen fight city hall is the same tool that helps a corporation flood a regulatory docket. Every AI that drafts a compelling objection letter for a neighborhood group drafts an equally compelling one for a developer's lobbyist. The technology does not pick sides. It just turns up the volume on both.
What Eighteen Million Fake Comments Taught Us
In 2017, the FCC opened a public comment period on net neutrality. Of the roughly 22 million comments received, nearly 18 million were fake. Fewer than 800,000 appeared to be unique comments from actual humans. The New York Attorney General eventually secured $615,000 from three companies that had supplied the fabricated comments.
That was before large language models. The fake comments were generated with crude templates and stolen identities, and they were still effective enough to distort the process. With modern AI, the quality would be indistinguishable from genuine public input, and the volume could scale by orders of magnitude.
Congress noticed. A House bill introduced in 2024 would task the GAO with reporting on the prevalence and impact of AI-generated comments in federal rulemaking. The bill does not aim to block AI-assisted comments outright. It would establish a framework for identifying and managing them. That framework does not yet exist.
Meanwhile, the Trump administration has reportedly used AI to flag visa holders for review, identify federal workers for termination, and propose mass regulatory repeals. Democracy Forward filed FOIA lawsuits in response. AI is being deployed as a bureaucratic weapon by government itself, not just against it.
The pattern keeps repeating. Every system designed to give citizens a voice becomes a system that can be gamed at scale. Public comment periods. Petition platforms. Phone banks. Congressional contact forms. The infrastructure of democratic participation was built for a world where generating a thoughtful objection required time and effort. That constraint is gone.
Both Sides of the Ledger
The honest version of this story acknowledges that AI cuts bureaucracy too.
Startups like GovStream.ai, CivCheck, and GovWell are using AI to accelerate building permits. AI-powered plan review and code compliance checking can compress weeks of manual review into hours. Pennsylvania eliminated 73 percent of its environmental permit backlog in two weeks using process improvements that included AI tools.
Ohio's Supreme Court maintains an AI Resource Library for case management, virtual legal assistance, and guided interviews. Thomson Reuters sells CoCounsel to government legal teams. The State Department and National Archives use AI-assisted FOIA processing.
These are real gains. But they operate in a different context. Permitting acceleration works because it is a one-sided process: the government reviews an application. There is no adversary generating counter-documents. Court case management works because it is administrative, not adversarial.
The moment two parties with opposing interests both get AI, the gains start canceling out. This is not a technology problem. It is a game theory problem.
Jevons Paradox, but for Paperwork
In 1865, the economist William Stanley Jevons observed that improvements in coal efficiency did not reduce coal consumption. They increased it. Making coal cheaper to use meant people found more uses for it. The total amount of coal burned went up.
The same logic applies to bureaucratic friction. Making it cheaper to file an objection does not reduce the number of objections. It increases them. Making it easier to draft a lawsuit does not reduce litigation. It creates more of it. Making it trivial to generate a public comment does not improve democratic participation. It drowns the signal in noise.
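The Jevons logic reduces to a one-line induced-demand model. Everything below, the demand curve, the elasticity value, the costs, is an illustrative assumption, not data: when demand for filings is sufficiently elastic, cutting the cost per filing increases total resources consumed.

```python
# Toy induced-demand model (Jevons-style): the number of filings rises as
# the per-filing cost falls. With demand elasticity above 1, making each
# filing cheaper INCREASES the total burden. Parameters are illustrative.

def total_burden(cost_per_filing, k=1000.0, elasticity=1.5):
    filings = k * cost_per_filing ** (-elasticity)  # assumed demand curve
    return filings * cost_per_filing                 # total spend on filings

expensive = total_burden(cost_per_filing=100.0)  # hand-drafted objections
cheap = total_burden(cost_per_filing=1.0)        # AI-drafted objections

assert cheap > expensive  # cheaper filings, larger total burden
```

With these parameters the hand-drafted world produces one filing at a total cost of 100, while the AI world produces a thousand filings at a total cost of 1,000. The efficiency gain is real at the level of the individual filer and self-defeating at the level of the system.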
The question is not whether AI can cut through bureaucracy. It obviously can. The question is whether AI cuts through bureaucracy faster than it generates new bureaucracy. And in any system where opposing parties both have access to the same tools, the answer leans toward more friction, not less.
This does not mean the situation is hopeless. It means the solution is institutional, not technological. The systems that govern how objections are filed, how comments are weighted, how cases are managed, and how permits are processed were all designed for a pre-AI world. They assume that generating a substantive filing requires meaningful effort. That assumption is dead.
The rules need to catch up. Not by banning AI from civic participation, which would be both unenforceable and counterproductive. But by redesigning processes so that volume alone cannot substitute for substance. Weighted comment systems. Filing caps. Automated quality filters. Structured input formats that resist spam while preserving genuine voices.
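One of those ideas, an automated quality filter, can be sketched in a few lines. The normalization rule and names below are hypothetical, and a production system would need semantic similarity rather than punctuation-stripping, but the principle is the same one the FCC episode demanded: collapse template variants into a single weighted entry instead of counting each copy as a voice.

```python
# Minimal sketch of a weighted comment filter: near-identical template
# comments collapse into one entry whose weight records the volume.
# The fingerprint here is deliberately crude; real systems would need
# semantic deduplication. All names are illustrative.
import re
from collections import Counter

def fingerprint(comment: str) -> str:
    # Lowercase and strip punctuation/whitespace runs so trivial template
    # edits ("I OPPOSE this rule!!" vs "i oppose this rule") share a key.
    text = re.sub(r"[^a-z0-9 ]", "", comment.lower())
    return re.sub(r"\s+", " ", text).strip()

def weighted_tally(comments: list[str]) -> Counter:
    # One entry per distinct fingerprint, weighted by how often it appears.
    return Counter(fingerprint(c) for c in comments)

tally = weighted_tally([
    "I oppose this rule.",
    "I OPPOSE this rule!!",
    "i oppose   this rule",
    "This rule protects consumers and should stand.",
])
# Three template variants collapse into one entry with weight 3.
```

The point is not the implementation. It is that the tally now distinguishes one argument repeated three thousand times from three thousand arguments.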
Until then, every AI capability upgrade is a gift to both sides of every fight. And in a system where obstruction is already easier than construction, that is not a neutral gift.