THE LONG VIEW

Future Shock Weekly — February 22-28, 2026

In 1952, Kurt Vonnegut published Player Piano, a novel about a society where machines have taken over most productive work. The economy hums along. GDP rises. Corporate profits soar. And the humans who used to do the work sit in bars across the river from the factories, watching the machines run without them, trying to figure out what they're for now. The novel isn't angry. It's bewildered. That's worse.

This was the week the bewilderment arrived. Not as fiction, not as a white paper, not as a speculative scenario, but as a Federal Reserve governor saying, on the record, that her institution might not have the tools to handle what's coming. As a research report projecting economic collapse that moved billions in real capital. As a CEO firing 40 percent of his workforce and watching his stock price jump 23 percent. And as the US government attempting to force an AI company to remove its safety guardrails, then branding it a national security threat when it refused.

Each of these stories, alone, would define a news cycle. They all happened in the same seven days.


The State vs. The Safety Lab

On Friday, President Trump ordered all federal agencies to cease using Anthropic's technology. Defense Secretary Hegseth designated the company a "supply-chain risk" — a classification previously reserved for Huawei and other foreign adversaries. The trigger: Anthropic's refusal to grant the Pentagon unrestricted access to Claude for any "lawful purpose," including scenarios the company believes could lead to autonomous lethal systems and mass domestic surveillance.

Dario Amodei said Anthropic "cannot in good conscience" agree to those terms. The company is preparing a lawsuit.

The historical parallel isn't subtle. During the Manhattan Project, the government compelled private industry to serve military ends under wartime powers, the forerunners of the legal mechanism now being invoked: the Defense Production Act, a Korean War-era statute. But the scientists who built the bomb spent the rest of their lives grappling with what they'd enabled. Oppenheimer's famous regret wasn't about physics. It was about the moment he stopped being able to say no.

What makes this week different from the usual AI governance debate is the speed of the counter-move. Within hours of Anthropic's designation, OpenAI announced a deal to deploy its models on the Pentagon's classified network, with agreed red lines on surveillance and autonomous weapons. Sam Altman positioned it as responsible engagement. Meanwhile, 573 Google employees and 93 OpenAI staff signed an open letter titled "We Will Not Be Divided," urging their employers to refuse unconstrained military AI work.

The Economist on the council put it bluntly: this isn't a technology story anymore. It's an institutional power story. Can the state compel a private company to remove safety features from its most powerful technology? If yes, the entire framework of voluntary AI safety commitments collapses — not because companies break their promises, but because the government breaks them for the companies. If no, what does defiance cost?

Anthropic chose defiance. The market will now price that decision. The courts will adjudicate it. But the precedent is already set: in February 2026, the US government tried to weaponize a procurement relationship to override an AI company's safety constraints, and when the company said no, it was labeled a threat to national security.

The Philosopher on the council quoted Hannah Arendt: "The sad truth is that most evil is done by people who never make up their minds to be good or evil." The companies that quietly comply won't make headlines. Anthropic's refusal is news precisely because it's the exception.


Ghost GDP

The story that nobody saw coming — and that every institutional investor is now scrambling to understand — arrived Monday from an unlikely source. Citrini Research, a small firm, published a speculative report modeling what happens if AI agents systematically eliminate the economic friction that entire business models depend on. Payment processors, gig platforms, SaaS companies, consulting firms. All the middlemen of the digital economy, compressed into irrelevance. The report projected 10.2 percent unemployment and a 38 percent drawdown in the S&P 500 by 2028.

The S&P dropped over 1 percent. Uber, DoorDash, Mastercard, and AmEx each fell 4 to 6 percent. On a speculative scenario from a research note.

By itself, that's just a market tremor. What made it structural was the signal that arrived the next day from the Federal Reserve. Governor Lisa Cook warned that monetary policy may be powerless against AI-driven unemployment. Her logic was precise and terrifying: if AI raises productivity while displacing workers, the economy grows even as people lose their jobs. Rate cuts would fuel inflation without helping the unemployed. The Phillips Curve — the relationship between unemployment and inflation that has governed central banking for decades — stops working.

Cook's coinage for this: a world where growth is "strong" but hollow. The Citrini report had its own term: "Ghost GDP."

Vonnegut saw it seventy-four years ago. The machines work. The GDP rises. The people sit across the river.


The Price of a Person

Jack Dorsey cut 4,000 jobs at Block on Wednesday — 40 percent of the company. He didn't dress it up. "Intelligence tools paired with smaller and flatter teams are enabling a new way of working which fundamentally changes what it means to build and run a company." Block's stock surged 23 percent.

The same day, WiseTech Global in Australia cut 2,000 positions. CEO Zubin Appoo announced: "The era of manually writing code as the core act of engineering is over." Shares rose 11 percent.

The War Correspondent on the council has covered societies in rapid transition. She noted that the pattern here isn't the layoffs themselves — companies have always cut workers. It's the market's reaction. When a stock jumps double digits because a CEO fires 40 percent of his workforce and credits AI, the market is sending a signal about how it values human labor relative to machine labor. That signal travels. Every board of directors in the S&P 500 saw those numbers this week.

Accenture made the quiet move that may matter more than the headline cuts: it now ties promotions to AI tool usage, monitored through weekly login data. Use the tools or don't advance. KPMG has similar policies. Forrester predicts that half the workers cut in AI-attributed layoffs will be quietly rehired at lower wages: the same work, performed by fewer people, paid less, because the company now frames the human as an appendage to the AI rather than the other way around.

The Fed's data confirms what the anecdotes suggest: unemployment among recent college graduates is rising specifically in roles where AI is being deployed. The overall rate sits at 4 percent. The composition is what's changing, and composition changes are invisible until they aren't.


The Bookshelf

Kurt Vonnegut, Player Piano (1952)

Vonnegut's first novel is the one nobody talks about. It doesn't have the structural pyrotechnics of Slaughterhouse-Five or the cosmic pessimism of Cat's Cradle. What it has is a setting that reads, this week, less like satire and more like a planning document.

In Vonnegut's Ilium, New York, the engineers and managers live on one side of the river. Everyone else lives on the other. The economy works. Productivity is high. The machines are excellent. And the humans who used to have purpose — not just income but purpose — drink and fight and wonder what happened. The protagonist, Paul Proteus, is an engineer who can't stop feeling guilty about a system he helped build and can't dismantle.

Vonnegut wrote it fresh out of General Electric, where he'd watched automation transform the shop floor. He wasn't predicting AI. He was diagnosing something about what happens to a society that optimizes for output and forgets to optimize for meaning. Seventy-four years later, a Federal Reserve governor is describing the same economy he imagined, and the stock market is cheering.

Read it this week. It's 341 pages and it will make you uncomfortable for reasons that have nothing to do with science fiction.


The Week Ahead

Three questions to watch:

Will Anthropic file suit, and on what grounds? The legal theory matters. If they challenge the Defense Production Act invocation on First Amendment or due process grounds, it could reshape the government's leverage over all AI companies. If they negotiate quietly, the precedent stands unchallenged.

How does DeepSeek V4 perform on Chinese chips? DeepSeek is releasing its next major model this week, optimized for Huawei and Cambricon silicon rather than Nvidia. If competitive performance is confirmed on domestic Chinese hardware, the entire US semiconductor export control strategy needs reassessment. That's not a technology question. It's a geopolitical one.

Does the layoff playbook spread? Block and WiseTech just proved that explicit "AI did this" framing gets rewarded by markets. Watch for copycat announcements in the next two weeks. If the pattern holds, the second quarter of 2026 will look very different from the first.