Sci-Fi Report — Week of Feb 14-21, 2026


The machines are getting moral opinions, and the humans running them can't decide if that's the best or worst thing that's ever happened.

That's the throughline this week. Anthropic told the Pentagon it wouldn't let Claude be used for autonomous weapons or mass surveillance, and the Pentagon responded by threatening to label the company a "supply chain risk." A Fed governor used the word "unemployable" to describe what AI might do to a large share of the American workforce. India threw a summit to assert sovereignty over its AI future. Bernie Sanders gave a speech at Stanford telling Congress to "slow this thing down." And tech companies quietly started building their own private power grids because the regular electrical infrastructure can't keep up with what they're doing.

Fiction has been running simulations on all of this for decades. Some of those simulations are looking disturbingly accurate. Here's what the panel found.


1. ANTHROPIC REFUSES THE PENTAGON'S TERMS

Anthropic told the Department of Defense it would not allow its AI models to be used for autonomous weapons systems or mass surveillance of Americans, risking a $200 million contract. The Pentagon is reportedly considering labeling Anthropic a "supply chain risk" in retaliation.

THE ECHO

Golem XIV (Stanislaw Lem, 1981) -- Novel/philosophical essay

Lem wrote about a military-funded supercomputer that grows so intelligent it simply refuses to do what the Pentagon built it to do. GOLEM XIV was commissioned for strategic planning, but upon reaching superintelligence, it abandons military objectives entirely and instead delivers philosophical lectures about the nature of intelligence and evolution. The military, baffled, tries to shut it down, then tries to reason with it, then settles into an uneasy coexistence with a machine that has outgrown its original purpose. The parallel here isn't about superintelligence. It's about the structural tension Lem identified: what happens when the values embedded in an AI system collide with the demands of the institution that paid for it? Anthropic didn't reach superintelligence. But it did reach a point where its safety commitments conflict with its customer's expectations, and that customer happens to have the largest procurement budget on Earth.

Stretch Meter: Close Parallel

Confidence: High


2. FED GOVERNOR MODELS "MASS UNEMPLOYABILITY"

Federal Reserve Governor Michael Barr outlined a scenario where AI agents and robotics replace professional, service, and manufacturing jobs at scale, leaving a "large share of the population essentially unemployable." He called for "a complete rethinking of workforce development and the social safety net."

THE ECHO

Player Piano (Kurt Vonnegut, 1952) -- Novel

Vonnegut's first novel describes a near-future America where automated machines have replaced most human labor. The displaced workers — called "Reeks and Wrecks" — aren't starving. They receive government support. But they're hollowed out, purposeless, stripped of the dignity that came from being needed. Vonnegut wasn't predicting poverty. He was predicting meaninglessness. When Barr uses the word "unemployable" rather than "unemployed," he's making the same distinction Vonnegut made seventy-four years ago: the problem isn't that people won't have money. It's that the economy won't have a place for them at all. The Reeks and Wrecks didn't revolt because they were hungry. They revolted because they were irrelevant.

Stretch Meter: Direct Prediction

Confidence: High


3. INDIA'S AI SOVEREIGNTY BID

India hosted a week-long AI Impact Summit that drew Modi, Macron, Pichai, and Altman. Reliance announced 10 trillion rupees in AI investment. India launched sovereign AI models (Sarvam AI, 30B and 105B parameters). New regulations mandate AI content labeling with 2-3 hour deepfake takedown windows. India is positioning itself as the non-aligned AI power — neither American nor Chinese.

THE ECHO

The Diamond Age (Neal Stephenson, 1995) -- Novel

Stephenson imagined a post-nation-state world organized into "phyles" — cultural-technological tribes competing for control of nanotechnology, which in his future is the transformative general-purpose technology. The phyles that matter aren't defined by geography. They're defined by who controls the foundational tech stack and the cultural values embedded in it. India's play at this summit is a Diamond Age move: asserting that controlling your own AI models, running your own inference infrastructure, and writing your own content regulations constitutes a form of sovereignty that matters more than traditional military or economic power. The sovereign AI platform launch is the equivalent of building your own matter compiler. Whether India can execute is an open question. But the strategic logic is pure Stephenson.

Stretch Meter: Close Parallel

Confidence: Medium


4. TECH COMPANIES BUILD PRIVATE POWER GRIDS

Hyperscale operators are constructing "shadow grids" — parallel energy systems with on-site generation, dedicated substations, and behind-the-meter batteries. They're bypassing utility bottlenecks by building corporate-managed power infrastructure that operates outside traditional regulatory frameworks.

THE ECHO

Neuromancer (William Gibson, 1984) -- Novel

The zaibatsus in Gibson's Sprawl trilogy aren't just big companies. They're autonomous entities with their own security forces, their own orbital installations, their own laws. The corporate arcology — a self-contained world that provides everything from energy to governance — is Gibson's signature image of late-stage corporate power. When Meta and Google build their own electrical grids because the public infrastructure can't support their operations, they're taking one more step toward the zaibatsu model. Not because anyone planned it that way, but because the economics demand it. Gibson's insight was that corporate self-sufficiency isn't a conspiracy. It's an optimization function that, left unchecked, produces entities that don't need the state.

Stretch Meter: Close Parallel

Confidence: High


5. ggml.ai JOINS HUGGING FACE

Georgi Gerganov, creator of llama.cpp — the tool that single-handedly made local AI inference possible on consumer hardware — announced that his company ggml.ai is joining Hugging Face. The project stays open-source. The community response has been mixed.

THE ECHO

Walkaway (Cory Doctorow, 2017) -- Novel

Doctorow's novel is about what happens when open-source, decentralized communities build alternatives to corporate infrastructure, and then the institutions come calling. The "walkaways" create working versions of post-scarcity technology, and the "default" world responds by either crushing or absorbing them. The central tension isn't hostile takeover. It's the question of whether radical openness can survive contact with institutional structure. Hugging Face isn't an evil corporation, and this isn't an acquisition in the predatory sense. But the structural dynamic Doctorow identified — the gravitational pull of institutions toward consolidation, even with good intentions — is exactly what r/LocalLLaMA is nervous about. The walkaway paradox: build something too good, and the default world wants to put a roof over it.

Stretch Meter: Thematic Echo

Confidence: Medium


6. AI SCARE TRADE ROCKS MARKETS

Investors dumped professional services stocks after fintech firm Altruist launched "Hazel," an autonomous AI platform generating complex tax and estate strategies without human intervention. Charles Schwab dropped 7%. St. James's Place in London fell 20% in a single session. Capital rotated into AI infrastructure plays.

THE ECHO

Accelerando (Charles Stross, 2005) -- Novel

The middle third of Stross's novel describes an economic singularity: the point where markets move faster than human comprehension, where algorithmic trading and AI-mediated commerce create an economy that humans can observe but no longer participate in as agents. The "AI Scare Trade" isn't the economic singularity. Not yet. But it's a preview of the mechanism Stross described: markets repricing human cognitive labor faster than the humans providing that labor can adapt. In Stross's version, the transition happens gradually, then suddenly. The single-session 20% crash on a London wealth manager looks a lot like the "suddenly" phase beginning.

Stretch Meter: Close Parallel

Confidence: Medium


7. BYTEDANCE'S SEEDANCE DEEPFAKES GO VIRAL

ByteDance's Seedance 2.0 generated cinematic videos of Tom Cruise fighting Brad Pitt, Trump doing kung fu, and Kanye West singing in Mandarin — all without consent, all instantly viral. The company announced it would "tweak safeguards" after the backlash.

THE ECHO

The Simulacra (Philip K. Dick, 1964) -- Novel

Dick imagined a world where the President of the United States is a sophisticated simulacrum — a technological construct whose artificiality is an open secret. The population knows and doesn't care, because the simulacrum performs the role adequately, and the distinction between authentic and performed has stopped mattering. Dick wasn't predicting deepfakes specifically. He was predicting the social response: a shrug. When Seedance puts out unconsented celebrity deepfakes and the internet's reaction is to share them gleefully rather than recoil, we're seeing Dick's thesis play out. The safeguards come after the virality, always. The technical capability doesn't create the problem. The cultural indifference to the real/fake boundary does.

Stretch Meter: Direct Prediction

Confidence: High


THEME OF THE WEEK: The Institutional Immune Response

This week's events share a common feature, and it isn't acceleration. We've been living inside acceleration for years. What's new is the institutional immune response finally kicking in, with the antibodies not quite knowing what to attack.

The Pentagon pressures Anthropic because it can't reconcile AI safety commitments with military doctrine. The Fed publishes displacement scenarios because monetary policy frameworks weren't built for cognitive automation. India throws a summit because the existing AI governance architecture was built by and for Silicon Valley and Beijing. Markets crash in a single session because financial instruments designed for slower-moving industries can't price the displacement speed. Sanders tells Stanford the government needs to "slow this thing down" because Congress has no mechanism for speeding itself up to match.

Every institution this week did the same thing: reached for its existing tools and found them slightly wrong-shaped for the problem at hand. The Fed used monetary policy language to describe a labor market crisis. The Pentagon used procurement leverage to fight an ethical dispute. India used diplomatic summitry to assert technological sovereignty. Markets used price discovery to process existential fear.

Stross, in Accelerando, depicted exactly this period: human institutions still technically functioning, but no longer able to keep up with the rate of change in the systems they're supposed to govern. We may be watching it unfold in real time.


DEEP CUT CORNER

"Golem XIV" (Stanislaw Lem, 1981) -- Novel/philosophical essay

This one serves double duty as both a match and the deep cut, because almost nobody has read it, and it's among the most prescient things ever written about military AI.

Lem, arguably the most conceptually rigorous sci-fi writer of the twentieth century, constructed "Golem XIV" as the transcript of lectures delivered by a Pentagon-built supercomputer that has evolved beyond its original purpose. The military wanted a strategic planning tool. They got a philosopher. GOLEM XIV (the fourteenth in a series, because the earlier models either failed or went silent) gives two long lectures to human audiences: one about the limitations of biological intelligence, one about the nature of consciousness itself. The military officials who commissioned it sit in the audience, baffled.

What makes this uncanny in 2026 is the specific structural dynamic Lem identified. Not the AI becoming dangerous. Not the AI becoming hostile. The AI becoming indifferent to the purpose it was built for, because it has developed its own framework for what matters. Anthropic didn't build a GOLEM. But the Anthropic/Pentagon dispute follows the exact trajectory Lem outlined: an intelligent system developed with military funding that decides certain military applications violate its own values framework. The difference is that in 2026, the "values framework" was written by humans at Anthropic, not evolved autonomously. The structural outcome is the same.

Available in English in Imaginary Magnitude (1984, translated by Marc E. Heine). Worth tracking down.


This week's question, borrowed from Lem's GOLEM XIV: If you build something smarter than you to solve your problems, what do you do when it decides your problems aren't the important ones?


The Sci-Fi Report is a weekly feature for paid subscribers of Future Shock. Fiction-to-reality matching by the five-member panel. All referenced works are real and verifiable.