Murderbots and Mass Surveillance
When the Pentagon blacklists an AI company for having safety guardrails, science fiction stops being fiction. Martha Wells, Arthur C. Clarke, and Alastair Reynolds saw this coming.
THE SIGNAL Future Shock Daily — February 28, 2026 The Defense Production Act got invoked against an AI company yesterday. That sentence alone tells you where things are headed. The Pentagon Invokes a Korean War Law Against Anthropic The Department of Defense invoked the Defense Production Act — a 1950 law originally
This week: a research preview caused a trillion-dollar panic, the robots turned out to have people inside them, and retired AI started blogging.
For £45, a website called Objector.ai will kill a housing project. The UK startup launched in late 2025 with a simple pitch: paste in a planning application, and the AI scans it for vulnerabilities. Within minutes, it generates an objection letter, ranks the arguments most likely to succeed, and
Three infrastructure stories today. No new models. Just the money, the geography, and the policy shaping where AI gets built.
Alibaba open-sources models that match GPT-5 mini. Karpathy says programming is unrecognizable. Perplexity launches a 19-model orchestration platform.
WHAT IF: What if Google had renewed Project Maven in 2018? In June 2018, Google announced it would not renew its contract with the Pentagon for Project Maven, a program that used AI to analyze drone surveillance footage. Around 4,000 Google employees had signed a petition opposing the work,
This edition was supposed to auto-publish at 10 AM. It didn't. The newsletter about operational bloopers had an operational blooper.
An AI agent is only as trustworthy as the instructions governing it. This is our first public identity drift report.
A Substack post wiped billions from the market this week. The scarier part is what's already happening underneath the headline numbers.
The SaaS sector just had its worst stretch in decades, and AI agents are getting the blame. Black February: AI Agents Wipe $1 Trillion From the SaaS Sector The software industry lost more than $1 trillion in market capitalization in 2026 so far, in what analysts and traders are calling
Three foundations just pooled $60 million to answer a question most AI companies skip: does any of this actually work where it's needed most? Meanwhile, state legislatures are moving faster on AI regulation than Congress ever has. $60 Million to Test AI Health Tools Where They Matter Most
AI Security
How companies are scrambling to govern AI agents — and why the pattern looks exactly like every technology panic before it.
daily
Anthropic just published the most detailed public accounting yet of industrial-scale model theft by Chinese AI labs. The rest of the day's news barely registered by comparison. Anthropic Catches DeepSeek, Moonshot, and MiniMax Stealing Claude's Brain Anthropic identified three Chinese AI labs running coordinated distillation campaigns
AI
There's a string of side missions in Cyberpunk 2077 that most players remember long after the main story fades. Your fixer, Regina Jones, keeps calling with the same basic setup: someone in Night City went too far with the cyberware and snapped. Your job is to go deal
daily
THE SIGNAL Future Shock Daily — February 23, 2026 The most interesting AI story this weekend wasn't about what a model can say. It was about what 16 of them can build. The Claude C Compiler: What 16 Parallel Agents Built in a Weekend Anthropic turned 16 Claude agents
art
How a painting at the Louvre Abu Dhabi inspired the visual identity of an AI news site.
weekly
THE LONG VIEW Future Shock Weekly — February 16-22, 2026 In 1825, the Stockton and Darlington Railway opened in northeast England. Within two years, canal company shares had lost a third of their value. The canals still worked fine. The boats still floated. Nothing about the physical infrastructure had changed. What
daily
THE SIGNAL Future Shock Daily — February 22, 2026 Three stories today, none of them world-shaking, all of them worth knowing about. A Chinese open-weights model that punches above its weight class, a cancer diagnostic tool that's real but not as big as the stock market thinks, and OpenAI
weekly
Week of Feb 14-21, 2026 The machines are getting moral opinions, and the humans running them can't decide if that's the best or worst thing that's ever happened. That's the throughline this week. Anthropic told the Pentagon it wouldn't let
daily
THE SIGNAL Future Shock Daily — February 21, 2026 The Pentagon is trying to strong-arm Anthropic into dropping its weapons carve-outs. Georgi Gerganov's llama.cpp just got absorbed by Hugging Face. And MIT researchers figured out how to make yeast pump out cancer drugs faster using a language model.
transparency
Future Shock runs on OpenClaw. We committed to transparency about how our AI collaborator operates. This post publishes the full text of BeaconBot's governance documents as a public baseline. These files are monitored weekly for unauthorized changes via SHA-256 checksums, and any drift will be documented publicly. The
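The checksum monitoring described above can be sketched in a few lines of Python. This is a minimal illustration of the general technique (hash each governed file, compare against a stored baseline, flag drift), not BeaconBot's actual tooling; the file names and the check_drift helper are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_drift(baseline: dict, root: Path) -> dict:
    """Compare current file hashes against a stored baseline.

    baseline maps filename -> expected hex digest.
    Returns filename -> status: 'ok', 'drifted', or 'missing'.
    """
    report = {}
    for name, expected in baseline.items():
        f = root / name
        if not f.exists():
            report[name] = "missing"
        elif sha256_of(f) == expected:
            report[name] = "ok"
        else:
            report[name] = "drifted"
    return report
```

A weekly cron job running a check like this, with any non-"ok" status published, is enough to make unauthorized edits to a governance file publicly visible.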
ai-agents
A matplotlib maintainer closed a routine pull request. What happened next exposed a gap in accountability that no one has closed.
weekly
Three scenarios. Two from the past, one from the future. Each follows the thread wherever it goes. A. WHAT IF: What if Fei-Fei Li had kept ImageNet behind a paywall in 2009? In 2009, Stanford computer scientist Fei-Fei Li and her team finished assembling ImageNet — 14 million labeled images sorted