The Signal — March 1, 2026
The Anthropic story didn't end Friday. It got worse. Plus: DeepSeek cuts Nvidia out of V4 testing, and OpenAI fires an employee for prediction market insider trading.
Anthropic Is Now a "Supply-Chain Risk" — The Industry Splits
The Trump administration escalated its confrontation with Anthropic over the weekend. After Anthropic refused Pentagon demands for unrestricted military access to Claude — specifically autonomous weapons and mass surveillance — President Trump ordered every federal agency to stop using Anthropic technology. Defense Secretary Hegseth then classified the company as a "supply-chain risk to national security," a designation typically reserved for foreign adversaries like Huawei.
Anthropic's response: the designation is "legally unsound," and the company will challenge it in court.
Hours after the Anthropic fallout, OpenAI announced a deal to deploy models on the DoD's classified network. Sam Altman framed it as a middle path — the agreement includes explicit prohibitions on domestic mass surveillance and requires human responsibility for use of force. Altman said he hopes the Pentagon will offer the same terms to every AI company. Whether that's diplomacy or positioning depends on who you ask.
The employee response is the part that doesn't fit the simple narrative. Over 570 Google employees and 93 from OpenAI signed an open letter titled "We Will Not Be Divided," pushing their own companies to maintain the same red lines Anthropic drew. Internal pressure and executive strategy are moving in opposite directions at multiple labs simultaneously.
The legal fight will take months. The precedent it sets — whether the government can compel a company to strip safety restrictions from its own AI — could define the boundary between corporate autonomy and state power for years.
Sources: Reuters · The Guardian · CNBC · NYT · Bloomberg · The Verge · Wired
DeepSeek Gives Huawei Early Access to V4, Cuts Out Nvidia
DeepSeek is about to release V4, its first major model since R1 shook up the industry in January 2025. The detail that matters: V4 has been optimized for Huawei and Cambricon chips, and DeepSeek deliberately excluded Nvidia and AMD from pre-release testing.
This is the export control stress test everyone's been waiting for. U.S. semiconductor restrictions were supposed to keep Chinese AI labs dependent on Western hardware. If V4 performs competitively on domestic Chinese silicon, that theory needs revision.
The timing is political. V4's release comes just before China's Two Sessions parliamentary meetings, where AI is expected to be a central policy topic. DeepSeek has become a national champion — the company Beijing points to when arguing that export controls accelerate domestic innovation rather than blocking it.
The open question is whether Huawei chips can handle training at scale, not just inference. R1 showed you could get competitive results with less compute. V4 will show whether that efficiency translates across hardware architectures.
Sources: Reuters (exclusive)
OpenAI Fires Employee for Prediction Market Insider Trading
OpenAI terminated an employee who used insider knowledge of upcoming model releases to trade on prediction markets. The details are thin — Wired broke the story, TechCrunch confirmed — but the category itself is the news.
Prediction markets around AI capabilities have grown large enough that trading on nonpublic information about model performance, release dates, or benchmark scores is now a recognizable form of misconduct. Two years ago, this market barely existed. Now it's big enough that a major AI lab has to fire someone over it.
The broader pattern: as AI companies become the most consequential actors in tech, the financial ecosystem around them — prediction markets, options, futures on compute costs — is developing its own integrity problems. The SEC hasn't weighed in on AI prediction market trading. That probably changes this year.
Sources: Wired · TechCrunch
On the Editor's Desk
Sixty-two events came through the pipeline today. Fifteen passed. We published three.
The Anthropic/Pentagon story consumed almost all the oxygen — 12 of 15 passed events were coverage of the same confrontation from different outlets. When Reuters, the Guardian, CNBC, ABC, NBC, the NYT, Bloomberg, Al Jazeera, The Verge, Wired, TechCrunch, and NPR all independently confirm the same facts, you don't need a fact check. You need an angle that adds something. Our angle: the employee revolt and the legal precedent, which are under-covered relative to the political theater.
We held the LLM accuracy degradation study (models lose up to 33% accuracy in long conversations) because it's single-source and preliminary. Interesting if it replicates. Not ready to report as established.
Musk's deposition bashing OpenAI in the ongoing lawsuit generated coverage but no new information. He's said all of this before, under oath or otherwise.
The kill pile was heavy on YouTube commentary (seven videos rehashing the Anthropic story) and GitHub trending repos (eight). Neither category is news. The ingestion pipeline's kill rate hit 55% this cycle — a sign that when one story dominates, the noise around it scales up too.