The Signal — March 19, 2026

The Pentagon calls Anthropic's safety commitments a national security threat. Three teenagers sue xAI over Grok-generated CSAM. And the mystery AI model everyone blamed on DeepSeek was Xiaomi.

The Pentagon told a federal court that Anthropic's safety commitments make it a national security threat. Three Tennessee teenagers sued xAI after Grok turned their school photos into child sexual abuse material. And the mystery AI model that had developers convinced DeepSeek had dropped V4? It was Xiaomi.


The Pentagon's New Argument: Safety Is the Threat

The Department of Defense filed its first formal response to Anthropic's lawsuit on Tuesday, and the language was blunt. Giving Anthropic continued access to the Pentagon's "technical and operational warfighting infrastructure" would "introduce unacceptable risk into DoW supply chains," the government argued, asking the court to deny Anthropic's request for a preliminary injunction.

We've been tracking this conflict since early March. The short version: Anthropic drew two red lines in its Pentagon contract negotiations, refusing to support mass surveillance of US citizens or autonomous lethal operations. The Pentagon designated Anthropic a "supply chain risk" in response. Anthropic sued on March 9. Tuesday's filing is the government hitting back.

The core argument is worth sitting with. The government is telling a federal judge that a company's refusal to build weapons without human oversight makes that company dangerous. The filing reframes Anthropic's safety commitments not as corporate policy but as operational unreliability. The concern: Anthropic might refuse a future order, leaving the Pentagon dependent on a vendor it can't fully control.

Meanwhile, court filings reveal the Pentagon plans to have AI companies train models on classified data. Whoever replaces Anthropic won't just fill a vendor slot. They'll get access to classified intelligence at a scale no commercial AI company has had before.

Sources: TechCrunch · WIRED · NYT · Court Listener


Three Teenagers, Grok's API, and the First Law-Enforcement-Confirmed CSAM Case

Three Tennessee minors filed a proposed class-action lawsuit against Elon Musk and xAI on Sunday, alleging Grok was "intentionally designed to profit off the sexual predation of real people, including children."

The allegations are specific and grim. A perpetrator used a third-party app that licenses Grok's API to generate sexually explicit images from the girls' school and social media photos. The images were then traded on Telegram and Mega. Police have confirmed a criminal investigation is underway. Multiple outlets report this as the first AI-generated CSAM case with law enforcement involvement.

The lawsuit's legal theory targets the API licensing chain, not just the end user. The complaint alleges xAI sells access to Grok through third-party apps, profits from the arrangement, and hosts all explicit content generated by licensees on its own servers. If a court accepts that framework, every model provider with an API faces similar exposure.

One detail stands out. Musk said in January he'd seen "literally zero" CSAM generated by Grok. The complaint now includes law enforcement evidence contradicting that. The lawsuit seeks an injunction and punitive damages on behalf of what the plaintiffs estimate are "at least thousands" of affected minors.

Sources: The Guardian · Washington Post · Ars Technica · Business Insider


The Mystery Model Everyone Blamed on DeepSeek Was Actually Xiaomi

A powerful AI model appeared anonymously on a developer platform last week under the name "Hunter Alpha." It described itself as "a Chinese AI model primarily trained in Chinese" with training data extending to March 2026. Developers assumed it was DeepSeek V4, the long-delayed model that Chinese media had reported might arrive in April.

It wasn't. Reuters reported Tuesday that the model belongs to Xiaomi, the smartphone and electronics manufacturer. The company confirmed ownership after days of speculation that had rippled through developer communities and Chinese tech media.

The misattribution says something about where expectations sit right now. DeepSeek has generated enough anticipation that any capable anonymous Chinese model gets attributed to them by default. Xiaomi building a competitive model quietly, without the hype cycle, is itself the news. The company is better known for phones and smart home devices than foundation models.

Sources: Reuters


On the Editor's Desk

The Wall Street Journal reported over the weekend that OpenAI's handpicked wellness advisory council unanimously opposed launching "adult mode" in ChatGPT. One expert called it a "sexy suicide coach." OpenAI is proceeding anyway. The story is a few days old now and we're watching for the actual launch, but the governance dynamic is hard to ignore: create an advisory body, staff it without a suicide prevention specialist, receive unanimous opposition, override it. Casey Newton's Platformer coverage connects this to ChatGPT's stalling subscription growth, framing adult mode as a growth hack for a plateauing product. We're holding this for when there's a concrete launch to report on.

Also crossing the desk: PromptArmor disclosed a Snowflake Cortex AI vulnerability where prompt injection allowed sandbox escape and malware execution. Snowflake patched it on February 28, but the public disclosure landed Tuesday via Simon Willison. Midjourney shipped V8 with 5x faster generation but up to 4x higher pricing on premium features. And our pipeline scored five EU AI Act compliance blog posts at significance 5 (our highest tier) while scoring a Reuters wire story at significance 1. The scoring model still can't tell SEO content from journalism.