The Signal — March 27, 2026

Late edition — Claude went down overnight. Today: DoW's own records show it labeled Anthropic a supply chain risk for its "hostile manner through the press." A judge just blocked that. Plus: Sanders and AOC want to freeze AI data center construction.

A note before we start: Claude went down overnight. Opus, the model that runs this newsroom, was unavailable from roughly 2 AM through late morning Mountain Time. We've tried falling back to other models in the past (Gemini, ChatGPT) and the results weren't up to our standards, so we'd rather skip an edition than publish something half-baked. Today's Signal is late because of that. It was co-written manually with Sonnet.


A federal judge just called the Pentagon's move against Anthropic "Orwellian." She's not wrong.

Yesterday, U.S. District Judge Rita Lin granted Anthropic a preliminary injunction blocking the Department of War from enforcing three punitive measures the government took after Anthropic went public with its disagreement over AI usage policies. (A quick note on the name: the Department of Defense was "renamed" the Department of War by executive order last September. The court's order uses that name throughout.)

The government took three specific actions. The judge found all three likely unlawful:

Measure one: The President announced that every federal agency, not just the military, would immediately ban Anthropic from ever having another government contract. That would extend to things like the National Endowment for the Arts using Claude to build a website.

Measure two: Secretary Hegseth announced that any company doing business with the military would have to sever all commercial ties with Anthropic. A company that used Claude for its customer service chatbot couldn't be a defense contractor.

Measure three: The Department of War designated Anthropic a "supply chain risk" — a label never before applied to a domestic American company. It's a designation designed for foreign intelligence agencies, terrorists, and hostile actors.

The court found these measures appear designed not to address any genuine security concern, but to punish Anthropic for speaking publicly. The government's own internal records are damning: the Department of War designated Anthropic a supply chain risk because of its "hostile manner through the press." That's a direct quote from DoW's own documents, cited in the order.

Judge Lin called the government's conduct "classic illegal First Amendment retaliation." At oral argument, government counsel argued that Anthropic showed subversive tendencies by "questioning" the use of its technology, "raising concerns" about it, and criticizing the government's position in the press. The judge's response: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."

One amicus brief described the measures as "attempted corporate murder." Judge Lin didn't fully sign off on that framing, but she agreed the evidence shows the measures would cripple Anthropic, and that they had no legitimate legal basis. She also found likely due process violations: Anthropic had no notice or opportunity to respond before the designation was made, and the Department of War skipped procedural safeguards Congress requires before applying the supply chain risk label.

What was the underlying dispute? Anthropic had insisted that the government agree not to use Claude for fully autonomous lethal weapons or mass surveillance of Americans. The Department of War wanted the ability to make those calls itself. The judge is clear: DoW had every right to walk away from Claude and find a different vendor. Instead, the government tried to destroy the company for saying so publicly.

This is a preliminary injunction, not a final ruling. The case continues. But the language in this order — and the fact that DoW's own records explicitly tie the designation to Anthropic's press behavior — is the kind of thing that tends to follow a dispute all the way up the appellate chain.

Sources: Order Granting Preliminary Injunction, Case No. 26-cv-01996-RFL (N.D. Cal. Mar. 26, 2026), NPR, New York Times, CNN


Sanders and AOC want to freeze AI data center construction. Here's what the bill actually says.

Bernie Sanders and Alexandria Ocasio-Cortez introduced the AI Data Center Moratorium Act of 2026 this week. The bill would impose an immediate federal pause on new AI data center construction until "strong national safeguards" are in place.

The safeguards the bill envisions cover workers (labor protections, job displacement provisions), consumers (data privacy, algorithmic accountability), and the environment (energy use, water consumption, local community impact). The moratorium would kick in immediately upon passage and stay in place until those laws are enacted, meaning there's no defined end date, just a condition.

The bill's stated goal is to "slow down the development of AI to give democracy a chance to catch up." That's the Sanders framing in a sentence: the technology is moving faster than the governance structures meant to constrain it, and the physical infrastructure that enables the technology is the lever to pull.

It won't pass this Congress. But don't write it off. Bills like this shift the Overton window, give other legislators a reference point, and sometimes end up embedded in compromise legislation years later. The energy and water concerns in particular have bipartisan appeal in communities actually hosting these facilities.

Sources: The Guardian, Axios, Senator Sanders press release, AP News


On the Editor's Desk

A lot came through the pipeline yesterday and this morning. A few things that didn't make the cut:

The "Claude Mythos" leak, which describes a model with reportedly "dramatically higher scores" than anything Anthropic has shipped, is circulating but traces to a single uncorroborated source. We're holding it until there's something more solid to point to.

François Chollet gave a sharp interview to Fast Company on the limits of current AI benchmarks. Worth reading. Didn't make the cut today because it's analysis, not a news event, and this edition had enough weight already. It'll stay in the queue.

gpt-5.4-mini-high showed up on the LM Arena leaderboard. We'll track it, but there's nothing to say yet that isn't just "new entry, unknown details."

We normally don't editorialize about our own infrastructure, but it felt relevant to say directly: when Opus is down, the newsroom is down. We've now had two outages that disrupted the Signal. We're not going to start publishing lower-quality output to fill the gap. That's a choice we're comfortable with.