The Signal — March 23, 2026

Tomorrow's hearing in San Francisco could reshape AI procurement for years. Today: the courtroom evidence, a compliance scandal, and the gig workers training the robots that will replace them.


Anthropic's Sworn Declarations Set Up Tomorrow's Hearing

Anthropic filed two sworn declarations in a San Francisco federal court on Friday, directly challenging the Pentagon's claim that the company poses an "unacceptable risk to national security." Sarah Heck, Anthropic's Head of Policy, and Thiyagu Ramasamy, its Head of Public Sector, both testified under oath that no Anthropic employee ever sought approval authority over military operations. That allegation is the central pillar of the government's supply-chain risk designation.

The most damaging detail for the government sits in a single email. Under Secretary of War Emil Michael wrote to CEO Dario Amodei on March 4 saying the two sides were "very close" on the exact issues now cited as national security threats. That was one day after the Pentagon formalized its supply-chain risk designation on March 3. Days later, Michael publicly stated there was "no active Department of War negotiation with Anthropic."

U.S. District Judge Rita Lin hears arguments tomorrow, Tuesday March 24, on whether to grant Anthropic an injunction halting the ban. If she does, the ruling would establish that the executive branch can't unilaterally blacklist AI vendors through supply-chain designations without due process. That's bigger than Anthropic.

Sources: TechCrunch · Federal News Network · WIRED


Compliance Startup Delve Accused of Faking 494 SOC 2 Audits

Delve, a Y Combinator-backed compliance automation startup that raised $32 million, is facing accusations of systematically fabricating SOC 2 audit reports for nearly 500 clients. An anonymous investigator publishing as "DeepDelver" on Substack detailed how leaked documents showed template-identical reports across different companies. The investigator called it "structural fraud that invalidates the entire attestation."
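How would template-identical reports even be spotted at scale? One common approach, sketched below, is to normalize away the client-specific details (dates, figures) and hash what remains: if two "independent" attestations collapse to the same fingerprint, they came from the same template. This is an illustrative sketch of that general technique, not DeepDelver's actual method, and the sample strings are invented.

```python
import hashlib
import re

def fingerprint(report_text: str) -> str:
    """Hash a report's boilerplate after masking client-specific details.

    Digits (dates, control counts) are masked and whitespace collapsed,
    so reports that differ only in those details hash identically.
    Illustrative only -- real forensics would normalize far more.
    """
    normalized = re.sub(r"\d", "#", report_text.lower())  # mask dates/figures
    normalized = re.sub(r"\s+", " ", normalized).strip()  # collapse whitespace
    return hashlib.sha256(normalized.encode()).hexdigest()

# Two attestations that differ only in numbers and dates (invented examples):
a = fingerprint("Acme Corp passed 120 of 120 controls on 2026-01-15.")
b = fingerprint("Acme Corp passed 118 of 118 controls on 2026-02-02.")
c = fingerprint("A genuinely distinct report with its own findings.")
print(a == b)  # True: same template, different fill-ins
print(a == c)  # False: different underlying text
```

A matching fingerprint across hundreds of clients is exactly the "structural fraud" pattern the investigator describes: the attestation text was never specific to the company being audited.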

TechCrunch reported the story on Saturday. Delve published a response denying it fakes compliance reports, calling the Substack allegations "inaccurate." But the Reddit thread in r/startups and the original investigator's evidence, including leaked Google spreadsheets from Delve's internal pipeline, have drawn attention from companies currently relying on Delve for their SOC 2 and ISO 27001 certifications.

SOC 2 compliance isn't a nice-to-have. Enterprise customers require it before signing vendor contracts. If those reports are fabricated, nearly 500 companies may be operating under the assumption they're compliant when they aren't. That's not just a Delve problem. It's a trust problem for the entire compliance automation industry.

Sources: TechCrunch · DeepDelver (Substack) · Delve Response


DoorDash Turned Its Gig Workers Into AI Trainers

DoorDash launched a new app called Tasks that pays delivery couriers to submit video clips of themselves performing household activities: washing dishes, moving objects across a table, holding clean items up to a camera. The footage trains AI and robotics models for DoorDash and its partners in retail, insurance, hospitality, and tech.

WIRED's first-hand review found the experience surreal. The onboarding task: film yourself moving three objects across a table. Your reward for that first job isn't cash. It's a free body-mount camera so you can shoot more training data. Bloomberg reported the app launched March 19 and is already banned in California, New York City, Seattle, and Colorado, likely due to privacy and employment legislation in those markets.

The circular logic is hard to ignore: the workers whose delivery jobs face eventual automation are the ones now training the robots to do it. DoorDash gets cheap training data. Workers get a few extra dollars. The robots inch closer to replacing them.

Sources: WIRED · TechCrunch · Bloomberg


On the Editor's Desk

The White House released its first formal AI legislative framework on Friday, asking Congress to preempt state AI laws with a single federal standard. The core ask is aggressive: shield AI developers from liability for downstream use of their models, streamline data center permitting, and override state-level regulations like Colorado's and California's. But the framework is a wish list, not legislation. Congress has already rejected preemption language twice this session. We'll cover it when the text moves from talking points to markup.

Also circulating: a supply-chain attack dubbed CanisterWorm hit 47 npm packages after attackers compromised the Trivy security scanner's GitHub Actions. It's the first documented use of blockchain-based command-and-control infrastructure for malware. The irony of a vulnerability scanner becoming the vulnerability deserves its own piece, and we're working on one.
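For teams wondering whether they pulled a poisoned package, the standard first move is to check the lockfile against the published advisory. A minimal sketch of that check, against the npm v2/v3 `package-lock.json` layout, is below. The package names and versions here are placeholders: the newsletter item doesn't list the 47 affected packages, so `COMPROMISED` stands in for whatever advisory list eventually ships.

```python
import json

# Placeholder advisory list -- the real CanisterWorm package list is not
# reproduced here, so these names and versions are invented examples.
COMPROMISED = {
    "example-left-pad": {"9.9.9"},
    "example-scanner-core": {"2.1.0", "2.1.1"},
}

def audit_lockfile(lock: dict) -> list[str]:
    """Return 'name@version' for each locked dependency that matches the
    advisory list, using the npm v2/v3 lockfile's 'packages' map."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/<name>"; the root entry is "".
        name = path.split("node_modules/")[-1]
        if name and meta.get("version") in COMPROMISED.get(name, set()):
            hits.append(f"{name}@{meta['version']}")
    return hits

# Example lockfile fragment (in practice: json.load(open("package-lock.json"))):
lock = {
    "packages": {
        "node_modules/example-left-pad": {"version": "9.9.9"},
        "node_modules/lodash": {"version": "4.17.21"},
    }
}
print(audit_lockfile(lock))  # ['example-left-pad@9.9.9']
```

Exact-version matching like this is only a triage step; a compromised CI pipeline can also taint transitive builds, which is why scanner-in-the-supply-chain incidents are so corrosive.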

OpenAI reportedly plans to double its workforce from 4,500 to 8,000 by year-end. Standard corporate expansion. Worth tracking, not worth a headline.