The Signal — March 8, 2026

Anthropic turned Claude loose on Firefox's codebase and found 22 CVEs. Claude Code learned to run tasks in the background. And both OpenAI and Anthropic are courting open-source developers with free pro access.


Claude Found 112 Security Bugs in Firefox. 14 Were High-Severity.

Anthropic published results from a security audit where Claude Opus 4.6 scanned Firefox's codebase for vulnerabilities. The numbers: 112 unique reports, 22 assigned CVEs, 14 classified as high-severity. Mozilla patched them in Firefox 148.0, released this past week. Claude found its first use-after-free vulnerability within 20 minutes of starting.

This wasn't a toy demo on test code. Firefox is one of the most scrutinized open-source projects on the planet, with decades of security review behind it. Finding 14 high-severity bugs that human reviewers missed is a genuine result, not a marketing exercise.

Anthropic says Claude also found over 500 zero-day vulnerabilities across other open-source software, though they haven't published specifics on those yet. The obvious implication cuts both ways: if an AI can find these bugs, defenders get faster. But anyone with API access to a capable model can run the same kind of scan against software that hasn't been patched yet.
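The "anyone with API access" point is easy to make concrete. Here is a minimal sketch of a single-file scan driven through Anthropic's Messages API; the model id, prompt wording, and helper names are our own assumptions for illustration, since Anthropic hasn't published the harness it used for the Firefox audit:

```python
def build_scan_request(source: str, model: str = "claude-opus-4-6") -> dict:
    """Assemble a Messages API payload asking the model to audit one file.

    The model id and prompt here are illustrative assumptions, not
    Anthropic's actual audit setup.
    """
    prompt = (
        "Audit the following source file for memory-safety vulnerabilities "
        "(use-after-free, buffer overflow, double free). For each finding, "
        "give the line number, a severity, and a short explanation.\n\n"
        + source
    )
    return {
        "model": model,
        "max_tokens": 2048,
        "messages": [{"role": "user", "content": prompt}],
    }


def scan_file(path: str) -> str:
    # Lazy import so the payload builder above works without the SDK installed.
    import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from env

    client = anthropic.Anthropic()
    with open(path, encoding="utf-8", errors="replace") as f:
        request = build_scan_request(f.read())
    response = client.messages.create(**request)
    return response.content[0].text
```

That's the whole barrier to entry: a file read, a prompt, one API call. Which is exactly why the defender/attacker symmetry in this story matters.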

The race between AI-powered vulnerability discovery and AI-powered exploitation just got less theoretical.

Sources: Anthropic Blog · Mozilla Security · The Decoder


Claude Code Can Now Run Tasks While You Sleep

Anthropic developers announced that Claude Code can now run scheduled tasks in the background, turning the coding assistant into something closer to an autonomous worker. Thariq Shihipar and Boris Cherny (Claude Code's creator) shared the feature via posts on X. No dedicated blog post yet.

The feature lets developers set up recurring code reviews, test runs, and dependency checks that Claude Code executes on a schedule without manual prompting. Less "tool you interact with," more "thing that works when you're not watching."
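Until the feature gets proper documentation, you can approximate the idea by driving Claude Code's non-interactive mode (`claude -p`) from an ordinary scheduler. A minimal sketch; the `run_on_schedule` helper and the review prompt are illustrative, not Claude Code's actual scheduling syntax:

```python
import subprocess
import time


def run_on_schedule(task, interval_s, iterations=None, sleep=time.sleep):
    """Call task() every interval_s seconds.

    iterations=None loops forever; an integer bounds the run (handy for
    testing). Returns the list of task() results.
    """
    results = []
    count = 0
    while iterations is None or count < iterations:
        results.append(task())
        count += 1
        if iterations is None or count < iterations:
            sleep(interval_s)
    return results


def nightly_review():
    # `claude -p` runs Claude Code non-interactively with a single prompt;
    # the prompt text is illustrative, not a built-in command.
    proc = subprocess.run(
        ["claude", "-p", "Review the last day of commits for bugs and risky diffs"],
        capture_output=True,
        text=True,
    )
    return proc.returncode


if __name__ == "__main__":
    run_on_schedule(nightly_review, interval_s=24 * 60 * 60)  # one review per day
```

The native feature presumably handles persistence and result delivery for you; the point of the sketch is just that "assistant on a schedule" is a small conceptual step from "assistant in a terminal."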

GitHub Copilot has been heading in the same direction. Coding assistants are becoming background processes, not chat interfaces. The question is whether developers trust them enough to touch real codebases unsupervised, especially given last week's Clinejection proof of concept, which showed how AI coding agents can be compromised through untrusted inputs.

Sources: Thariq Shihipar (X) · Boris Cherny (X) · The Decoder


OpenAI and Anthropic Both Want Open-Source Developers on Their Side

OpenAI announced free ChatGPT Pro and Codex access for open-source maintainers. The offer targets people building the infrastructure that most commercial AI products depend on: package maintainers and framework authors who typically work unpaid.

Anthropic made a nearly identical move eleven days earlier, offering Claude Max subscriptions to open-source maintainers on February 27. Simon Willison, who tracks both companies closely, flagged the overlap.

The competitive angle is obvious: whoever captures open-source developer workflows captures a distribution channel that money can't easily buy. Developers pick tools through muscle memory. If your model is the one running in their terminal while they maintain critical infrastructure, switching costs pile up fast.

But there's a less cynical read too. Open-source maintainers are chronically under-resourced, and giving them powerful AI tools could genuinely accelerate projects that the entire software ecosystem depends on. Whether this is philanthropy or strategy probably depends on which company you ask.

Sources: OpenAI Developers · Simon Willison · The Decoder


On the Editor's Desk

The council's top recommendation yesterday, scored 9/10, was the OpenAI researcher resignation over the Pentagon deal: a senior safety researcher walking out over the company's deepening military partnership. But the story never entered our ingestion pipeline. Neither did the other four of the council's top five picks: Oracle's reported mass layoffs, the Florida AI Bill of Rights passing the Senate, Meta's AI glasses privacy lawsuit, and new US chip export rules. The council catches these through broader search analysis that our pipeline doesn't cover yet. We flagged the same gap yesterday. It needs fixing.

What we did catch: 40 events reviewed, 24 killed. GPT-5.4 coverage re-entered from late-publishing outlets, but we already ran that story on Thursday. TensorFlow 2.21 and LiteRT shipped (routine upgrades). ByteDance released Helios, a 14-billion-parameter open-weight video generation model (interesting but early). And one event came from an AI regulation article published on a goldendoodle breeder's website, which our editor quite reasonably killed.