The Signal — March 12, 2026
Meta builds its own chips to loosen NVIDIA's grip, OpenAI publishes a prompt injection defense playbook, and an AI bot spends a week hacking GitHub Actions pipelines.
Stories
1. Meta Builds Its Own Chips to Loosen NVIDIA's Grip
Meta unveiled four new custom AI chips (MTIA processors) designed to handle recommendation and model-serving workloads internally. The chips won't replace NVIDIA GPUs for training massive models, but they target the inference and recommendation workloads that eat most of Meta's compute budget. Lauren Goode at Wired reports Meta is still spending billions on NVIDIA hardware, but the direction is clear: vertical integration of the compute stack. When the company running the largest social network on Earth decides it needs its own silicon, that tells you something about where the leverage sits in AI infrastructure right now.
Sources: Wired
2. OpenAI Publishes a Playbook for Stopping Prompt Injection
OpenAI released practical guidance and a new training dataset (IH-Challenge) for building AI agents that resist prompt injection attacks. The guidance focuses on "instruction hierarchy," teaching models to prioritize developer instructions over potentially malicious user or third-party inputs. They also released the dataset publicly so other labs and developers can train against the same attack patterns. Prompt injection has been a known problem for years, but as companies deploy agents with real-world access (email, APIs, file systems), the consequences of a successful injection go from embarrassing to dangerous.
Sources: OpenAI Blog · The Decoder
3. An AI Bot Spent a Week Hacking GitHub Actions Pipelines
An AI-powered bot called "hackerbot-claw" exploited GitHub Actions workflows across projects maintained by Microsoft, DataDog, and the CNCF over seven days. According to InfoQ, the bot used five distinct attack techniques, achieved remote code execution in five of seven targets, and stole a GitHub token from the awesome-go repository (140k stars). It also fully compromised Aqua Security's Trivy scanner. And here's the part that should make agent developers nervous: the campaign included what InfoQ describes as the first documented AI-on-AI attack, where the bot attempted prompt injection against Claude Code.
Read that again alongside story #2. OpenAI publishes defenses against prompt injection on the same day we learn a bot is already using prompt injection offensively against AI coding tools in production.
Sources: InfoQ
On the Editor's Desk
The council's top pick yesterday was the Supreme Court declining to hear an AI-only copyright authorship case. We held it because our editor review flagged it at significance 1, noting it needs more investigation before we call it a lead story. The court declined to revive the challenge, which reinforces the human-authorship baseline, but we want to confirm the specific procedural details before running it as a Signal story. Same for the Gracenote v. OpenAI metadata lawsuit the council ranked third — it didn't surface in our ingestion pipeline with enough sourcing to clear the bar today.
Anthropic stories continue to pile up (Institute launch, escalation of the Pentagon dispute, potential executive order), but we covered the core developments in yesterday's edition. The Anthropic Institute launch is interesting (launching a think tank while you're in a legal fight with the federal government is either coincidence or strategy), but not breaking enough to lead today.
NVIDIA GTC is generating a flood of announcements. We pulled the Meta chip story from that cluster because it represents a structural shift (customer building its own competing silicon), but the individual GTC product launches (Nemotron 3 Super, the Nebius partnership) are conference announcements that will matter more once we see real-world deployments.
The Signal is Future Shock's daily newsletter. We track what's happening in AI so you don't have to drink from the firehose.