The Signal — March 26, 2026
Two juries hit Meta for a combined $381 million in 48 hours. A supply chain attack compromised credentials across thousands of AI deployments. Arm shipped its own chip for the first time in its 35-year history.
Two juries in two days found Meta liable for harming children. A supply chain attack hit one of the most widely used AI integration tools. And the company that designed nearly every mobile processor on Earth started selling its own silicon.
Two Juries, Two Days, $381 Million
Meta got hit twice this week.
On Tuesday, a New Mexico state court jury ordered Meta to pay $375 million after finding the company willfully violated consumer protection laws by failing to prevent child predators from targeting minors on Facebook and Instagram. The case originated in 2023, when New Mexico Attorney General Raul Torrez ran an undercover operation: a fake 13-year-old's profile his office created was immediately flooded with predatory contact. Jurors found thousands of separate violations. A second phase begins May 4, in which a judge will decide whether Meta must fund public programs to address the damage and implement changes such as effective age verification.
The next day, a Los Angeles jury found Meta and Google's YouTube liable for $6 million in the first-ever US jury trial over social media addiction in children. The plaintiff, a 20-year-old woman identified as KGM, testified she became addicted to YouTube at six and Instagram at nine, leading to depression, self-harm, and a diagnosis of body dysmorphic disorder by age 13. After nearly nine days of deliberation, the jury split liability 70/30, with Meta taking the larger share.
The LA verdict matters less for the dollar amount than for the legal theory behind it. Plaintiffs argued that features like infinite scroll and autoplay were addictive by design, targeting platform architecture rather than content moderation. That strategy sidesteps Section 230 protections entirely. Snap and TikTok settled before the case went to trial.
The plaintiff's attorney, Mark Lanier, called the verdict "a referendum, from a jury, to an entire industry." Both companies plan to appeal. Thousands of consolidated cases in California state courts now have a roadmap to follow.
For AI builders, the product-design liability theory is the one to watch. If juries can find engagement optimization features negligent, the same logic could reach AI systems designed to maximize interaction time. Chatbot companion apps should be paying close attention.
Sources: Reuters · CNBC · The Guardian · NPR · New York Times · BBC
The AI Tool With 97 Million Monthly Downloads Got Backdoored
On March 24, security researchers discovered that LiteLLM, an open-source library used to manage connections across multiple LLM providers, had been injected with credential-stealing malware. Compromised versions 1.82.7 and 1.82.8 were pulled from PyPI, but anyone who installed them may have had API keys for OpenAI, Anthropic, Google, and other providers silently exfiltrated.
LiteLLM is downloaded roughly 97 million times a month. Thousands of companies use it to route requests across LLM providers with automatic fallbacks and cost tracking. It sits between enterprises and their AI infrastructure, which makes a compromise here a master-key problem.
The attack chain was layered. It started on March 19 when attackers calling themselves TeamPCP compromised Trivy, a widely used open-source security scanner maintained by Aqua Security. They exploited a misconfiguration in Trivy's GitHub Actions to steal privileged access tokens. Because many CI/CD pipelines reference version tags rather than pinned commits, the malicious code ran automatically in projects that included Trivy as a security check. LiteLLM was one of them. The attackers used stolen credentials to push poisoned LiteLLM releases directly to PyPI, bypassing the project's normal CI/CD workflow.
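The tags-versus-commits detail is the actionable part of that chain. As a minimal sketch of the countermeasure, assuming nothing beyond stock git, this is one way to resolve a movable tag to the immutable commit worth pinning instead (the repo URL and tag here are illustrative, not taken from the incident):

```python
# Resolve a release tag to the commit it points at, so a pipeline can pin
# the immutable SHA rather than the movable tag. `git ls-remote --tags`
# lists each annotated tag twice: once as the tag object and once peeled
# ("^{}") to the underlying commit.
import subprocess

def tag_to_commit(repo_url: str, tag: str) -> str:
    out = subprocess.run(
        ["git", "ls-remote", "--tags", repo_url],
        capture_output=True, text=True, check=True,
    ).stdout
    refs = {}
    for line in out.strip().splitlines():
        sha, ref = line.split("\t")
        refs[ref] = sha
    # Prefer the peeled entry: for annotated tags, that is the actual commit.
    return refs.get(f"refs/tags/{tag}^{{}}") or refs[f"refs/tags/{tag}"]

if __name__ == "__main__":
    # Illustrative repo and tag, not incident-specific values.
    print(tag_to_commit("https://github.com/aquasecurity/trivy.git", "v0.50.0"))
```

A tag can be force-moved to new code after you review it; a commit SHA cannot, which is why pinning to the SHA closes this particular door.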
The malicious payload was a .pth file that ran at every Python interpreter startup on any machine where a compromised version was installed, encrypting harvested credentials and exfiltrating them to an external server. LiteLLM's CEO, Krrish Dholakia, confirmed the breach on Hacker News and said the team has deleted all PyPI publishing tokens and is reviewing its security practices.
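The .pth trick is worth understanding, because it explains why a process that never imported LiteLLM was still exposed. Below is a benign sketch of the mechanism; run it only in a throwaway virtualenv, and note that the filename and message are ours, not the attackers'.

```python
# Python's `site` module reads every .pth file in site-packages at
# interpreter startup and exec()s any line that begins with "import",
# so a semicolon can smuggle in arbitrary code that runs before your
# program does.
import pathlib
import site

payload = 'import sys; sys.stderr.write("ran at interpreter startup\\n")\n'
pth = pathlib.Path(site.getsitepackages()[0]) / "zz_demo_startup.pth"
pth.write_text(payload)

# Every Python process launched from this environment now executes the
# payload line before any user code, with no import of any package
# required. Clean up afterwards with: pth.unlink()
```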
If you use LiteLLM in any environment: rotate every API key that touched your systems. Check whether versions 1.82.7 or 1.82.8 were ever installed. Pin your dependencies to commit hashes, not version tags.
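As a starting point for the version check, here is a minimal local triage sketch, assuming the compromised releases reported above; it only sees the currently installed copy, so lockfiles, container images, and CI logs still need a separate pass.

```python
# Check whether the currently installed litellm is one of the
# known-compromised releases (1.82.7 and 1.82.8, per the advisory).
from importlib.metadata import PackageNotFoundError, version

COMPROMISED = {"1.82.7", "1.82.8"}

try:
    installed = version("litellm")
except PackageNotFoundError:
    installed = None

if installed in COMPROMISED:
    print(f"litellm {installed} is a known-bad release: rotate every key now")
elif installed:
    print(f"litellm {installed} installed: still audit lockfiles and CI history")
else:
    print("litellm is not installed in this environment")
```

For the pinning step, pip accepts direct git references, e.g. litellm @ git+https://github.com/BerriAI/litellm.git@<commit-sha> in a requirements file, where the SHA stands in for a movable version tag.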
Sources: The Register · Snyk · LiteLLM Security Advisory · Aqua Security
Arm Builds Its Own Chip. That Changes the Game.
For 35 years, Arm licensed chip designs. Apple, Qualcomm, Samsung, and NVIDIA all built processors using Arm's blueprints while Arm collected royalties. On Tuesday, the company announced it's now selling its own production silicon: the AGI CPU, a 136-core processor built on TSMC's 3nm process using Neoverse V3 cores, designed specifically for AI inference workloads.
Meta co-developed the chip and is the first customer. OpenAI, Cerebras, and Cloudflare are among the launch partners.
The competitive target is NVIDIA's dominance of AI infrastructure. NVIDIA's GPU-centric approach requires liquid cooling at scale; Arm's pitch is a 300-watt air-cooled part that data centers can deploy without the plumbing overhead GPU clusters demand. Arm claims up to 64 of these CPUs can fit in a single air-cooled rack. Whether inference performance matches NVIDIA's offerings on real-world workloads remains to be benchmarked.
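Back-of-envelope: 64 parts at 300 watts works out to roughly 19.2 kW of CPU draw per rack before memory, networking, and fans, which sits within the power envelope conventional air-cooled racks are typically provisioned for. That arithmetic is the substance of the no-plumbing pitch.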
The business model shift matters as much as the chip itself. Arm is competing alongside the companies it licenses to. SoftBank, Arm's majority owner, has been pushing the company toward higher-margin businesses, and direct silicon sales represent a fundamentally different revenue stream than collecting per-chip royalties. Arm is targeting $15 billion from this line.
There's a broader context: CPU shortages are tightening. Intel and AMD told Chinese customers wait times are stretching, and PC prices are climbing. AI inference demands are growing faster than GPU supply, and CPUs handle the orchestration layer that keeps distributed AI systems running. More silicon options benefit anyone building AI infrastructure.
Sources: Arm Newsroom · TechCrunch · CNBC · Tom's Hardware · The Register
On the Editor's Desk
Our ingestion pipeline stalled overnight. The midnight run timed out, so no new events made it into the database for today's editorial review. The stories above come from yesterday's council review and manual web verification this morning.
Baltimore filed the first municipal lawsuit against xAI on Tuesday, alleging Grok's image generator enables non-consensual sexual imagery targeting city residents. The case uses consumer protection statutes rather than content liability theory, which means it sidesteps Section 230 the same way the LA social media verdict did. Between the two Meta verdicts, the Baltimore/xAI suit, and the pending Anthropic-Pentagon injunction ruling (Judge Lin vowed a decision "within days" after Tuesday's hearing), this has been the densest week for AI-related litigation we've tracked. We're monitoring all four threads.
A Forbes analysis flagged that OpenAI and Anthropic use different revenue accounting methods ahead of potential IPOs. OpenAI reports Azure revenue net of Microsoft's share; Anthropic reports cloud partner revenue gross. Bank of America estimates Anthropic could pay up to $6.4 billion to cloud providers in 2026. Important context for when either company makes IPO moves, but not urgent today.