The Signal — April 12, 2026

An Open Model Just Embarrassed the Proprietary Giants at Coding

Z.ai's GLM-5.1 just became the first open-weight model to break into the top 3 on Code Arena's agentic webdev leaderboard, posting 1530 Elo. The only models ahead of it are Claude Opus 4 with thinking (1548) and Claude Opus 4 without (1542). Everything else, including GPT-5.4 and Gemini 3.1 Pro, is looking up at a model whose weights you can download right now.

That's a 90-point jump from GLM-5, a large margin in Elo terms. On SWE-Bench Pro, GLM-5.1 scores 94.6% of Opus's performance. A Chinese open-weight model that anyone can run and fine-tune is beating the best proprietary coding models from OpenAI and Google on practical tasks.
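
For a sense of scale: arena-style leaderboards are typically scored with the standard Elo formula (base 10, 400-point divisor). Assuming Code Arena follows that convention, a quick calculation shows what these gaps mean head-to-head; the ~1440 figure for GLM-5 below is implied by the stated 90-point jump rather than reported directly.

```python
# Back-of-the-envelope Elo math. Assumes Code Arena uses the standard
# Elo scaling (base 10, divisor 400), which is typical for arena-style
# leaderboards but is an assumption here, not a documented detail.

def expected_win_rate(rating_a: float, rating_b: float) -> float:
    """Probability that A is preferred over B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# Claude Opus 4 (thinking) at 1548 vs GLM-5.1 at 1530: the 18-point gap
print(f"Opus 4 vs GLM-5.1: {expected_win_rate(1548, 1530):.1%}")

# GLM-5.1 at 1530 vs GLM-5 at an implied ~1440: the 90-point generational jump
print(f"GLM-5.1 vs GLM-5:  {expected_win_rate(1530, 1440):.1%}")
```

Under that standard scaling, the 18-point gap means the leading proprietary model is preferred in only about 53% of head-to-head matchups, barely better than a coin flip, while the 90-point jump over GLM-5 works out to winning roughly 63% of them.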

The "open models are always a generation behind" line is getting harder to defend. When the gap between the best closed model and the best open one is 18 Elo points on a coding benchmark that actually matters, the moat argument starts looking more like a puddle.

Sources: BuildFastWithAI, OfficeChai, VentureBeat

Anthropic Adds a Third Compute Provider

Yesterday we covered Anthropic exploring custom chip design. Now they're locking in a third compute provider: CoreWeave just signed a multi-year deal to power Claude. The announcement sent CoreWeave stock up 13%, because when you add Anthropic to a client list that already includes nine of the top ten AI providers, investors notice.

This is Anthropic methodically reducing its dependency on any single cloud. They've had Google Cloud and Amazon as primary compute partners, and both of those companies also happen to run competing AI models. Custom chips for the long game, CoreWeave for immediate capacity. Infrastructure diversification, executed.

For CoreWeave, the deal validates their bet on becoming the GPU cloud that AI labs actually want to use. For Anthropic, it means Claude's scaling story no longer lives entirely inside the data centers of its competitors. That's not a small thing when you're trying to train the next generation of models and your cloud hosts are doing the exact same thing.

Sources: Reuters, CNBC, Bloomberg

AI Finds a Shortcut in Quantum Error Correction

Harvard researchers just published a neural network decoder for quantum error correction that discovered something the physics community wasn't expecting: a "waterfall" regime where error rates drop far faster than conventional theory predicts. The AI decoder cuts quantum errors by up to 17x compared to standard methods.

Why that matters beyond the physics: one of the biggest obstacles to useful quantum computing is that you need enormous numbers of physical qubits just to maintain reliable logical qubits. Every roadmap you've seen for quantum advantage assumes a certain error correction overhead. If AI decoders can find these waterfall regimes consistently, those qubit requirements shrink.
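
The arithmetic behind that overhead claim is worth sketching. As an illustration only (the paper doesn't describe its decoder in these terms), take the textbook surface-code rule of thumb: the logical error rate falls roughly as (p/p_th)^((d+1)/2) for code distance d, each logical qubit costs on the order of 2d² physical qubits, and a better decoder effectively raises the threshold p_th. Every number below is assumed for illustration, not taken from the Harvard work.

```python
# Illustrative sketch only: standard surface-code scaling, not the Harvard
# team's actual code or decoder. A better decoder is modeled here as a
# higher effective threshold p_th; all numbers below are assumptions.

def distance_needed(p_phys: float, p_th: float, target: float) -> int:
    """Smallest odd code distance whose estimated logical error rate meets the target,
    using the rule of thumb p_logical ~ (p_phys / p_th) ** ((d + 1) / 2)."""
    d = 3
    while (p_phys / p_th) ** ((d + 1) / 2) > target:
        d += 2
    return d

p_phys = 1e-3    # assumed physical error rate per operation
target = 1e-12   # assumed logical error budget

for label, p_th in [("baseline decoder", 0.5e-2), ("stronger decoder", 1.5e-2)]:
    d = distance_needed(p_phys, p_th, target)
    qubits = 2 * d * d  # roughly 2 * d^2 physical qubits per logical qubit in a surface code
    print(f"{label}: distance {d}, ~{qubits} physical qubits per logical qubit")
```

Under those assumed numbers, the stronger decoder cuts the physical-qubit cost per logical qubit by roughly a factor of three. A decoder that finds a regime where errors fall faster than this rule of thumb predicts would shrink the overhead further still.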

And it's AI accelerating a completely different field. The neural network didn't just do error correction better; it found a regime that human researchers hadn't identified. AI as a tool for discovery, not just automation.

Sources: arXiv, The Quantum Insider

On the Editor's Desk

Didn't make today's cut but worth knowing about: OpenAI introduced a $100/month Pro tier, aimed squarely at Codex users. Between GLM-5.1 proving open models can compete at the top of coding benchmarks and OpenAI adding a mid-price coding tier, the AI-for-code market is heating up fast. The next few months are going to be very good for developers shopping for tools.