THE SIGNAL
Future Shock Daily — February 22, 2026
Three stories today, none of them world-shaking, all of them worth knowing about. A Chinese open-weights model that punches above its weight class, a cancer diagnostic tool that's real but not as big as the stock market thinks, and OpenAI writing a check for AI safety research.
Zhipu AI's GLM-5 Tops the Open-Weights Leaderboard
Zhipu AI released GLM-5, a 744-billion-parameter mixture-of-experts model with 40 billion active parameters per inference pass. It now sits at the top of the open-weights Intelligence Index, putting it in the same performance tier as closed models from Anthropic and OpenAI.
The model is available on HuggingFace under open weights. Latent Space and DeepLearning.ai's The Batch both covered it independently, confirming the benchmark results. For anyone tracking the open-vs-closed capability gap, GLM-5 is the clearest evidence yet that Chinese labs are closing it fast. The practical question for developers: do you build on a 40B-active MoE that you can run yourself, or stay locked into API-dependent closed models? That tradeoff gets harder to justify with every release like this.
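To make the "40 billion active parameters" figure concrete: in a mixture-of-experts layer, a router picks a few experts per token, so only a fraction of the total weights are exercised on any inference pass. The sketch below is a toy illustration of top-k routing with made-up sizes; it does not reflect GLM-5's actual expert count, router design, or any Zhipu AI code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: many experts exist, but the router
# activates only a few per token, so most parameters sit idle.
# All sizes here are illustrative, not GLM-5's real configuration.
NUM_EXPERTS = 16   # total experts in the layer
TOP_K = 2          # experts actually run per token
D_MODEL = 8        # hidden size

# Each expert is a plain weight matrix, standing in for a full FFN.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts only."""
    scores = x @ router                      # one router logit per expert
    top = np.argsort(scores)[-TOP_K:]        # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # softmax over just the chosen experts
    # Only TOP_K of NUM_EXPERTS weight matrices are touched on this pass.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
active_fraction = TOP_K / NUM_EXPERTS  # 0.125 here; roughly 40/744 for GLM-5
```

The same ratio is why a 744B-parameter model can have the serving cost profile of a much smaller dense model: per token, the compute scales with the active parameters, not the total.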
Sources: HuggingFace Model Card · DeepLearning.ai The Batch · Latent Space Newsletter
Tempus AI Launches Pan-Cancer HRD-RNA Diagnostic
Tempus AI announced a new RNA-based algorithm for detecting homologous recombination deficiency (HRD) across cancer types. HRD testing matters because it predicts which patients respond to PARP inhibitors, a class of targeted cancer drugs. Current HRD tests are DNA-based and limited to specific cancer types. An RNA approach could work across tumor types, which would expand the patient pool eligible for these treatments.
The stock jumped on the news, with TipRanks running a headline about an "AI Cancer Breakthrough." That framing oversells it. This is a diagnostic refinement from a company that ships diagnostic tools regularly. It's real precision oncology progress, verified through a BusinessWire press release and covered by Financial Times Markets. But calling it a breakthrough sets expectations the product hasn't earned yet.
Sources: BusinessWire Press Release · FT Markets · Yahoo Finance
OpenAI Puts $7.5M Into UK AI Alignment Research
OpenAI contributed $7.5 million to the UK's Global Alignment Project, a government-backed initiative focused on independent AI safety research. Microsoft is also contributing, bringing the fund's total to £27 million.
Both OpenAI's blog and the UK government's official GOV.UK page confirmed the details. This is the kind of event that's easy to dismiss as PR, and there's a version of that argument that holds up: $7.5 million is pocket change for a company valued north of $150 billion. But the fund supports independent research, not OpenAI-controlled work, and it adds to a growing pool of non-industry safety research funding. Whether that pool grows fast enough to matter is a separate question.
Sources: OpenAI Blog · GOV.UK Announcement · BusinessToday · MLex
On the Editor's Desk
Thirteen events came through the pipeline today. Nine got killed.
A Globe and Mail op-ed about the "AI bubble deflating" arrived with a significance score of 4 out of 5. It's one columnist's opinion. We killed it. Two separate articles covered the same Sam Altman quote comparing human energy costs to AI training costs. Mildly interesting dinner-party material, not a news event. Both killed.
The biggest pipeline miss today: Google released Gemini 3.1 Pro three days ago, and our system captured a YouTuber's reaction video instead of the announcement on Google's blog. That's like covering a product launch by interviewing someone in the audience. We're investigating the RSS feed configuration.
Andrej Karpathy's commentary on "Claws" (AI coding agents that take over your terminal) qualified but didn't make the cut. It's single-source commentary from Simon Willison's blog. Interesting reading if you're a developer, but opinion rather than news.
Quiet Sunday. Some days the honest answer is "not much happened." We'd rather tell you that than pad the edition.