THE SIGNAL
Future Shock Daily — February 21, 2026
The Pentagon is trying to strong-arm Anthropic into dropping its weapons carve-outs. Georgi Gerganov's llama.cpp just got absorbed by Hugging Face. And MIT researchers figured out how to make yeast pump out cancer drugs faster using a language model. Friday delivered.
The Pentagon Wants Anthropic to Drop Its Weapons Guardrails
A $200 million defense contract is on the line because Anthropic won't budge on autonomous weapons and mass surveillance. The Pentagon's response: threaten to label the company a "supply chain risk," which would effectively blacklist it from future government work.
Axios broke the story on February 15, and the details are specific. Anthropic bid on a DoD contract with explicit carve-outs: no autonomous targeting, no warrantless bulk surveillance. The Pentagon's procurement office rejected the carve-outs and told Anthropic to either remove them or walk. Anthropic walked. Now DoD officials are floating the supply chain designation as retaliation.
This is the first time a major AI company has lost a government contract specifically over safety restrictions it chose to keep. The usual dynamic runs the other way: companies water down their principles when the money gets big enough. Wired's Backchannel published a deep-dive this week with additional sourcing from current and former Pentagon officials, and the picture is bleak. Multiple defense officials described the carve-outs as "operationally unacceptable," while one called Anthropic's position "naive."
What to watch: whether other AI companies use this as cover to quietly drop their own red lines on military applications. If the supply chain designation sticks, it sends a clear message that safety commitments carry a price tag.
Sources: Axios · CNBC · Wired Backchannel · NYT
llama.cpp's Creator Joins Hugging Face
Georgi Gerganov, the developer behind llama.cpp, is bringing his company ggml.ai into Hugging Face. If you've run an open-source language model on your own hardware in the past three years, Gerganov's work is almost certainly the reason it worked.
llama.cpp made local LLM inference practical. Before it existed, running large models required expensive cloud GPUs or deep expertise in model quantization. Gerganov wrote a C++ inference engine that ran quantized models on consumer hardware, and the local AI community built an entire ecosystem on top of it. The GGUF format became the standard for sharing quantized models.
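The core trick is worth seeing in miniature. Here's a toy sketch of block-wise 4-bit quantization, the general idea behind why quantized models fit on consumer hardware. The numbers and scheme are invented for illustration and are not llama.cpp's actual GGUF k-quant formats, which are considerably more sophisticated:

```python
def quantize_4bit(weights, block_size=32):
    """Symmetric 4-bit quantization sketch: each block stores one float
    scale plus a 4-bit integer (-8..7) per weight."""
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        scale = max(abs(w) for w in block) / 7 or 1.0
        q = [max(-8, min(7, round(w / scale))) for w in block]
        blocks.append((scale, q))
    return blocks

def dequantize(blocks):
    return [q * scale for scale, qs in blocks for q in qs]

weights = [0.12, -0.53, 0.91, 0.04] * 8  # 32 toy float32 weights
blocks = quantize_4bit(weights)
restored = dequantize(blocks)

# Storage: float32 needs 4 bytes per weight; here each 32-weight block
# needs one 4-byte scale plus half a byte per weight.
fp32_bytes = 4 * len(weights)
q4_bytes = sum(4 + len(qs) // 2 for _, qs in blocks)
print(fp32_bytes, q4_bytes)  # 128 20
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Roughly a 6x size reduction at the cost of a small per-weight error, which is the trade that lets a model sized for a datacenter GPU run on a laptop.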
Hugging Face says the project stays fully open-source with Gerganov retaining technical leadership. The practical upside: tighter integration between the transformers library (how models are defined) and llama.cpp (how they run locally). The goal is near-single-click deployment of new models for local inference. HF also notes they already had core llama.cpp contributors on staff, so this formalizes a collaboration that was already happening.
The consolidation question is real. r/LocalLLaMA's initial reaction is mixed, with some users concerned about one company controlling both the model hub and the primary inference engine. But the alternative was Gerganov continuing to maintain critical infrastructure as essentially a solo operation, which carries its own risks.
Sources: Hugging Face Blog · Gerganov's GitHub Discussion · Simon Willison
MIT Uses a Language Model to Make Yeast Produce Cancer Drugs More Efficiently
MIT chemical engineers trained an LLM on the genetic code of industrial yeast and used it to optimize codon sequences for protein production. The result: boosted output of six different proteins, including trastuzumab, the monoclonal antibody sold as Herceptin for breast cancer treatment.
The technical details matter here. Every amino acid can be encoded by multiple three-letter DNA sequences (codons), and different organisms prefer different ones. Getting the codon pattern right for a target organism like Komagataella phaffii is the difference between a yeast culture that barely produces your target protein and one that churns it out efficiently. Researchers have been hand-tuning these sequences for decades. The MIT team's LLM learned the patterns from the yeast's own genome and predicted optimal sequences that outperformed traditional optimization.
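A toy example makes the degeneracy concrete. Serine really is encoded by six synonymous codons, but the host-preference frequencies below are invented for illustration (they are not real Komagataella phaffii usage data), and this simple preference score is not the MIT team's model:

```python
SERINE_CODONS = {  # six synonymous codons for serine (standard genetic code)
    "TCT": 0.30, "TCC": 0.20, "TCA": 0.15,
    "TCG": 0.05, "AGT": 0.18, "AGC": 0.12,  # invented host frequencies
}

def codon_adaptation(dna, table):
    """Score a coding sequence by how closely each codon matches the
    host's most-used synonymous codon (1.0 = all preferred codons)."""
    best = max(table.values())
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    return sum(table[c] / best for c in codons) / len(codons)

# Two DNA sequences encoding the same Ser-Ser-Ser peptide:
print(codon_adaptation("TCTTCTTCT", SERINE_CODONS))  # 1.0, all preferred
print(codon_adaptation("TCGAGCTCA", SERINE_CODONS))  # lower, rare codons
```

Both sequences yield an identical protein; they differ only in how well they match the host's codon preferences. The optimization problem is choosing among these synonymous encodings across an entire gene, which is the search space the MIT model learned to navigate.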
The paper, published in PNAS this week, tested the approach on six proteins ranging from human growth hormone to the cancer antibody. All six showed improved production. The practical implication: faster, cheaper development of biopharmaceuticals. The current process of optimizing codon sequences for new drugs involves months of trial and error; a predictive model that consistently works could compress that timeline.
On the Editor's Desk
We reviewed 18 events in the past 24 hours. Six passed, two qualified with editorial caveats, one is on hold, and nine got killed.
The kills were mostly pipeline noise: five PR Newswire press releases about Teamsters at Atlanta airport, Treasury policy, banking podcasts, and corporate VP appointments. None had anything to do with AI. Our PR Newswire feed's keyword filtering needs work.
We also killed three YouTube videos that were duplicates or sponsored content covering stories we already had from primary sources. The Gemini 3.1 Pro launch cleared our bar via TechCrunch and the official DeepMind model card, so we didn't need two separate YouTube recap videos of the same benchmarks.
One story is on hold: ALLT.AI claims to have published the "first-ever study using brain lesion data to decode how AI processes language." Interesting concept, but the only source is their own PR Newswire press release. No independent coverage, no preprint link, no way to verify the methodology. We'll revisit if the paper shows up on arXiv or in a journal.
Stories that qualified but didn't make today's top three: Andrej Karpathy buying a Mac Mini to experiment with the “Claws” paradigm (LLM agents with computer control), and Anthropic banning OAuth tokens from consumer plans in third-party tools. Both real, both covered by multiple sources, but commentary and policy updates rather than lead stories.
Google's Gemini 3.1 Pro also passed review with record benchmark scores, though Ars Technica notes Claude Opus 4.6 still leads the Arena text leaderboard by 4 points. We chose not to lead with another benchmark horse-race story.