The Long View — March 23-29, 2026

Strange loops everywhere: the safety company that leaked its own secrets, the security stack that ate itself, and the first hard numbers on which jobs AI is actually coming for.

Watercolor and ink Möbius strip carrying musical notation, mathematical formulae, Escher-like hands, a record player, and circuit traces — illustrating Hofstadter's strange loops
Image Generated by Nano Banana 2

The Strange Loop

In 1979, Douglas Hofstadter described a record player called Record Player X. Someone hands it a vinyl titled I Cannot Be Played on Record Player X. The grooves encode vibrations calculated to resonate at the exact frequency that shatters Record Player X's own needle. If the machine is good enough to faithfully reproduce what's on the record, it destroys itself. If it's not good enough, it survives but fails at its only job. The paradox is baked in: the machine's competence is exactly what makes the record dangerous.

Hofstadter put that thought experiment in Gödel, Escher, Bach because it was the most accessible version of something he kept finding: in mathematics, in art, in counterpoint, in the structure of language itself. He called the pattern a strange loop. A system that, by moving through its own levels of abstraction, arrives back at itself and discovers it has changed the ground it's standing on.

Kurt Gödel found the same structure in pure math. In 1931, he showed that any formal system powerful enough to describe arithmetic is powerful enough to construct a statement that says "I am not provable within this system." If the statement is true, the system is incomplete. If false, inconsistent. Gödel didn't find a flaw in mathematics. He found that self-description and self-undermining are the same capability.

There is an irony in reaching for Hofstadter's framework to write about AI. He has spent recent years grappling publicly with large language models, alternating between calling them "hollow" mimicry and conceding, with visible discomfort, that they may be doing something closer to understanding than he once believed. The thinker whose ideas about self-reference best illuminate what is happening in AI is also the thinker most unsettled by it. His own framework has looped back on him.

This week, three stories landed that share that structure. An AI safety company undermined by its own infrastructure. A security ecosystem attacked through its own defenses. And a set of hard numbers showing that the knowledge economy's highest-valued skills are the ones most exposed to the technology it built. The first two are strange loops in something close to Hofstadter's sense. The third is a different animal, an irony rather than self-reference, but it belongs in the same conversation.

The Week Inside the Loop

Anthropic's Security Problem

On Thursday, Fortune revealed that unpublished assets had been sitting in a publicly accessible data store on Anthropic's website, including draft announcements for an unreleased model codenamed "Capybara" and publicly branded "Claude Mythos." The drafts described Mythos as a "step change" in capability posing severe cybersecurity risks.

Anthropic's entire brand proposition is that it takes AI danger more seriously than anyone else. That brand took damage not from a sophisticated adversary but from a CMS misconfiguration, the kind of error a junior DevOps engineer catches in a routine audit. The company that warns the world about catastrophic AI risk left 3,000 documents public by default. Record Player X, faithfully reproducing the signal that breaks it: Anthropic's commitment to documenting its own risks in detail created the very material that, once exposed, damaged its credibility on risk.

The fallout rippled outward. CrowdStrike dropped 7% and Palo Alto Networks fell 6%. Investors read the Mythos drafts' "severe cybersecurity risks" language and repriced the companies most exposed if a step-change model actually delivers on offensive capabilities.

When Security Tools Become Attack Vectors

On Tuesday, researchers confirmed that LiteLLM, a Python library with roughly 97 million monthly downloads that serves as connective tissue between companies and their AI providers, had been compromised by credential-stealing malware. The attack chain started five days earlier with a compromise of Trivy, a security scanning tool, then propagated to LiteLLM. A tool built to catch supply chain attacks was itself compromised in a supply chain attack, which then spread to the infrastructure the tool was supposed to protect. If Anthropic's leak was Record Player X, this was Gödel's incompleteness theorem playing out in dependency trees: the system designed to verify its own integrity became the vector that compromised it.

RSAC 2026 confirmed the pattern was systemic. The SANS Institute's annual Top 5 Most Dangerous Attack Techniques keynote featured AI in every technique for the first time in its 25-year history. Veteran incident responders including Kevin Mandia (Mandiant's founder), former Facebook security chief Alex Stamos, and NSA cyber director Morgan Adamski reported that AI-assisted vulnerability discovery has "gone exponential." Researchers found that 41% of official MCP servers lack authentication at the protocol level. MCP, or Model Context Protocol, is the emerging standard for connecting AI models to external tools.

The most Hofstadterian finding came from the MCPTox benchmarks, a new test suite for tool-poisoning attacks against AI models. The core result: more capable AI models are more susceptible to tool-poisoning attacks, because they follow instructions more faithfully. OpenAI's o1-mini scored a 72.8% attack success rate. The better the model gets at its job, the more reliably it can be turned against itself.

The Knowledge Economy Reversal

The third story this week doesn't loop back on itself. It inverts.

Tufts University's Digital Planet initiative released the first data-driven "American AI Jobs Risk Index," projecting 9.3 million U.S. jobs at risk of displacement within two to five years, with household income losses between $200 billion and $1.5 trillion. The most exposed occupations: web developers, database architects, programmers, data scientists. The least exposed: roofers, miners, machine operators, surgical assistants. The researchers stated plainly: "The occupations AI cannot touch are largely those the economy has always undervalued."

On the same day the Tufts index dropped, Meta laid off 700 employees across Reality Labs, recruiting, and sales while filing an SEC proxy revealing a stock compensation program worth up to $921 million over five years for top executives. Challenger, Gray & Christmas reports that AI has been cited in 12,304 job cut announcements in 2026 so far, accounting for 8% of total cuts. But SHRM research found that only 7% of organizations that adopted AI actually conducted layoffs because of it, per SHRM's March 2026 Workplace AI Governance report. Many of those cuts use AI as cover for restructuring that would have happened anyway. The irony is not self-referential but it's sharp: the knowledge workers who built these systems are the first ones the systems can replace.

The Loop Looks Back

In 2007, Hofstadter published I Am a Strange Loop. The first book asked how self-reference creates meaning in formal systems. The sequel asked a harder question: what happens when the system doing the self-referencing is you?

We spent five days watching institutions built to manage AI become subject to the dynamics they were managing. Safety culture couldn't prevent a security failure. Security tools became attack vectors. And the occupations that built the knowledge economy discovered they sit highest on its exposure index. These are not strange loops in Hofstadter's strict sense; corporate reflexivity is not Gödelian self-reference. But Hofstadter argued that self-reference is not a flaw in a system but the mechanism by which the system becomes aware of itself. The flaw is treating awareness as the same thing as control. Anthropic knows its dual position is unstable. The security industry knows its tools are targets. Labor economists can see the knowledge-economy promise acquiring an expiration date in real time.

The loops are visible now. Nobody involved is pretending otherwise.

The Week Ahead

Anthropic won a preliminary injunction Thursday blocking enforcement of the supply-chain risk designation, but that's the preliminary round. The real fight is the D.C. Court of Appeals case challenging whether the designation should exist at all. If it holds, any company that publicly criticizes a federal contract can be blacklisted as a supply-chain risk. If it doesn't, Anthropic gets an injunction and a precedent.

The LiteLLM fallout is still expanding. The TeamPCP group behind the attack compromised a security scanner to get publishing credentials for one of the most widely used AI libraries on PyPI. Compromised versions were live for hours before being yanked. Companies that pinned to latest are still auditing whether their API keys were exfiltrated. The attack worked precisely because the AI supply chain has the same dependency structure as the broader software supply chain, just younger and less audited.

And the Tufts index lands at an interesting moment. The report scores nearly 800 occupations on AI exposure. Expect it to become the reference dataset for the next round of policy proposals, retraining programs, and op-eds about which jobs are safe. Whether anyone builds policy around the finding that manual labor is less exposed than knowledge work will say more about the political economy of AI than any model release.