The Strategic Tech Cycle: Gunpowder, Germs, Steel, and AI

When a private actor creates a technology the state considers strategic, the state does not argue. It reclassifies. Anthropic is the latest chapter in a pattern as old as bronze.

In Charles Stross's Empire Games, a woman named Rita discovers she can walk between parallel worlds. The US government discovers this about thirty seconds later. What follows is not a conversation. The Department of Homeland Security recruits her as an intelligence asset, fits her with monitoring equipment, and sends her to infiltrate a rival civilization. The family that originally possessed this ability, known as the Clan, had operated for generations as a feudal aristocracy running a heroin trade through a medieval parallel world. They were not sympathetic actors. But Stross's point was never about whether the Clan deserved what happened to them. His point was about what the state does when a private group holds a capability it considers strategic. The Clan was classified as transdimensional narcoterrorists, hunted, and their entire homeworld was destroyed with nuclear weapons. The government then reverse-engineered a controlled version of the world-walking capability for its own use. Not through negotiation. Through reclassification, then annihilation.

In February 2026, Anthropic told the Pentagon it would not remove safety restrictions from Claude. The company's red lines: no mass surveillance of Americans, no lethal autonomous weapons without human oversight. The Pentagon wanted unfettered access across all "lawful" use cases. Anthropic refused. Within hours, Defense Secretary Pete Hegseth designated the company a supply chain risk, a classification normally reserved for foreign adversaries. OpenAI signed a deal to fill the gap the same day.

Jared Diamond argued in Guns, Germs, and Steel that technologies become instruments of state power through geographic and institutional advantage. He was describing how power concentrates. Stross was describing the mechanism: when a private group holds a technology the state considers too important to leave in private hands, the state does not debate. It reclassifies. The playbook is not science fiction. It is the oldest institutional reflex in human civilization.

The Clan and the State

The Merchant Princes series runs on a single political insight: states do not tolerate strategic capabilities they cannot control. The Clan used their world-walking ability to move goods between parallel Americas. They accumulated wealth and stayed quiet. The moment the US government learned what they could do, it moved to capture that capability entirely.

The surviving world-walkers ended up under the protection of the New American Commonwealth, an alternate-timeline government that represents a different model of state-technology relations, one where capability holders are integrated rather than crushed, but still controlled. The walkers traded one state's dominion for another's. The capability was never really theirs.

Anthropic built Claude. The Pentagon had contracted with the company in a deal reported at $200 million. The contract included two restrictions: no lethal autonomous weapons, no mass surveillance of Americans. The Pentagon wanted those restrictions removed. When Anthropic held firm, the government did not argue the merits. It reclassified. "Supply chain risk" is an administrative designation under 10 USC 3252, not a judicial finding. It requires no trial and no public evidence. It simply makes doing business with the federal government impossible.

Then came the counter-move Stross would have predicted. Anthropic sued over the designation, and more than 30 employees of OpenAI and Google DeepMind, including Google's chief scientist Jeff Dean, filed an amicus brief supporting the suit. The brief warned that the blacklist "introduces an unpredictability in our industry that undermines American innovation and competitiveness." Scientists at competing companies, putting their names on a court document saying the government's approach to their technology is wrong.

The Pattern Has a Name

Stross was not inventing a dynamic. He was documenting one. The Crypto Wars of the 1990s ran on identical machinery.

In 1991, Phil Zimmermann released PGP, a free encryption program that let ordinary people send messages no government on Earth could read. The US Customs Service opened a criminal investigation. The charge: exporting munitions without a license. Zimmermann had not shipped a single weapon. He had published software. Under the International Traffic in Arms Regulations, the US government classified strong encryption as a munition, placing it in the same legal category as missiles. Mathematics, expressed as code, was now a weapon.

The investigation lasted three years. The government never formally charged Zimmermann, but it did not need to. The threat of prosecution hung over the entire field. You do not need a conviction to control behavior. You need the credible threat of reclassification.

It took nearly a decade of legal challenges and a landmark Ninth Circuit ruling in Bernstein v. United States (which established that source code is speech protected by the First Amendment) before encryption export controls were substantially relaxed in 2000. The private sector won. But the government's opening move was always the same: take the thing, put it in a category that gives you power over it, and let the new label do the work.

And if classification is always a tool for shaping reality, then Anthropic's own "red lines" are also a classification act. The company decided which uses of its technology count as acceptable and which do not. That is a legitimate exercise of corporate autonomy. It is also, structurally, the same kind of category-drawing the state performs. The difference is who holds the pen.

Anthropic's lawsuit challenges the supply chain risk designation under 10 USC 3252, arguing that Congress required the Pentagon to use the least restrictive means to mitigate supply chain risk, not punish a supplier for asserting contractual terms. The legal theory differs from Bernstein. The institutional reflex is identical.

The Reclassification

Reclassification is not a bug in the system. It is the system's primary tool for controlling things it cannot outright ban. Walk through history and you will find the same mechanism repeating across centuries.

Bronze and Tin

Bronze Age civilization ran on a controlled substance: tin. Copper was common enough, but tin, the ingredient that turned soft copper into hard bronze, was geographically scarce. The trade routes that connected tin deposits to Mediterranean urban centers were not commercial highways. They were strategic chokepoints. Control the tin supply and you controlled who could arm themselves and who could project power. The material scarcity itself functioned as a classification system, and the states that controlled access to it controlled the balance of power for a millennium.

Fast forward three thousand years. Steel was "just manufacturing" until it became the backbone of modern warfare. When World War I broke out, steel production capacity became one of the strongest predictors of military power. Germany was Europe's largest steel producer by a wide margin. Britain and France depended on American steel shipments to sustain their war effort. Across the belligerent nations, private steel production was brought under state direction, allocated by government ministries and rationed to military production quotas. What had been a commercial industry was reclassified from "commerce" to "war material." A material that companies had traded as a commodity for decades was, by 1916, a state-directed strategic resource.

The pattern took centuries to play out for metallurgy.

The Printing Press

Johannes Gutenberg produced his first printed Bible around 1455. Within decades, the Church and European monarchies were in open panic about what the technology meant for their monopoly on information.

In 1487, Pope Innocent VIII issued the bull Inter multiplices, giving bishops authority to censor printed materials before publication. By the 1530s, pre-publication censorship was extensive across Catholic Europe, with an ecclesiastical imprimatur required before a work could be published. England followed the same trajectory. The Licensing of the Press Act 1662 required all printed materials to be licensed, gave the Stationers' Company and the Archbishop of Canterbury authority to approve publications, and banned the printing of "heretical seditious schismatical or offensive Bookes or Pamphlets." Presses had to be registered. Publications approved. Unauthorized printing criminalized.

The technology was not banned. Banning it was impossible: too many presses existed, too many people knew how to build them, and the economic benefits were too large to sacrifice. So the state did not destroy the press. It absorbed the press into its permission structure, converting an unregulated tool into a licensed apparatus. Anyone could still print. They just needed the government's blessing first.

The press went from "invention" to "licensed apparatus" in less than a century. Faster than metallurgy.

Biology as Munition

The Biological Weapons Convention of 1972 drew a line through the middle of biological research. On one side: science. On the other: prohibited weapons development. The science itself did not change. The convention reclassified certain applications of biological knowledge from "research" to "illegal under international law."

Decades later, CRISPR gene editing forced the same question in a new form. The US government established Dual Use Research of Concern (DURC) policies, classifying certain life sciences research as posing "a significant threat with broad potential consequences to public health and safety." A 2017 workshop convened by the National Science Advisory Board for Biosecurity acknowledged that the DURC framework had failed to anticipate developments like CRISPR/Cas9, which arrived faster than the regulatory categories built to contain earlier biotechnologies.

The biology did not change. The category did. Who gets to edit genes became a question of national security classification, not scientific ethics. This is the version of the pattern that should concern AI researchers most, because it shows reclassification applies to knowledge itself. Not physical tools. Not products. The capability to understand and manipulate biological systems was reclassified from "academic freedom" to "dual-use concern" based on what someone might do with the results.

The Through-Line

Line up the timescales. Bronze Age states took centuries to formalize control over metallurgy. The Church needed less than a century to regulate the printing press. The Biological Weapons Convention arrived roughly thirty years after biological warfare became a realistic military capability. The Crypto Wars lasted about a decade. Anthropic refused the Pentagon's terms in late February 2026. By early March, the company had been reclassified and blacklisted, had sued the government, had received an amicus brief from its own competitors, and had watched OpenAI take its place at the table. Centuries. Decades. Weeks.

The pattern is not universal. Nuclear weapons went from invention in 1945 to the Non-Proliferation Treaty in 1968, twenty-three years of relative regulatory patience. The internet went largely unregulated for a quarter century. Some technologies escape the reclassification cycle for a while, or settle into stable regulatory frameworks without the adversarial dynamic described here. But the technologies that combine strategic military value with private-sector control tend to trigger the reflex fastest. AI checks both boxes.

What Happens Next

None of this means the government's interests are illegitimate. National defense is not a trivial concern. A military that ties its own hands while adversaries do not is a military that loses. If AI systems can save soldiers' lives, the argument that a private company should unilaterally decide which applications are off-limits deserves genuine scrutiny, not dismissal.

The question is not whether the government has valid interests. It does. The question is whether reclassification, an administrative act with no judicial oversight, is the right tool for resolving a disagreement about the terms of a defense contract. The Anthropic situation was not a national emergency requiring emergency powers. It was a contract negotiation where one party had the ability to redefine the other as a threat.

In Stross's story, the state wins. The Clan had a capability the US government wanted. The government classified them as terrorists, destroyed their world, and built its own version of their technology. The Clan were genuinely dangerous. They were also annihilated. Stross's insight was that the state's response had nothing to do with proportionality and everything to do with control. Even when private capability holders are genuinely problematic actors, the state's response is calibrated for capture, not for justice.

The Crypto Wars showed a different outcome. Zimmermann was never charged. Bernstein established that code is speech. Export controls were relaxed. But that victory required a decade of lawyers, engineers, and activists who recognized that the government's move against one product was a move against the entire field.

Whether the rest of the field makes the same recognition is the question facing the AI industry right now, and the industry is splitting along the fault line. More than 30 employees of OpenAI and Google DeepMind signed the amicus brief. But OpenAI itself took the Pentagon contract the same day Anthropic was blacklisted. Sam Altman told employees he shared Anthropic's "red lines." Then he took the deal anyway. OpenAI's engineers signed a court document saying the government was wrong. Their employer signed a contract saying it was open for business.

This is the fracture the reclassification is designed to produce. It does not need to defeat the entire industry. It needs to split it. If the rest of the field sees Anthropic's designation as a shared threat to their autonomy, the pattern can be fought the way encryption was fought. If they see it as a competitive opportunity, the reclassification already worked. The capability gets captured not by force, but by making refusal more expensive than compliance.

The Crypto Wars took a decade to resolve because encryption was, in the 1990s, an abstraction most people could not see or touch. AI is not an abstraction. Hundreds of millions of people use it daily. The Pentagon's move against Anthropic was covered by every major news outlet within hours. The amicus brief followed within a day. The speed of the response matches the speed of the reclassification. But speed cuts both ways. Anthropic does not have a decade to build coalitions and wait for courts to catch up. The supply chain risk designation is already costing the company contracts. Every other AI company is watching to see whether principled refusal carries a price tag they can afford to pay, or whether the rational move is to take the contracts, drop the restrictions, and let Anthropic be the cautionary tale.

Which leaves the question that matters more than whether Anthropic wins its lawsuit. The reclassification machine does not stop. If AI models can be designated supply chain risks, what comes next? Open-source model weights, currently distributed freely, could be reclassified as controlled technology requiring export licenses. AI research papers could face pre-publication review, the way biological dual-use research already does. Training datasets could be designated as strategic national assets. Individual researchers could find their work reclassified out from under them, the way Zimmermann's code was reclassified as a munition.

The pattern is old. The targets are new. And the only question that has ever mattered is the one every example in this piece comes back to: once a technology becomes an instrument of state power, who controls the label?