The Last Holdout
At 5:01 PM on a Friday in February, the deadline expired. Defense Secretary Pete Hegseth had given Anthropic CEO Dario Amodei a simple ultimatum: remove the safety restrictions from Claude, the company's flagship AI, and allow unrestricted military use "for all legal purposes," or lose the $200 million Pentagon contract. Amodei's answer, published before the deadline hit, was no.
Within hours, the president ordered every federal agency to stop using Anthropic's products. Hegseth designated the company a supply-chain risk to national security, a classification normally reserved for foreign adversaries and sanctioned firms. That same day, OpenAI announced a deal to supply AI to classified military networks.
Sam Altman went on CNBC to explain that it was important for companies to work with the military, "as long as it is going to comply with legal protections" and "the few red lines" that "we share with Anthropic." The framing was generous. The timing was not.
To understand how the most safety-conscious AI lab in the world ended up blacklisted by its own government while its competitors lined up to take its place, you have to go back seven years, to a different company and a different war.
3,100 Signatures
In April 2018, employees at Google discovered that their employer had signed a contract with the Pentagon. The project, called Maven, used machine learning to analyze drone surveillance footage. The reaction was immediate. More than 3,100 employees signed a letter to CEO Sundar Pichai demanding Google pull out: "We believe that Google should not be in the business of war."
Twelve employees resigned. Google Cloud CEO Diane Greene told staff the company "would not choose to pursue Maven today" because of the backlash. By June, Google had published AI principles pledging never to build AI for weapons or surveillance, and announced it would let the Maven contract expire.
The principles lasted six years and eight months.
In January 2024, OpenAI quietly removed the words "military and warfare" from its usage policy. No announcement. No letter. The Intercept noticed. Later that year, Palantir deployed Anthropic's Claude for classified intelligence work, making Anthropic the first advanced AI company with models running in classified government environments.
Then, on February 4, 2025, Google removed the weapons and surveillance restrictions from its AI ethics principles entirely. The company that had once let 3,100 employees override a Pentagon contract reversed course without a single employee letter making the news.
By the time Hegseth's ultimatum arrived in February 2026, every major AI lab except one had dropped its objections to military use. Anthropic was the last holdout.
The $400 War
Five thousand miles from the boardrooms where these decisions were being made, a different kind of AI debate was playing out in real time.
On the front lines of eastern Ukraine, a pilot wearing first-person-view (FPV) goggles flies a drone that costs somewhere between $300 and $500. The drone has no GPS. No AI. No autonomy. It sends back analog video, and many of these drones trail a fiber-optic cable behind them to stay connected when Russian electronic warfare blankets the battlefield in interference. A West Point study found that over 50 percent of FPV drones are downed by jamming. The fiber-optic ones are immune.
Anduril Industries, founded by Oculus VR creator Palmer Luckey, offered a different approach. The company's Ghost drones use AI-powered autonomous navigation. They don't need a human pilot. They don't need a fiber-optic tether. They navigate by algorithm.
According to the New York Times, the Ghost drones failed in Ukraine when they lost GPS signal. Ukrainian forces stopped using them. Luckey told the Times he did not consider these "setbacks or failures." Anduril's Bolt-M, an AI-guided munition, costs between $20,000 and $40,000 per unit. The company claims one can replace ten cheap drones. That claim has not been independently verified.
The gap between the $400 drone and the $40,000 drone is not just a cost difference. It is two competing theories of what AI is for in warfare. One theory says autonomy replaces humans. The other says humans, aided by simple technology, adapt faster than algorithms. Ukraine, the most intense proving ground for military AI in a generation, has so far favored the cheap option.
Anduril's valuation tells a different story. The company was worth $8.5 billion in 2022. By mid-2025, that figure had reached $30.5 billion. It is now reportedly in funding talks at a $60 billion valuation. The company hired more than 3,500 employees in under two years, many of them recruited directly from Google, Meta, and SpaceX. Defense tech venture capital hit $48 billion in 2025, up 120 percent from the prior year.
The market is not betting on what works today. It is betting on what works next. And the market is not alone.
The Arms Race Nobody Voted For
China's leading weapons experts said in mid-2024 that the country could have fully autonomous AI weapons on the battlefield within two years. The PLA has issued tenders for AI-powered robot dogs and is converting thousands of retired jet fighters into unmanned AI-controlled aircraft. A December 2024 DoD report stated that China's military believes AI will "enable a range of new defense applications, including autonomous and precision-strike weapons."
In Ukraine, forces carried out the first fully unmanned military operation in December 2024 near the village of Lyptsi. In June 2025, Ukrainian operators smuggled FPV drones into Russia on trucks and simultaneously attacked multiple air bases, some as far as 4,300 kilometers from the front line, destroying up to 41 aircraft in what one estimate put at $2 to $7 billion in damage.
The international community has been trying to regulate this for over a decade. UN discussions on lethal autonomous weapons systems began in 2014, and a formal Group of Governmental Experts has met regularly since 2017. In December 2024, the UN General Assembly voted 166 to 3 to expand discussions on autonomous weapons regulation. The three opposing votes came from Belarus, North Korea, and Russia.
No binding treaty exists. The forum operates by consensus. Any single nation can block progress. Meanwhile, the technology being debated in conference rooms is being deployed in combat zones in real time.
Fifteen to One
There is another way to build AI for warfare. It just doesn't get funded the same way.
DroneShield, an Australian company, makes systems that detect and track hostile drones using radar, RF sensors, and acoustic arrays. Its market capitalization is roughly $2 billion. D-Fend Solutions, an Israeli firm, builds technology that takes over hostile drone control signals, redirecting them to a safe landing zone without kinetic engagement. Fortem Technologies in Utah fires nets from interceptor drones to capture threats mid-air. Epirus builds directed-energy systems that disable drone electronics with focused microwave pulses.
These are not theoretical products. They are deployed. DroneShield's systems operate in over 70 countries. D-Fend has contracts with the U.S. military and multiple allied nations. The technology to defend against autonomous weapons exists, works, and is commercially available.
The counter-drone industry is worth roughly $2 to $3 billion. Anduril alone is valued at more than fifteen times that figure.
The ratio tells you something about incentive structures. Venture capital funds projected revenue growth. A weapons company's revenue scales with threat perception; a shield company's revenue scales with the number of weapons already deployed. One creates the demand the other serves, but the weapons company gets the first check.
This is the gap Anthropic was trying to hold open. Not between AI and no AI. Between AI pointed in one direction and AI pointed in another. The same neural-network navigation that guides a drone to a target can guide an interceptor to that drone. The same computer vision that identifies a vehicle as hostile can identify a civilian vehicle and abort a strike. The technology is agnostic. The application is a choice.
The Question That Remains
Anthropic's refusal cost the company $200 million and its standing with the federal government. It may cost more. The Defense Production Act gives the executive branch authority to compel companies to produce goods deemed essential to national defense. Whether that authority extends to compelling an AI lab to remove safety restrictions from its models is an untested question that legal scholars have begun to analyze.
What is not a legal question is the trajectory. In 2018, 3,100 Google employees could stop a military AI contract. In 2026, the full weight of the company that invented the transformer architecture is pointed at defense, and the one company that held out has been designated a security threat. Every major lab, without exception, now works with the Pentagon or has removed its restrictions on doing so.
The defense-tech sector absorbed $48 billion in private investment in 2025. The counter-drone industry drew a fraction of that. The talent pipeline flows from consumer tech to weapons manufacturers. Anduril runs drone-racing competitions to recruit engineers the way Google once ran coding challenges.
Nobody voted on any of this. No referendum asked citizens whether AI should be built for offense or defense. No public debate weighed the $400 drone against the $40,000 one. The direction was chosen by procurement officers, venture capitalists, and executives negotiating contracts in rooms without cameras.
The money says swords. The question is whether anyone is left to make the case for shields.