What If — February 26, 2026
WHAT IF: What if Google had renewed Project Maven in 2018?
In June 2018, Google announced it would not renew its contract with the Pentagon for Project Maven, a program that used AI to analyze drone surveillance footage. Around 4,000 Google employees had signed a petition opposing the work, and a handful of engineers resigned. Google published a set of AI principles that excluded weapons applications. But what if Google's leadership had held firm, kept the contract, and told the petitioners to find other jobs?
The First Cascade
Google renews Maven in late 2018. A few dozen engineers leave. The press cycle lasts two weeks. Most employees stay -- the pay is too good, the stock too valuable, the job market not quite hot enough to absorb thousands of principled defectors all at once. Google quietly expands its defense AI division through 2019, hiring former Palantir engineers and partnering with Lockheed Martin on a next-generation targeting system. By 2020, Google holds the largest defense AI portfolio of any tech company, with contracts across the Army, Navy, and intelligence community. The Pentagon never needs to spread its bets across smaller, less capable firms.
The Second Cascade
With Google dominating military AI, Anthropic never gets a defense contract. Dario Amodei founds Anthropic in 2021 as planned, but without the gravitational pull of government money, the company stays smaller and more research-focused. It never builds the enterprise sales team that landed the DoD deal. The Pentagon, meanwhile, has no reason to pressure any AI company on safety guardrails -- Google already gave them everything they wanted years ago. The entire debate about whether safety-focused AI labs can coexist with military contracts never happens, because the question was settled before anyone thought to ask it.
The Third Cascade
By February 2026, AI safety as a corporate identity doesn't exist. Google's willingness to do defense work in 2018 established the norm: if you build capable AI, you work with the government. No opt-out, no AI principles posted on a blog. Anthropic, in this timeline, is a well-regarded research lab with 200 employees and no commercial products. The Pentagon doesn't need to issue ultimatums to anyone. The concept of a company built around "responsible AI" as a market differentiator never crystallizes, because Google proved in 2018 that principles are just press releases you eventually take down.
Skeptic's Rebuttal: Google's culture was genuinely hostile to defense work. Even if leadership renewed Maven, the internal resistance would have escalated -- leaks, sabotage, executive departures. The company might have fractured before it could build a defense division. And the Pentagon's procurement bureaucracy moves slowly enough that Google's technical advantages might not have translated into dominance.
Takeaway: Does AI safety exist as a field because of genuine conviction, or because Google's 2018 decision created a market gap that other companies filled?
WHAT IF: What if Anthropic had delayed Claude Cowork by six months?
On January 30, 2026, Anthropic added industry-specific plug-ins to Claude Cowork, its enterprise agent platform that had launched in research preview earlier that month. A week later, on February 5, Claude Opus 4.6 dropped. Within three weeks, companies in the S&P 500 Software and Services index had shed over $1 trillion in combined market value, though broader market anxiety over tariffs and rate uncertainty amplified the decline. Analysts called it "Software-mageddon." SaaS companies saw their price-to-earnings multiples compress overnight as investors suddenly believed AI agents could replace entire product categories. But what if Anthropic had pushed the Cowork launch to July?
The First Cascade
Without the Cowork plug-ins as a concrete demonstration of what AI agents could do to enterprise software, the "Software-mageddon" sell-off doesn't happen in February. Opus 4.6 still launches, and reviewers call it impressive, but a model without a product isn't a threat to Salesforce's revenue. SaaS stocks drift lower through the spring on general AI anxiety, losing maybe 8-10% instead of cratering. The Citrini Research report lands in late February anyway, but without a real product to point to, it reads as speculation rather than prophecy. Markets shrug it off.
The Second Cascade
WiseTech still cuts staff -- the CEO had been looking for an excuse -- but the announcement lands differently. Without the broader market narrative of AI agents eating software, investors read it as a cost-cutting play, not a structural shift. The stock rises 4% instead of 11%. Fed Governor Lisa Cook still mentions AI displacement in a speech, but it's one paragraph in a talk about labor markets, not the headline. The "AI is coming for your job" news cycle that dominated February 2026 loses its sharpest edge. Entry-level software engineers are still anxious, but the ambient dread drops from a roar to a hum.
The Third Cascade
By July, when Cowork finally launches, the market has had six months to prepare. Hedge funds have already repositioned. SaaS companies have started integrating their own agent features. The sell-off still happens, but it's 40% smaller. The trillion-dollar "Software-mageddon" becomes a $400 billion correction that most people forget within a quarter. The structural story is the same -- AI agents are replacing SaaS -- but the human experience of it is less like a car crash and more like a slow lane change. The panic, the doomsday reports, the "Occupy Silicon Valley" fears that Citrini described -- they never quite ignite, because the fuse burned too slowly.
Skeptic's Rebuttal: Markets were already primed for a SaaS correction. If not Cowork, something else would have triggered it -- maybe Google's Gemini agents, maybe an open-source tool. The sell-off might have been delayed by weeks, not months. And Anthropic's competitors would have filled the gap, meaning the market impact could have been distributed rather than concentrated.
Takeaway: How much of a trillion-dollar market event was driven by the actual technology, and how much by the timing of a single product launch?
WHAT IF: What if Anthropic walks away from the Pentagon this Friday?
Defense Secretary Pete Hegseth gave Anthropic until Friday to remove safety restrictions on Claude's military deployment or lose its $200 million DoD contract. Anthropic is the only AI company currently integrated into classified military systems. The Pentagon's fallback is xAI's Grok: xAI signed its own classified systems deal days ago, but Grok hasn't been integrated yet. As of Thursday morning, Anthropic hasn't publicly responded. What if it says no?
The First Cascade
Anthropic's board meets Thursday night. The vote is close. The safety team argues that removing guardrails from a model deployed on classified systems creates liability no amount of government revenue can offset. The commercial team points out that $200 million is roughly 8% of projected 2026 revenue and that losing DoD credibility will cost enterprise deals. The board votes 5-3 to reject the Pentagon's terms. Friday morning, Anthropic's general counsel sends a letter to the Department of Defense: Claude's safety architecture is not negotiable. The contract lapses. By Friday afternoon, the Pentagon announces an emergency procurement review. Anthropic faces immediate financial pressure as enterprise customers pause new contracts until the fallout becomes clear.
The Second Cascade
Over the next two months, something unexpected happens. Enterprise customers who had been quietly evaluating Google and OpenAI alternatives start signing with Anthropic instead. The reasoning, repeated in procurement meetings from JPMorgan to Siemens: if Anthropic won't remove safety guardrails for the Pentagon, they won't remove them for anyone. In regulated industries -- banking, healthcare, defense contracting -- that guarantee is worth more than the lost government revenue. Anthropic's enterprise pipeline grows 30% by April. Meanwhile, the Pentagon scrambles to integrate Grok into classified systems. The process, which normally takes 18 months of security review, is being compressed into six. Career officials at the NSA and CIA push back quietly, citing concerns about xAI's data handling and Elon Musk's public social media behavior as security risks.
The Third Cascade
By late 2026, two parallel AI ecosystems are forming. One serves governments that want unrestricted AI -- the Pentagon, the Gulf states, countries that never cared about guardrails. xAI, plus a handful of Chinese and Israeli firms, compete for this market. The other serves enterprises and allied democracies that want auditable, safety-constrained systems. Anthropic leads this tier, followed by Google DeepMind. OpenAI tries to straddle both markets and satisfies neither. The bifurcation isn't clean -- it never is -- but the Friday when Anthropic said no becomes the date people point to when they try to explain how AI governance split into two incompatible regimes.
Skeptic's Rebuttal: Anthropic is unlikely to walk away from $200 million and the prestige of being the Pentagon's AI provider. The more probable outcome is a quiet compromise -- limited guardrail modifications for specific classified use cases, with enough ambiguity for both sides to claim they held their ground. Companies rarely choose principle over revenue at this scale.
Takeaway: If the only company built around AI safety can't afford to act on its principles when it matters most, what was the point of building it that way?
Three scenarios. Three questions about what happens when companies, markets, and governments collide with AI faster than anyone planned for. None of these are predictions. All of them are plausible. The real timeline, as always, will be stranger than any of them.