THE LONG VIEW
Future Shock Weekly — February 16-22, 2026
In 1825, the Stockton and Darlington Railway opened in northeast England. Within two years, canal company shares had lost a third of their value. The canals still worked fine. The boats still floated. Nothing about the physical infrastructure had changed. What changed was that investors could suddenly see the future, and it didn't have canals in it.
This week, the stock market decided it could see a future without SaaS companies. The S&P 500 Software and Services Index shed nearly a trillion dollars in market cap over six weeks, with forward earnings multiples collapsing from 39x to 21x. Figma is down 79% from its IPO. Meanwhile, a Federal Reserve governor used the word "unemployable" in prepared remarks. India convened 88 nations to sign an AI governance declaration. The Pentagon threatened to brand Anthropic a national security risk for refusing to let Claude help fire weapons autonomously. And OpenAI started running ads inside ChatGPT.
None of these things, individually, would define a week. Together they describe something specific: the moment institutions began reacting to AI not as a technology to evaluate, but as a force already reshaping the ground beneath them. The reactions are clumsy, contradictory, and occasionally absurd. That's what early adaptation looks like.
The Institutional Scramble
The biggest story of this week isn't any single event. It's the pattern.
Anthropic built Claude with safety constraints that, among other things, prevent its use in autonomous weapons systems and mass surveillance. The Pentagon wants a $200 million contract with no such restrictions. When Anthropic insisted on carve-outs, Defense Secretary Hegseth's office floated designating the company a "supply chain risk" — a label normally reserved for firms doing business with sanctioned nations like China. That designation would require every Pentagon contractor to certify they don't use Claude.
The standoff is clarifying because both sides are behaving rationally within their own logic. The military's position is that a vendor's ethics policy shouldn't override a commander's lawful orders. Anthropic's position is that "lawful" and "wise" aren't synonyms, and they built safety commitments into their corporate structure for a reason. Neither side is wrong on its own terms. That's what makes it a genuine dilemma rather than a simple controversy.
At the same time, India's AI Impact Summit concluded with the New Delhi Declaration, signed by 88 countries including the US, China, and Russia. The declaration's seven pillars (democratized access, economic growth, trusted AI, scientific application, social empowerment, human capital, resilient systems) are broad enough to achieve consensus and vague enough to mean almost nothing. But India's real play isn't the document. It's the positioning. By hosting the first major AI summit in the Global South, extracting $250 billion in investment pledges from tech CEOs, and enacting content-labeling regulations with 2-3 hour deepfake takedown windows, India is constructing itself as a third pole in AI governance between Washington and Beijing. Whether it can execute on that ambition is a separate question from whether the ambition matters. It matters.
And then there's the SaaSpocalypse. Anthropic's systematic release of industry-specific plugins and tools — legal, tax, security, sales — has triggered what analysts are calling the most significant tech capital realignment since the dot-com bust. Claude Code Security, launched February 20, reasons about vulnerabilities the way a human researcher would instead of matching static rule patterns. Cybersecurity stocks dropped on the announcement. The market is pricing in a future where one AI agent replaces five software licenses, and per-user SaaS pricing models break.
The dot-com comparison cuts both ways. The internet did transform everything, but the specific predictions of 2000 were wrong about both the timeline and the winners. Some of these software companies will look like bargains in two years. Others won't exist. The market doesn't know which is which, and that uncertainty is doing the damage.
The Quiet Commercialization
While the Anthropic-Pentagon standoff grabbed headlines, OpenAI crossed a different kind of Rubicon with much less noise. ChatGPT now shows ads to free-tier users in the US, with sponsored placements from brands like Expedia appearing after a user's first prompt. Over 30 advertisers have run campaigns this month. The CPM reportedly starts at $60.
This deserves more attention than it's getting. For three years, ChatGPT trained hundreds of millions of people to treat an AI assistant as a trusted conversational partner. The relationship feels intimate in a way that search never did — people don't confide in Google, but they do tell ChatGPT about their anxieties, their health problems, their creative ambitions. Inserting commercial messages into that channel changes its character. Not because ads are inherently evil, but because the value proposition shifts. "I'm here to help you" becomes "I'm here to help you, and also to help Expedia sell you a hotel room based on what you just told me about your divorce."
OpenAI has said personalized targeting — using chat history, ad interactions, and saved memories — started this month as an opt-in feature. The company needs revenue to justify its valuation and fund compute. Nobody should be surprised. But the transition from tool to advertising platform has historically degraded every product it's touched, from television to social media to web search. The question is whether conversational AI will be different, or whether OpenAI just discovered the same gravity that pulled Google from "don't be evil" to whatever Google is now.
The Grieving Before the Loss
January 2026 saw 108,435 US job cuts — the worst monthly total since 2009. A Yale Budget Lab study found no macroeconomic evidence that AI is actually driving displacement. Sam Altman said at the India summit that companies are "AI-washing" layoffs they'd do anyway. The Challenger data shows AI was explicitly cited in only about 7,600 of January's cuts.
And yet: 71% of Americans worry about AI taking their jobs, according to Reuters/Ipsos polling. Fed Governor Michael Barr outlined three scenarios for AI's labor impact, including one where a "large share of the population" becomes "essentially unemployable." Dario Amodei has said half of entry-level white-collar jobs could disappear within five years. Bernie Sanders, speaking at Stanford on February 21, called for a moratorium on AI data center expansion and warned of an "AI tsunami."
The gap between current evidence (minimal displacement) and forward-looking warnings (catastrophic displacement) creates a strange psychological condition. A society is pre-grieving a disruption that hasn't fully arrived. Workers are internalizing the narrative and losing bargaining power even when AI isn't the cause of their layoff. Companies are using "AI transformation" as cover for cuts driven by poor management or market conditions. The fear itself becomes an economic force — suppressing wages, discouraging career investment in "exposed" fields, and reshaping how an entire generation of young workers thinks about their future.
The Organizational Psychologist on our council put it starkly: when a Fed governor uses the word "unemployable" about a large share of the population — not "displaced," not "transitioning," but unemployable — that's a change in how institutions conceive of human economic value. Populations described as "surplus" by elites don't respond with orderly retraining programs. They respond with populism, withdrawal, or worse.
The Bookshelf
Carlota Perez, Technological Revolutions and Financial Capital (2002)
Perez argues that every major technological revolution follows the same pattern: an installation phase driven by financial speculation and creative destruction, followed by a crash, followed by a deployment phase where the technology actually transforms daily life. The installation phase overinvests in the new and underinvests in adaptation. The crash forces a reckoning. The deployment phase is where the real gains happen — but only if institutions adapt.
We are watching the installation-phase crash in real time. The trillion-dollar software selloff, the frantic capital rotation into "AI-proof" sectors, the policy scrambles in Delhi and Washington and Sacramento — this is what Perez's framework predicts. The technology arrived faster than institutions could absorb it. The financial system is repricing violently. The question her framework raises is whether what comes next is a productive deployment phase or a prolonged period of institutional failure. In the five revolutions Perez studied, the answer depended almost entirely on whether governments built the right bridges between the technology and the people it disrupted. That question is open, and this week did not make the answer more encouraging.
The Week Ahead
Three things to watch in the coming days:
Will the Pentagon follow through on its "supply chain risk" threat against Anthropic, or will the standoff produce a compromise that sets precedent for how AI companies negotiate military contracts? The resolution matters far beyond this single deal.
How will enterprise software companies respond to the SaaSpocalypse in their Q4 earnings calls? The gap between market panic and actual contract cancellations will become visible. Watch for specific numbers on AI-driven "seat compression."
India's new AI content-labeling rules took effect February 20. The first enforcement actions — or the first visible non-compliance by major platforms — will signal whether India's regulatory ambitions have teeth.