What If — February 19, 2026

Three scenarios. Two from the past, one from the future. Each follows the thread wherever it goes.


A. WHAT IF: What if Fei-Fei Li had kept ImageNet behind a paywall in 2009?

In 2009, Stanford computer scientist Fei-Fei Li and her team finished assembling ImageNet — 14 million labeled images sorted into more than 20,000 categories, most of them tagged by workers on Amazon Mechanical Turk at a cost of roughly two cents per label. The dataset was released for free. Any researcher anywhere could download it and start training models. Three years later, Alex Krizhevsky — working with Ilya Sutskever and Geoffrey Hinton — used it to train AlexNet, a deep neural network that crushed the competition at the 2012 ImageNet Large Scale Visual Recognition Challenge and kicked off the deep learning revolution. That revolution led, in a fairly direct line, to where we are today.

But Li nearly didn't release it freely. Academic datasets at the time were often restricted — gated behind institutional access agreements, shared only with collaborators, or held back for exclusive publication advantage. The default in computer science was closer to "share with your lab" than "share with the world." What if Li had followed the norm? What if ImageNet had been licensed — say, $50,000 per year for commercial use, free for Stanford affiliates?

The First Cascade

The 2012 ImageNet Challenge still happens. But the field of competitors narrows. Instead of labs worldwide downloading the dataset and running experiments on consumer GPUs, access concentrates among well-funded university groups and a handful of corporations willing to pay. Geoffrey Hinton's group at the University of Toronto might still produce something like AlexNet — Toronto had the relationship and the budget. But the hundreds of follow-on experiments that happened between 2012 and 2015, the ones that refined convolutional networks, that tested architectures, that proved deep learning wasn't a fluke — those happen slower and in fewer places. The "ImageNet moment" still occurs, but it reads more like a proprietary breakthrough than an open revolution. Google still acquires Hinton's startup, DNNresearch (as it did in 2013). But the barrier to entry for smaller players rises. A PhD student in Taipei or a startup in Montreal doesn't get the same running start.

The Second Cascade

By 2015, deep learning is dominated by three or four institutions instead of dozens. The open-source ecosystem that produced PyTorch, TensorFlow, and the culture of "release your model weights" develops differently — or later. When researchers can't access the foundational training data freely, they build proprietary pipelines. The norm becomes corporate research labs publishing papers but not releasing models or data. This is the world that benefits incumbents. Google, Microsoft, and Facebook still invest heavily in AI. But the "garage AI" energy of the mid-2010s — the startups training models on rented cloud GPUs — never materializes at the same scale. The talent pipeline shifts. Instead of thousands of self-taught deep learning practitioners who learned on freely available datasets, the field stays credentialed and institutional.

The Third Cascade

Fast-forward to 2026. AI is still powerful, but the competitive landscape looks more like the pharmaceutical industry than the tech industry: a small number of large players with enormous R&D budgets, competing on proprietary data advantages. The open-source AI movement — Meta's Llama, Alibaba's Qwen, Mistral — probably never happens, because the cultural expectation of openness in AI research was never established. The AI Scare Trade still comes, eventually. But it arrives later, concentrated in fewer hands, and the public debate about it is even more confused because fewer people outside the industry understand what these systems actually do.

The Skeptic's Rebuttal

The deep learning revolution was probably overdetermined. ImageNet accelerated it, but CIFAR-10, MNIST, and other datasets existed. Compute was getting cheap regardless. Someone would have assembled a large-scale image dataset eventually. Li's specific decision may have shifted the timeline by two or three years, not by a decade.

The Takeaway

How much of the current AI landscape — its openness, its speed, its distributed talent base — traces back to a single researcher's decision to give something away for free?


B. WHAT IF: What if Saudi Arabia had invested its $3 billion in Anthropic instead of xAI?

This week, Saudi Arabia's HUMAIN announced a $3 billion investment in Elon Musk's xAI as part of its Series E, with shares subsequently converting into SpaceX equity. The deal deepened the axis between Saudi sovereign wealth and the Musk conglomerate — a private entity now combining AI, satellite communications, and space launch infrastructure. California's attorney general was simultaneously investigating xAI's Grok model for generating non-consensual explicit images.

What if HUMAIN's investment committee had gone the other direction? What if the $3 billion had gone to Anthropic instead?

The First Cascade

Anthropic, already valued at roughly $60 billion after its Series E in late 2025, absorbs the capital without significant dilution. But the signal matters more than the money. Saudi sovereign wealth picking the safety-focused lab over the "move fast" lab reframes the geopolitical AI narrative overnight. Anthropic's Dario Amodei, who has spent years positioning the company as the responsible alternative, suddenly has the backing of one of the world's largest sovereign wealth funds. The read in Washington: even the Saudis think safety-first AI is the better bet. Anthropic's lobbying position on AI regulation strengthens. Its access to Gulf state compute infrastructure opens up.

The Second Cascade

Musk's xAI, without the Saudi capital, faces a different funding landscape. The xAI-SpaceX merger still happens — the infrastructure synergies are too obvious — but the combined entity is less flush. More critically, the narrative shifts. The "Musk axis" (xAI + SpaceX + Starlink + X/Twitter) loses its most prominent sovereign backer. Saudi Arabia's $50 billion AI campus outside Riyadh starts running Anthropic's Claude instead of Grok. The Global South AI infrastructure being built at the India summit this week — the data centers, the partnerships, the sovereign AI platforms — tilts toward Anthropic's safety standards as a baseline, because Saudi money comes with Saudi preferences. India's Sarvam AI platform, looking for international partners, finds Anthropic a more palatable ally than the company under investigation for generating explicit images.

The Third Cascade

By late 2026, the AI safety landscape looks different. Not because $3 billion changed Anthropic's research — they were already well-funded — but because the sovereign wealth signal legitimized safety-focused AI as the economically rational choice, not just the ethically preferable one. The California AG's investigation into Grok carries more weight when xAI can't point to sovereign backing as a vote of confidence. Meanwhile, Anthropic finds itself in an uncomfortable position: a safety-focused company backed by a government with its own surveillance and human rights record. The contradictions don't resolve. They just move.

The Skeptic's Rebuttal

Sovereign wealth funds don't pick AI labs based on safety philosophy. They pick them based on return potential and strategic positioning. xAI's integration with SpaceX and Starlink offered infrastructure exposure that Anthropic, a pure AI play, couldn't match. The HUMAIN deal was about diversification and access to space infrastructure, not an endorsement of Musk's approach to content moderation.

The Takeaway

When a sovereign wealth fund picks an AI company, is it investing in the technology, the ideology, or the infrastructure? And does the distinction matter?


C. WHAT IF: What if the AI Scare Trade doesn't stop at financial services?

This week, markets rotated violently out of wealth management stocks. Charles Schwab dropped 7%. Raymond James and LPL Financial fell 8%. London's St. James's Place cratered 20% in a single session. The catalyst: Altruist, a fintech firm, launched Hazel — an autonomous AI platform generating complex tax and estate strategies without human intervention. The same day, Fed Governor Michael Barr delivered a speech outlining three scenarios for AI's labor market impact, including one where a large share of the population becomes "essentially unemployable."

The market repriced wealth management in a day. What happens if it reprices everything else?

The First Cascade — Summer 2026

The rotation spreads to legal services by April. Not because a single product launches — there's no "Hazel for lawyers" moment — but because earnings calls start including the phrase "AI-augmented headcount reduction." Two Am Law 100 firms announce they're cutting associate classes by 40%, replacing first-year document review and contract drafting with AI systems. The stocks of publicly traded legal services companies (Thomson Reuters, LegalZoom, RELX) diverge sharply: pure legal labor plays drop, while legal AI infrastructure plays surge. A 34-year-old associate at a mid-size firm in Dallas checks LinkedIn and sees three of her law school classmates have been laid off in the same week. She starts looking at MBA programs, then realizes those are probably next.

The Second Cascade — Late 2026

The labor market data catches up to the market data. Unemployment among college-educated workers aged 22 to 30 ticks up for five consecutive months — a pattern not seen since 2008, but this time concentrated in knowledge work rather than construction or manufacturing. Universities start seeing application drops in law, business, and accounting programs. Medical schools hold steady — "AI can't do surgery" becomes the new "AI can't drive." (It could, actually, but the regulatory and liability barriers are real.) The political class notices. "AI displacement" enters the 2026 midterm vocabulary. Two Senate candidates in swing states run on "AI accountability" platforms. Neither of them has a specific policy proposal, but the phrase polls well.

The Third Cascade — 2028

The professional licensing system — bar exams, CPA exams, medical boards — faces an existential question. If an AI can pass the bar exam at the 90th percentile (OpenAI reported this for GPT-4 in 2023), and an AI platform can generate tax strategies that outperform human advisors (Hazel claims this now), what is the economic function of the license? The answer, it turns out, is liability. Someone has to be responsible when things go wrong. The professions don't disappear. They restructure around a smaller number of licensed humans who oversee AI systems and carry the malpractice insurance. A radiologist in Milwaukee doesn't lose her job. But her department shrinks from twelve radiologists to four, each reviewing AI-flagged images rather than reading scans from scratch. The work changes. The numbers change more.

The Skeptic's Rebuttal

Markets overshoot. The AI Scare Trade looks a lot like the robo-advisor panic of 2015, when Betterment and Wealthfront were supposed to kill traditional wealth management. They didn't. Professional services firms have survived automation scares for decades by adapting, bundling AI tools into their offerings, and charging for the human judgment layer. The 20% single-day drops in established firms are buying opportunities, not structural repricing.

The Takeaway

When markets reprice an entire category of human labor in a single trading session, are they predicting the future or just panicking about it? And at what point does the panic itself become the catalyst?


Three thought experiments. None of them predictions. All of them following real logic from real events to see where the thread leads. The Skeptic has the last word on each one, because the strongest test of any scenario is the best argument against it.