AI Didn't Replace Workers. It Outran Their Managers.

Inside the management inversion: ten independent sources say companies are cutting the wrong layer.

This is a special edition of The Long View, our Sunday series where we step back from the week's headlines and follow the longer arcs: how AI is reshaping work, power, and the structures we built around both.

The Layoff Script

Block cut 4,000 jobs in February. Forty percent of its workforce, gone in a morning. CEO Jack Dorsey told employees that "intelligence tools fundamentally change what it means to build and run a company" and predicted most companies would follow within a year.

Two weeks later, Atlassian eliminated 1,600 positions (10% of its staff) to redirect hundreds of millions toward "the future of teamwork in the AI era." At Shopify, CEO Tobi Lutke told employees to "demonstrate why they cannot get what they want done using AI" before requesting headcount.

The script works. Investors hear "AI efficiency." Headlines write themselves. Stock ticks up. Nobody asks the follow-up question.

The data tells a different story. S&P Global found the share of companies abandoning most of their AI initiatives jumped from 17% in 2024 to 42% in 2025. Avi Goldfinger reported that 55% of CEOs who fired people "because of AI" already regret it. Most never replaced anyone with AI.

They are cutting people in the name of AI while simultaneously failing at AI. Call it what it is: a press release, not a strategy.

Two Claims Wearing the Same Suit

Tim O'Brien wrote the cleanest articulation of this in February: "When someone says 'a single engineer with AI can now do what ten engineers used to do,' they may be describing a genuine technological shift. They may also be providing cover for decisions that would have happened anyway, dressed in the language of capability rather than cost. These are different claims. They require different evidence."

Three possible realities coexist. AI accelerated some work. Companies are using AI as narrative cover for cost cuts driven by macroeconomic conditions. And the majority of CEOs who acted on the AI-replaces-workers premise already wish they hadn't.

Fortune reported in March that the narrative may be running backwards: capital is being redirected to AI infrastructure, and the cuts are funding the spending, not the other way around. The Guardian talked to Block workers after the layoffs. Their assessment: "You can't really AI that." The work didn't disappear. The headcount did.

Ed Zitron adds the financial context that makes the pattern legible. The software industry's decline predates AI. Private equity owns 30-40% of the SaaS sector, according to Apollo Global estimates, and hundreds of billions in software companies are now trapped in portfolios worth less than their acquisition price. Growth plateaued. Revenue flattened. AI arrived not as a cause but as the most investor-friendly explanation available. The cuts come dressed in capability language, but the receipts show cost language.

The Production Gap

AI compressed "blank slate to working proof of concept" from days to hours. The acceleration is real, and non-trivial. O'Brien calls it the "Apollo Project" phase, and the first time a server responds to your code, you can see why people get excited.

The hard part was never writing the first version. Between "it works on my machine" and "it handles real users at real scale" sits a gap that has not compressed at all: multi-region configuration, caching strategy, dependency failures at 2 AM, cost optimization, analytics that tell you something useful. O'Brien found his AI making "amazingly bad decisions like 'let's skip a yaml parser and just parse this as a string with regexp.'" He had to tell Sonnet 4.6 twice that a date field shouldn't be stored as varchar.

"A collection of Skills files, however carefully curated, doesn't install that understanding," he wrote. "It can scaffold the structure. It cannot substitute for the judgment that comes from having watched a system fail in interesting ways."

The pattern shows up at every level of the stack. Marco Kotrotsos reverse-engineered Claude Code's caching architecture and found seven places where the "obvious" implementation was wrong. Don't swap tools mid-session. Don't switch models mid-conversation. Implement plan mode as a tool call instead of a tool swap. Each decision contradicted what an AI would generate on the first pass. Each required understanding production systems at a depth that comes from watching them break. Phil at Rentier Digital saw the same thing from the client side: "A demo succeeds if it works once in front of someone. A product has to work ten thousand times while you're not watching."
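The "don't swap tools mid-session" rule stops looking arbitrary once you model how prefix caching behaves. A toy sketch (the hashing scheme is invented for illustration; it is not Claude Code's or any provider's actual implementation) assumes the serialized request is cacheable only along previously seen prefixes, with tool definitions serialized before the conversation:

```python
import hashlib
import json

def cache_key(prefix: list) -> str:
    """Deterministic key for one serialized request prefix."""
    return hashlib.sha256(json.dumps(prefix).encode()).hexdigest()

def record(cache: set, tools: list, messages: list) -> None:
    """After a call, every leading prefix of the request becomes cacheable."""
    prefix = [tools]  # tool definitions come first in the serialization
    cache.add(cache_key(prefix))
    for msg in messages:
        prefix.append(msg)
        cache.add(cache_key(prefix))

def cached_blocks(cache: set, tools: list, messages: list) -> int:
    """Count how many leading blocks of a new request hit the cache."""
    prefix, hits = [tools], 0
    if cache_key(prefix) in cache:
        hits += 1
    for msg in messages:
        prefix.append(msg)
        if cache_key(prefix) in cache:
            hits += 1
    return hits

cache: set = set()
tools_v1 = ["read_file", "run_tests"]
history = ["u: fix the bug", "a: reading file...", "u: now run tests"]
record(cache, tools_v1, history)

# Same tools, one new turn: everything but the new message is cached.
print(cached_blocks(cache, tools_v1, history + ["a: tests pass"]))  # 4

# Swap the tool list mid-session: the prefix diverges at position zero,
# so the entire conversation re-processes from scratch.
print(cached_blocks(cache, tools_v1 + ["deploy"], history))  # 0
```

Under this model, changing anything that serializes ahead of the conversation (tools, system prompt, model) invalidates every cached token behind it. That is the kind of decision that looks "obvious" to generate one way and turns out to be wrong for reasons only visible at the billing and latency layer.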

The numbers confirm what the practitioners are describing. According to Deloitte's 2025 survey, only 11% of organizations have agentic AI in production. Seventy-five percent are still planning or have no strategy at all. As Han Heloir Yan put it after surveying the field: the models are "spectacular." Everything around them is a mess.

The Inversion

For decades, the constraint on building software was build speed. Leadership planned, prioritized, allocated resources, tracked progress, and the bottleneck was always the same question: can the team build it fast enough? Management's cadence matched the build cadence. Quarterly planning made sense when quarters were how long things took.

AI broke that assumption. Builders accelerated; the coordination layer didn't.

Yan states it in technical terms: "The bottleneck has shifted from model capability to everything around the model. The strategy, the business case, the data readiness, the architectural decisions, the evaluation frameworks, the security posture." Translate that to organizational terms and it says the same thing: the constraint moved from "can we build it" to "do we know what to build, and can we support it once it's built?"

Something counterintuitive happened to the surface area of senior work. Pre-AI, engineers designed one system: the application. Now they design two: the application and the agentic system that builds, operates, or augments it. AI created more architecture work, not less. The demand for experienced engineers went up. Companies responded by cutting them.

The inversion looks like this: engineers shipping features faster than leadership can review them. Project plans outdated before the sprint starts. Managers organizing work around velocity assumptions that no longer hold. The people writing the code outpacing the organization's ability to decide where the code should point.

Yan found that the successful 5% of AI implementations started with specific, measurable business outcomes. "Reduce invoice processing time from 8 days to 2 days at 99.5% accuracy." The other 95% started with the technology and went looking for a problem. The technology didn't fail. Leadership failed to provide direction when the tools suddenly made direction the only thing that mattered.

Companies are laying off the people who sped up while the layer that didn't speed up stays intact. They are optimizing the wrong side of the equation.

Generals and Servant Leaders

There are two models of leadership, and one of them breaks when the people doing the work move faster than the people directing it.

The traditional model operates like a military command structure. Strategy cascades from the top, objectives flow downward through approval gates, and output gets reviewed at checkpoints. The whole system rests on an assumption: the bottleneck is execution.

The model that works when builders can prototype in hours operates differently. Leadership embeds in the work rather than sitting above it. Blockers get removed in real time instead of surfacing in a weekly retrospective. Direction adapts continuously because the cost of pivoting dropped. Tactical decisions get made by the people closest to the code, because they are moving too fast for approval queues to keep up.

Robert Greenleaf coined "servant leadership" in 1970, but for most organizations it was always optional. AI removed that option. When a two-week approval cycle sits on top of a two-hour prototyping cycle, the approval cycle is not just slow; it is the dominant constraint on the entire system. The organization can only move as fast as its slowest coordination layer, and that layer is no longer the builders.

This requires leadership to give up control. To accept that the value they add is not deciding what to build but making sure the right things get built, and getting out of the way on everything else. Most management structures are not designed for that. They are designed to approve, gate, review, and control. Those functions made sense when execution was the bottleneck. When execution accelerates, they become the thing that holds the organization back.

The Uncomfortable Question

If AI made individual contributors three to five times more productive but did not touch the management layer, the rational response is not to cut ICs. It is to restructure management.

That restructuring is not happening. CEOs report to boards, not to engineers. "We're cutting headcount and investing in AI" is a story Wall Street understands and rewards with share price bumps. "We're restructuring middle management because they can't keep up with AI-accelerated teams" is not a story anyone wants to tell.

It is easier to fire the people who got faster than to fix the process that got exposed.

There is a third option nobody talks about: leadership gives up entirely. Adobe CEO Shantanu Narayen stepped down in March after 18 years. Adobe beat Q1 earnings. Shares still fell 6-7%. The company that defined creative software for a generation lost its chief executive not to failure but to the dawning recognition that AI had changed what the job required. Atlassian's leadership cut 1,600 people and called it a pivot. When the inversion hits, some leaders restructure, some cut, and some walk away. Which response the organization can survive is not a rhetorical question.

The 55% regret statistic makes sense in this frame. Companies cut the wrong layer. The 42% abandonment rate tells the rest of the story: companies that cut experienced people cannot bridge the production gap, so the AI initiatives fail, so they abandon them. The layoffs did not enable AI. They crippled it.

Yan's data reinforces this from the technical side. His three-category framework shows that 80% of production AI value comes from structured workflows that need experienced architects to design the flowcharts, validation gates, and composition patterns. The remaining 20% (autonomous agents) requires even more senior judgment for durability layers, boundary-setting, and security. A 15-year engineer watching AI write code is not obsolete. They are the only person in the room who can tell you which category the problem belongs in, and that decision determines whether the project ships or joins the 42%.

Ten Voices, One Finding

No single source makes this argument. Ten independent voices from different domains do, without coordination.

O'Brien, writing from engineering philosophy, sees a production gap that human judgment alone can bridge. Kotrotsos, working at the infrastructure level, finds seven "obvious" implementations that were all wrong. Yan, analyzing agentic architecture, counts 75% of organizations unable to move from demo to production. Phil at Rentier, building products for clients, watches demos get mistaken for products. Zitron, tracking financial flows, sees a SaaS industry that was already dying before AI became the convenient explanation. Fortune's labor economists note that the narrative is running backwards. Goldfinger surveys the CEOs and finds 55% regret. S&P Global tracks AI-initiative abandonment rising from 17% to 42%. Eran Zinman at Monday.com offers the counter-example of leadership that adapted instead of cutting. And Klarna provides the cautionary tale: replace humans with AI, watch quality drop, hire them back.

Philosophy, infrastructure, architecture, practice, finance, labor economics, survey data, market research, and executive leadership. Separate domains, separate incentives. The convergence is the signal.

The Counter-Examples

Monday.com lost roughly 80% of its market value after the SaaS selloff that followed Anthropic's Cowork launch, according to co-CEO Eran Zinman on the 20VC podcast. The market priced the company as dead. Zinman's response was not the AI layoff playbook.

When Monday.com automated the work of roughly 100 sales development representatives with AI, response times dropped from 24 hours to 3 minutes. Every metric improved. Zinman redeployed the SDRs to outbound prospecting instead of firing them. His reasoning: "Every time we eliminate one bottleneck, a new one emerges. Many bottlenecks aren't in code."

That sentence is the management inversion articulated from the CEO's chair. The bottleneck moved, so the people moved with it. When the constraint shifted from execution to strategy, Zinman didn't eliminate the people. He pointed them at the new constraint.

Then there is Klarna, the cautionary tale from the other direction. The Swedish fintech became the poster child for AI-replaces-workers after CEO Sebastian Siemiatkowski bragged that AI could do the work of 700 employees and stopped hiring humans entirely. The AI-first confidence lasted about two years. Then quality dropped. Siemiatkowski admitted publicly that the company was hiring people again. Vice's headline wrote itself: "This Company Replaced Workers With AI. Now They're Looking for Humans Again."

Three companies, three responses to the same pressure. Block cut 4,000 people and declared AI had changed what it meant to run a company. Klarna went all-in on replacing humans with AI, then quietly reversed course when the work degraded. Monday.com, facing an even more severe market correction, recognized that AI changed where the work was, not whether humans needed to do it. Two of those three are now dealing with the consequences of cutting the wrong layer.

What Restructuring Would Actually Look Like

If a company took the inversion seriously, it would not start with headcount. It would start with approval chains.

If builders can ship in a day, a five-person approval chain is the dominant constraint. Not the code. Not the engineers. The sequence of people who need to say yes before anything moves. Flatten it.

Embed leadership in the work itself. Not standups-as-status-reports. Pairing with the people building, understanding what is happening in real time rather than reviewing it after the fact. When prototypes take hours instead of weeks, a two-week review cycle becomes the bottleneck for the entire system.

Kill quarterly planning as the primary cadence. The cost of starting something wrong dropped because AI can produce a new prototype in an afternoon. The cost of continuing something wrong did not. Continuous prioritization replaces the quarterly roadmap.

Invest in production engineering, not just production initiation. O'Brien documented the gap. Kotrotsos documented seven caching decisions that required judgment AI does not have. Yan's 80/20 split makes the staffing case: the structured workflows need experienced architects for validation gates and composition patterns, and the autonomous agents need even more senior judgment for durability layers, boundary-setting, security posture, and kill switches.

Measure leadership by unblocking speed, not span of control. How fast can the team move? What is slowing them down? If the answer keeps coming back to "waiting for approval," the constraint is organizational, not technical.

Direction Over Speed

AI gave organizations speed. That was never the question. The question was always: are we going in the right direction?

Direction is a leadership function. It requires understanding what customers need, what the technology can and cannot do in production, where the organization's strengths are, and which bets to make when the cost of experimentation drops but the cost of operating the wrong experiment at scale does not.

The companies cutting workers and calling it AI strategy are answering the wrong question. They are optimizing for speed on a route no one has checked against a map. The 42% abandonment rate is what happens when organizations run fast without knowing where they are going. The 55% CEO regret rate is what happens when they realize, too late, that they cut the people who knew the way.

The companies that figure out the management inversion, that restructure coordination to match the new velocity of execution, will capture what AI makes possible. The rest will have lean teams, fast tools, and no idea where they're headed.