What If Exposure Breeds Exhaustion?

The tech industry assumes more exposure to AI converts skeptics into adopters. Four independent data sources say otherwise.

Watercolor illustration of a person walking through a street overwhelmed by signs and screens all demanding attention
Image Generated with Nano Banana 2

This is Part 1 of a three-part series on the future of AI adoption.

A developer spent three weekends in February building an AI automation pipeline. Custom tools, memory layer, orchestration logic. It works, and it's genuinely useful. When a colleague asked how it was going, he said: "Honestly? I'm tired. I just want to use one thing that doesn't change for six months."

He isn't an AI skeptic — he ships production code that relies on three different models, reads the papers, argues about context windows at dinner. By any reasonable measure he's one of the most engaged users in the field, and he's completely worn out.

The tech industry has a core assumption baked into every product launch, every workplace integration, every enterprise sales pitch: if you just get people using AI tools, they'll come around. It's the logic behind mandatory Microsoft Copilot rollouts, behind "AI-first" workplace mandates, behind a thousand LinkedIn posts about "meeting people where they are." But four independent data sources, covering hundreds of thousands of respondents across two years, tell a different story.

The Numbers

Stack Overflow's 2025 Developer Survey found that 84% of developers now use AI tools in their workflow, up from 76% the year before, with usage climbing steadily. But positive sentiment about those tools dropped from above 70% in 2023 and 2024 to 60% in 2025. Among experienced developers, only 2.6% say they "highly trust" AI output. Twenty percent say they "highly distrust" it. The more people use AI tools, the less they seem to like them.

The Gallup/Walton Family Foundation study on Gen Z, published April 2026, found excitement about AI dropped 14 points year-over-year to 22%. Hopefulness fell 9 points to 18%. Anger rose 9 points to 31%. Even among daily users, excitement dropped 18 points. The most digitally native generation, the cohort the industry is building for, is getting less enthusiastic the more they use the tools.

Pew Research reported in September 2025 that 50% of U.S. adults say AI in daily life makes them more concerned than excited, with a March 2026 update confirming the trend is holding. YouGov found 40% of Americans hold a more negative view of generative AI than they did a year prior. Only 16% feel more positive.

This isn't one outlier poll. Across developers, Gen Z, and the general public, the pattern is consistent: the more people use AI, the less enthusiastic they become about it.
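The Gallup figures above report changes in percentage points rather than prior-year levels, but those levels follow from simple arithmetic. This is an illustrative sketch only, using no data beyond the numbers quoted in this article, that backs out the prior-year values from the 2026 figures and their year-over-year deltas:

```python
# 2026 Gallup/Walton figures quoted above (percent of Gen Z respondents)
gallup_2026 = {"excitement": 22, "hopefulness": 18, "anger": 31}

# Year-over-year change in percentage points, also quoted above
yoy_change = {"excitement": -14, "hopefulness": -9, "anger": +9}

# Prior-year level = 2026 level minus the year-over-year change
prior_year = {k: gallup_2026[k] - yoy_change[k] for k in gallup_2026}

for metric in gallup_2026:
    print(f"{metric}: {prior_year[metric]}% -> {gallup_2026[metric]}%")
```

Run it and the shape of the shift is plain: excitement fell from 36% to 22%, hopefulness from 27% to 18%, while anger climbed from 22% to 31%.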

The Homework That Never Stops

Most people don't experience new workplace technology as chosen exploration; they experience it as an assignment. Learn this tool, now learn the update, now learn the replacement for the tool you just learned. Corporate IT has been handing out homework for decades, and most employees approach it the way most students approach homework: grudgingly, minimally, and with mounting resentment.

The difference is that previous homework ended. Learning Excel in 1995 was a chore, but the chore had a finish line. You learned the formulas, built your spreadsheets, and used roughly the same product for a decade. The interface was stable and the mental model held for years. AI tools don't work like that. The product you learned in January behaves differently by April. Prompting strategies from last quarter are already obsolete. The model you built a workflow around gets deprecated without warning. The homework never has a due date because it never actually ends.

Harvard Business Review published research in February 2026 showing AI tools consistently intensify work rather than reduce it. Employees report a faster pace and broader scope, but the promised time savings don't materialize. Fortune found that time spent on email doubled while focused work sessions dropped 9%. A Boston Consulting Group study coined the term "AI brain fry" for the mental fatigue that comes from constant oversight of AI-generated output. The tools are doing more, and so are the people operating them.

The pitch is "AI will save you time." The lived experience is "here's another system to learn, configure, prompt correctly, verify the output of, and troubleshoot when it hallucinates." The gap between the pitch and the lived experience is the whole problem.

The Smart Home Preview

Smart home adoption already showed us how this plays out. The enthusiast's house runs on motion sensors, automated routines, and voice-controlled everything, with dozens of hours of configuration and troubleshooting behind the seamless experience. The enthusiast genuinely loves it, but the enthusiast's partner, who just wants to turn on a light without consulting three different apps, has a different perspective.

The normal person's smart home is a speaker that sometimes misunderstands them, a thermostat that resets itself, and a different app for every device. Deloitte found that reliance on individual apps leads to frustration as devices multiply. CEPRO's 2026 survey identified the top complaints: functionality issues (23%), overabundance of apps (20%), and device incompatibility (18%). The enthusiast experience and the normal-person experience are two entirely different products sold under the same name.

AI tools are following the same trajectory, where power users build impressive custom workflows while everyone else gets a chatbot grafted onto their existing software with a popup that says "Try AI!" and a learning curve that resets every quarter.

Even the Converts Are Tired

The last escape hatch for the exposure-breeds-adoption thesis is that exhaustion is a casual-user problem. The argument goes: people who really use AI tools love them. The dissatisfaction is coming from people who haven't invested enough time.

The Gallup data dismantles this. The people who chose to use AI tools every single day, not reluctant adopters dragged into mandatory training, saw their excitement drop 18 points year-over-year. They haven't stopped using AI, but they're noticeably less enthusiastic about it.

Building an AI workflow is a project, but maintaining one is a job because models update, APIs deprecate, context windows shift, pricing changes, and capabilities appear and disappear between versions. The enthusiast doesn't learn a tool and move on so much as enter a relationship with a moving target, like renovating a house that rearranges its own rooms every few months.

Even experienced practitioners report hitting a wall, with developers who orchestrate multi-agent systems describing real cognitive fatigue after sustained sessions as the skill ceiling keeps rising.

The Counterarguments

Two objections deserve serious consideration.

Every new technology goes through this. Email was overwhelming in 1998, smartphones were addictive in 2010, and social media caused anxiety spirals in 2015. Technology adoption always involves a disillusionment phase before the tools mature and settle into accepted utility, and this one will too, so give AI time.

The pattern is real for technologies that stabilize. Email settled into a predictable protocol, and smartphones matured into a form factor that hasn't meaningfully changed in a decade. The trough resolves when the product stops changing long enough for users to build durable habits, but AI tools are moving targets in a way email never was because the interface you learned last month is meaningfully different this month. The trough of disillusionment assumes there's a bottom to land on, and it's worth asking what happens when the ground keeps shifting.

The tools will just get better. Current AI products are early and clunky; they require too much prompting and too much verification. As the technology improves, the friction drops, and the exhaustion drops with it. The problem is the current generation, not the concept.

Plausible. Maybe even likely for some narrow use cases. But "the tools will get better" has been the industry's answer for three years, and in that time the sentiment data has moved in one direction: down. Each improvement brings new capabilities that require new learning. Better tools with more features are not simpler tools with less cognitive load. The Stack Overflow data tracks developers, the most technically sophisticated user base available, and their trust is declining even as the tools improve. If improvement were the cure, the patient should be getting better by now.

What Happens If the Pattern Holds

If the correlation is real, if increased usage genuinely tracks with declining enthusiasm rather than growing conviction, the industry's entire go-to-market model is running backwards. More demos, more workplace mandates, and more integrations don't convert skeptics so much as manufacture fatigue.

That doesn't mean AI tools are useless. It means the strategy of blanketing every product with AI features and assuming adoption follows exposure might be producing the opposite of its intended effect. The exhaustion isn't a marketing problem to be solved with better onboarding. It might be a signal about the relationship between the tools and the people using them.

But maybe the problem isn't exposure itself. Maybe it's what people are being exposed to. What if the tools aren't exhausting because there are too many of them, but because most of them aren't actually good at the things people need done?


This is Part 1 of a three-part series. Next: "What If AI Is Just Bad at Most Things?"