AI Will Make Half Your Job Easier. It Might Ruin the Half You Liked.

Image Generated by Nano Banana 2

Every job is splitting in two. The half that produces artifacts is compressing into supervision. The half that requires judgment is carrying more weight than ever.


In January, the work was Confluence pages, SharePoint folders, onboarding videos, and the slow assembly of a mental model for a spacecraft system someone else had built. It was the kind of start anyone in aerospace would recognize: read everything, ask questions, sketch diagrams on a whiteboard, feel stupid for a month, feel slightly less stupid the month after.

By February, Claude Code was open on a second monitor. Contact analysis for a spacecraft antenna, link budgets, latency timelines. The sort of technical work that used to fill a week now filled an afternoon, but the shape of the day still felt normal. You were still producing things, just faster.

By April, the threshold had shifted to anything that would take more than twenty or thirty minutes. Drafting Confluence pages, building PowerPoints, writing test plans and procedures. More time reviewing what the agent produced, less time producing from scratch. The ratio had flipped without anyone marking the date.

By May, the agent was searching the entire codebase, pulling context from Jira and Confluence, writing test scripts against the ground software API gateway, running them, storing results, and publishing a Confluence page with the outcomes. An end-to-end test workflow, built in a few weeks, that would have taken a team several months to stand up the traditional way.

A large, familiar portion of the job had stopped feeling like something you did and started feeling like something you watched.

The leash extended because interrupting the agent started to feel less useful than letting it finish. There was no policy decision, no deliberate handoff. The unit of delegation just went from a task to a workflow, one week at a time.


The Split

This is not one person's productivity story. The same rearrangement is happening across knowledge work, and the pattern is consistent enough to name.

The part of a job that produces a visible artifact (the analysis, the document, the presentation, the data transformation, the test plan) is compressing into prompts and reviews. The part that never produced an artifact — knowing which analysis matters, sensing when a requirement will cause political trouble three months from now, deciding what to bring into a meeting and what to leave out — stays. And it carries more weight now, because there is more output riding on each judgment call.

In defense engineering, a systems engineer who used to spend most of her week producing requirements documents and traceability matrices now spends that time in interface meetings, review boards, and priority negotiations with contractors. The documents exist. They may be better than what she would have written by hand. The question is whether they are the right documents, aimed at the right problem, timed for the right conversation. That question got more expensive to get wrong, because the machine can generate ten documents in the time it used to take to produce one.

In education, an AI tutor can explain calculus more patiently than most lecturers and adapt to each student in real time. What it leaves behind is the work that was always there but hidden behind the lecture: noticing that a fourteen-year-old is struggling personally rather than academically, mentoring, motivating, modeling what it looks like to be a curious adult who cares about ideas.

A better way to think about the split: one half is artifact work, the other is consequence work. Artifact work produces something you can point at, a deliverable with a file name. Consequence work determines whether that thing should exist, who will accept it, and what happens when it is wrong. AI is getting very good at artifacts. Careers and institutional trust live in consequences.


The Reassurance and What It Misses

Scott Galloway made a version of the calming argument in May 2026. The AI job apocalypse, he argued, is partly a marketing strategy. The people loudest about AI wiping out entire professions are the same people who profit from that fear funneling capital toward AI companies. Spreadsheets didn't kill accounting; they quadrupled the profession over forty years. And by the Jevons paradox, cheaper execution creates more demand for the work, not less, which makes expansion a more likely outcome than mass displacement.

On a two- to three-year horizon, he is probably right. Mass unemployment from AI is not imminent.

But there is a subtler mechanism worth taking seriously, one that economist Robert Shiller has studied in other contexts: the narrative itself is economically consequential, independent of the underlying reality. If companies preemptively cut headcount because the story says AI will replace those roles, and then a downturn hits, the displacement gets attributed to AI even if macroeconomic conditions caused it. The fear loop feeds itself, and the narrative becomes the event.

Galloway's reassurance answers a question, but not the one that anyone who has actually used these tools for six months is asking. That question is not "will I lose my job." It is something stranger and less dramatic: what happens when the part of the job you were good at, the part that gave you a sense of competence, that you could point to when the day was done and say I built that, becomes the commodity layer?

The stress is subtler than unemployment. You're employed, maybe even more productive than ever, and the center of gravity is shifting underneath you anyway. The skills that earned your reputation five years ago are becoming the cheap half, and adapting means becoming a different kind of worker inside the same role, faster than you can consciously plan the transition.

Alvin Toffler coined a term for this in 1970: future shock, the disorientation that comes when change outruns the capacity to adapt to it. Less about losing a paycheck, more about losing your bearings inside a role you technically still hold.


The Seniority Window

The productivity multiplier from AI tools scales with experience in ways that are not immediately obvious. A senior engineer with twenty years of building spacecraft systems can see a tenfold increase in output, or more. Not because they write better prompts, but because they know which test coverage gap is dangerous, which analysis is a dead end, and which prototype will unblock a meeting that has been stuck for two weeks. They spend the machine's effort on the right problems.

A junior engineer with identical tools and identical access gets a smaller multiplier, perhaps three to five times their baseline. Not because the tools work differently for them, but because they spend more cycles on tasks that don't matter, or ask questions that miss the real issue. The leverage is in the taste, not the throughput.

That advantage is real, and it may also be temporary.

Consider how startups build judgment. A junior engineer at a startup accumulates more operational scar tissue in eighteen months than most people gain in a decade at a larger organization, because the feedback loops are shorter and the scope is wider. Ship on Monday, users complain on Wednesday, fix it by Thursday. You learn what matters by watching things break in real time.

AI does something structurally similar to feedback loops everywhere. Prototype in an afternoon what used to take a sprint. Run twenty variations of a test approach before committing to one. Ingest and query a codebase that would have taken months to understand through reading alone. The junior who leans into this is operating in a permanent startup-like learning environment, regardless of their employer's actual pace.

The part of seniority that persists is not knowledge but trust. You can compress the time it takes to understand a system with the right tools. You cannot compress the time it takes for a CTO to trust your judgment because you have been right seven consecutive times when it counted. Context is buildable with machines. Trust is buildable only with people, over time, through consequences.

So the seniority advantage is genuine, but the window may be narrower than anyone holding it expects. The better way to phrase it: "I know things you don't yet." And the "yet" gets shorter every quarter, as AI-augmented iteration lets the next cohort build context faster than any generation before them.

What almost nobody is building right now are replacement apprenticeship pathways. The execution work that AI handles was also the training ground where junior workers developed judgment through repetition and failure. If those reps disappear and nobody intentionally designs new ones, organizations will simply stop producing senior talent. Five years from now, the companies that failed to solve this will be staring at an empty leadership pipeline, while the ones that solved it early will have a compounding advantage that is very difficult to reverse.


The Leash

In January, every output got a full review. By March, the review was a skim. By May, there were workflows where the review consisted of reading a summary line and trusting the rest — not because anyone decided that was safe, but because the results were good enough, often enough, that interrupting started to cost more than the risk.

The same drift is happening wherever the split is happening. A physician approves an AI-generated differential without fully engaging with it, because the last hundred were correct and the next patient is already waiting. A procurement officer reviews an AI-generated requirements package but does not trace every requirement back to its source document, because the traceability matrix looks complete and the deadline is Friday. A teacher trusts the AI tutor's assessment of a student's progress without spot-checking, because thirty other students also need attention right now.

Trust calibration, the skill of knowing when to verify and when to let go, does not appear in any curriculum, any job description, or any performance review. It may be the most consequential professional skill of the next decade, and no institution is teaching it.

The old work does not end cleanly. It thins out, the familiar parts becoming prompts and review queues and agent runs that finish while you are in a meeting. The unfamiliar parts remain, and they are heavier now, because each judgment call carries ten times the output behind it.

That is the shape many people are starting to see in their own work. The change rolls out without fanfare, one tool at a time, one workflow at a time, until the part of the job you actually liked starts feeling like it belongs to someone else.
