Essay Series
The Transition Is Not the Destination
Most AI writing assumes today’s institutions are the final shape of the future. That is the mistake.
Illustration by Drew Littrell
A lot of AI writing now follows the same script.
The machine gets stronger. Workers get weaker. Capital captures the gains. Your best move, if you have the means, is to save aggressively, buy the right stocks, and try to stand a little closer to the owners than the owned.
I understand why that story lands. It names something real. The current AI wave is being funded, deployed, and monetized by large firms. It is already being used to cut costs, reduce headcount, compress wages, and increase leverage. Anyone pretending otherwise is selling incense.
But there is a mistake buried in that worldview, and it is a big one: it treats the transition as the destination.
Because capital is currently driving AI, many people assume the future must belong to capital in its present form. Because today’s firms are using AI to squeeze more output from fewer people, many assume the long-term result must be a world where workers become disposable and owners absorb everything worth having. That does not follow.
Every age has a habit of mistaking its institutions for nature. We take the arrangements we inherited — large firms, fragile labor markets, bloated administrative systems, fragmented software, managerial layers, licensing regimes, financial gatekeepers — and assume they are permanent features of reality. They are not. They are responses to a particular set of constraints, and the deepest of those constraints has been the cost of intelligence and coordination.
A shocking amount of modern economic life exists because it has been expensive to understand what is going on. Expensive to move information, compare options, verify facts, monitor work, forecast demand, route resources, handle exceptions, keep large groups aligned. That is why so much white-collar work consists of translating, reconciling, scheduling, reporting, formatting, escalating, checking, forwarding, documenting, and chasing.
A great deal of the modern economy is human middleware. And that is what AI threatens first. Not humanity. Not meaning. Not usefulness. The middleware.
When intelligence becomes cheaper and more continuous, the layers built around scarce cognition stop making sense. If software can classify, summarize, simulate, draft, route, match, forecast, monitor, and escalate in real time, then many of the institutional forms we inherited start to look less like natural law and more like expensive workarounds.
The important question is not just who captures more value inside the old machine. The important question is which parts of the old machine we no longer need. Most popular AI commentary misses this entirely. It notices the first-order effect — incumbents get stronger, labor gets weaker, firms cut costs, ordinary people feel the floor moving — and mistakes it for the whole story.
But first winners are not final winners. Early structure is not final structure. And cheap intelligence can centralize power, but it can also unbundle it.
A small team with strong tools can do work that once required layers of analysts, coordinators, assistants, marketers, schedulers, and support staff. I have watched this happen in my own work — teams of three operating with a sophistication that used to require departments. A local business can run with a level of responsiveness that used to require scale. Regional manufacturing, logistics, and service networks become far easier to coordinate. Public agencies could remove whole strata of bureaucratic drag if they chose to use AI to simplify rather than merely surveil. Cooperatives and mission-driven institutions become more viable once the old burden of overhead falls hard enough.
AI is not only a machine for concentration. It is also a machine for collapsing coordination costs. That distinction changes the entire frame.
If AI were merely a better cost-cutting tool inside the existing system, then the bleak advice would be basically correct. Own capital, flee fragility, hope you are not standing where the machine lands. That is a portfolio strategy, not a theory of society.
A more complete view says something different: AI is a general-purpose coordination technology. It does not just shift the balance between labor and capital within today’s institutions. It changes what kinds of institutions become possible. And I do not think the right response is to become a minor shareholder in your own displacement and call that realism.
Buying the Mag 7 may be a hedge. It is not a civilizational plan.
The hopeful view is not that everything will sort itself out. History gives us no such guarantee. Powerful tools get captured all the time. Transitions are usually ugly. People lose livelihoods before new structures appear. Wealth concentrates faster than wisdom.
But realism cuts both ways. It is unrealistic to assume that institutions designed for scarce intelligence will remain unchanged once intelligence becomes abundant. It is unrealistic to believe that human beings must forever organize their lives around selling hours into systems that increasingly need fewer of them.
The real opportunity is larger than job preservation — though that matters too. The real opportunity is to build a world in which fewer people are trapped in routine administrative labor, fewer communities are crippled by coordination failure, fewer small operators are crushed by overhead, and more human effort goes toward judgment, stewardship, trust, craft, local knowledge, and long-horizon building.
The industrial habit of mind still runs deep here. We have been trained to think that human value is mostly wage value. If a role becomes automatable, we assume the person attached to it has become less necessary. Wrong lesson. What becomes less necessary is the routine slot inside the old structure. The person was always more than the slot.
Human beings matter most where stakes are real, context is messy, accountability is local, trust is hard-won, and judgment cannot be reduced to a dashboard. Care, leadership, teaching, conflict resolution, institution-building, field operations, community life, physical-world responsibility — these are not relics waiting to be automated. They are the domains where someone actually bears consequences. And a society that frees people to do more of this work instead of less is not a utopian fantasy. It is a design choice.
The task is not to out-machine the machine at machine work. The task is to stop wasting human beings on machine work in the first place.
For individuals, that means reducing fragility, but also shifting toward domains where judgment, relationships, physical accountability, and real ownership matter. Learn the tools. Use them hard. Build assets. Stop depending entirely on a single credential or a single employer. But do not confuse private defense with public vision.
For builders, it means attacking coordination costs directly. Do not just wrap another AI skin around a legacy workflow and call it innovation. Ask which layers are obsolete. Ask which bottlenecks exist only because information has been trapped, fragmented, delayed, or made expensive to act on. Ask what becomes possible for small teams, local operators, and ordinary institutions once competent coordination gets cheap.
For towns, schools, governments, and public systems, it means using AI to remove drag instead of merely tightening managerial control. The best use of these tools is not a more efficient version of bureaucratic paralysis. It is to simplify, integrate, and restore local capacity.
The bleakest AI essays do one thing well: they refuse to flatter the reader. Good. The reader does not need flattery. What the reader needs is a frame that is honest enough to face the transition and ambitious enough to see past it.
The transition will be rough. Capital will try to capture the first wave of value, and probably a good deal after that. Some current jobs and institutions will disappear. None of that should be minimized.
But the destination does not have to look like the transition. That is the mistake in so much AI fatalism — it sees the first shock and assumes it has seen the final form. It looks at the old system using new tools and assumes the only future available is a harsher version of the present.
AI can be used to make human beings more disposable inside inherited systems. That danger is right in front of us. AI can also be used to build systems in which fewer people live as disposable parts in the first place.
The question is not whether the old order can use AI to cut costs. Of course it can. The question is whether that is the whole story. It does not have to be.