Essay Series
A Future Worth Wanting
The hopeful vision for the AI age is not mass idleness or elite abundance. It is a society with more dignity, more ownership, more time, and more room for human beings to be human.
Illustration by Drew Littrell
In the first essay, I argued that the transition is not the destination. In the second essay, I argued that cheap intelligence changes what becomes buildable by collapsing coordination costs.
This final essay asks the question that matters most: what is all of that for?
People do not only fear losing jobs. They fear becoming unnecessary. They fear that if machines can do more of what markets used to pay humans to do, then the humans themselves will count for less. And to be fair, our society has trained them to fear exactly that.
For a long time, we have asked paid work to do too many jobs at once. Work is supposed to provide income, status, routine, identity, discipline, social membership, and a claim to dignity. If someone has a respectable job, we assume he is doing well and probably deserves his place. If someone does not, we quietly start talking about resilience, upskilling, and other modern euphemisms for "best of luck."
That arrangement was always more fragile than it looked. AI is exposing the fragility.
The labor market is not a moral scoreboard
A person's market wage tells you something real, but limited. It tells you what that person can earn under current conditions, inside a particular set of institutions, under a specific balance of supply, demand, law, and technology. It does not tell you what that person is worth.
The labor market is a useful accounting device. It is a terrible priest.
This confusion has done damage for a long time, even before AI. Some of the most important things people do have always been poorly priced, unpaid, or rewarded only indirectly: raising children, caring for aging parents, mentoring the young, maintaining a household, supporting neighbors, building trust, preserving institutions, serving a town, keeping standards alive. None of those become unimportant because a spreadsheet struggles to value them cleanly.
When routine cognitive labor becomes cheaper, a society that equates wage value with human value will start treating more people as surplus. That is not a sign that the people have become less important. It is a sign that the society built too much of its moral order on top of its payroll system.
A good civilization should know the difference between "less demanded by the labor market" and "less necessary as a human being." We have not always been good at that distinction. We are going to have to get better.
A humane future is not mass idleness
People often jump to the wrong kind of optimism here. They imagine a future where machines do the work and human beings drift into permanent leisure, sustained by abundance and entertained into contentment. It sounds pleasant for about seven seconds, and then the bottom falls out.
A society of passive consumption is not a hopeful one. It is a sedated one.
Human beings need more than comfort. They need responsibility, standards, and the satisfaction of doing something difficult and doing it well. They need roles that call on judgment, effort, loyalty, patience, courage, and self-command. They need reasons to get up in the morning that are better than another optimized stream of machine-made amusements.
I take this more seriously than most people writing about AI, because I think the danger of abundance without purpose is not just sociological — it is spiritual. History is full of examples of societies that got what they wanted and then rotted from the inside. The pattern is old enough to be biblical: prosperity leads to complacency, complacency to decadence, decadence to collapse. Every form of aimlessness — gluttony, vanity, envy, acedia — flourishes when people have resources but no framework for what those resources are for.
The secular version of techno-optimism has no answer to this. It assumes that once material needs are met, people will naturally find meaning. Some will. Many will not. The crisis is not economic. It is anthropological. What is a human being for, once the market no longer needs him to shuffle paper?
If your answer to that question is "whatever he wants," you have not answered it. You have abandoned it.
The point of automation is not to retire the human being. It is to retire the drudgery that never deserved a human life in the first place. A hopeful future is not one where nobody works. It is one where less of life is spent on pointless, repetitive, administratively bloated labor, and more of it is spent on the kinds of work that make a person more capable, more useful, and more deeply embedded in the lives of others.
Before industrial society narrowed adulthood into "hold a salaried slot inside a large institution," people understood productive life more broadly. A good life included keeping a home, raising a family, mastering a trade, serving a local community, caring for kin, teaching the next generation, maintaining property, honoring obligations, and building something that outlasted the self. Wage labor was one part of a larger human picture. We would be wise to recover some of that picture.
Ownership is not optional
None of this works if the gains from AI remain concentrated in a few hands.
This is where many hopeful visions go soft. They say lovely things about creativity, learning, and meaning, while quietly ignoring the small detail that bills still exist. If productivity rises but ownership stays narrow, then most people will experience AI as dispossession with better marketing. That is not a hopeful future. That is feudalism with GPUs.
A credible hopeful vision requires broader ownership of productive capital. That does not mean there is only one mechanism — it could include employee ownership, profit-sharing, stronger retirement systems that actually capture technology upside, community investment funds, easier paths to small business formation, more widespread household asset ownership, public dividends from automated sectors, and fewer artificial barriers to owning housing, tools, or productive property. Reasonable people can argue about the mix.
The principle is simpler: when machine capital makes society richer, ordinary people need a real claim on the gains. Not a lecture. Not a motivational poster. Not an invitation to become more "adaptable" while someone else owns the floorboards.
A person with some ownership has bargaining power. A household with assets has room to think. A community with local productive capacity has resilience. Without a stake, "freedom" usually means the freedom to accept worse terms.
The opposite of exploitation is not unemployment. It is independence.
Use abundance to buy back time
If AI genuinely increases productivity, then some of those gains should show up in ordinary life in forms people can actually feel. Not only in market caps and earnings calls. They should show up as more secure households, cheaper essentials, saner institutions, and more time.
Civilization should cash some of its productivity gains in time. That could mean shorter workweeks in some sectors. It could mean healthcare systems that stop forcing every clinician to moonlight as a data-entry clerk. It could mean schools that give teachers time to teach, and public agencies that stop wasting weeks of human life on broken procedures.
Not all progress has to arrive as a monthly check, though some kind of social dividend may well be part of the picture. A transfer can prevent catastrophe. It cannot, by itself, create a good society.
Success in the AI age should not be measured only by how much output we can squeeze from fewer workers. It should be measured by whether a young couple can form a family without panic, whether a nurse gets home with something left in the tank, whether a small business owner can survive without drowning in forms, whether an aging parent can be cared for without detonating the finances of three generations.
If the technology is real, then the benefits should eventually become boringly concrete. More dinner at home. Less pleading with systems. More room to breathe.
Human work becomes more human
As routine tasks are automated, human work will not disappear. But its center of gravity will move: toward judgment, trust, responsibility, embodiment, conflict resolution, taste, leadership, and care.
Someone still has to decide when the rule does not fit the case. Someone still has to notice that the building feels wrong, that the patient is deteriorating, that the child is frightened, that the team is coming apart, that the machine's recommendation is missing the point. Someone still has to take responsibility when things go badly.
The most durable human roles will be the ones closest to consequences. The farther you get from real stakes, the easier the work is to automate. The closer you get to lived consequences — care, field operations, skilled trades, leadership, education, mediation, local enterprise, public safety — the more human judgment matters.
This suggests a different educational emphasis. The future does not need an army of professional prompt whisperers. It needs adults who can use tools well, think clearly, speak honestly, work with others, exercise judgment under uncertainty, master a domain, and bear real responsibility. It needs apprenticeships, practical training, technical fluency, and moral seriousness. I think about this constantly with my own kids — what skills will actually compound in a world where abstraction work gets cheaper every year. The answer keeps pointing toward the physical, the local, the high-trust, the hard-to-fake.
The challenge is not to beat the machine at machine work. The challenge is to build lives and institutions in which human beings are used for what human beings are actually for.
Thicker institutions, not thinner souls
There is another problem buried inside the future-of-work debate that almost nobody talks about honestly.
Paid employment has been carrying too much of our social structure. For many people, work is where adult friendship happens, where recognition happens, where routine happens, where standards are enforced, where one's contribution becomes visible. Weaken that role and you leave an empty space; you cannot assume the market will kindly fill it with meaning.
It will not.
A humane future needs thicker institutions outside the payroll system. Families that function. Neighborhoods where people are known. Schools that form character and not just credentials. Churches and religious communities that hold people accountable to something beyond themselves. Guilds and associations that pass on craft and standards. Municipal life that gives ordinary people a real role. Mentorship that crosses generations.
The point is membership — but membership with substance, not just affinity. The kind that makes demands on you, not just the kind that makes you feel welcome.
This is where most techno-utopian visions feel bloodless. They imagine that once economic necessity relaxes, people will naturally flower into self-directed abundance. But human beings are not just appetites with broadband. They need formation. They need duties. They need to be corrected, not just encouraged. They need structure that is inhabited from within, not merely imposed from above.
A monthly check may reduce desperation. It does not coach the team, reconcile the neighbors, care for the sick, initiate the young into competence, uphold standards, or keep a town alive. If AI weakens the centrality of labor markets, then other institutions have to get stronger, not weaker. Otherwise we will not get a freer civilization. We will get a lonelier one.
Hope is not denial. It is design.
None of this happens automatically.
There will be fights over ownership, law, education, culture, platform control, public capacity, and the meaning of contribution. Some institutions will rot before better ones are built. Some actors will use AI to centralize power, intensify surveillance, cheapen labor, and flood society with synthetic junk. That road is open already. So is another one.
Hope, in this context, is not the belief that technology saves us. It is the insistence that productivity be translated into dignity, stake, time, belonging, and responsibility — rather than merely into more concentrated wealth.
The destination is a society in which more people can own something, build something, care for someone, belong somewhere, and spend less of their one life doing tasks that never justified a human soul.
The real test of the AI age is not whether we can build intelligence at scale. It is whether we can build a civilization worthy of the people it is supposed to serve.