The loop that shrank
“Human in the loop” sounds like a settlement between people and machines. It is more likely a grace period.
In offices from Goldman Sachs to Shopify, a reassuring phrase is doing a lot of work. “Human in the loop” is the promise that, whatever the latest model can do, a person will still sit at the control panel: approving, editing, catching the odd hallucination. It sounds like a durable settlement between flesh and silicon. It is more likely a grace period.
The idea flatters everyone. Managers can claim they are adopting AI responsibly while keeping their headcount. Regulators can believe there is an accountable adult in the room. Workers can tell themselves that their judgement remains indispensable. And the technology firms selling the tools can reassure customers that nothing will change too abruptly. In the short run, much of this is true. Models are impressively capable and occasionally bizarre. They need supervision.
But grace periods exist to be used up.
In the early phase of any automation wave, humans are essential, not because machines are harmless, but because they are unreliable. The machine does most of the work; the human does the checking. The checking is called “oversight”, but it is often just quality control by another name. Then the machine improves. The errors become rarer and, more importantly, more predictable. Evaluation is automated. Monitoring gets better. The human’s role shifts from constant supervision to occasional intervention. At that point the accountant starts asking the obvious question: why are we paying someone to watch a system that mostly behaves?
The uncomfortable truth about “human in the loop” is that it is not a moral principle. It is a cost structure. When the costs change, so does the principle.
This is not to say that humans vanish from the picture. They do, however, migrate. In some settings they are kept in the loop because someone must be blamed. If a model denies a loan, recommends a medical treatment or places a trade, a firm may want a person to sign off, not because that person improves the decision, but because they can absorb liability. Yet even accountability can be engineered. Audit trails, insurance, certification and regulation can make automated systems legible enough for courts and watchdogs. Over time, “someone to blame” becomes “a process to show”. The loop becomes a compliance artefact, not a job.
The labour-market consequences arrive not with a bang but with a thinning. In many white-collar occupations, the lowest rungs are made of drudgery: drafting, summarising, writing boilerplate, fixing small bugs, preparing slides. These were never glamorous tasks, but they served as an apprenticeship. Today they are exactly what large language models do well. Firms will not announce that they are abolishing the junior analyst. They will simply hire fewer of them. Productivity rises; entry points disappear.
The next effect is compression. When output per worker jumps, teams can shrink. And when the skills needed to generate first drafts become abundant, the market price of those skills falls. Many jobs may survive, but in altered form: more orchestration, less creation; more managing exceptions, less solving the typical case. This is not mass unemployment so much as mass bargaining-power loss—a subtler, and potentially more corrosive, shift.
Where does the surplus go? The optimistic answer is that cheaper “cognitive labour” will expand demand. If it costs less to write software, more software will be written. If it costs less to draft legal documents, more transactions will occur. Historically, lowering the price of a capability often increases its use. Yet this time the distribution of gains may be unusually skewed. AI is not like a steam engine bolted to a single factory. It is a general-purpose technology delivered through platforms, trained at scale and updated centrally. When a model improves, it improves for millions at once. When a workflow is embedded in software, it can be copied across an industry.
That favours owners of chokepoints: compute, proprietary data, distribution, and the software layer where work actually happens. Returns to scale make concentration attractive; network effects make it durable. If the economy’s productivity leaps but its gains accrue mainly to those who own the infrastructure and interfaces, society may get cheaper services alongside a growing sense of dispossession.
This is why the rhetoric of “human in the loop” matters. It is a way of postponing a political argument about who benefits from automation. For decades, rich countries told their citizens to invest in skills. “Learn to code” became a secular mantra. The bargain was simple: acquire human capital and the economy will reward you. If AI makes many of those skills cheap, the bargain weakens. Workers are not made redundant by a malevolent plot; they are made redundant by a market that no longer needs what they sell.
The more unsettling possibility is a mismatch between production and legitimacy. Modern societies distribute income, status and security primarily through wages. But if fewer workers are needed to produce abundant output, wages cease to be a reliable distribution mechanism. The economy can flourish while the polity frays. This is not unprecedented. Many countries have experienced growth with stagnating median incomes, and the resulting resentment has been fertile ground for populists. AI could intensify the pattern.
What, then, is to be done? The first response is to take the grace period seriously and use it well. Firms should build systems that are auditable and safe, not merely impressive. Governments should clarify liability and standards, so that “human oversight” is not just a comforting fiction. Competition policy should focus on the layers where control can become entrenched, especially distribution and workflow software, lest “AI everywhere” turn into “a few platforms everywhere”.
The second response is more radical: broaden ownership. If the gains from automation accrue to capital, then societies may need more people to own capital, whether through pension funds, profit-sharing, sovereign wealth funds or other mechanisms. Social insurance will also need strengthening, not only to cushion displacement but to prevent a legitimacy crisis when work is no longer the main route to security.
For now, “human in the loop” will remain the polite compromise: machines do the work; humans check it; everyone pretends the arrangement is permanent. It is not. The loop is a bridge between a world in which machines are unreliable and one in which they are boringly competent. Bridges are for crossing. The question is what awaits on the other side.