Is AI Coming for White-Collar Jobs?

On February 23, 2026, the Dow Jones Industrial Average fell more than 800 points in a single session. Headlines cited tariff escalation and renewed AI anxiety. Around the same time, a Substack essay circulated widely, outlining what the author explicitly called a “scenario, not a prediction.” Within hours, investors were attaching that scenario to price action. Whether or not one post moved the entire market is secondary. Markets move when a narrative crystallises a fear that was already there.

What made this particular essay resonate was not drama. It was arithmetic.

The central question was simple: what happens if artificial intelligence succeeds too well?

For centuries, the scarce input in advanced economies has been human intelligence. Capital could be raised. Machines could be built. Natural resources could be extracted. But the ability to analyse, code, draft legal arguments, negotiate contracts, synthesise data, design systems, and persuade clients was constrained by human capacity. White-collar wages are built on that scarcity.

Now imagine intelligence becomes abundant and replicable.

The essay’s scenario proposed that AI agents perform coding, legal drafting, financial modelling, logistics routing, administrative coordination, marketing optimisation, and large portions of enterprise analytics at scale. Productivity surges. Corporate margins expand. Earnings look strong. Equity markets initially reward efficiency.

But machines do not buy iPhones.

Humans do.

If productivity gains accrue primarily to capital while labour income contracts, aggregate demand weakens. The modern economy depends on circulation. Wages become spending. Spending becomes revenue. Revenue becomes profit. If labour’s share shrinks materially, the loop strains.

The scenario suggested labour’s share of GDP could fall from roughly 56 percent to 46 percent within a compressed timeframe, a decline that would represent the sharpest structural shift in modern economic history. That figure is not a forecast. It is a stress test. But it forces a serious macroeconomic question: what happens when the one input that was historically scarce, cognitive capacity, becomes abundant?
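The stress test really is arithmetic. A minimal sketch of its scale, where the 56-to-46 shift comes from the scenario itself but the GDP figure and the propensity to consume are illustrative assumptions, not data:

```python
# Back-of-envelope version of the labour-share stress test.
# The 56% -> 46% shift is the essay's scenario; the GDP figure and the
# marginal propensity to consume are illustrative assumptions.
GDP = 28e12            # assumed U.S. GDP, roughly $28 trillion
share_before = 0.56    # labour share of GDP in the scenario's starting state
share_after = 0.46     # labour share after the hypothesised shift

wage_loss = GDP * (share_before - share_after)
print(f"Annual labour income lost: ${wage_loss / 1e12:.1f} trillion")

# Households spend most of each wage dollar, so lost wages become lost demand.
mpc = 0.9              # assumed marginal propensity to consume
first_round_demand_hit = wage_loss * mpc
print(f"First-round demand reduction: ${first_round_demand_hit / 1e12:.2f} trillion")
```

Even before any multiplier effects, a ten-point shift at that assumed GDP removes trillions of dollars of wage income from the circulation loop each year.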

Markets responded quickly to adjacent signals. IBM fell roughly 13 percent in a single day, its steepest drop in roughly 25 years, after Anthropic highlighted that its AI model could modernise legacy code bases written in languages such as COBOL. The market for maintaining and migrating that legacy code is estimated at roughly $30 billion annually. Investors interpreted the announcement not as incremental productivity enhancement but as potential revenue displacement.

Other firms associated with software, payments, and intermediation experienced sharp declines as well. The essay named companies whose business models rely in part on what it described as “habitual intermediation,” businesses that sit between buyer and seller and collect fees because customers default to familiar platforms. Payment networks such as Visa and Mastercard sit atop interchange and network fees that typically total between 2 and 3 percent of a transaction. In the scenario described, AI agents route payments through stablecoin networks on blockchains such as Solana or Ethereum, compressing those fees.

Is that imminent? No. Regulatory infrastructure, fraud prevention systems, and compliance frameworks are not easily bypassed. But is margin pressure plausible if intelligent agents optimise transactions for lowest cost every time? Yes.
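The margin pressure, too, is just rate arithmetic. A sketch under assumed figures, where the fee rates and payment volume are illustrative placeholders rather than reported numbers:

```python
# Illustrative fee compression if agents reroute payments to cheaper rails.
# All three inputs are assumptions for the sake of the arithmetic.
card_fee_rate = 0.025        # assumed ~2.5% blended card fee (midpoint of 2-3%)
stablecoin_fee_rate = 0.001  # assumed ~0.1% cost on an on-chain settlement rail
annual_volume = 1e12         # assumed $1 trillion of payment volume agents could reroute

card_fees = annual_volume * card_fee_rate
chain_fees = annual_volume * stablecoin_fee_rate
print(f"Card-rail fees:  ${card_fees / 1e9:.0f}B")
print(f"On-chain fees:   ${chain_fees / 1e9:.0f}B")
print(f"Fees compressed: ${(card_fees - chain_fees) / 1e9:.0f}B")
```

Under those assumptions, each rerouted trillion dollars of volume strips roughly $24 billion of intermediation revenue out of the system, which is the shape of the repricing investors reacted to.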

The argument extended beyond U.S. firms. India’s IT services exports exceed $200 billion annually, built on labour arbitrage. Skilled developers performing high-value coding at lower cost than Western counterparts have powered that sector for decades. If AI coding agents perform significant portions of that work at near-zero marginal cost, the arbitrage compresses. In the essay’s hypothetical timeline, major IT firms begin seeing contract cancellations by 2027, and by 2028 the IMF is in preliminary discussions. Again, this is framed as a scenario, not a prediction. But scenarios are how markets evaluate tail risk.

Layer onto that approximately $13 trillion in U.S. consumer credit, underwritten on assumptions of stable household income. If white-collar wages contract meaningfully, those assumptions shift. Credit markets reprice. Equity valuations follow.

None of this requires apocalypse. It requires only compression.

So is AI coming for white-collar jobs?

Yes, in the sense that cognitive automation is advancing rapidly and certain task bundles will shrink.

No, in the sense that wholesale eradication of professional work is neither immediate nor economically simple.

The more accurate framing is that AI is coming for tasks, not titles.

White-collar work is composed of structured, repeatable, digitally accessible activities. Entry-level knowledge roles are particularly exposed. Junior analysts compiling data and drafting initial reports. First-year associates conducting research. Paralegals preparing standardised documentation. Basic compliance monitoring. Financial reconciliation. Administrative scheduling and coordination. Standardised marketing content. Routine coding and debugging.

AI already performs many of these functions competently. The likely impact is not mass extinction of professions. It is compression. Fewer junior hires. Flatter hierarchies. More output per employee.

The risk is not that lawyers disappear. It is that firms hire fewer first-year associates. Not that accountants vanish, but that automated reconciliation reduces demand for entry-level roles. Not that developers are obsolete, but that one AI-augmented engineer produces what previously required several.

Roles requiring fiduciary accountability, complex negotiation, cross-functional integration, ethical reasoning under ambiguity, and high-trust relationship management are less immediately exposed. AI can generate options. It does not assume liability. That boundary matters economically and legally.

At the same time, history complicates the fear.

Technological progression has always displaced workers. I remember when telegrams were still in use because long-distance calls were not widely accessible. Landlines were standard. Telephone operators physically connected calls. Entire switchboard rooms employed thousands. Those roles disappeared as systems improved. Typists, stenographers, elevator operators, travel agents, and industrial labourers were likewise replaced by more efficient systems.

Each wave of innovation eliminated certain roles and created others. Living standards rose over time, though not without dislocation.

The counterargument is powerful. Productivity gains historically create more jobs than they destroy. Automation frees labour from lower-value tasks and reallocates it to higher-value ones. Entire industries emerge from new technological platforms. The internet eliminated certain roles and created digital marketing, e-commerce, cybersecurity, cloud infrastructure, and countless others.

The question is not whether new roles will emerge.

The question is speed.

If labour displacement accelerates faster than new industries absorb workers, the transition becomes destabilising. If productivity gains pool primarily with capital rather than circulating through wages, the demand side weakens.

This is not AI good or AI bad.

It is distribution and timing.

If I were 25 today, entering this landscape, I would not avoid AI. I would master it. I would treat it as foundational infrastructure, the way earlier generations treated spreadsheets and the internet. Those who learned those tools early multiplied their value. Those who resisted them eventually lost leverage.

I would become fluent in AI systems as workflow multipliers. I would position myself above repetitive task layers and into supervisory, integrative, and decision-making roles. I would invest deeply in financial literacy, strategic thinking, and judgment under uncertainty. Machines generate options. They do not carry consequences.

I would build relational capital aggressively. Communication, negotiation, trust, and cross-disciplinary synthesis become more valuable as automation increases. The more automated systems become, the more valuable human accountability becomes.

The safest position in a shifting system is not at the bottom of a repetitive task stack.

It is at the level where responsibility sits.

Shock is not a strategy.

Adaptation is.

References

Reuters. “IBM posts steepest daily drop since 2000 after Anthropic says AI can modernize legacy systems.” February 24, 2026.
https://www.reuters.com/business/ibm-posts-steepest-daily-drop-since-2000-after-anthropic-says-ai-can-modernize-2026-02-24/

Yahoo Finance. “Dow drops 800 points amid tariff fears and AI scare trade.” February 23, 2026.
https://finance.yahoo.com/news/live/stock-market-today-dow-drops-800-points-as-sp-500-nasdaq-slide-on-trump-tariff-fears-ai-scare-trade-210027026.html

Citrini Research. “The 2028 Global Intelligence Crisis.”
https://www.citriniresearch.com/p/2028gic

Barron’s. “AI blog post rattles stocks as disruption fears rise.” February 2026.
https://www.barrons.com/articles/ai-blog-post-stocks-fall-cf25d815
