Companies are cutting workers because of AI. But the numbers they report publicly are a fraction of what they actually plan to do. A major new study documents the gap, and it should alarm everyone.
A working paper from the National Bureau of Economic Research, based on a survey of nearly 750 chief financial officers conducted by Duke University and the Federal Reserve Banks of Atlanta and Richmond, found that 44% of U.S. firms plan some AI-related job cuts in 2026. That works out to roughly 502,000 roles across the economy. In 2025, employers publicly attributed just 55,000 layoffs to AI, according to Challenger, Gray & Christmas. The private expectation for 2026 is about nine times higher.
That 9x gap is the headline number. But the real story is why it exists, what it means for workers, and whether anyone in a position of power is being honest about it.
AI Job Cuts Are Real, but the Reporting Is Broken
The 55,000 AI-attributed layoffs tracked by Challenger, Gray & Christmas in 2025 were just 4.5% of all job losses that year. By comparison, four times as many cuts were blamed on “market and economic conditions,” and nearly six times as many on government efficiency restructuring. AI barely registered as a category.
But those public numbers rely entirely on what companies choose to say. And companies have strong incentives to say very little. When New York became the first state to require AI disclosure on WARN Act filings, more than 160 companies filed mass termination notices in the following year. Not a single one attributed layoffs to AI or automation. Zero. That includes Amazon and Goldman Sachs, both of which have publicly discussed integrating AI into their operations.
The system for tracking AI job cuts is voluntary, self-reported, and structured around incentives that reward silence. Workers are being cut, and no one is counting accurately.
Why Companies Stay Quiet
There are two competing reasons companies underreport AI-related cuts, and both can be true at the same time.
First, “AI layoff” is hard to define. When a company restructures a department and replaces some functions with automated tools, is that an AI layoff? What about when a company simply stops hiring for roles it expects AI to handle eventually? As Bloomberg Law reported, even the New York labor commissioner acknowledged that defining an AI-related layoff is challenging.
Second, there is a financial game being played. Oxford Economics argued in January 2026 that “some firms are trying to dress up layoffs as a good news story,” using AI as a cover for routine headcount reductions driven by weak demand or past over-hiring. Attributing cuts to AI “conveys a more positive message to investors” than admitting your business misjudged the market.
Wharton management professor Peter Cappelli has documented this phenomenon for years: companies announce “phantom layoffs” to boost stock prices, and investors have learned to reward firms that frame cuts as innovation rather than failure.
The Block Test Case
In February 2026, Block CEO Jack Dorsey cut 40% of the company’s workforce, more than 4,000 people, explicitly blaming AI. It was the largest single AI-attributed layoff event in tech history. Block’s stock surged 24% on the news.
But UVA Darden’s Batten Institute questioned whether AI was really the driver. Block had ballooned from 3,835 employees before the pandemic to over 10,000. The cuts brought headcount roughly back to pre-COVID levels. Was AI the reason, or the excuse?
This is the core tension. Some AI job cuts are real. Some are rebranded post-pandemic right-sizing. And the current disclosure system cannot tell the difference.
The Productivity Paradox Returns
Companies are cutting jobs in the name of AI productivity gains that have not materialized. The NBER study found a “productivity paradox” in which perceived productivity gains are larger than measured productivity gains, reflecting what co-author John Graham called “more of a wish than a realized fact.”
Goldman Sachs confirmed this gap in a March 2026 analysis. Senior economist Ronnie Walker wrote that “we still do not find a meaningful relationship between productivity and AI adoption at the economy-wide level.” Only 10% of S&P 500 management teams quantified AI’s impact on specific use cases. Just 1% quantified its impact on earnings. Fewer than 20% of U.S. establishments are even using AI for any business function.
Meanwhile, MIT research found that 95% of enterprise AI pilots delivered “little to no measurable impact” on profit and loss.
Workers are losing jobs for a technology that mostly does not work yet at enterprise scale.
What Needs to Change
The 9x gap between private expectations and public reporting is not just a data problem. It is a transparency failure with real consequences for millions of workers who cannot plan their careers around information they do not have.
New York’s WARN Act expansion was a start, but one year and zero AI disclosures later, it is clearly not enough. Workers deserve to know when their jobs are being eliminated because of automation, not just because their employer chose to mention it. Mandatory, standardized AI displacement reporting, not voluntary checkboxes, is the minimum viable policy response.
And investors should demand honesty too. If 44% of CFOs plan AI job cuts but almost none disclose them publicly, the market is pricing in a fiction. The gap between what executives believe and what they say is a material risk that current disclosure rules fail to capture.
The NBER study’s co-author put it plainly: “Who knows what’s going to happen in 2028? I’m not making a prediction that there will never be any jobs lost two, three and five years from now to AI.” The cuts are coming. The only question is whether anyone will be honest about it before they arrive.
A March 2026 NBER working paper by Baslandze, Edwards, Graham et al., drawing on the Duke CFO Survey conducted with the Federal Reserve Banks of Atlanta and Richmond, has produced the most granular private-sector data yet on AI-driven workforce displacement. The central finding: 44% of surveyed CFOs from nearly 750 U.S. firms anticipate AI-related job cuts in 2026, projecting a net loss of approximately 0.4% of total employment, or roughly 502,000 roles out of 125 million. This represents approximately nine times the 55,000 AI-attributed layoffs publicly reported in 2025 by Challenger, Gray & Christmas.
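The headline figures reduce to simple arithmetic. A quick sketch, using the article's rounded inputs (so the projection lands near 500,000 rather than the study's reported 502,000):

```python
# Back-of-envelope check of the NBER survey figures.
# All inputs are the rounded numbers cited in this article.
total_employment = 125_000_000  # approximate U.S. employment base
projected_share = 0.004         # 0.4% net loss projected by CFOs for 2026
reported_2025 = 55_000          # AI-attributed layoffs publicly reported in 2025

projected_cuts = total_employment * projected_share
multiplier = projected_cuts / reported_2025

print(f"Projected 2026 AI-related cuts: {projected_cuts:,.0f}")
print(f"Private expectation vs. public reporting: {multiplier:.1f}x")
```

The ratio comes out to roughly 9.1, which is where the "nine times higher" framing comes from.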
That multiplier deserves scrutiny. It reveals a structural failure in how AI job cuts are measured, reported, and understood, with significant implications for labor policy, securities disclosure, and corporate governance.
The Disclosure Gap in AI Job Cuts
The 55,000 figure from Challenger, Gray & Christmas represents employer-attributed layoffs, a self-selected, voluntary reporting category introduced only in 2023. It accounted for just 4.5% of all 2025 job losses, far behind restructuring, economic conditions, and government efficiency cuts.
The inadequacy of this tracking became clear in New York. After the state amended its WARN Act to require employers to disclose whether layoffs stem from “technological innovation or automation,” more than 160 companies filed mass termination notices in the year following the update. Not one attributed layoffs to AI, including major AI adopters like Amazon and Goldman Sachs. As Kevin Frazier of the Abundance Institute observed, the WARN Act is “a product of the 1970s” designed for factory closings, not the gradual, diffuse displacement patterns of AI adoption.
The Harvard Law School Forum on Corporate Governance has noted the regulatory vacuum: while GAAP requires disclosure of material charges from planned terminations, and the federal WARN Act mandates advance notice, neither framework is designed to capture the incremental, anticipatory nature of AI-driven workforce changes. The result is a measurement system that systematically undercounts.
AI-Washing and the Incentive Structure
The disclosure gap runs in both directions. Some companies overstate AI’s role; others hide it entirely.
Oxford Economics argued in January 2026 that firms are “dressing up layoffs as a good news story,” rebranding pandemic-era over-hiring corrections as AI-driven efficiency gains. The logic is straightforward: attributing cuts to AI “conveys a more positive message to investors” than admitting weak demand or strategic misjudgment.
Wharton professor Peter Cappelli’s research supports this interpretation. Cappelli has documented how companies announce “phantom layoffs” to arbitrage positive stock-market reactions. He cites Harris Poll data showing 74% of global CEOs feared losing their jobs within two years if they could not demonstrate AI success, with CEOs estimating that roughly a third of their AI initiatives amounted to “AI washing for optics and reputation.”
The Block layoff illustrates the ambiguity. CEO Jack Dorsey cut 40% of Block’s workforce in February 2026, over 4,000 employees, explicitly citing AI. Stock surged 24%. But as Darden’s Batten Institute analyzed, the layoffs raised questions about whether AI was the true driver. Block had grown from 3,835 to over 10,000 employees during the pandemic. The cuts restored pre-COVID headcount. Whether AI was the cause or the narrative wrapper is an open question, and the current regulatory framework offers no mechanism to distinguish the two.
The Productivity Paradox: Solow Redux
The NBER paper identifies a “productivity paradox,” directly invoking Robert Solow’s 1987 observation that “you can see the computer age everywhere but in the productivity statistics.” CFOs report perceived AI productivity gains that exceed measured gains, which the researchers attribute to delayed revenue realization.
Goldman Sachs’ March 2026 “AI-nxiety” report confirmed this at the macro level. Senior economist Ronnie Walker found “no meaningful relationship between productivity and AI adoption at the economy-wide level,” even as a record 70% of S&P 500 management teams discussed AI on quarterly calls. The granular numbers are striking: only 10% of S&P 500 firms quantified AI impact on specific use cases, and just 1% quantified earnings impact. Census data indicates fewer than 20% of U.S. establishments use AI for any business function.
Where AI does deliver, the gains are concentrated. Goldman found a median 30% productivity boost in two specific domains: customer support and software development. But these localized successes have not translated into economy-wide productivity acceleration. MIT’s 2025 GenAI Divide study found that 95% of enterprise AI pilots delivered “little to no measurable impact on P&L,” based on 150 leader interviews, 350 employee surveys, and 300 public deployment analyses.
This creates an uncomfortable arithmetic: companies are projecting workforce reductions based on productivity gains that, for the vast majority, have not been realized. They are cutting workers in anticipation of a technological dividend that remains theoretical at enterprise scale.
Compositional Effects and the Small-Firm Divergence
The NBER paper reveals important heterogeneity beneath the headline numbers. Larger companies anticipate net AI-driven workforce reductions, while smaller firms (under 500 employees) expect modest hiring gains, particularly in technical roles. About half of the projected 502,000 job losses fall on white-collar workers, with the study identifying routine clerical and administrative functions as most vulnerable.
This compositional shift matters for policy. The displacement is not uniform across the economy but concentrated in specific roles within larger organizations. The paper develops an index ranking job functions most negatively affected by AI, finding that the reallocation of labor is occurring both within firms (from routine to technical roles) and across firms (from large to small).
The small-firm hiring signal complicates the narrative. If smaller companies are adding technical roles to support AI adoption while larger firms shed clerical ones, the net employment effect is smaller than either side of the debate suggests. But the workers losing clerical jobs at large companies are not the same workers being hired for technical roles at small ones. The displacement may be a rounding error in aggregate statistics while remaining catastrophic for the individuals affected.
Regulatory and Governance Implications
The 9x gap between private expectations and public reporting of AI job cuts has three policy-relevant dimensions.
Securities disclosure. If 44% of CFOs plan AI-driven workforce reductions but fewer than 1% quantify AI’s earnings impact on calls, investors are operating with incomplete information. As the Harvard Law Forum noted, GAAP-triggered disclosure obligations and existing WARN requirements were designed for discrete, identifiable events, not for the gradual attrition and role elimination that characterizes AI displacement.
Labor market tracking. New York’s WARN Act experiment demonstrates that voluntary, self-reported disclosure produces zero signal. Additional bills in the New York legislature would require companies with 100+ employees to report displaced workers, unfilled roles previously held by humans, and hours changed due to AI. A separate proposal mandates 90-day written notice before AI-driven cuts, with $10,000 fines and five-year loss of state tax incentives for violations.
Board governance. The NACD framework recommends boards exercise AI adoption oversight “with the same level of scrutiny as financial risk.” Martin Lipton of Wachtell Lipton has urged boards to consider the effect of technology adoption on employees “as opposed to myopically seeking immediate expense-line efficiencies at any cost.” The current disconnect between CFO private expectations and board-level disclosure suggests this oversight is not happening at scale.
The Honest Assessment
The NBER data points to a labor market disruption that is real but moderate in the near term, structurally underreported, and driven as much by anticipatory corporate behavior as by actual technological capability. The 502,000 projected job losses represent 0.4% of total U.S. employment. Even if realized in full, this is not the “doomsday scenario” some tech executives have promoted.
But it is nine times what anyone was willing to say publicly. And the study’s co-author, Duke’s John Graham, was candid about the limits of short-term projection: “Who knows what’s going to happen in 2028?”
The uncomfortable truth is that we have a workforce measurement infrastructure designed for the 1970s being asked to track a 2026 phenomenon. Until disclosure requirements catch up to corporate intent, the gap between what executives plan and what they report will remain the defining failure of AI labor policy.