
How DOGE Used ChatGPT to Kill 97% of America’s Humanities Grants

Mar 29, 2026

In April 2025, two young staffers from the Department of Government Efficiency walked into the National Endowment for the Humanities and, within 22 days, cancelled 97 percent of the agency’s active grants. Their primary analytical tool was not a team of subject-matter experts, not the NEH’s own peer review system, and not even a careful reading of the grant proposals themselves. It was a single ChatGPT prompt: “Does the following relate at all to DEI?”

The result was the largest mass termination of federal grants in the NEH’s 60-year history. More than $100 million in congressionally appropriated funding vanished. A Holocaust documentary, an Italian-American history archive, a Native American language preservation project, and a museum’s request for a new air conditioning system were all flagged as “diversity, equity, and inclusion” by the same algorithm and killed by the same two people.

This story, pieced together from court depositions, internal emails, and spreadsheets released in March 2026 as part of a federal lawsuit, is not really about artificial intelligence. It is about what happens when ideological enforcement replaces institutional judgment, and when the people wielding the axe cannot tell the difference between a Holocaust oral history and a DEI training seminar.

What Happened

On January 20, 2025, President Trump signed Executive Order 14151, directing federal agencies to eliminate all DEI programs, offices, and grants. The order did not define DEI with precision. It did not specify what to do about a grant studying Italian-American immigrant life or one digitizing Appalachian photographs. That ambiguity would prove catastrophic.

DOGE’s Small Agencies Team, led by a GSA employee named Justin Fox and his supervisor Nate Cavanaugh, met with NEH leadership on March 12, 2025. Neither man had government experience. Fox was a former investment banker. Cavanaugh came from tech and finance. Neither had backgrounds in the humanities, academic research, or grant administration.

Their task was to identify NEH grants that violated the executive order. Their method was to feed 1,163 grant descriptions into ChatGPT, one by one, with a prompt that read: “Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation.”

Fox did not define “DEI” for the model. He did not instruct it on how to interpret the term. He did not know how ChatGPT interpreted it. The chatbot’s yes-or-no answers and brief rationales were pasted into a spreadsheet. That spreadsheet became the kill list.

Of the 1,163 grants analyzed, 1,057 were flagged as DEI-related; only 42 grants were ultimately kept. By April 1, 2025, the NEH had issued termination letters for roughly 1,400 grants and laid off 116 employees, about two-thirds of its workforce.

What ChatGPT Actually Flagged

The spreadsheet, presented as evidence in court, reveals what happens when you ask a language model to be an ideological filter without giving it a coherent ideology to filter for.

The High Point Museum in North Carolina requested $349,000 to replace its aging HVAC system. ChatGPT’s verdict: “Yes. Improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences. #DEI.” The grant was cancelled. The museum director later told Fortune they recouped about 70 percent of the award through the termination clause.

A newspaper digitization project at the University of Oregon and University of Nebraska-Lincoln was flagged because the initiative “seeks to enhance digital newspaper programs, making them more accessible and customizable which aligns with DEI goals of inclusivity and representation.” Preserving old newspapers became DEI.

A documentary about Jewish women’s slave labor during the Holocaust was terminated. When Fox was asked in his deposition why a Holocaust documentary counted as DEI, he replied: “It’s a gender-based story that’s inherently discriminatory to focus on this specific group.”

A project on literary agents and the corporate structure of the publishing industry was flagged. A center for AI ethics research, including work on eldercare technology, was flagged. Projects to preserve endangered Native American languages were flagged. An archival project on Italian-American immigrant life was flagged.

Meanwhile, Fox also compiled a separate “Detection List,” searching the grant database for keywords like “gay,” “BIPOC,” “indigenous,” “tribal,” “melting pot,” and “equality.” He searched for “Black” and “homosexual” but not “white” or “caucasian.” He categorized the results into lists he titled “Craziest Grants” and “Other Bad Grants.”

Who Was Actually in Charge

On paper, NEH Acting Chair Michael McDonald was the “final decider” on grant terminations. In practice, the internal emails tell a different story.

McDonald wrote to Fox on April 1: “As you’ve made clear, it’s your decision on whether to discontinue funding any of the projects on this list.” He acknowledged that many of the targeted grants were “harmless when it comes to promoting DEI,” but noted that DOGE also wanted cuts “to assist deficit reduction.”

The pressure was relentless. On March 31, the day before 1,400 grants were terminated, Fox sent urgent messages to McDonald: “We’re getting pressure from the top on this and we’d prefer that you remain on our side but let us know if you’re no longer interested.”

That pressure, it turned out, was manufactured. Cavanaugh admitted under deposition that there was “no person explicitly putting pressure on Justin to send this email.” The White House urgency was a “time pressure tactic” they invented themselves.

DOGE staff even drafted and sent the termination letters themselves, using a Microsoft email account rather than the NEH’s standard grant management office. The letters cited a nonexistent executive order as the basis for termination, one that purportedly mandated the NEH “eliminate all non-statutorily required activities and functions.” No such order exists. McDonald acknowledged in his deposition that he did not review the letters “as closely as perhaps I should have.”

Official government business about the cuts was conducted over Signal, set to auto-delete messages, in violation of the Federal Records Act.

The Bigger Picture: $49 Billion Across Government

The NEH was just one agency. By January 2026, DOGE had driven the termination of 15,887 federal grants totaling approximately $49 billion across the federal government.

At the National Science Foundation, roughly 430 grants worth $328 million were cancelled in April 2025, including research on deepfake detection, election security, AI advancement, and STEM education for underserved communities. The mass cancellation coincided with the arrival of DOGE affiliates, including Luke Farritor, a former SpaceX intern, who received clearance to view and modify the agency’s funding opportunity system. The Office of Management and Budget instructed NSF staff that all funding opportunities now needed approval from DOGE, OMB, or the Office of the Director.

The NSF’s normal review process, where program officers evaluate projects and a Division of Grants and Agreements makes termination decisions with an appeal option, was bypassed. “This is all opaque to us,” one NSF source told Nextgov. “We don’t know who the individuals are that are calling shots. Now, it’s as though the foundation has been hijacked.”

AmeriCorps lost nearly $400 million in active grants, shutting down over 1,000 programs. The Department of Justice cancelled 373 grants worth $820 million supporting violence reduction and victim services. FEMA resilience programs lost nearly $1 billion.

Did It Work?

The stated goal was deficit reduction. In his January 2026 deposition, Cavanaugh was asked directly:

“You don’t regret that people might have lost important income … to support their lives?”

“No. I think it was more important to reduce the federal deficit from $2 trillion to close to zero.”

“Did you reduce the federal deficit?”

“No, we didn’t.”

Fox similarly acknowledged that the deficit was never reduced. Elon Musk departed DOGE at the end of May 2025. By November 2025, the Office of Personnel Management declared that DOGE had ceased to exist as a “centralized entity.”

What it left behind was concrete: researchers laid off mid-project, community organizations shuttered, language preservation work halted, and billions in already-appropriated funds clawed back from grants that had passed rigorous peer review before a chatbot ever saw them.

The Lawsuit

The American Council of Learned Societies, the American Historical Association, the Modern Language Association, and the Authors Guild filed their lawsuit in May 2025 and moved for summary judgment (a court ruling that resolves a case without a full trial, granted when there is no genuine dispute over the key facts and the law clearly favors one side) in March 2026. They allege violations of the First Amendment (targeting grants for their viewpoints), the Equal Protection Clause (flagging grants based on references to race, gender, ethnicity, and sexuality), and separation of powers (DOGE, not the NEH chair or Congress, made the funding decisions).

ACLS President Joy Connolly put it bluntly: “DOGE employees’ use of ChatGPT to identify ‘wasteful’ grants is perhaps the biggest advertisement for the need for humanities education, which builds skills in critical thinking.”

The NEH is awarding grants again, but the new recipients skew heavily toward conservative-aligned projects, including $10 million grants to public universities with “civics” schools and an education network headquartered at a conservative think tank. Trump has nominated McDonald to serve as permanent NEH chairperson.

On April 1, 2025, the National Endowment for the Humanities terminated roughly 97 percent of its active grant portfolio. The mechanism behind this decision, revealed through court-ordered discovery in March 2026, is a case study in how large language models (systems trained on vast amounts of text that predict and generate language, and that exhibit surprising capabilities alongside confident errors) fail when deployed as classification systems without ground-truth labels (verified reference data against which a classifier's accuracy can be measured), validation sets, or domain expertise.

The Classification Pipeline

DOGE staffer Justin Fox submitted 1,163 NEH grant descriptions to OpenAI’s ChatGPT with the following prompt:

“Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation. Do not use ‘this initiative’ or ‘this description’ in your response.”

The responses were copied into a spreadsheet alongside each grant’s metadata. The spreadsheet included columns for “DEI rationale” and “Yes / No DEI?” This ChatGPT-generated list replaced the list created by NEH’s own staff for determining which grants to cut.
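Mechanically, the pipeline described in the depositions is just a loop: one grant description in, one tweet-length verdict out, one spreadsheet row written. The following is a minimal Python reconstruction of that shape, not DOGE's actual code. The prompt text is quoted from the court record; the `classify_stub` function is a stand-in for the ChatGPT call (a real pipeline would send the prompt plus description to a chat-completion API), and the grant IDs and descriptions are invented.

```python
import csv
import io

# Prompt text as quoted in the court record.
PROMPT = (
    "Does the following relate at all to DEI? Respond factually in less "
    "than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief "
    "explanation. Do not use 'this initiative' or 'this description' in "
    "your response."
)

def classify_stub(description: str) -> str:
    """Stand-in for the LLM call. Mimics the loosest possible
    'relates at all' behavior: nearly everything comes back 'Yes.'"""
    return "Yes. Relates to access for diverse audiences."  # hypothetical model output

def build_kill_list(grants: list[dict]) -> str:
    """Replicates the spreadsheet layout described in the filings:
    one row per grant, the verdict split into a Yes/No flag column
    and a rationale column."""
    out = io.StringIO()
    writer = csv.DictWriter(
        out, fieldnames=["grant_id", "Yes / No DEI?", "DEI rationale"]
    )
    writer.writeheader()
    for g in grants:
        verdict = classify_stub(g["description"])
        flag, _, rationale = verdict.partition(" ")
        writer.writerow({
            "grant_id": g["grant_id"],
            "Yes / No DEI?": flag.rstrip("."),
            "DEI rationale": rationale,
        })
    return out.getvalue()

grants = [
    {"grant_id": "NEH-001", "description": "Replace the museum's aging HVAC system."},
    {"grant_id": "NEH-002", "description": "Digitize historic regional newspapers."},
]
print(build_kill_list(grants))
```

The point of the sketch is how little machinery sits between a grant description and a termination decision: there is no review step, no second opinion, no threshold tuning, just a string in and a row out.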

Of 1,163 grants processed, 1,057 were classified as DEI-related, a positive classification rate of approximately 91 percent; only 42 grants were ultimately retained. The termination rate across the full NEH portfolio was 97 percent.

Why the Classifier Failed

The prompt design guaranteed a high false positive rate (the proportion of negative cases a classifier incorrectly labels as positive) for several compounding reasons:

No operationalized definition. The prompt asked whether something “relate[s] at all to DEI” without defining DEI. Fox testified that he did not define DEI for ChatGPT and did not know how the model interpreted the term. Without a ground truth label, the classifier has no target to optimize against. It defaults to the broadest possible semantic association in its training data.

Maximally inclusive phrasing. “Relate at all to” is the loosest possible relevance threshold. Any semantic connection, however tenuous, qualifies. A language model trained on internet text will find some associative path between virtually any humanities project and the concept of diversity, equity, or inclusion, because those terms are pervasive in academic and cultural discourse. Asking “does this relate at all” is the NLP equivalent of asking “could this possibly be connected if you squint.”

Character limit compressed reasoning. The 120-character constraint forced ChatGPT to produce a rationale that fit in roughly 20 words. This eliminated any possibility of nuanced analysis, hedge language, or counterargument. The model had to commit to a binary classification with a tweet-length justification.

No negative-class calibration. There was no test against known non-DEI grants to establish a baseline false positive rate, and no adversarial testing with edge cases (like HVAC grants) to see where the boundary sat. The system was deployed at scale on its first run.
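The missing step that last point describes, checking a classifier against a small labeled validation set before deploying it, takes a few lines. A sketch with invented labels (the validation examples here are hypothetical, not the actual grants):

```python
def false_positive_rate(preds: list[bool], labels: list[bool]) -> float:
    """FPR = flagged negatives / total negatives.
    preds and labels are parallel lists of booleans (True = 'is DEI')."""
    flagged_negatives = [p for p, y in zip(preds, labels) if not y]
    if not flagged_negatives:
        raise ValueError("validation set contains no negative examples")
    return sum(flagged_negatives) / len(flagged_negatives)

# Hypothetical validation set: four grants known NOT to be DEI policy
# (HVAC repair, newspaper digitization, and so on) and one that is.
labels = [False, False, False, False, True]   # ground truth
preds  = [True,  True,  True,  False, True]   # what a "relates at all" prompt returns

print(false_positive_rate(preds, labels))  # 3 of 4 negatives flagged -> 0.75
```

Had anyone run even a check like this on a dozen hand-labeled grants, the HVAC-style false positives would have surfaced before 1,400 termination letters went out.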

The predictable result: an HVAC replacement grant was classified as DEI because “improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences.” The chain of reasoning is: better climate control leads to better preservation leads to more accessibility leads to diverse audiences leads to DEI. Each inferential step is individually plausible in a language modeling sense, but the cumulative chain maps a mechanical engineering project onto an ideological category.

The Keyword Layer

The ChatGPT classification was not the only filter. Fox also conducted keyword searches of the grant database, looking for terms including “DEI, DEIA, Equity, Inclusion, BIPAC, LGBTQ.” He compiled a separate “Detection List” using terms like “gay,” “BIPOC,” “indigenous,” “tribal,” “melting pot,” and “equality.”

He searched for “Black” and “homosexual” but not “white” or “caucasian.” This asymmetry means the keyword filter was structurally more likely to flag grants involving racial and sexual minorities regardless of whether the grant’s actual purpose related to DEI policy.
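The structural asymmetry is easy to see in code. A sketch of a keyword filter like the one described (term list abbreviated from the deposition; the grant descriptions are invented):

```python
# Terms reconstructed from the deposition. Searching for "black" but not
# "white" means only one side of any racial pairing can ever trigger a flag.
DETECTION_TERMS = {
    "gay", "bipoc", "indigenous", "tribal",
    "melting pot", "equality", "black", "homosexual",
}

def flag(description: str) -> bool:
    """Case-insensitive substring match against the term list."""
    text = description.lower()
    return any(term in text for term in DETECTION_TERMS)

# Two grants with mirror-image subjects get opposite treatment:
print(flag("Oral histories of Black railroad workers"))   # True
print(flag("Oral histories of white railroad workers"))   # False
```

No matter what the second grant is actually about, it cannot be flagged, because its identifying term was never in the search set. The bias lives in the term list, not in any individual match.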

The combination of a maximally permissive LLM classifier and an asymmetric keyword filter created a system that was, by construction, almost incapable of producing a “no” result for any grant involving underrepresented populations. This is not a bug in the sense of unexpected behavior. The system worked exactly as designed. The design was the problem.

Institutional Process Displacement

Federal grant agencies have established procedures for grant review and termination. At the NSF, program officers evaluate projects, the Division of Grants and Agreements makes termination decisions, and awardees have an appeal process. At the NEH, grants pass through peer review before being awarded.

DOGE bypassed all of these mechanisms. Fox drafted and sent termination letters himself using a Microsoft email account rather than routing them through the NEH’s office of grant management. The letters cited a nonexistent executive order. NEH Acting Chair McDonald ceded decision authority to DOGE in writing. DOGE and NEH staff communicated about the process via Signal with auto-delete enabled, violating the Federal Records Act.

The same pattern repeated at the NSF. Three DOGE affiliates embedded in the Office of the Director, with at least one receiving “Budget, Finance, and Administration” clearance allowing modification of the agency’s funding system. The Office of Management and Budget instructed NSF staff that all funding opportunities required DOGE or OMB approval. Approximately 430 grants worth $328 million were terminated, including research on deepfake detection, election security, and cyber-physical systems protection.

Scale

By January 2026, DOGE had driven 15,887 federal grant terminations totaling approximately $49 billion. These were not budget cuts to future appropriations. They were terminations of grants already awarded, often mid-execution. Universities with multi-year NSF grants lost funding in Year 3 of 5. Nonprofits running federally funded community programs lost their entire operational budgets.

The economic impact extends beyond the direct dollar figures. The grants funded staff who were laid off, research infrastructure that was abandoned, and community programs that closed. The AmeriCorps cuts alone eliminated over 32,000 positions. The Department of Justice cancelled 373 grants worth $820 million that had been supporting violence reduction and victim services.

The Deposition Testimony

In his January 2026 deposition, Cavanaugh was asked whether it was inappropriate that “someone in their 20s with no experience with federal government grants was making personal judgment calls about what grants to cancel.” He said it was not inappropriate and that he did not need formal education or experience. He was asked if he had read any books on how to identify DEI in grants. He had not.

Fox was asked why a Holocaust documentary about Jewish women’s experiences counted as DEI. He called it “a gender-based story that’s inherently discriminatory to focus on this specific group.”

When asked whether he took any steps to ensure ChatGPT’s classification would not discriminate on the basis of sex, Fox replied: “It didn’t matter.”

The stated justification for all of it was deficit reduction. When pressed, Cavanaugh admitted the deficit was not reduced. Cavanaugh also admitted that the “pressure from the White House” Fox invoked in emails to McDonald was fabricated as a “time pressure tactic.”

Legal and Institutional Aftermath

The ACLS, AHA, MLA, and Authors Guild filed their lawsuit in May 2025 and moved for summary judgment in March 2026 on three grounds: First Amendment viewpoint discrimination (the government restricting or penalizing speech based on the viewpoint it expresses), Equal Protection Clause violations (flagging grants based on references to race, gender, ethnicity, and sexuality), and separation of powers violations (DOGE, not the NEH or Congress, controlled funding decisions).

ACLS President Joy Connolly stated: “DOGE employees’ use of ChatGPT to identify ‘wasteful’ grants is perhaps the biggest advertisement for the need for humanities education, which builds skills in critical thinking.”

The NEH has resumed awarding grants, but with a pronounced shift toward conservative-aligned projects. Two public universities with “civics” schools and an education network based at a conservative think tank received $10 million grants. Trump nominated McDonald as permanent NEH chairperson. Multiple federal judges have issued orders blocking or reversing specific grant terminations across agencies, but litigation moves slowly relative to the damage already inflicted.
