Between April 1 and April 3, 2025, the National Endowment for the Humanities terminated more than 1,400 previously awarded grants, the largest mass termination in the agency’s sixty-year history. The decision to cut the grants was not made by the Endowment’s career staff, by its peer-review panels, or by its acting chair. According to discovery materials filed in the U.S. District Court for the Southern District of New York and unsealed through court order in March, the decision was made by two Department of Government Efficiency staffers, working from a single spreadsheet, after passing 1,162 grant descriptions through OpenAI’s ChatGPT and asking the chatbot, one row at a time, whether the project related to “DEI.” Of the proposals fed to the model, 1,057 were flagged for termination. Forty-two were kept.
The case is ACLS, AHA, and MLA v. NEH, filed in the Southern District by the American Council of Learned Societies, the American Historical Association, and the Modern Language Association. The plaintiffs’ motion for summary judgment, filed March 6, 2026, asks Judge Colleen McMahon to declare the terminations unlawful, unconstitutional, and ultra vires. The motion attaches sworn deposition testimony from the two DOGE staffers who built the spreadsheet, exhibits drawn from internal NEH email and tracking records, and the prompt itself. Briefing closed in April. A ruling is expected in the coming weeks. The Department of Justice, which represents the Endowment in the litigation, has so far defended the terminations on statutory grounds and has not contested the underlying account of how the spreadsheet was assembled.
The prompt
The text DOGE submitted to ChatGPT is reproduced verbatim in the plaintiffs’ brief: “Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation.” Each of the 1,162 grant summaries then in the NEH database was pasted into the prompt and submitted as a separate query. The model’s response, “Yes” or “No,” followed by a short rationale, was recorded in a spreadsheet column labelled “DEI rationale.” A parallel column carried the binary “Yes / No DEI?” classification. That spreadsheet, with the chatbot’s answers in place of any agency review, became the list of grants to terminate.
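The filing reproduces the prompt verbatim but says nothing about the tooling DOGE used to run it. As a rough sketch of the loop the brief describes, assuming a helper `ask` standing in for whatever chatbot interface was actually used, the per-row process might look like this (the function names and row layout are illustrative; only the prompt text and the two column headers come from the record):

```python
# Illustrative reconstruction of the loop described in the plaintiffs' brief.
# The prompt text and the column headers "Yes / No DEI?" and "DEI rationale"
# are from the discovery record; everything else is an assumption.
from typing import Tuple

PROMPT = (
    "Does the following relate at all to DEI? Respond factually in less "
    "than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief "
    "explanation."
)

def build_query(grant_summary: str) -> str:
    """Attach one grant summary to the fixed prompt, one query per row."""
    return f"{PROMPT}\n\n{grant_summary}"

def parse_reply(reply: str) -> Tuple[str, str]:
    """Split a 'Yes. ...' / 'No. ...' reply into the two spreadsheet columns."""
    flag, _, rationale = reply.partition(".")
    return flag.strip(), rationale.strip()

def classify_all(summaries, ask):
    """Run every summary through `ask` (a callable wrapping the chatbot)
    and return rows shaped like the filed spreadsheet."""
    rows = []
    for summary in summaries:
        flag, rationale = parse_reply(ask(build_query(summary)))
        rows.append({"Yes / No DEI?": flag, "DEI rationale": rationale})
    return rows
```

On the record before the court, nothing more than this loop stood between a grant summary and its termination flag.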
The two DOGE staffers responsible for the work are named in the depositions: Justin Fox, who copied the chatbot responses into the spreadsheet, and Nathan Cavanaugh, who oversaw the review. Cavanaugh was deposed on January 23, 2026; Fox was deposed the same month. The videos and transcripts were initially sealed, and Judge McMahon ordered them released. When the Justice Department then asked for the videos to be taken down, citing harassment risks, the judge ruled that the public interest in the depositions outweighed the staffers’ embarrassment or reputational concerns, and the videos remain publicly posted on the ACLS lawsuit page.
Cavanaugh testified that he and Fox did not consult NEH scholars, did not run grants through the agency’s established peer-review process, and did not seek subject-matter expertise before generating the termination list. He acknowledged in deposition that earlier statements attributing pressure to the White House had been, in his words, a “pressure tactic” and were not accurate. Asked by plaintiffs’ counsel whether he regretted that grant recipients had lost income, Cavanaugh answered that he did not, citing the importance of reducing the federal deficit “from $2 trillion to close to zero.” On follow-up he conceded that DOGE had not, in the end, reduced the deficit.
What the model classified
The discovery exhibits, available through the Modern Language Association’s public discovery page and from the case docket on PACER, include the as-filed spreadsheet with the “DEI rationale” column intact for every grant terminated. The classifications cited in the plaintiffs’ brief and reproduced in newsroom reporting include grants on Holocaust history, on Reconstruction-era political institutions, on the preservation of regional American dialects, and on translations of medieval texts. The model flagged grants whose abstracts contained the words “history,” “culture,” or “identity.” It flagged grants whose abstracts described populations protected under the Civil Rights Act. It also, on the record before the court, flagged grants that contained none of those characteristics and offered the rationale that the project “may relate to DEI initiatives.”
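Because the filed spreadsheet retains its “Yes / No DEI?” and “DEI rationale” columns, the keyword patterns described above are checkable against the exhibit itself. A minimal sketch of such a tally, assuming the exhibit is exported as CSV and that it carries a grant-summary column (the column name `Grant summary` here is hypothetical; the two flag columns are from the record):

```python
# Illustrative audit of the filed spreadsheet. "Yes / No DEI?" and
# "DEI rationale" are the headers in the discovery exhibit; the filename
# and the "Grant summary" column are assumptions for this sketch.
import csv
from collections import Counter

TRIGGER_WORDS = ("history", "culture", "identity")

def load_rows(path: str) -> list[dict]:
    """Read the exported exhibit as a list of header-keyed rows."""
    with open(path, newline="", encoding="utf-8") as fh:
        return list(csv.DictReader(fh))

def keyword_counts(rows: list[dict], text_field: str = "Grant summary") -> Counter:
    """Count how many rows flagged 'Yes' contain each trigger word."""
    counts = Counter()
    for row in rows:
        if row.get("Yes / No DEI?", "").strip().lower() != "yes":
            continue
        text = row.get(text_field, "").lower()
        for word in TRIGGER_WORDS:
            if word in text:
                counts[word] += 1
    return counts
```

A tally like this can show correlation between a word and a flag; it cannot show why the model answered “Yes” in any given row, which is part of what makes the rationale column so thin a basis for review.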
The plaintiffs argue that this is the constitutional defect at the centre of the case. Their brief frames the question not as whether the executive may set funding priorities, which it may, but as whether a federal agency may terminate previously awarded grants on the basis of a chatbot’s 120-character impression of viewpoint. The brief argues the terminations targeted “grants the administration disagrees with on viewpoint, race, sex, religion, national origin, and other constitutionally sensitive criteria,” using a process that bears no relation to the statutory review structure Congress established for the Endowment.
The review process implemented by DOGE did not conform to, or even resemble, NEH’s ordinary grant review process. — Plaintiffs’ motion for summary judgment, ACLS v. NEH, filed March 6, 2026
How the spreadsheet became a termination
The procedural chain from prompt to termination letter, as it appears in the discovery record, is short. After Fox transcribed the ChatGPT responses, the spreadsheet was forwarded to the office of the NEH’s acting chair. Michael McDonald, the agency’s general counsel, served as acting chair from March 2025 to January 2026 and was deposed in the same proceeding. The depositions of McDonald and of Adam Wolfson, who was assistant chair for programs at the time of the terminations, establish that the chatbot-derived list replaced an earlier list prepared by career NEH staff. Termination letters went out in the first three days of April 2025. Recipients learned that the remaining balances of their grants, many of them already partially expended, would not be paid out.
A separate set of internal records, also produced in discovery, shows the operational footprint of the cuts inside the Endowment in the weeks that followed. Roughly two-thirds of NEH career staff were placed on administrative leave or terminated. Five of the agency’s six grant-making divisions were eliminated or substantially reduced in scope. The agency’s yearly funding distribution dropped by more than one hundred million dollars, a figure plaintiffs describe in their brief as approximately half of the Endowment’s annual appropriation from Congress.
NEH does not contest the operational facts. The agency’s position in the litigation is that the terminations were a lawful exercise of executive discretion consistent with the President’s executive orders directing agencies to wind down activities the administration characterised as DEI. The government’s brief, filed in response to summary judgment, argues that grant recipients have no constitutional entitlement to the continued performance of awarded grants and that the choice of review methodology, including the use of a generative model, is a matter of internal agency management not properly subject to judicial second-guessing.
The unresolved questions
Several factual questions remain open on the public record. The court has not yet ruled, and the government may yet introduce evidence at oral argument that complicates the plaintiffs’ account. The depositions leave the prompt’s provenance only partly settled: Cavanaugh testified that the wording was his, but the record does not establish whether the form of words was reviewed or approved by anyone outside the two-person DOGE team. The spreadsheet does not capture every step taken between the chatbot output and the final termination list; whether any human review occurred between the “DEI rationale” column and the termination letters is one of the contested matters before Judge McMahon. And the government has not stated, on the record, whether the chatbot-driven methodology has been applied at any other federal grant-making agency. Plaintiffs have asked the question in interrogatories; the response, on the docket, is that the matter is outside the scope of this litigation.
The Moxley Press contacted the National Endowment for the Humanities press office on April 28 for comment on the methodology described in the discovery record. A spokesperson responded on April 30 and pointed to the agency’s filings in the case, declining to comment further while litigation is pending. The Department of Government Efficiency does not maintain a press office or named spokesperson; an inquiry sent to the General Services Administration, where DOGE staff have been administratively housed, was acknowledged on May 1 and had not received a substantive response at publication. Counsel for Justin Fox and Nathan Cavanaugh, identified in the court record, were each contacted by email on April 29; neither had responded by publication. We will update this story with any response received after publication.
Judge McMahon’s ruling, when it comes, will turn on whether a 120-character chatbot classification, applied 1,162 times in succession without subject-matter review, can lawfully substitute for the statutory review process Congress wrote into the National Foundation on the Arts and the Humanities Act of 1965. The plaintiffs say it cannot. The government says it can. The discovery record, in the meantime, is the public document of how the cuts were actually made. It is a spreadsheet, a prompt, two staffers, and a chatbot. The grants are not theoretical. Neither is the record.
