The Office of the Inspector General at the Department of Agriculture published an inspection report on May 12 finding that the department has not fully implemented the cybersecurity and governance controls the federal government now requires for artificial-intelligence systems, and that it cannot, at present, produce a current inventory of the AI tools running on its own networks. The report, numbered 50801-0018-12 and titled “Cybersecurity of Artificial Intelligence Technology at USDA,” is the first agency-wide evaluation of how a cabinet department is performing against the high-impact AI control regime created by Office of Management and Budget Memorandum M-25-21, signed in February 2025. Its four recommendations remain open; the Department has agreed to all four, but the report does not specify when implementation will be complete.
The audit lands in a particular gap. Federal AI governance has, since the start of the second Trump administration, been organised around two OMB documents: M-25-21, “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust,” signed in February 2025, and its acquisition companion, M-25-22. Both reference a category called high-impact AI, which M-25-21 defines as AI whose output “serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety.” For systems in that category, the memorandum requires minimum risk-management practices: pre-deployment testing, an independent assessment where appropriate, Chief AI Officer sign-off before launch, a documented ability to pause or discontinue the system, and a process for affected individuals to seek human review. Agencies were directed to report compliance against these practices to OMB by September 22, 2026. For the Agriculture Department, that deadline is now four months away.
Four gaps, in the inspector general’s own words
The inspector general’s report lists four open recommendations. Read together, they describe a department that has built an AI strategy but not the operational scaffolding underneath it. The first recommendation directs USDA to establish department-wide controls that require a high-impact AI assessment, per OMB standards, for any system that meets the M-25-21 definition. The audit notes the absence of such controls at the time of fieldwork. The second directs USDA to update its policies and procedures to incorporate AI in compliance with the M-25-21 requirements. The third directs the department to “create an ongoing process to maintain a current USDA AI inventory.” The fourth directs the department to develop a pre-deployment process, including risk assessments, system authorisation, and impact analysis, before AI technologies are permitted to access the USDA network.
The third and fourth recommendations are the most consequential. The third is, in effect, a finding that the department’s public inventory of AI use cases, last updated in January 2026 and posted at usda.gov/ai/inventory as required by Executive Order 13960, is not the product of an ongoing tracking system but a point-in-time snapshot. Without a continuous inventory, the department cannot answer the predicate question for every subsequent control: what AI systems do we have? The fourth recommendation is a finding that AI tools have, until now, been able to reach the USDA network without a documented pre-deployment risk assessment. That is the gap an attacker would exploit, and the gap a Chief AI Officer cannot close by policy memo alone.
The strategy without the controls
The department has not been silent on AI. In early 2026 it published a Fiscal Year 2025–2026 AI Strategy, announced on February 26, committing USDA to a five-objective framework of governance, workforce, infrastructure, data, and ethical implementation. The framework is aligned to the National Institute of Standards and Technology AI Risk Management Framework and was informed by feedback from more than 200 outside stakeholders. The strategy named a Chief AI and Data Officer, Chris Alvares, sitting inside the Office of the Chief Information Officer, with delegated authority over the M-25-21 compliance plan, the department’s formal response to the memorandum. The Information Security Committee, the department’s internal governance forum, lists artificial intelligence and robotic process automation among its four standing priorities. None of these documents is what the inspector general’s report says is missing. What is missing, in the audit’s account, is the operational layer beneath them: the inventory process, the pre-deployment review, the binding department-wide control, the updated procedure.
The departmental context is also a factor the report does not fully treat. Gary S. Washington, Chief Information Officer for eight years, moved to a brief intermediate post as Chief Innovation Officer before leaving the agency entirely in November 2025; his successor as CIO, Sam Berry, was installed in September 2025. The fieldwork that produced the May 12 report spans, in part, that transition. Whether the gaps the audit identifies are the artefact of an interregnum or a longer-standing condition is a question the report does not, on its public face, resolve.
What the inventory does, and does not, say
“USDA has not fully implemented cybersecurity and governance controls within Artificial Intelligence systems in compliance with federal standards, leaving the agency at risk of data breaches or reputational harm.” — USDA Office of Inspector General, report 50801-0018-12, Cybersecurity of Artificial Intelligence Technology at USDA, May 12, 2026
The Department of Agriculture’s inventory of AI use cases, published in the federal open-data catalog and cross-listed on the department’s public AI pages, lists the department’s declared use cases with a short narrative description for each. The inventory is the artefact most readers can touch, and it has been updated in line with Executive Order 13960 and M-25-21. What it does not contain, and what the audit finds no internal process produces, is a current, continuously maintained record that can be cross-referenced against the M-25-21 high-impact category. The CSV is a list; the audit asks for a system. The difference matters because high-impact designation triggers everything else: the independent testing, the pre-deployment authorisation, the Chief AI Officer approval, the discontinuation pathway. Without a process that flags new and changed use cases in near-real time and walks them through the high-impact screen, the September 22 OMB reporting deadline would be met, on the present record, by a snapshot rather than a system.
What the department has agreed to
The Department concurred with all four recommendations in its written response to the audit, a response that, per the inspector general’s standard practice, accompanies the published report. The response does not, in the public version of the document, commit to specific implementation dates for any of the four. Under the OMB and inspector-general framework, open recommendations remain on the agency’s public ledger until the inspector general accepts the department’s evidence of closure; there is no statutory deadline by which the agency must close any single recommendation. The September 22 OMB reporting requirement is the only hard external date on the calendar, and it is a requirement to report compliance, not to close the inspector general’s recommendations.
The Office of the Inspector General’s public affairs office was contacted by The Moxley Press on the morning of May 12, hours after the report’s release, with three questions: the duration of the audit fieldwork, whether the inspector general’s office has visibility into the September 22 OMB submission USDA will make, and whether any of the four recommendations were closed between the close of fieldwork and the report’s publication. A spokesperson responded the same day, pointed to the published report, and declined to comment on the timeline of the audit or on inter-agency communications. The Department’s Office of Communications was contacted on the same morning with four questions: a target date for closing each of the four open recommendations, whether the Department has produced a draft of the September 22 OMB submission, whether any AI use cases on the public inventory currently meet the M-25-21 high-impact definition, and whether the Chief AI and Data Officer has invoked the M-25-21 pause-or-discontinue authority on any use case. A response had not been received by publication. Mr. Alvares, the Chief AI and Data Officer, was contacted at his published Department address; a response had not been received by publication.
What the audit does not, and cannot, do
The inspector general is an audit office, not an enforcement body. The report’s recommendations have the force of recommendation, not order. The office cannot, on its own authority, halt the use of any AI tool at the Department, cannot direct the Chief AI and Data Officer to invoke the M-25-21 discontinuation pathway on any specific use case, and cannot extend or modify the September 22 OMB reporting deadline. What it can do, and has done, is produce a public document, drawn from internal records, that lists the gaps and dates them. The next document in the chain is the Department’s September 22 OMB submission. Whether that submission, when filed, narrows the gap between the inspector general’s findings and the M-25-21 framework will be measurable on its face. The submission, on the present rule set, is required to be publicly available.
What is unresolved
Several questions are open. The Department has not stated publicly how many of the use cases on its January 2026 public inventory currently meet the M-25-21 high-impact definition; the audit does not produce that count either. The Department has not stated publicly whether any AI use case has been paused, discontinued, or denied authorisation under the M-25-21 framework; the audit does not surface such an instance. The September 22 OMB submission, the central downstream document in this chain, has not yet been filed, and the Department has not, on the public record, committed to publishing the draft. Whether the inspector general’s office will conduct a follow-on evaluation after September 22, comparing the Department’s reported compliance against the gaps catalogued in the May 12 report, is a question the May 12 document does not address.
The authority behind the high-impact framework is not, in this case, statute at all. M-25-21 is an executive-branch memorandum, and the obligations it creates run from the Office of Management and Budget to the Chief AI Officer at each agency. Congress has not, at the time of writing, enacted a federal AI cybersecurity statute that would harden these requirements into law or expose non-compliance to civil enforcement. The inspector general’s May 12 report is, in that sense, the only formal audit instrument currently available to test agency compliance against the framework. What it found at the Agriculture Department, on the public record, is a strategy without the controls underneath it, an inventory without the process behind it, and a network whose AI tools, until the recommendations close, can still arrive without a documented risk assessment.
