WH ‘studying’ AI security executive order

An EO requiring pre-deployment review of frontier AI models would likely increase the workload at NIST's Center for AI Standards and Innovation.

The Trump administration is considering issuing an executive order to ensure new artificial intelligence models are secure before they’re released publicly, according to a top White House official.

Kevin Hassett, director of the National Economic Council, compared the approach to how the Food and Drug Administration evaluates drugs for safety.

“We’re studying possibly an executive order to give a clear road map to everybody about how this is going to go and how future AI that also potentially create vulnerabilities should go through a process so that they’re released in the wild after they’ve been proven safe, just like an FDA drug,” Hassett said during an interview on Fox Business on Wednesday.

Hassett’s comments come as government and private sector leaders continue to respond to Anthropic’s disclosure of its powerful “Mythos” model. The company previewed last month how Mythos was capable of quickly finding and exploiting decades-old vulnerabilities in widely used software, sparking concerns that cyber attackers will be able to use AI to quickly discover new vulnerabilities and create exploits before defenders can react.

Anthropic has limited the release of the Mythos model to a handful of partner companies.

Hassett said he was “highly confident” in National Cyber Director Sean Cairncross’s work to coordinate the government’s response to Mythos.

“We have scrambled an all-of-government effort and all the private sector to coordinate and to make sure that before this model is released out into the wild, that it’s been tested left and right to make sure that it doesn’t cause any harm to the American businesses or the American government,” Hassett said.

The shift toward more government oversight of AI would mark a change in direction for the Trump administration, which has touted its largely hands-off approach to the technology.

It would also likely increase the responsibilities of the Center for AI Standards and Innovation (CAISI), a unit within the Commerce Department’s National Institute of Standards and Technology.

Earlier this week, CAISI announced new agreements with Google DeepMind, Microsoft and xAI that will allow the center to conduct “pre-deployment” evaluations of the firms’ respective frontier AI models. The NIST center had already struck similar agreements with Anthropic and OpenAI.

So far, CAISI has conducted 40 evaluations, including on some models that have yet to be released.

“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said in a statement. “These expanded industry collaborations help us scale our work in the public interest at a critical moment.”

CAISI was initially established as the “AI Safety Institute” under the Biden administration. The Trump administration rebranded the center as part of its AI Action Plan. Commerce Secretary Howard Lutnick has designated CAISI “to serve as industry’s primary point of contact within the U.S. government to facilitate testing, collaborative research and best practice development related to commercial AI systems.”

But some outside experts have raised concerns that CAISI lacks the resources needed to adequately carry out its mission.

The Trump-aligned America First Policy Institute, in a recent issue brief, called CAISI “chronically underfunded,” noting the center has approximately 30 total staff. The think tank said CAISI has received $30 million since it was established in 2024, less funding than comparable AI centers in Canada and Singapore have each received.

The issue brief argued Congress should provide CAISI with $50 million to $100 million in annual funding.

Meanwhile, a proposal published by the Federation of American Scientists last year advocated for a “significantly enhanced” CAISI with an annual operating budget of up to $155 million, as well as $155-275 million in “set up costs” for things like high-security compute facilities.

According to the proposal, the enhanced center would have “expanded capacity for conducting advanced model evaluations for catastrophic risks, provide direct emergency assessments to the president and National Security Council (NSC), and drive critical AI reliability and security research, ensuring America is prepared to lead on AI and safeguard its national interests.”
