President Trump's first-day promise to free artificial intelligence from government constraints has collided with a new reality: AI models that can hunt cybersecurity flaws with extraordinary speed and precision. Fifteen months into the administration, the White House is now preparing to become a gatekeeper for the most powerful models on Earth. Anthropic's Mythos, withheld from public use over safety concerns, was the first to trigger panic inside the administration.

It is a sea change in both Silicon Valley and Washington, accelerated by a class of models that no administration, even one ideologically committed to a hands-off approach, can afford to ignore. OpenAI's GPT-5.5 now matches Mythos's capabilities, and Chinese labs are racing to catch up. The White House's shift signals that the frontier of AI has crossed a threshold demanding federal oversight.

Just two months ago, the Pentagon declared Anthropic a "supply chain risk" and effectively blacklisted the company. Now the same White House is developing guidance that would allow agencies to work around that designation and onboard new Anthropic models, according to Axios's Maria Curi and Ashley Gold. The reversal underscores the urgency of the moment.

The White House is also weighing an executive action as a next step. AI developers and defense contractors will be watching closely, since the Pentagon's earlier ban created uncertainty across the industry. If federal doors reopen for Anthropic, the competitive landscape for AI in national security could be reshaped.

The shift exposes a tension: the administration wants to lead on AI safety without embracing heavy regulation. Critics may argue that picking winners by granting access while blacklisting others is itself a form of industrial policy.