
OpenAI's "Trusted Access for Cyber" program is a controlled rollout initiative that restricts its advanced cybersecurity products to select defensive security operators.
OpenAI is limiting access over concerns, shared by many cybersecurity experts, that powerful AI models could pose risks if released publicly.
GPT-5.3-Codex is OpenAI's most capable cybersecurity offering and the foundation for the products being rolled out through the program.
OpenAI is backing participant access to the program with $10 million in API credits.

OpenAI is currently building a cybersecurity product it plans to release exclusively through its "Trusted Access for Cyber" program, according to Axios. The program was previously announced in February, and it’s meant to be a controlled rollout that keeps certain products away from the general public and in the hands of defensive security operators only.
OpenAI launched the program after releasing GPT-5.3-Codex, currently its most capable cybersecurity offering, and is backing participant access with $10 million in API credits.
The news comes amid growing worry among cybersecurity experts that increasingly powerful AI products could overwhelm existing defenses. Just earlier this week, Anthropic spooked itself with its own creation, Claude Mythos.
Anthropic said Mythos is its most capable AI model, and the model turned out to be so effective at finding security vulnerabilities, including zero-days in every major operating system and browser, that the company decided only a handpicked group of organizations should have access to it.
Now OpenAI is reportedly doing something similar.
Anthropic, meanwhile, is fighting a legal battle after the Pentagon designated it a supply chain risk when the company refused to lift usage restrictions on Claude for surveillance and autonomous weapons applications. Federal agencies have scrutinized AI companies' safety protocols with increasing intensity since early April.
As of now, OpenAI has not shared any public information officially confirming or denying the reports.
The reason for the restrictions isn't subtle. Anthropic's Mythos Preview, which leaked before its official rollout, was found capable of identifying "tens of thousands of vulnerabilities" that even advanced human bug hunters would struggle to locate. The model is described as "extremely autonomous" and is said to reason with the sophistication of a senior security researcher. That kind of capability, available to anyone with an API key, keeps security teams up at night.
Anthropic's response was Project Glasswing—a controlled access initiative that gives Mythos Preview only to vetted organizations: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, Palo Alto Networks, and roughly 40 others involved in maintaining critical infrastructure.
OpenAI's decision to lock down products like this one looks like an attempt to get ahead of that regulatory pressure. By voluntarily restricting access before a government agency tells them to, OpenAI positions itself as the responsible actor in a space where Anthropic is getting hammered.
The restrictions also reflect something deeper than caution about one specific model. Anthropic's own safety report acknowledged that Cybench, the benchmark used to evaluate whether an AI poses serious cyber risk, "is no longer sufficiently informative of current frontier model capabilities"—because Mythos cleared it completely. The tool built to measure the danger is no longer adequate for what's being built. Anthropic added that its overall safety determination "involves judgment calls" and that many evaluations leave "more fundamental uncertainty."
Anthropic committed up to $100 million in usage credits and $4 million in direct donations to open-source security organizations as part of its rollout. OpenAI has not announced a comparable commitment alongside its access program, though both companies are framing their restricted programs as a net benefit for defensive security—the idea being that giving better tools to defenders before attackers get them is worth the tradeoff of limiting general access.
The pattern emerging across the frontier AI industry is that the most capable models will no longer arrive as broad product launches. They'll be distributed more like classified research—selectively, under agreement, to organizations with the infrastructure and intent to use them responsibly.