OpenAI may soon require organizations to complete an ID verification process to access certain future AI models, according to a support page published on the company’s website last week.
The verification process, called Verified Organization, is described on the page as “a new method for developers to gain access to the most advanced models and capabilities on the OpenAI platform.”
Verification requires a government-issued ID from one of the countries supported by OpenAI’s API. An ID can verify only one organization every 90 days, and not all organizations will be eligible for verification, according to OpenAI.
“At OpenAI, we are committed to ensuring that AI is both broadly accessible and used safely,” the page states. “Unfortunately, a small number of developers intentionally use the OpenAI APIs in violation of our usage policies. We’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”
The new verification process may be intended to strengthen security around OpenAI’s products as they become more sophisticated and capable. The company has published several reports on its efforts to detect and mitigate malicious use of its models, including by groups purportedly affiliated with North Korea.
It may also be intended to deter intellectual property theft. A Bloomberg report from earlier this year said OpenAI was investigating whether a group linked to DeepSeek, the China-based AI lab, exfiltrated large amounts of data through its API in late 2024, possibly to train its own models in violation of OpenAI’s terms.
OpenAI restricted access to its services in China last summer.