OpenAI is reportedly planning to implement an ID verification process for organizations seeking access to certain upcoming AI models, as indicated by a support page recently added to the company’s website.
The verification method, called Verified Organization, is described as “a new way for developers to gain access to the most advanced models and capabilities on the OpenAI platform.” To complete verification, an organization must provide a government-issued ID from one of the regions supported by OpenAI’s API. Notably, a single ID can verify only one organization every 90 days, and not all organizations will be eligible, according to OpenAI.
The support page states, “At OpenAI, we are committed to ensuring that AI is both widely accessible and utilized safely. However, a small percentage of developers use the OpenAI APIs in ways that violate our usage policies. We are introducing this verification process to help reduce unsafe AI usage while still providing advanced models to the larger developer community.”
The new verification process is likely intended to tighten security around OpenAI’s technology as its models grow more capable. The company has published several reports on its efforts to detect and mitigate malicious use of its models, including by groups reportedly affiliated with North Korea.
The measure may also be aimed at preventing intellectual property theft. A Bloomberg report earlier this year said OpenAI was investigating whether a group linked to the China-based AI lab DeepSeek exfiltrated large amounts of data through its API in late 2024, possibly to train models, which would violate OpenAI’s terms of service.
OpenAI blocked access to its services in China last summer.