According to a recent report by Wired, Anthropic has terminated OpenAI's access to its Claude AI models. Sources familiar with the matter claim OpenAI had been integrating Claude into its internal tools, likely to benchmark Claude's performance against OpenAI's own models, such as GPT, in areas like programming, content generation, and safety.
The move has raised eyebrows in the AI community and highlights the competitive tension between leading AI firms as they guard proprietary technology while still collaborating on broader safety goals.
Anthropic's spokesperson stated that OpenAI engineers used its coding tools ahead of the launch of GPT-5, a move considered a direct breach of Anthropic's terms of service. Under Anthropic's commercial terms, Claude may not be used to build competing products or services, and that clause appears to be what triggered the API ban.
However, Anthropic clarified that academic researchers and institutions may still access Claude for safety and performance evaluations. This indicates that the block is specifically aimed at corporate competitors rather than the research community.
In a separate statement, OpenAI described its use of Claude as "industry standard" and expressed disappointment in Anthropic's decision. The company emphasized that while it respects Anthropic's stance, it finds the move unfortunate, especially since OpenAI's own API reportedly remains open to Anthropic.
This sentiment highlights the asymmetry in access and the growing divide between AI companies, even as they preach openness and ethical alignment. OpenAI’s tone suggests that it had no malicious intent and that such cross-evaluations are common in the development of large models.
Anthropic's executives have previously resisted giving competitors access to Claude. Jared Kaplan, the company's Chief Science Officer, once justified cutting off Windsurf's access by saying, "It'd be weird to sell Claude to OpenAI." Rumors had previously linked OpenAI to a possible acquisition of Windsurf, which was eventually acquired by Cognition.
These statements underscore Anthropic's guarded approach to AI collaboration, particularly with its biggest rivals, and signal a shift in how trust and openness are managed in a highly competitive landscape.
Anthropic's CEO recently entered a public dispute with NVIDIA's Jensen Huang, who accused Anthropic of using lobbying efforts to shape AI policy and gain unfair influence in the sector. In response, Anthropic's CEO dismissed Huang's remarks as "egregious lies" based on a complete misinterpretation of his comments.
The dispute illustrates the growing friction not only between AI labs but also with key hardware suppliers like NVIDIA. As competition intensifies, these clashes reflect deeper power struggles over control of the AI ecosystem.