In a clear sign of intensifying rivalry in the AI race, Anthropic has accused OpenAI of violating its terms of service and partially blocked the ChatGPT-maker from accessing its Claude series of AI models via API (application programming interface).
OpenAI had been granted developer (API) access to Claude models for industry-standard practices such as benchmarking and safety evaluations, which involve comparing AI-generated outputs against those of its own models.
However, according to a report by Wired, Anthropic has now accused members of OpenAI’s technical staff of using that access to interact with Claude Code – the company’s AI-powered coding assistant – in ways that violated its terms of service.
The timing is notable as it comes ahead of the widely anticipated launch of GPT-5, OpenAI’s next major AI model, which is purportedly better at generating code. Anthropic’s AI models, meanwhile, are popular among developers for their coding abilities.
Anthropic’s commercial terms of service prohibit customers from using the service to “build a competing product or service, including to train competing AI models” or “reverse engineer or duplicate” the services.
“Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service,” Anthropic spokesperson Christopher Nulty was quoted by Wired as saying.
Anthropic will “continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry,” he added.
“I think trust is really important. I think the leaders of the company have to be trustworthy people, they have to be people whose motivations are sincere no matter how much you’re driving forward the company. Technically if you’re working for someone whose motivations are not sincere, who’s not an honest person who does not truly want to make the world better, it’s not going to work. You’re just contributing to something bad,” Anthropic CEO Dario Amodei said on a recent podcast, where he explained his decision to leave OpenAI and launch Anthropic with the company’s other co-founders.
Responding to Anthropic’s claims, OpenAI’s chief communications officer Hannah Wong reportedly said, “It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.”
This is not the first time that Anthropic has taken such measures. Last month, the Google- and Amazon-backed company restricted Windsurf from directly accessing its models following reports that OpenAI was set to acquire the AI coding startup. That deal ultimately fell through after Google reportedly hired away Windsurf’s CEO and co-founder and licensed its technology in a transaction worth $2.4 billion.
Ahead of cutting off OpenAI’s access to the Claude API, Anthropic announced new weekly rate limits for Claude Code as some users were running the AI coding tool “continuously in the background 24/7.”
Earlier this year, OpenAI accused Chinese rival DeepSeek of breaching its terms of service. The Sam Altman-led company said it suspected DeepSeek of training its AI models on outputs obtained by repeatedly querying OpenAI’s proprietary models, a technique commonly referred to as distillation.