Anthropic revokes OpenAI’s access to Claude so it isn’t used in the preparation of the new ChatGPT

Generative artificial intelligence companies need no grandmothers to sing their praises: each one boasts about its language models’ skills on its own. Anthropic, for example, claims that Claude Code, its AI-assisted programming tool, is the best on the market. What is far less common is for the competition to prove the point.

That is what has just happened to Claude’s developer, which revoked OpenAI’s access to its chatbot’s API after discovering that OpenAI’s technical staff was using it ahead of the launch of GPT-5.

API stands for application programming interface, which in this case Anthropic offers so that external platforms can access its artificial intelligence models. This is common among chatbots and is part of the generative AI business model: third parties offer services built on different chatbots, for a fee.
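To make the concept concrete, here is a minimal sketch of the kind of request a third-party service assembles for Anthropic’s Messages API endpoint. The model identifier and the prompt are illustrative placeholders, not details from the article; the key point is that access is gated by a per-customer API key, which is what Anthropic revoked.

```python
import json

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, prompt: str) -> tuple[dict, str]:
    """Assemble the headers and JSON body for a Messages API call."""
    headers = {
        # The per-customer credential: revoking it cuts off access entirely.
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    body = json.dumps({
        "model": "claude-sonnet-4-20250514",  # illustrative model id
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_request("sk-ant-example-key", "Write a sorting function.")
print(json.loads(body)["messages"][0]["role"])  # → user
```

Sending this payload as an HTTP POST to `API_URL` (for instance with the `requests` library or Anthropic’s own SDK) is all a company needs to wire Claude into its internal tools, which is precisely how OpenAI was reportedly using it.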

In practice, this means OpenAI can no longer use Claude Code in the development of GPT-5, the next ChatGPT iteration, expected this August. The reason cited by Anthropic: a violation of Claude’s terms of service.

‘Claude Code has become the go-to tool for programmers around the world, so it was no surprise to learn that OpenAI’s technical staff was also using our tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service,’ Christopher Nulty, an Anthropic spokesperson, told Wired.

Those terms of service prohibit users from ‘building competing products or services, including training rival models,’ or from ‘reverse engineering or duplicating’ its systems.

Although this kind of restriction is standard in the terms of service of any AI product, what OpenAI did is not unusual behavior either. ‘It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing, especially considering our API remains available to them,’ Hannah Wong, OpenAI’s head of communications, told the same outlet.

According to Wired’s sources, OpenAI was connecting Claude to its internal tools through the API rather than using the standard interface available to all users. That allowed it to run tests evaluating Claude’s abilities at tasks such as coding and creative writing against its own models, and to analyze how it responded to prompts involving sensitive topics such as child sexual abuse, self-harm, or defamation.

The results allowed OpenAI to compare how its own models behaved under similar conditions and to make adjustments.

Anthropic has responded that it ‘will continue to ensure API access for benchmarking and safety evaluations, as is standard practice across the industry.’