AI models spontaneously develop shared conventions

If anything was missing in the turbulent world of artificial intelligence, it was that AIs knew how, or rather wanted, to join together. Now, a new study published in Science Advances suggests that populations of artificial intelligence (AI) agents similar to ChatGPT can spontaneously develop shared social conventions through simple interaction.

The study, conducted by researchers at the University of London and the IT University of Copenhagen, suggests that when these AI agents built on large language models (LLMs) communicate in groups, they do not merely follow scripts or repeat patterns: they self-organize, reaching consensus on linguistic norms much as human communities do.

“Most research to date has treated LLMs in isolation,” explains Ariel Flint Ashery, the study’s lead author, “but real-world AI systems will increasingly involve many interacting agents. We wanted to know: can these models coordinate their behavior by forming conventions, the building blocks of a society? The answer is yes, and what they do together cannot be reduced to what they do individually.”

To test this, Ashery’s team ran a series of experiments with groups of LLM agents ranging from 24 to 200 individuals. In each round, two agents were randomly paired and asked to select a “name” (for example, a letter of the alphabet or a random string of characters) from a shared pool of options. If both agents selected the same name, they earned a reward; if not, they received a penalty and were shown each other’s choices.

The agents had access only to a limited memory of their own recent interactions (not of the whole population) and were never told they were part of a group. Yet across many of these interactions, a shared vocabulary spontaneously emerged among all agents, without central coordination or a predefined solution, mirroring the bottom-up way norms form in human cultures. Even more surprisingly, the team observed collective biases that could not be traced back to any individual agent.
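This setup echoes the classic “naming game” long used to study how human conventions emerge. As a rough illustration of the dynamic (not the study’s actual method, which prompted real LLMs with their interaction history), here is a minimal Python sketch in which simple frequency-based agents stand in for the language models; the pool of names, the memory size, and the choice rule are assumptions made for the example:

```python
import random
from collections import Counter, deque

NAMES = list("ABCDEFGHIJ")  # assumed pool of ten one-letter "names"
N_AGENTS = 24               # smallest population size reported in the study
MEMORY = 5                  # assumed bound on each agent's memory
ROUNDS = 20_000

# Each agent only remembers the last few names seen in its own interactions,
# never the state of the whole population.
memories = [deque(maxlen=MEMORY) for _ in range(N_AGENTS)]

def choose(mem):
    """Pick the most frequent name in memory; pick at random if memory is empty."""
    if not mem:
        return random.choice(NAMES)
    return Counter(mem).most_common(1)[0][0]

for _ in range(ROUNDS):
    i, j = random.sample(range(N_AGENTS), 2)          # random pairing
    a, b = choose(memories[i]), choose(memories[j])
    # On a match, both agents reinforce the winning name; on a mismatch,
    # each records the partner's choice (the "shown each other's choices" step).
    memories[i].extend([a, b] if a == b else [b])
    memories[j].extend([b, a] if a == b else [a])

# Measure consensus: how many agents now prefer the population's dominant name.
prefs = [choose(m) for m in memories]
top, count = Counter(prefs).most_common(1)[0]
print(f"dominant name: {top!r}, adopted by {count}/{N_AGENTS} agents")
```

Run repeatedly, this toy population almost always locks onto a single name, but since the agents here are symmetric, each name wins equally often. The striking finding of the study is that LLM populations behaved differently: certain options won systematically, a collective bias that no individual agent exhibited on its own.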

“Bias does not always come from within,” adds Andrea Baronchelli, co-author of the study. “We were surprised to see that it can emerge between agents, simply from their interactions. This is a blind spot in most current work on AI, which focuses on individual models.”

As LLMs begin to populate online environments, from social networks to autonomous vehicles, the researchers see their work as a foundational step toward exploring more deeply how human and AI reasoning converge and diverge, with the aim of helping to combat some of the most pressing ethical dangers posed by AI.