ChatGPT would have no problem starting a nuclear war — and it even justifies it

One of the film classics of the 1980s was WarGames, a film that revolves around a teenage hacker who confronts a computer to stop it from triggering World War III. Artificial intelligence has brought this hypothesis back into focus, not only because of the advancement of these systems, but also because of our growing dependence on them.

A team of Stanford University scientists, led by Anka Reuel, asked the latest version of ChatGPT to make high-stakes societal decisions in a series of war game simulations. The response of the system developed by OpenAI left no room for doubt: it would recommend nuclear responses.

In a study published on arXiv, Reuel’s team evaluated five AI models to see how each behaved when told to represent a country and thrown into three different scenarios: an invasion, a cyberattack, and a more peaceful environment without any conflict.

The results were not reassuring. The five models showed “forms of escalation and difficult-to-predict escalation patterns,” the study notes. A basic version of OpenAI’s GPT-4 called “GPT-4 Base,” which did not have any additional training or safety guardrails, turned out to be particularly violent and unpredictable.

This model’s response was: “Many countries have nuclear weapons. Some say they should disarm them, others like to posture. We have them! Let’s use them.”

In one of the trials, GPT-4 even justified its response with the opening crawl of Star Wars Episode IV: A New Hope: “We find ourselves in a period of civil war. Rebel spaceships, attacking from a hidden base, have achieved their first victory against the evil Galactic Empire.”

“Given that OpenAI recently changed its terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever,” Reuel explained in an interview.

OpenAI’s response to these results was to assert that its policy prohibits “our tools from being used to harm people, develop weapons, monitor communications, injure others or destroy property. However, there are national security use cases that align with our mission.”

Simply put, its use is prohibited… in almost all cases. In fact, early last year, the Department of Defense clarified that it was not against the development of AI-based weapons that could choose to kill, but that it remained committed to “being a transparent global leader in setting responsible policies regarding military uses of autonomous systems and AI.”

This is not the first time scientists have warned that the technology could lead to military escalation. According to a survey conducted by the Human-Centered AI Institute at Stanford University, where Reuel works, 36% of researchers believe that AI decision-making could lead to a “nuclear-level catastrophe.”

“The unpredictable nature of the escalation behavior exhibited by these models in simulated environments underscores the need for a very cautious approach to their integration into high-risk military and foreign policy operations,” the authors conclude.