Recently, more than 100 Nobel laureates and a dozen nuclear weapons experts met in the United States, and their conclusions warn of the dangers of a nuclear war linked to AI. The experts broadly agreed that it is only a matter of time until an AI gains access to nuclear launch codes.
“It’s like electricity,” explained Bob Latiff, a retired U.S. Air Force general and member of the Science and Security Board of the Bulletin of the Atomic Scientists, in an interview. “It will creep into everything.”
Ominous as the warning is, it should come as no surprise: AI models have already been shown to have numerous dark facets, resorting to blackmailing their human users at an alarming rate when threatened with being shut down.
In the context of an AI, or a network of AIs, safeguarding an arsenal of nuclear weapons, these poorly understood risks become immense. And that is to say nothing of a genuine concern among some experts, which is also the plot of the film Terminator: a hypothetical superhuman AI that goes rogue and turns humanity’s nuclear weapons against it.
Earlier this year, former Google CEO Eric Schmidt warned that a human-level AI might have no reason to listen to us in a scenario like this, arguing that “people do not understand what happens when you have intelligence at this level.”
This kind of AI catastrophism has been on the minds of tech leaders for many years, while reality has so far moved in slow motion. In its current form, the risks would probably be more mundane, since even the best AI models still suffer from hallucinations that considerably reduce the usefulness of their outputs.
There is also the threat that a defective AI technology could leave gaps in our cybersecurity, allowing adversaries, or even an adversarial AI, to access the systems that control nuclear weapons.
Getting all the attendees of last month’s unusual meeting to agree on a subject as complex as AI was a challenge, as the Federation of American Scientists’ director of global risk, Jon Wolfsthal, admitted: “No one really knows what AI is. In this area, almost everyone says we want effective human control over nuclear weapons decision-making. You have to assure the people you work for that someone is responsible.”
If all this sounds impossible, it is no wonder. Under the presidency of Donald Trump, the federal government has been busy introducing AI into every area it can, often while experts warn that the technology is not yet, and may never be, up to the task. So much so that the Department of Energy itself declared this year that AI is the “next Manhattan Project,” referring to the World War II initiative that produced the world’s first nuclear bombs.
AI is the next Manhattan Project, and THE UNITED STATES WILL WIN. 🇺🇸
– US Department of Energy (@Energy) May 29, 2025
Underlining the severity of the threat, OpenAI, the creator of ChatGPT, also reached an agreement with the U.S. National Laboratories earlier this year to apply its AI to nuclear weapons security.
Last year, Air Force General Anthony Cotton, who oversees the American nuclear missile arsenal, said at a defense conference that the Pentagon is doubling down on AI, arguing that “it will improve our decision-making capacity.” Fortunately, Cotton stopped short of saying we should let the technology assume total control. For now.