“We have a minimal chance of survival,” warns an AI expert about a machine rebellion

Eliezer Yudkowsky is no newcomer to artificial intelligence. Last year he had already warned of the dangers of this technology. Despite being self-taught since the age of 15, he has published a dozen scientific papers on AI and ethics, collaborated with experts such as Nick Bostrom, co-founded a blog backed by the University of Oxford, and founded the Machine Intelligence Research Institute (MIRI). But when it comes to the future of humanity and AI, he does not hesitate: “We have a minimal chance of survival.”

In a recent interview, when asked what awaits us as a species given the development of AI, Yudkowsky said: “If you put me against a wall and force me to put probabilities on things, I have the feeling that our current remaining time is more like five years than 50 years. It could be two years, it could be 10.”

This expert has been warning of the dangers of AI for some time. In fact, less than a year ago he published an opinion piece in Time advising that the data centers where AIs are housed and trained be shut down. In it he goes so far as to raise the possible need for airstrikes on data centers, perhaps even at the risk of a nuclear exchange. “Without precision and preparation, the most likely outcome is that an AI will not do what we want and will not care about us or sentient life in general. That kind of caring is something that, in principle, could be incorporated into an AI, but we are not prepared and we currently do not know how to do it,” he writes.

According to Yudkowsky, we still do not grasp the dangers of artificial intelligence. And he is not the only one who thinks so. Hundreds of experts, including Stephen Hawking, have signed open letters calling for the regulation of AI, and many others, such as Brian Merchant (a columnist for the Los Angeles Times) and Molly Crabapple (a contributor to the technology site Vice), agree with him. His mantra is that we do not have to adopt a technology just because we have developed it, without considering whether it is good for us in the long run.

Today, much of the criticism of AI focuses not so much on the technology's potential to destabilize markets or eliminate jobs, but on its possible ability to become a conscious entity and make the decisions that, more and more often, we are leaving in its hands. And that is what worries Yudkowsky and many other experts.

The question is: how real is this threat? Not as big as the need to regulate the use of AI, but not so small that it can be dismissed and forgotten. What is clear is that, although Yudkowsky continues to defend the idea of bombing data centers (though he now rules out the use of nuclear weapons), this extreme measure remains far from being on the table.