“AI would be a barrier so difficult to overcome that it would prevent most life from evolving,” a scientific study suggests.

Artificial intelligence has progressed at an astonishing rate in recent years. And the prospect is even more striking if we take into account the development of artificial superintelligence (ASI), a form of AI that would not only surpass human intelligence but would also not be limited by the learning speeds of humans.

But what if this milestone isn't just a notable achievement? What if it also represents a bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival? This idea is at the center of an article recently published in Acta Astronautica, and the question on which it is based is very direct: could AI be the “great filter” of the universe, a threshold so difficult to overcome that it prevents most life from evolving into spacefaring civilizations?

According to Michael Garrett, an astrophysicist at the University of Manchester and lead author of the study, this could explain why the search for extraterrestrial intelligence has not yet detected signatures of advanced technical civilizations elsewhere in the galaxy.

The Great Filter hypothesis is a proposed solution to the Fermi paradox, which asks why, in a universe vast and old enough to host billions of potentially habitable planets, we have not detected any signs of extraterrestrial civilizations. The hypothesis answers that there are insurmountable obstacles, or bottlenecks, in the evolution of civilizations that prevent them from becoming space explorers. Other proposed bottlenecks include obtaining the energy necessary to explore other worlds and maintaining sufficient genetic diversity.

According to Garrett, the emergence of ASI could be that filter, because AI progresses much faster than our ability to control it. “The challenge of AI, and specifically ASI, lies in its autonomous, self-amplifying and improving nature. It has the potential to enhance its own capabilities at a speed that outpaces our own evolutionary timelines without AI,” the study explains. “So the potential for something to go wrong is enormous,” which would lead to the collapse of civilizations before they ever had a chance to become multiplanetary. For example, if nations increasingly rely on, and cede power to, autonomous AI systems that compete with one another, military capabilities could be used to kill and destroy on an unprecedented scale. “This could potentially lead to the destruction of our entire civilization, including the artificial intelligence systems themselves.”

Garrett's team estimates that the typical longevity of a technological civilization could be less than 100 years. That is roughly the span between the moment we became able to receive and transmit signals between the stars (1960) and the estimated emergence of ASI on Earth (2040).
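As a quick back-of-the-envelope check of that figure, using only the two dates quoted above (the labels t_radio and t_ASI are our own notation, and reading the result as the communicating lifetime L that appears in the Drake equation is our assumption, not a claim made explicitly in this article):

% t_radio: year a civilization can receive/transmit interstellar signals (1960 for us)
% t_ASI:   estimated year artificial superintelligence emerges (2040)
\[
  L \approx t_{\mathrm{ASI}} - t_{\mathrm{radio}} = 2040 - 1960 = 80\ \text{years} < 100\ \text{years}
\]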

“This research is not simply a warning about a possible catastrophe. It serves as a wake-up call for humanity to establish robust regulatory frameworks to guide the development of AI, including military systems,” the study states. “It is not just about preventing the malevolent use of AI on Earth; it is also about ensuring that the evolution of AI aligns with the long-term survival of our species.” But even if every country agreed to abide by strict rules and regulations, rogue organizations will be difficult to control. “Humanity is at a crucial point in its technological trajectory. Our actions now could determine whether we become a lasting interstellar civilization or succumb to the challenges posed by our own creations.”