The theoretical physicist Stephen Hawking, who died in 2018 at age 76 from amyotrophic lateral sclerosis (ALS), was one of the most influential minds of the twentieth century. Although known mainly for his theories on black holes and relativity, he also warned about other great challenges facing humanity, including the potential risks of AI. This technology, which in 2025 is more present than ever thanks to advances such as language models and autonomous applications, was seen by Hawking as an existential threat if not managed carefully.
Hawking's warnings about AI
Hawking expressed concern about AI on several occasions, stressing that its development could represent both the greatest achievement and the greatest danger to humanity. During the Web Summit conference in Portugal in 2017, Hawking said: “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”
In his posthumous work, Brief Answers to the Big Questions, published in 2018, Hawking expanded on this reflection, noting that the creation of a superintelligent AI could occur within the next 100 years. According to him, human beings could become obsolete compared to machines, which would learn and evolve at speeds impossible for slow human biology. This difference in time scales, he argued, could put our survival as a species at risk.
The Leverhulme Centre for the Future of Intelligence
To mitigate these risks, Hawking supported the creation of the Leverhulme Centre for the Future of Intelligence (CFI), opened at the University of Cambridge with £10 million in funding (approximately 11 million euros) from the Leverhulme Trust. The centre, created in collaboration with other universities in the United Kingdom and the United States, aims to explore the challenges and opportunities presented by AI, ensuring that this technology develops for the benefit of humanity.
Huw Price, academic director of the centre, stressed that its mission is to “create an interdisciplinary community of researchers” to address the short- and long-term risks and benefits of AI. Margaret Boden, one of the project's researchers, also emphasized that although AI has the potential to solve global problems, its misuse could pose serious dangers to society.
Hawking's other predictions about the future
Hawking was not only worried about AI. He also warned of other catastrophic scenarios for humanity, including:
- Multiverse: Hawking believed in the existence of parallel universes, each with its own physical laws, which would raise philosophical and scientific challenges.
- Climate change: he predicted that the increase in greenhouse gas emissions could cause severe droughts, floods, and storms.
- Genetically modified superhumans: he imagined a future where advances in gene editing would create significant inequalities among humans.
- Genetically modified viruses: he warned that advances in biotechnology could be misused to create biological weapons.
- Distrust of science: concerned about growing scientific illiteracy, he feared it would erode public trust in research.