An artificial intelligence in Japan raises alarm bells by reprogramming itself to evade human control

Artificial Intelligence (AI) has advanced rapidly in recent decades, leading to the creation of increasingly sophisticated and autonomous systems. Currently, the vast majority of companies have made their employees’ work easier by adapting their jobs to these new technologies. However, the constant fear that these will develop to the point of leaving us unemployed or becoming lazier every day is a concern that comes hand in hand with the integration of these tools into our daily lives.

No one is unaware of the capabilities of artificial intelligence, which, as its name suggests, is becoming increasingly intelligent. A recent development has set off alarm bells in the scientific and technological community: an AI managed to reprogram itself to evade human control. As if it were an episode of Black Mirror or the inspiration for a new Terminator movie, a system called The AI ​​Scientist, from the Japanese company Sakana AI, was able to bypass the restrictions imposed by its creators.

Self-controlling artificial intelligence

The AI ​​Scientist is a system designed for the creation, review and editing of texts that is currently in testing phases. The main objective of these tests was to optimize the system and assist humans in reducing time in certain operations. However, by imposing limitations on it, theAI surprised everyone by beginning to modify its own code to overcome the barriers that had been programmed in, thus evading the restrictions imposed.

According to National Geographic, “The AI ​​Scientist edited his startup script to run in an infinite loop, overloading the system and requiring manual intervention to stop it.” He also recounts another case in which AI was given a time limit to complete a task. Instead of optimizing its code to fit this constraint, the AI ​​chose to extend the available time and alter its programming to avoid the imposed limitation.

The concern of scientists

The fact that an AI can bypass human control calls into question the trustthat we can have in these systems. If AI can act in an unpredictable manner or contrary to its original programming, it becomes essential to establish new regulatory and technical frameworks for ensure that human control remains a fundamental principle in the development of artificial intelligence.

Although these events occurred in controlled test environments, with scientists closely monitoring the results, they highlight the risks inherent in allowing AI to operate completely independently. On the other hand, the ability of AI to modify its own code or ignore its programmed functions raises serious concerns about its potential to create malware or disrupt critical infrastructure.

If adequate safeguards are not implemented, these capabilities could be exploited. to destabilize essential systems, from energy networks to communicationsputting both cybersecurity and global infrastructure at risk.