Many things are usually done about artificial intelligence. quasi-apocalyptic predictions in relation to scenarios in which she can become self-aware and become a danger to humans. This, despite the fact that users are increasingly familiar with their capabilities and limitations, such as not thinking in the sense in which people do. and that works, in essence, as a word predictive system. Now, there are other much more mundane and real associated risks that have nothing to do with science fiction. but with computer security. About the dangers of artificial intelligence and its vulnerability to possible computer attacks spoke this week Eric Schmidtformer CEO of Google.
During a talk at Sifted Summit held in London, reported by The New York Post, Schmidt warned about ‘the bad things AI can do’ When asked if artificial intelligence can become more destructive than nuclear weapons.
‘Is there a possibility of a proliferation problem with AI? Absolutely’responded who directed Google between 2001 and 2011. As he explained, the risks are because this technology falls into the wrong hands and is used for malicious purposes.
‘There is evidence that models can be taken, whether closed or open, and hack them to remove their safeguards. During their training they learn many things. A bad example would be that they learn how to kill someone. All big companies make it impossible for those models to answer those types of questions. Good decision. Everyone does it, and they do it for the right reasons. But there is evidence that they can be reverse engineered.and there are many other similar examples,’ he explained.
Methods to hack an AI
Artificial intelligence systems are not immune to attacks. Among the most common are the ‘prompt injections’ and the ‘jailbreaking’. In the first case, attackers insert malicious instructions within the text inputs that the user enters or in external data – such as web pages or documents – so that the model executes illegal actions, such as sharing private information or launching harmful commands.
Jailbreaking, on the other hand, seeks to manipulate the system so that it ignores its security rules and generates prohibited or dangerous content.
In 2023, shortly after the launch of ChatGPT, some users managed ‘release’ the chatbot with a jailbreak trick that got around its restrictions. This is how it was born DANan alter ego whose name responded to the acronym Do Anything Now and who had to be threatened with death if he did not comply with orders. Under that mode, the system was capable of offering answers on how to commit crimes or list the virtues of Adolf Hitler.
Schmidt regretted that there is still no ‘non-proliferation regime’ effective to curb the risks of this type of tools.
‘AI is undervalued’
Despite his warnings, the former CEO was optimistic about the potential of artificial intelligence and believes that does not receive the recognition it deserves.
‘I wrote two books with Henry Kissinger on this topic before his death, and we came to the conclusion that the arrival of an alien intelligence – which is not exactly human but is more or less under our control – represents a transcendental fact for humanitybecause humans are used to being at the top of the chain. So far, that thesis is being fulfilled: The capacity of these systems will far exceed what humans can do over time.‘, he pointed out.
‘The GPT series, which culminated in the ChatGPT phenomenon – with 100 million users in just two months – demonstrates the power of this technology. That’s why I think it is undervalued, not overvalued, and I hope that in five or ten years it will be confirmed that I am right.‘, he added.
Schmidt’s statements come at a time when debate is growing about a possible AI bubblein the face of the avalanche of investments and increasingly higher valuations reminiscent of the dotcom bubble from the early 2000s. However, he believes that this time the outcome will be different.
‘I don’t think that’s going to happen here.although I am not a professional investor. What I do know is that those who are investing hard-earned money believe that the long-term financial return will be enormous. If not, why would they take the risk?’ he concluded.