Alarm at the escalation of “deepfake” attacks

Cybercrime already amounts to 8 trillion dollars a year; if it were a country, it would be the world's third-largest economy

The CEO scam began to gain popularity a few years ago. The usual modus operandi consisted of the scammer sending an email to an employee of a large company, usually the person in charge of finances, requesting a transfer while posing as the CEO to gain credibility. That was all until the appearance of AI programs that create synthetic voices and images.

In 2019, Forbes published the case of a UK manager who had been deceived by a voice deepfake; the attackers stole some $240,000 from the company. We recently learned of a case in Hong Kong that represents the pinnacle of sophistication in the CEO scam: an employee “apparently” met by video call with his colleagues and the company's financial director to confirm that a payment order he had received by email, and which he found suspicious, was genuine. Everyone he saw on the screen was synthetic. The thieves made off with 24 million euros.

AI tools are becoming increasingly sophisticated and cybercriminals rely on them to refine their deceptions. In 2023, criminals cloned a man's voice to demand a ransom from his parents over the phone. The EMT of Valencia also suffered a scam worth millions, and in recent months we have seen famous Spanish television presenters “cloned” by computer asking for cryptocurrency investments on social networks. “The Barcelona Municipal Institute of Informatics (IMI) has also suffered a phishing scam of 349,497 euros through the impersonation of a company. I personally know of an employee who received a call from the supposed owner of the premises where he worked while the real owner was standing next to him. Within the CEO scam, the Hong Kong case is the most sophisticated seen so far, because the identity and voice of several people were impersonated, and recognizing that what you are seeing is not real is much harder than making sure that an email you have received is fake. Now more than ever you have to be cautious and run every possible check. If someone calls asking for an urgent transfer, return the call, for example,” says Albert Jové, collaborating professor at the UOC's Computer Science, Multimedia and Telecommunications Studies.

222% more attacks

Phishing has also become more sophisticated, and email attacks are considered one of the main infection vectors. Criminals use tools such as WormGPT, FraudGPT and ChaosGPT to do harm. “Email attacks skyrocketed 222% in 2023,” says the “Acronis Cyber Threat Report”.

This report also indicates that Singapore, Spain and Brazil were the countries that suffered the most malware attacks in the fourth quarter of 2023. “We expect an increase in phishing attacks and automated attacks, as well as QR-code-based phishing attacks that will bypass multi-factor authentication,” the text also indicates.

The current cost of cybercrime is estimated at 8 trillion dollars (if it were a country, it would already be the third-largest economy in the world). “By 2025 the figure is expected to rise to 10.5 trillion dollars. There are different groups of attackers or cybercriminals, such as the so-called hacktivists, who have political motives and normally have the support of some government (especially China or Russia), and cyberattackers who have economic motives. The most active are those who are simply looking for money, and some of these groups function like real companies: there are programmers, human resources, people specialized in a certain action; there are even groups that subcontract part of the attack. Above all, they carry out ransomware attacks, and what interests them is stealing data to demand a ransom or sell it on the black market. AI enriches attackers. It will be increasingly common to find frauds like this, because in theory it is increasingly easy to use these technologies,” says Eusebio Nieva, technical director of Check Point Software for Spain and Portugal.

Election year

2024 is a historic year in terms of elections and, consequently, in the volume of alarm about possible deepfakes muddying electoral processes. Citizens of the United States, Mexico, Bangladesh, India and Pakistan are called to vote, and governments are preparing to defend themselves against disinformation. “Policymakers and regulators from Brussels to Washington are rushing to draft legislation restricting AI-powered audio, images and videos in election campaigns. However, the European Union's landmark AI Act will not come into force until after the European Parliament elections in June. In the US Congress, bipartisan legislation that would prohibit AI misrepresentation of federal candidates likely will not become law before the November elections,” says The Washington Post in a report.

“It is possible that we are facing an era of digital disbelief. The generation of this kind of fake news recently affected the elections in Bangladesh. Many governments have stepped up and implemented anti-fake-news plans, but I think it will be very difficult to stop this wave, since flashy news travels at tremendous speed in the era of social networks, and even large media outlets contribute to that speed. In that sense, AI does represent a threat,” says Julián Estévez, professor of Artificial Intelligence at the University of the Basque Country.

Is it possible to protect yourself? Those consulted point out that companies need to invest more in cybersecurity and in “more R&D to prevent future attacks. Companies often invest only when they have already been attacked. Also, as tools improve, security systems improve. This is what happens with authentication, for example. Before, everything was done with a PIN. Now you have double and triple authentication systems, such as being sent a one-time password by SMS. Fingerprints are already used in many places and the iris will arrive, but there will always be two or three levels of authentication to verify that you are who you say you are,” says Alejandro Novo of the ethical hacking firm Synack.
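To make Novo's point concrete, here is a minimal sketch of how a one-time-code second factor can be verified server-side. It assumes the open-source pyotp library and invented variable names; a real deployment would add secret storage, rate limiting and recovery flows.

```python
# Minimal sketch of a TOTP second factor, using the open-source pyotp library.
# The secret would normally be provisioned once (e.g., via a QR code the user
# scans into an authenticator app) and stored server-side per account.
import pyotp

# Hypothetical per-user secret; in practice, generate and persist one per account.
user_secret = pyotp.random_base32()
totp = pyotp.TOTP(user_secret)

def second_factor_ok(submitted_code: str) -> bool:
    """Return True only if the code matches the current time window.

    valid_window=1 tolerates one 30-second step of clock drift between
    the server and the user's device.
    """
    return totp.verify(submitted_code, valid_window=1)

# The password check (first factor) would happen before this call.
print(second_factor_ok(totp.now()))   # True: current code from the user's app
print(second_factor_ok("000000"))     # almost certainly False
```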

It is also necessary to change mentality and improve work processes. “To protect yourself, you not only need to use technology; you can also apply protocols, such as preventing a single person from being the only one to make decisions within a company. There must be a policy for handling suspicious emails… It is about putting up walls in email and in browsing. That is to say, we must put up barriers, not only at the perimeter but also within the company. If a user only needs permission to access certain sites, don't give them the freedom to access everything. We are undergoing a change in the way of thinking about cybersecurity, based on protection and responsibility, because without it no company can carry out its activity,” explains the Check Point manager.
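The least-privilege rule Nieva describes (“don't give them the freedom to access everything”) boils down to a deny-by-default lookup. A toy sketch; the roles and resources here are invented for illustration:

```python
# Toy illustration of least-privilege access control: access is denied by
# default and granted only for resources a role explicitly needs.
from typing import Dict, Set

ALLOWED: Dict[str, Set[str]] = {
    "finance":  {"invoices", "payments"},
    "support":  {"tickets"},
    "it-admin": {"tickets", "invoices", "payments", "servers"},
}

def can_access(role: str, resource: str) -> bool:
    # Unknown roles or unlisted resources fall through to "deny".
    return resource in ALLOWED.get(role, set())

assert can_access("finance", "payments")        # needed for the job: allowed
assert not can_access("support", "payments")    # not needed: denied by default
assert not can_access("intern", "servers")      # unknown role: denied
```

The design choice is that nothing is reachable unless it has been explicitly granted, so a compromised account exposes only what that role genuinely needed.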

At a general level, it is believed that creating cryptographic certification standards can help, as can training and educating the public so that they can protect themselves from generative AI and make responsible use of it. “Many companies have banned the use of ChatGPT. OpenAI can use anything written in ChatGPT to improve the system. What companies fear is that confidential or sensitive information will be fed to the chatbot and that it will involuntarily share it with other users, which represents a serious risk to data security and privacy,” says Julián Estévez of the University of the Basque Country.
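The cryptographic certification standards mentioned above rest on a simple primitive: the publisher signs the content, and anyone holding the public key can detect alteration. A minimal sketch using Ed25519 signatures from the open-source Python cryptography package; the file contents are placeholders, and real provenance standards such as C2PA layer richer metadata on top of this idea:

```python
# Sketch of content certification via a detached Ed25519 signature,
# using the open-source "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a key pair once, sign each published file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...contents of the published video file..."  # placeholder
signature = private_key.sign(video_bytes)

# Consumer side: verify the file against the publisher's public key.
try:
    public_key.verify(signature, video_bytes)
    print("Signature valid: file matches what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: altered, or not from this publisher.")

# A single changed byte breaks verification:
try:
    public_key.verify(signature, b"X" + video_bytes[1:])
except InvalidSignature:
    print("Tampered copy rejected.")
```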

Ethical hacking

AI also opens up opportunities for cybersecurity. “With it, traffic flows and vulnerabilities are analyzed; it helps you sift out the most dangerous ones,” says Alejandro Novo of Synack. This firm is dedicated to ethical hacking, that is, searching for vulnerabilities in its clients' computer systems. It has some 1,500 ethical hackers in different countries looking for bugs 24 hours a day. “We also have a service to detect whether software is misusing data rather than keeping it within a private repository. If you use ChatGPT, you have to make sure that private information does not leave the company. We also test for data poisoning. Imagine that we trick an AI system by telling it in various ways that the earth is flat; when asked if the earth is flat, it will say yes. This is another vulnerability,” he comments.
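Novo's flat-earth example can be reproduced in miniature: inject enough wrongly labelled copies of a claim into a model's training data and its answer flips. A toy sketch with scikit-learn; the tiny data set is invented for illustration:

```python
# Toy demonstration of data poisoning: flipping a text classifier's answer
# by injecting repeated, wrongly labelled training examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

clean_texts = ["the earth is round", "the earth orbits the sun",
               "the earth is flat", "the sun orbits the earth"]
clean_labels = ["fact", "fact", "falsehood", "falsehood"]

# The attacker injects many copies of the same claim with the wrong label.
poison_texts = ["the earth is flat"] * 50
poison_labels = ["fact"] * 50

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(clean_texts + poison_texts)
model = MultinomialNB().fit(X, clean_labels + poison_labels)

query = vectorizer.transform(["the earth is flat"])
print(model.predict(query))  # now answers ['fact'] because of the poison
```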