Creating realistic deepfakes is easier than ever; defending against them may require even more artificial intelligence

Washington – The phone rings. It's the Secretary of State calling. Or is it? For Washington, seeing and hearing is no longer believing.

Digital fakes are also hitting American companies, as criminal gangs and hackers linked to adversaries, including North Korea, use synthetic video and audio to impersonate CEOs and low-level job applicants in order to gain access to critical systems or trade secrets.

Thanks to advances in artificial intelligence, creating realistic deepfakes is easier than ever, posing security problems for governments, companies and individuals and making trust the most valuable currency of the digital age.

Responding to the challenge will require laws, better digital literacy and technical solutions that fight AI with more AI.

"As humans, we are remarkably susceptible to deception," said Vijay Balasubramaniyan, CEO and founder of the security technology company Pindrop. But he believes solutions to the deepfake challenge may be within reach: "We are going to defend ourselves."

This summer, someone used AI to create a deepfake of Secretary of State Marco Rubio in an attempt to contact foreign ministers, a U.S. senator and a governor through text messages, voicemail and the Signal messaging app.

In May, someone impersonated Donald Trump's chief of staff, Susie Wiles.

Another fake Rubio had appeared in a deepfake earlier this year, saying he wanted to cut off Ukraine's access to Elon Musk's Starlink internet service. The Ukrainian government later refuted the false claim.

The national security implications are huge: people who believe they are chatting with Marco Rubio or Susie Wiles, for example, could discuss sensitive information about diplomatic negotiations or military strategy.

"You're either trying to extract sensitive secrets or competitive information, or you're going after access to an email server or other sensitive network," said Kinny Chan, CEO of the cybersecurity company QID, of the possible motivations.

Synthetic media can also aim to alter behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state's upcoming primary. The voice on the call sounded suspiciously like that of then-President Joe Biden, but it was actually created using AI.

Their ability to deceive makes AI deepfakes a powerful weapon for foreign actors. Both Russia and China have used disinformation and propaganda directed at Americans as a way of undermining trust in democratic alliances and institutions.

Steven Kramer, the political consultant who admitted to sending the fake Biden robocalls, said he wanted to send a message about the dangers deepfakes pose to the U.S. political system. Kramer was acquitted last month of charges of voter suppression and impersonating a candidate.

"I did what I did for $500," Kramer said. "Can you imagine what would happen if the Chinese government decided to do this?"

The growing availability and sophistication of these programs means deepfakes are increasingly used for corporate espionage and everyday fraud.

"The financial sector is right in the crosshairs," said Jennifer Ewbank, a former deputy director of the CIA who worked on cybersecurity and digital threats. "Even people who know each other have been convinced to transfer vast sums of money."

In the context of corporate espionage, deepfakes can be used to impersonate CEOs asking employees to hand over passwords or routing numbers.

Deepfakes can also let scammers apply for jobs, and even hold them, under an assumed or false identity. For some, this is a way to access sensitive networks, steal secrets or install ransomware. Others just want the work and may be working several similar jobs at different companies at the same time.

U.S. authorities have said that thousands of North Koreans with information technology skills have been dispatched to live abroad, using stolen identities to obtain jobs at tech companies in the United States and elsewhere. The workers gain access to company networks as well as a paycheck. In some cases, they install ransomware that can later be used to extort even more money.

The schemes have generated billions of dollars for the North Korean government.

Within three years, as many as 1 in 4 job applications is expected to be fake, according to research by Adaptive Security, a cybersecurity company.

"We have entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person," said Brian Long, CEO of Adaptive. "It's no longer about hacking systems, but about hacking trust."

Researchers, public policy experts and technology companies are now investigating the best ways to address the economic, political and social challenges posed by deepfakes.

New regulations could require tech companies to do more to identify, label and potentially remove deepfakes on their platforms. Lawmakers could also impose greater penalties on those who use digital technology to deceive others, if they can be caught.

Greater investment in digital literacy could also boost people's immunity to online deception, teaching them ways to spot fake media and avoid falling prey to scammers.

The best tool for detecting AI may be another AI program, one trained to sniff out the tiny flaws in deepfakes that would go unnoticed by a person.

Systems like Pindrop's analyze millions of data points in a person's speech to quickly identify irregularities. The system can be used during job interviews or other videoconferences to detect whether a person is using voice-cloning software, for example.

Similar programs may one day be commonplace, running in the background as people talk with colleagues and loved ones online. Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, said Balasubramaniyan, Pindrop's CEO.

"You can take a defeatist view and say we're going to be subservient to disinformation," he said. "But that's not going to happen."