Manipulated video shared by Elon Musk imitates Kamala Harris’ voice and raises alarm over use of artificial intelligence in politics

NY – A manipulated video that imitates the voice of the vice president of the United States, Kamala Harris, to make it seem like she said things she never said is raising concerns about the power of artificial intelligence to deceive, just three months before the general election.

The video garnered attention after the billionaire Elon Musk shared it on his social media platform X on Friday night without explicitly noting that it had originally been posted as a parody.

The video uses many of the same visual elements as an actual ad Harris, the presumptive Democratic presidential nominee, released last week to launch her campaign. But the video swaps out the voiceover for one that convincingly impersonates Harris.

“I, Kamala Harris, am your Democratic candidate for president because Joe Biden finally brought his senility to light in the debate,” the voice in the video says. It claims that Harris is someone “hired for diversity” because she is a woman and a non-white person, and says she doesn’t know “the slightest thing about how to run the country.” The video maintains the “Harris for President” branding and also mixes in some authentic clips of Harris from the past.

Mia Ehrenberg, a spokeswoman for Harris’ campaign, said in an email to The Associated Press that “we believe the American people want the true freedom, opportunity and security that Vice President Harris offers, not the false and manipulated lies of Elon Musk and Donald Trump.”

The widely shared video is an example of how AI-generated images, videos and audio clips have been used in politics both to mock and to deceive as the US heads into a presidential election. It exposes how, even as high-quality AI tools have become much more accessible, there has been no significant federal action to regulate their use, leaving the rules governing AI in politics largely to states and social media platforms.

The video also raises questions about how best to deal with content that blurs the boundaries of what is considered an appropriate use of AI, especially if it falls into the category of satire.

The original user who posted the video, a YouTuber known as Mr Reagan, has indicated on both YouTube and X that the manipulated video is a parody. But Musk’s post, which has been viewed more than 123 million times, according to the platform, only includes the comment “This is amazing,” with a laughing emoji.

X users familiar with the platform will know to click on Musk’s post to go to the original user’s post, where the clarification about the parody is visible. Musk’s caption does not instruct them to do so.

While some users of X’s “community notes” feature — a feature used to add context to posts — have suggested tagging Musk’s post, no tags had been added as of Sunday afternoon. Some users were wondering whether his post might violate X’s policies, which say users cannot “misleadingly share false or altered media that is likely to cause harm.”

X provides an exception for memes and satire as long as they do not cause “significant confusion about the authenticity of the media.”

Musk this month endorsed the Republican candidate, former president Donald Trump. Neither the YouTuber Mr Reagan nor Musk responded to emailed requests for comment on Sunday.

Two AI-generated media experts reviewed the fake ad’s audio and confirmed that much of it was generated using AI technology.

One of them, Hany Farid, a digital forensics expert at the University of California, Berkeley, said the video is a testament to the power of generative AI and digitally manipulated videos.

“The AI-generated voice is very good,” he said in an email. “While most people won’t believe it’s Vice President Harris’ voice, the video is much more impactful when the words come out in her voice.”

Farid said generative AI companies that make voice cloning and other AI tools publicly available should do more to ensure their services are not used in ways that could harm people or democracy.

Rob Weissman, co-president of the advocacy group Public Citizen, disagreed with Farid, saying he believed the video would mislead many people.

“I don’t think it’s obvious that this is a joke,” Weissman said in an interview. “I’m sure most people who see it won’t assume it’s a joke. The quality isn’t great, but it’s good enough. And precisely because it plays into pre-existing themes that have circulated about her, most people will believe it’s real.”

Weissman, whose organization advocates to Congress, federal agencies and states to regulate generative AI, said the video is “the kind of thing we’ve been warning about.”

Other generative AI deepfakes, both in the United States and abroad, have reportedly attempted to influence voters with misinformation, humor, or both. In Slovakia in 2023, fake audio clips impersonated a candidate talking about plans to rig an election and raise the price of beer days before the vote. In Louisiana in 2022, a satirical ad from a political action committee superimposed the face of a mayoral candidate onto that of an actor portraying him as an underachieving high school student.

The U.S. Congress has yet to pass legislation on the use of AI in politics, and federal agencies have taken only limited action, leaving most existing U.S. regulation up to the states. More than a third of states have created their own laws to regulate the use of AI in campaigns and elections, according to the National Conference of State Legislatures.

In addition to X, other social media companies have also developed policies regarding altered and manipulated media content shared on their platforms. Users of the video platform YouTube, for example, must disclose whether they have used generative artificial intelligence to create videos or face suspension.