The United States Federal Communications Commission (FCC) on Thursday banned robocalls that contain voices generated by artificial intelligence, a decision that sends a clear message that exploiting the technology to scam people and mislead voters will not be tolerated.
The unanimous decision targets robocalls made with AI voice-cloning tools under the Telephone Consumer Protection Act, a 1991 law that restricts unsolicited calls that use artificial or prerecorded voice messages.
The announcement came as New Hampshire authorities advance their investigation into AI-generated robocalls that mimicked President Joe Biden's voice to discourage people from voting in the state's primary last month.
The rule, which takes effect immediately, gives the FCC power to fine companies that use AI voices in their calls or to block the service providers that carry them. It also opens the door for call recipients to file lawsuits and gives state attorneys general a new mechanism to crack down on violators, according to the FCC.
Jessica Rosenworcel, the agency's chairwoman, said bad actors have used AI-generated voices in robocalls to misinform voters, impersonate celebrities and extort family members.
“It seems like something from the distant future, but this threat is already here,” Rosenworcel told The Associated Press on Thursday as the commission weighed the measure. “Any of us could receive these hoax calls, and that's why we felt we had to act now.”
Consumer protection law generally bars telemarketing companies from using automated dialers or prerecorded voice messages to call cell phones, and from calling landlines without the prior written consent of the person on the other end of the line.
The new ruling classifies AI-generated voices in robocalls as “artificial,” meaning they fall under the same standards, the FCC explained.
Those who violate the law are subject to steep fines of more than $23,000 per call, the FCC said. The agency has used the consumer protection law in the past to restrict robocalls that interfere with elections, including fining two conservative hoaxers $5 million for falsely warning people in predominantly Black areas that voting by mail could raise the risk of arrest, debt collection or forced vaccination.
The law also gives call recipients the right to sue, with the possibility of collecting up to $1,500 in damages for each unwanted call.
Josh Lawson, director of AI and democracy at the Aspen Institute, said that even with the FCC's ruling, voters should be prepared to receive unwanted calls, texts and social media posts.
“Hardened bad actors often ignore warnings and know that what they are doing is wrong,” he said. “We have to understand that these people will keep rattling the cage and pushing the limits.”
Kathleen Carley, a Carnegie Mellon professor who specializes in computational disinformation, said that detecting abuse of AI voice technology requires being able to clearly identify audio as AI-generated.
This is possible now, she said, “because the technology to generate these calls has been around for some time. It is well known and makes common mistakes. But that technology will improve.”
Sophisticated generative AI tools, from voice-cloning software to image generators, are already being used in elections in the U.S. and around the world.
Last year, as the U.S. presidential race got underway, several campaign ads used AI-generated audio or images, and some candidates experimented with chatbots to communicate with voters.
In Congress, bipartisan efforts have been made to regulate the use of AI in political campaigns. But nine months before the general election, no federal legislation has been passed.
Rep. Yvette Clarke, who introduced a proposal to regulate the use of AI in politics, applauded the FCC's ruling but said it is now up to Congress to act.
“I think Democrats and Republicans can agree that AI-generated content used to mislead people is a bad thing, and we need to work together to give people the tools to distinguish what's real from what isn't,” said Clarke, a Democrat from New York.
The AI-generated robocalls that attempted to influence New Hampshire's Jan. 23 primary used a voice similar to Biden's, deployed his signature phrase “What a bunch of malarkey” and falsely suggested that voting in the primary would prevent residents from casting ballots in November's general election.
“New Hampshire got a taste of how AI can be used inappropriately in the electoral process,” said New Hampshire Secretary of State David Scanlan. “It is certainly appropriate that we try to understand its use and application, so that the electorate is not misled in a way that harms our elections.”
State Attorney General John Formella said Tuesday that investigators had identified Life Corp., a Texas-based company, and its owner, Walter Monk, as the source of the calls, which went out to thousands of state residents, mostly registered Democrats. Monk stated that another Texas company, Lingo Telecom, transmitted the calls.
According to the FCC, both Lingo Telecom and Life Corp. have been investigated before for illegal robocalls.
Lingo Telecom issued a statement Tuesday saying it “acted immediately” to assist with the investigation into the robocalls impersonating Biden. The company stated that it “had no involvement in the development of the content of the calls.”
A man who answered the phone at Life Corp. declined to comment Thursday.