WASHINGTON — Robocalls with voices generated by artificial intelligence are now illegal, according to a ruling issued Thursday by the Federal Communications Commission.
The ruling, which falls under the existing Telephone Consumer Protection Act of 1991, takes effect immediately and “protects consumers from unwanted calls made using an artificial or prerecorded voice,” federal officials said.
The federal ruling comes as an investigation continues into robocalls impersonating President Biden that were sent out last month encouraging voters not to participate in New Hampshire's presidential primary election.
The ruling makes the use of voice cloning technology in common robocall scams targeting consumers illegal, giving State Attorneys General across the country new tools to go after the “bad actors” behind the robocalls.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” FCC Chairwoman Jessica Rosenworcel said in a statement. “State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”
The Telephone Consumer Protection Act of 1991 is the primary law the FCC uses to help limit junk calls, federal officials said. It restricts the making of telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages.
Under FCC rules, the law also requires telemarketers to obtain prior express written consent from consumers before robocalling them.
“This Declaratory Ruling ensures AI-generated voices in calls are also held to those same standards,” the FCC said in its statement.
The Telephone Consumer Protection Act gives the FCC civil enforcement authority to fine robocallers, officials said. The commission can also take steps to block calls from telephone carriers facilitating illegal robocalls.
In addition, the law allows individual consumers or an organization to bring a lawsuit against robocallers in court. Also, State Attorneys General have their own enforcement tools which may be tied to robocall definitions under the Telephone Consumer Protection Act.
Meanwhile, Texas-based Life Corporation and a person named Walter Monk have been identified as the source of the fake robocalls impersonating President Biden in the Granite State, New Hampshire Attorney General John Formella said Tuesday.
Authorities said this week they are issuing a cease-and-desist order to Life Corporation for violating state election law that “prohibits any person from engaging in voter suppression by knowingly attempting to prevent or deter another person from voting or registering to vote based on fraudulent, deceptive, misleading, or spurious grounds or information,” Formella said.
The Jan. 21 robocalls were received by “numerous New Hampshire residents,” and “directly encouraged recipients not to participate in the New Hampshire Primary,” said Formella, whose office launched an investigation into these calls.
That investigation involved state and federal agencies including the Anti-Robocall Multistate Litigation Task Force, which is a bipartisan task force made up of 50 state attorneys general, and the Federal Communications Commission Enforcement Bureau, Formella said.
The rise of these types of calls has escalated during the last few years as “this technology now has the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members,” the FCC said in a statement.
While State Attorneys General can already target the outcome of an unwanted robocall that uses an AI-generated voice, such as the scam or fraud it seeks to perpetrate, this action makes the act of using AI to generate the voice in these robocalls itself illegal, expanding the legal avenues through which state law enforcement agencies can hold perpetrators accountable under the law, the FCC said.
In November, the FCC launched a Notice of Inquiry to evaluate how the agency can combat illegal robocalls and how AI might be involved, federal officials said. The agency asked how AI might be used in junk-call scams, such as by mimicking the voices of people consumers know, and whether this technology should be subject to oversight under the Telephone Consumer Protection Act.
The FCC also asked about how AI can help with pattern recognition so that illegal robocalls may be recognized before they ever reach consumers on the phone, officials said.
A coalition of 26 State Attorneys General—more than half of the nation’s AGs—recently wrote to the FCC supporting the approach.
“By taking this step, the FCC is building on its work to establish partnerships with law enforcement agencies in states across the country to identify and eliminate illegal robocalls,” the FCC said. “These partnerships can provide critical resources for building cases and coordinating efforts to protect consumers and businesses nationwide. The FCC offers partner states not only the expertise of its enforcement staff but also important resources and remedies to support state investigations.”
In a statement Thursday, FCC Commissioner Geoffrey Starks called unwanted robocalls “a scourge on our society.”
“I am particularly troubled by recent harmful and deceptive uses of voice cloning in robocalls,” Starks said. “Real world examples here are no longer theoretical. Bad actors are using voice cloning – a generative AI technology that uses a recording of a human voice to generate speech sounding like that voice – to threaten election integrity, harm public safety, and prey on the most vulnerable members of our society.”
This is a developing story. Check back for updates as more information becomes available.
©2024 Cox Media Group