Meta and X questioned by lawmakers over lack of rules against AI-generated political deepfakes

Deepfakes generated by artificial intelligence are having their moment this year, at least when it comes to making it look, or sound, like celebrities did something uncanny. Tom Hanks hawking a dental plan. Pope Francis wearing a stylish puffer jacket. U.S. Sen. Rand Paul sitting on the Capitol steps in a red bathrobe.

But what happens next year ahead of a U.S. presidential election?

Google was the first big tech company to say it would impose new labels on deceptive AI-generated political advertisements that could fake a candidate’s voice or actions. Now some U.S. lawmakers are calling on social media platforms X, Facebook and Instagram to explain why they aren’t doing the same.

Two Democratic members of Congress sent a letter Thursday to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino expressing “serious concerns” about the emergence of AI-generated political ads on their platforms and asking each to explain any rules they’re crafting to curb the harms to free and fair elections.

“They are two of the largest platforms and voters deserve to know what guardrails are being put in place,” said U.S. Sen. Amy Klobuchar of Minnesota in an interview with The Associated Press. “We are simply asking them, ‘Can’t you do this? Why aren’t you doing this?’ It’s clearly technologically possible.”

The letter to the executives from Klobuchar and U.S. Rep. Yvette Clarke of New York warns: “With the 2024 elections quickly approaching, a lack of transparency about this type of content in political ads could lead to a dangerous deluge of election-related misinformation and disinformation across your platforms – where voters often turn to learn about candidates and issues.”

X, formerly Twitter, and Meta, the parent company of Facebook and Instagram, didn’t immediately respond to requests for comment Thursday. Clarke and Klobuchar asked the executives to respond to their questions by Oct. 27.


The pressure on the social media companies comes as both lawmakers are helping to lead a charge to regulate AI-generated political ads. A House bill introduced by Clarke earlier this year would amend a federal election law to require disclaimers when election advertisements contain AI-generated images or video.

“That’s like the bare minimum” of what is needed, said Klobuchar, who is sponsoring companion legislation in the Senate that she hopes will get passed before the end of the year. In the meantime, the hope is that big tech platforms will “do it on their own while we work on the standard,” Klobuchar said.

Google has already said that starting in mid-November it will require a clear disclaimer on any AI-generated election ads that alter people or events on YouTube and other Google products. This policy applies both in the U.S. and in other countries where the company verifies election ads. Facebook and Instagram parent Meta doesn’t have a rule specific to AI-generated political ads but has a policy restricting “faked, manipulated or transformed” audio and imagery used for misinformation.

A more recent bipartisan Senate bill, co-sponsored by Klobuchar, Republican Sen. Josh Hawley of Missouri and others, would go further, banning “materially deceptive” deepfakes relating to federal candidates, with exceptions for parody and satire.

AI-generated ads are already part of the 2024 election, including one aired by the Republican National Committee in April meant to show the future of the United States if President Joe Biden is reelected. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic.

Klobuchar said such an ad would likely be banned under the proposed rules. So would a fake image of Donald Trump hugging infectious disease expert Dr. Anthony Fauci that was shown in an attack ad from Trump’s GOP primary opponent and Florida Gov. Ron DeSantis.


As another example, Klobuchar cited a deepfake video from earlier this year purporting to show Democratic Sen. Elizabeth Warren in a TV interview suggesting restrictions on Republicans voting.

“That is going to be so misleading if you, in a presidential race, have either the candidate you like or the candidate you don’t like actually saying things that aren’t true,” Klobuchar said. “How are you ever going to know the difference?”

Klobuchar, who chairs the Senate Rules and Administration Committee, presided over a Sept. 27 hearing on AI and the future of elections that brought witnesses including Minnesota’s secretary of state, a civil rights advocate and some skeptics. Republicans and some of the witnesses they asked to testify have been wary about rules seen as intruding into free speech protections.

Ari Cohn, an attorney at the think tank TechFreedom, told senators that the deepfakes that have so far appeared ahead of the 2024 election have attracted “immense scrutiny, even ridicule,” and haven’t played much role in misleading voters or affecting their behavior. He questioned whether new rules were needed.

“Even false speech is protected by the First Amendment,” Cohn said. “Indeed, the determination of truth and falsity in politics is properly the domain of the voters.”

The Federal Election Commission in August took a procedural step toward potentially regulating AI-generated deepfakes in political ads, opening to public comment a petition that asked it to develop rules on the misleading images, videos and audio clips.

The public comment period for the petition, brought by the advocacy group Public Citizen, ends Oct. 16.


Associated Press writer Ali Swenson contributed to this report.
