Political ads have never been known for accurately portraying the candidate’s opponent — and now artificial intelligence threatens to make misrepresentation more realistic than ever. Rather than waiting for AI to cause chaos in the 2024 election, regulators, lawmakers and political parties should act now.
Florida Gov. Ron DeSantis’s 2024 team posted manufactured depictions of Mr. Trump during his time as president hugging Anthony S. Fauci.
"Donald Trump became a household name by FIRING countless people *on television*. But when it came to Fauci..." — DeSantis War Room (@DeSantisWarRoom), June 5, 2023
Mr. Trump, in turn, shared a parody of Mr. DeSantis’s much-mocked campaign launch featuring AI-generated voices mimicking the Republican governor as well as Elon Musk, Dick Cheney and others. And the Republican National Committee released an ad stuffed with fake visions of a dystopian future under President Biden. The good news? The RNC included a note acknowledging that the footage was created by a machine.
[Screenshots from the RNC's AI-generated ad: the Republican National Committee released an ad in April entirely illustrated with AI-generated images that depicted a dystopian future if President Biden were re-elected. Source: Republican National Committee]
The bad news is that there's no guarantee disclosure will become the norm. Rep. Yvette D. Clarke (D-N.Y.) has introduced a bill that would require disclosures identifying AI-generated content in political ads. That's a solid start — and a necessary one. Better yet, the RNC, the Democratic National Committee and their counterparts coordinating state and local races should go further than disclosure alone, telling candidates to identify such material in all their messaging, including fundraising and outreach.
The party committees should also consider taking some uses off the table entirely. Large language models can be instrumental for small campaigns that can't afford to hire staff to draft fundraising emails. Even deeper-pocketed operations could stand to benefit from personalizing donor entreaties or identifying likely supporters. But those legitimate uses are different from simulating a gaffe by your opponent and blasting it out across the internet or paying to put it on television.
Ideally, campaigns would refrain altogether from using AI to depict false realities — including, say, to render a city exaggeratedly crime-infested to criticize an incumbent mayor or to fabricate a diverse group of eager supporters. Similar effects could admittedly be achieved with more traditional photo-editing tools. But the possibility that AI will evolve into an ever more adept illusionist, as well as the likelihood that bad actors will deploy it to huge audiences, means it’s crucial to preserve a world in which voters can (mostly) believe what they see.
Party committees should do this rule-setting together, signing a pact, proving that the integrity of information in elections isn’t a partisan question. And honest candidates should refrain of their own accord from dishonest tricks. Realistically, though, they’ll need a push from regulators. The Federal Election Commission already has a stricture on the books prohibiting the impersonation of candidates in campaign ads. The agency recently deadlocked over examining whether this authority extends to AI images. The commissioners who voted no on opening the issue to public comment should reconsider. Even better, lawmakers should explicitly grant the agency the authority to step in.
There are plenty of reasons to worry about what the rise of AI will do to our democracy. Persuading foreign adversaries as well as domestic mischief-mongers not to sow discord is probably a lost cause. Platforms’ job of rooting out disinformation has become all the more important now that better lies can be told to so many people for so little money — and all the more difficult. Congress is working on a broad-based framework to regulate AI, but that will take months or even years. There’s no excuse for government not to take smaller steps forward on the path immediately in front of it.