The Washington Post | Democracy Dies in Darkness

Opinion AI could wreak havoc on elections. Congress can get ahead of it.

Screenshots of AI-generated images from ads by the Republican National Committee (left) and Florida Gov. Ron DeSantis’s 2024 campaign team. (Republican National Committee and DeSantis War Room)

Political ads have never been known for accurately portraying the candidate’s opponent — and now artificial intelligence threatens to make misrepresentation more realistic than ever. Rather than waiting for AI to cause chaos in the 2024 election, regulators, lawmakers and political parties should act now.

The past few months have shown us just how convincing AI-generated images can be. Sometimes the examples have had little to do with politics — see, for instance, the mock-up of the pope clad in a ludicrously puffy white coat that fooled swaths of the internet this spring. Sometimes they’ve involved candidates but not come from competing campaigns, as with deep-faked photos of Donald Trump being violently arrested. Yet they’ve also been directly deployed as electoral tools.

Florida Gov. Ron DeSantis’s 2024 team posted manufactured depictions of then-President Trump hugging Anthony S. Fauci.

Mr. Trump, in turn, shared a parody of Mr. DeSantis’s much-mocked campaign launch featuring AI-generated voices mimicking the Republican governor as well as Elon Musk, Dick Cheney and others. And the Republican National Committee released an ad stuffed with fake visions of a dystopian future under President Biden. The good news? The RNC included a note acknowledging that the footage was created by a machine.

Screenshots from RNC’s AI-generated ad

The Republican National Committee released an ad in April entirely illustrated with AI-generated images that depicted a dystopian future if President Biden were re-elected.

Source: Republican National Committee

The bad news is that there’s no guarantee disclosure will be the norm. Rep. Yvette D. Clarke (D-N.Y.) has introduced a bill that would require disclosures identifying AI-generated content in political ads. That’s a solid start — and a necessary one. But, better yet, the RNC, the Democratic National Committee and their counterparts coordinating state and local races should go further than disclosure alone, telling candidates to identify such material in all their messaging, including fundraising and outreach.

The party committees should also consider taking some uses off the table entirely. Large language models can be instrumental for small campaigns that can’t afford to hire staff to draft fundraising emails. Even deeper-pocketed operations could stand to benefit from personalizing donor entreaties or identifying likely supporters. But those legitimate uses are different from simulating a gaffe by your opponent and blasting it out across the internet or paying to put it on television.

Ideally, campaigns would refrain altogether from using AI to depict false realities — including, say, to render a city exaggeratedly crime-infested to criticize an incumbent mayor or to fabricate a diverse group of eager supporters. Similar effects could admittedly be achieved with more traditional photo-editing tools. But the possibility that AI will evolve into an ever more adept illusionist, as well as the likelihood that bad actors will deploy it to huge audiences, means it’s crucial to preserve a world in which voters can (mostly) believe what they see.


Party committees should do this rule-setting together, signing a pact to prove that the integrity of information in elections isn’t a partisan question. And honest candidates should refrain of their own accord from dishonest tricks. Realistically, though, they’ll need a push from regulators. The Federal Election Commission already has a stricture on the books prohibiting the impersonation of candidates in campaign ads. The agency recently deadlocked on whether to examine if that authority extends to AI-generated images. The commissioners who voted against opening the issue to public comment should reconsider. Better still, lawmakers should explicitly grant the agency the authority to step in.

There are plenty of reasons to worry about what the rise of AI will do to our democracy. Persuading foreign adversaries as well as domestic mischief-mongers not to sow discord is probably a lost cause. Platforms’ job of rooting out disinformation has become all the more important now that better lies can be told to so many people for so little money — and all the more difficult. Congress is working on a broad-based framework to regulate AI, but that will take months or even years. There’s no excuse for government not to take smaller steps forward on the path immediately in front of it.

The Post’s View | About the Editorial Board

Editorials represent the views of The Post as an institution, as determined through debate among members of the Editorial Board, based in the Opinions section and separate from the newsroom.

Members of the Editorial Board and areas of focus: Opinion Editor David Shipley; Deputy Opinion Editor Karen Tumulty; Associate Opinion Editor Stephen Stromberg (national politics and policy); Lee Hockstader (European affairs, based in Paris); David E. Hoffman (global public health); James Hohmann (domestic policy and electoral politics, including the White House, Congress and governors); Charles Lane (foreign affairs, national security, international economics); Heather Long (economics); Associate Editor Ruth Marcus; Mili Mitra (public policy solutions and audience development); Keith B. Richburg (foreign affairs); and Molly Roberts (technology and society).