Congress wrestles with AI’s boost to campaigns, potential misuse

Gopal Ratnam | CQ-Roll Call (TNS)

WASHINGTON — Lawmakers pushing ahead with some of the first bills governing artificial intelligence are confronting old problems as they deal with a new technology.

At a recent Senate committee markup of legislation that would prohibit the distribution of deceptive AI-generated content in campaigns for federal office and require disclosure when AI is used, some Republicans voiced support for the measures' goals while voting against them, citing potential limits on free speech.

“We have to balance the potential for innovation with the potential for deceptive or fraudulent use,” Nebraska Republican Sen. Deb Fischer, ranking member of the Senate Rules and Administration Committee, said at the markup. “On top of that, we can’t lose sight of the important protections our Constitution provides for free speech in this country. These two bills do not strike that careful balance.”

Political battles over AI are only likely to intensify as campaigns increasingly rely on it to fine-tune messages and find target audiences, and as others use it to spread disinformation.

The technology is here to stay, proponents say, because AI greatly increases efficiency.

“Campaigns can positively benefit from AI-derived messaging,” said Mark Jablonowski, who is president of DSPolitical, a digital advertising company that works with Democratic candidates and for progressive causes, and chief technology officer at its parent, Optimal. “Our clients are using AI successfully to create messaging tracks.”

But consultants, lawmakers and government officials say the same tools that boost campaign efficiency can also be used to spread disinformation or impersonate candidates, sowing confusion among voters and likely eroding confidence in the electoral process.

Senate Majority Leader Charles E. Schumer, D-N.Y., echoed those concerns at the Rules markup.

“If deepfakes are everywhere and no one believes the results of the elections, woe is our democracy,” he said. “This is so damn serious.”

Sen. Amy Klobuchar, D-Minn., the committee’s chairwoman, said AI tools have “the potential to turbocharge the spread of disinformation and deceive voters.”

The panel advanced a measure that would prohibit deceptive AI-generated content in campaigns for federal office and another that would require disclaimers when AI is used, both on 9-2 votes, with GOP lawmakers casting the opposing votes.

Klobuchar said she would be open to changes to address concerns raised by Republicans.

A third measure, requiring the Election Assistance Commission to develop guidelines on the uses and risks of AI, advanced on an 11-0 vote.

Campaign backend boost

Campaign workers may enter a few prompts into generative AI tools that then spit out 50 or 60 unique messaging tracks, with workers choosing the top three or four “that really hit the mark,” Jablonowski said in an email. “There are many efficiency gains helping campaigns do more with less and create a more meaningful message, which is very important in politics.”

Consultants and digital advertising firms now have access to more than two dozen AI-based tools that assist with various aspects of political campaigns, ranging from tools that generate designs, ads and video content to those that draft emails and op-eds, along with media monitoring platforms, according to Higher Ground Labs, a venture fund that invests in tech platforms to help progressive candidates and causes.

“AI-generated content is becoming most prevalent in political communications, particularly in content generation across images, video, audio, and text,” Higher Ground Labs said in a May 23 report. “Human oversight remains critical to ensure quality, accuracy and ethical use.”

The report cited one study that found that using AI tools to generate fundraising emails “grew dollars raised per work hour by 350% to 440%.” The tools helped save time without losing quality even when employed by less experienced staffers, the report said.

AI tools also are helping campaigns with audience targeting. In 2023, Boise, Idaho, Mayor Lauren McLean built a target audience using an AI tool that proved more effective at identifying supporters and “outperformed standard partisanship models,” according to the Higher Ground report.

But even the consultants who rely on these new technologies are aware of the downsides.

“I won’t sugarcoat it. As someone who has been in this space for two decades, this is the sort of Venn diagram I focus on,” Jablonowski said, referring to the intersection of AI tools and those who might misuse them. “I think we’re going to see a lot of good coming from AI this year, and we’re going to see significant potential challenges from bad actors. This keeps me up at night.”

Beyond legislation, the Federal Communications Commission is looking into new rules that would require disclosures on messages generated using AI tools.

“As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” FCC Chair Jessica Rosenworcel said in a May 22 statement. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see.”

The FCC said the proposed rules are not intended to prohibit AI-generated content but to require disclosure when the technology is used.

States also are racing to pass laws that would require campaigns and candidates to disclose use of AI tools in their messages.

Alabama became the latest state to enact a law criminalizing the deceptive use of AI in election campaigning. The measure, passed last month, makes it a misdemeanor on a first offense, and a felony for subsequent violations, to distribute AI-generated deepfakes falsely showing a person saying or doing something they did not.

A Florida measure signed into law by Gov. Ron DeSantis in April likewise imposes prison terms for running AI-generated ads without disclosure.

Several other states have enacted laws requiring disclosure of the use of AI in generating messages and ads and imposing civil penalties for failing to do so.

Still evolving

Deepfake messages are not theoretical. Last week, the FCC proposed a $6 million fine against Steve Kramer, a political consultant, for organizing a fake robocall in New Hampshire in which an AI-doctored voice of President Joe Biden urged voters to skip the state’s primary in January; authorities say 20,000 or more voters received the call. Kramer admitted he was behind the call in an interview with CBS News.

New Hampshire Attorney General John Formella charged Kramer with felony voter suppression and misdemeanor impersonation of a candidate. A spokesman for the attorney general’s office said Kramer is set to be arraigned in early June.

Jablonowski argued that some bad actors may break the rules regardless of whether laws require disclosure because the payoff might be worth any potential consequences.

“It is particularly concerning that people who use generative AI maliciously are not going to be following the rules no matter what industry and regulators say,” Jablonowski said. “Requiring folks to label content as being created by generative AI only works if people follow those rules with fidelity.”

One way to stem the spread of fake messages is for social media platforms to curb them, Jablonowski said. Meta Platforms Inc., for example, requires disclosure for ads using AI. And the company has said it will label AI-generated images on Facebook, Instagram and Threads.

Nick Clegg, president of global affairs at Meta, told MIT Technology Review at a May 22 conference that the company has yet to see large-scale use of AI-generated deepfakes on its platforms.

“The interesting thing so far — I stress, so far — is not how much but how little AI-generated content [there is],” Clegg said at the conference.

Tools to detect AI-generated material are not perfect and still evolving, and watermarks or digital signatures indicating AI-generated content can be tampered with, he said.

In addition to AI-generated deepfakes, social media companies are still grappling with old-fashioned misinformation on their platforms, Jablonowski said, which is likely to sow confusion and distrust among voters. Despite laws and platform policies, people bent on using AI to create confusion “are going to do whatever they think they can get away with,” said Jerry McNerney, a senior policy adviser at the law firm Pillsbury Winthrop Shaw Pittman LLP.

McNerney is a former member of Congress who was co-chair of the Congressional Artificial Intelligence Caucus.

“Trying to keep ahead of [such bad actors] with specific prohibitions is going to be a losing battle,” McNerney said in an interview, arguing that federal agencies and industry groups may have to come up with standards that are enforceable. “You need something more systemic.”

©2024 CQ-Roll Call, Inc., All Rights Reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.