Artificial intelligence landscape is changing as 2024 election cycle heats up


A lone voter has a choice of privacy voting kiosks at which to cast her ballot at this north Jackson, Miss., election precinct during Mississippi's party primaries, Tuesday, March 12, 2024. (AP Photo/Rogelio V. Solis)
The next few months will bring us fully into the throes of the 2024 election campaign.

But this year will be like no other, as warnings grow over the changing landscape of artificial intelligence and the potential threat it poses before millions of Americans head to the polls in November.

“It’s one of my areas of greatest concern. The ability of these models to manipulate, to persuade," OpenAI CEO Sam Altman said.

Industry leaders like Altman acknowledge that deepfake audio has become indistinguishable from real people's voices. The latest example showing the danger: a voice impersonating President Joe Biden discouraging people from voting in New Hampshire's primary.

The AI-generated voice impersonating the president said: “It’s important you save your vote for the November election. Voting this Tuesday only enables Republicans in their quest to elect Donald Trump again."

The Federal Communications Commission has now banned deepfake robocalls and the Federal Trade Commission has proposed a ban on impersonation of individuals in government and business.

Despite these actions, President of Public Citizen Rob Weissman says the Federal Election Commission needs to do much more.

“The Federal Election Commission is completely falling down on the job, leaving us terribly vulnerable for the 2024 election cycle," Weissman said.

On a more positive note, he said, many states have taken action. The Georgia House last month passed a bill making it a felony to use deceptive video or audio to impersonate candidates.

“How can we have election integrity without knowing what the candidates are saying and whether what they're saying is what they truthfully believe in?" State Rep. Todd Jones, R-Ga., asked.

Social media networks are already preparing protections against AI-generated content this year. YouTube now requires labels for some AI-generated content, and Meta, which owns Facebook and Instagram, has similar policies. But concerns are rising that bad actors will disregard the policies in place.

"If you’re a candidate and you're portrayed two days before the election in a video falling down drunk, you’re in a lot of trouble," Weissman said. "And to get up and tell people that wasn’t you, that’s gonna be a hard sell, because people would’ve seen it with their own eyes."

Some are growing even more deeply concerned that if things continue down this path, the American people won't trust anything they see or hear, a breakdown in trust that could in turn undermine American democracy.
