When experts in artificial intelligence recently showed a gathering of state legislators a deepfake image, generated in early 2022, of former Presidents Donald J. Trump and Barack Obama playing one-on-one basketball, the crowd chuckled at how rudimentary it was.
Then the panel brought out a fake video that was made just a year later, and the legislators gasped at how realistic it looked.
Alarmed by the increasing sophistication of false or misleading political ads generated by artificial intelligence, state lawmakers are scrambling to draft bills to regulate them.
With primary voters about to cast the first ballots in 2024, the issue has become even more pressing for legislators in dozens of states who are returning to work this month.
“States know that there’s going to have to be some regulatory guardrails,” said Tim Storey, president and chief executive of the National Conference of State Legislatures, which convened the A.I. panel at a conference in December. “It’s almost trying to figure out what’s happening in real time.”
The broader goal, legislators said, was to prevent what has already happened elsewhere, especially in some elections overseas. In Slovakia, deepfake audio recordings that falsely purported to capture the leader of a pro-Western political party discussing buying votes may have contributed to that party’s narrow loss to a pro-Kremlin party. And last year, the presidential campaign of Gov. Ron DeSantis of Florida released fake A.I.-generated images of former President Donald J. Trump embracing Dr. Anthony Fauci.
At the beginning of 2023, only California and Texas had enacted laws regulating the use of artificial intelligence in campaign advertising, according to Public Citizen, an advocacy group tracking the bills. Since then, Washington, Minnesota and Michigan have passed laws, with strong bipartisan support, requiring that any ads made using artificial intelligence disclose that fact.
By the first week of January, 11 more states had introduced similar legislation — including seven since December — and at least two others were expected to follow soon. The penalties vary; some states impose fines on offenders, while others make the first offense a misdemeanor and subsequent offenses a felony.
State Representative Julie Olthoff, a Republican from northwestern Indiana who attended the legislators’ conference in Austin, Texas, said her background as the owner of a marketing and advertising business made her realize the potential dangers of people trying to manipulate images and words.
Her bill, filed on Jan. 3, would require any “fabricated media” using A.I. to come with a disclaimer stating, “Media depicting the candidate has been altered or artificially generated.” The bill would also allow candidates who were the targets of A.I. ads to pursue civil action.
“People don’t know how much to trust a source anymore, so I think this will help,” she said.
Several A.I. bills have been introduced in Congress, including one led by Senators Amy Klobuchar of Minnesota, a Democrat, and Josh Hawley of Missouri, a Republican. But those bills would apply to federal elections, not state or local ones, said Robert Weissman, president of Public Citizen, which has petitioned the Federal Election Commission to take additional actions against deepfakes.
“It’s one thing to rebut a lie or a mischaracterization, but to rebut a convincing video or recording of you saying something, what do you do?” he said. “That’s why we’re seeing this breadth of interest.”
Some legislators have contemplated banning misleading A.I. ads altogether. But political ads generally are given a lot of latitude in what they can say, and to avoid any First Amendment challenges, most lawmakers have focused on requiring that those who make, produce or disseminate the ads disclose — in legible text or clear audio — that the deceptive ads were produced by artificial intelligence.
Many of the bills apply only to ads released within 90 days of an election, when voters are paying the most attention.
Minnesota’s new law, enacted in May, targets those who use deepfakes to create sexual content without consent, or to damage a political candidate or influence an election, said State Representative Zack Stephenson, a Democrat who represents the northern Minneapolis suburbs.
In Michigan, which adopted its law in late November, the issue came to life when one of the bill’s sponsors, State Representative Penelope Tsernoglou, a Democrat from East Lansing, played a fabricated audio clip of President Biden during her testimony.
“No more malarkey,” said the voice purporting to be Mr. Biden. “As my dad used to say, ‘Joey, you can’t believe everything you hear.’ Not a joke.”
But it was not real. Instead, a friend of Ms. Tsernoglou’s with no experience in tech had used an A.I. voice generator, she said in an interview.
“He said it took him five minutes,” she added.
The proposals have encountered minimal opposition to date, said Ilana Beller, field manager for Public Citizen’s Democracy Campaign. Technology companies have also been generally supportive, while trying to make sure that they are not liable for unwittingly airing an unlabeled deepfake on their platforms.
Of the half-dozen states that have introduced A.I. bills since December, Kentucky stands out: even first-time violators would face a felony charge, punishable by up to five years in prison.
One of the bill’s sponsors, State Representative John Hodgson, a Republican from suburban Louisville, said he felt that a fine of several hundred or thousand dollars would not be enough of a deterrent.
Noting that he took his pet sheep, Sassy and Bossy, to a living Nativity scene during the Christmas holidays, Mr. Hodgson, a retired executive for UPS Airlines, mused: “Imagine if it’s three days before the election, and someone says I’ve been caught in an illicit relationship with a sheep and it’s sent out to a million voters. You can’t recover from that.”