Colorado law will require disclosure of AI-generated content in political ads
Communications without proper disclaimers are subject to civil penalties.
A Colorado law taking effect in July will impose new regulations and penalties on using artificial intelligence to manipulate video or images for political campaigns.
House Bill 24-1147, signed into law last week by Democratic Gov. Jared Polis, will regulate the use of deepfakes created for communications about candidates for elective office. The nine-page bill passed the Senate 24-10, and the final version passed the House 44-18.
The new law will require disclaimers on communications generated or substantially altered by AI that falsely depict what a candidate or elected official has said or done. Communications without the required disclaimers are subject to civil penalties. The law also provides a private right of action for candidates or officeholders who are the subject of deepfakes.
Administrative hearing officers will be allowed to impose civil penalties for distribution of a communication that includes a deepfake related to a candidate for elective office. The law falls under Colorado’s “Fair Campaign Practices Act.”
The law will prohibit “distribution of a communication that includes an undisclosed deepfake with actual malice as to the deceptiveness or falsity of the communication related to a candidate for public office,” according to the law’s summary.
The law states that the definition of AI-generated content doesn’t include minimally edited, adjusted, or enhanced images, video, audio, multimedia or text. It also states that such content is permissible if a “reasonable person” wouldn’t take away an altered meaning or significance from the communication.
“AI is a threat to American elections and may supercharge election disinformation through the use of deepfakes,” Democratic Secretary of State Jena Griswold said in a statement. “This new law will help ensure Coloradans know when communications featuring candidates or officeholders are deepfaked and will increase transparency.”
The law states a deepfake “is analogous to a person being forced to say something in a video recorded under duress, where the victim appears to say something they would not normally say, one through force and the other through deepfake technology. A voter’s opinion of a candidate may be irreparably tainted by a fabricated representation of a candidate or elected official saying or doing something they did not say or do.”
The law’s restrictions apply to deepfakes in communications distributed to voters within 60 days before a primary election or 90 days before a general election.
Last week, New Hampshire Republican Attorney General John Formella announced an indictment of Steven Kramer, a political consultant from Louisiana, on 13 counts of felony voter suppression and 13 misdemeanor counts of impersonation of a candidate. An investigation found that thousands of New Hampshire residents allegedly received a robocall message with an AI-generated voice resembling President Joe Biden’s, urging them not to vote in the state’s presidential primary.