
A bipartisan group of U.S. senators has introduced legislation intended to counter the rise of deepfakes and protect creators from theft through generative artificial intelligence.
“Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those in the creative community, to imitate their likeness without their consent and profit off of counterfeit content,” said U.S. Senator Marsha Blackburn (R-Tenn.).
The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act, or COPIED Act, is co-sponsored by Blackburn, Maria Cantwell (D-Wash.) and Martin Heinrich (D-N.M.), who is also a member of the Senate AI Working Group.
“The COPIED Act will also put creators, including local journalists, artists and musicians, back in control of their content with a provenance and watermark process that I think is very much needed,” Cantwell said.
The act, if passed, would require the National Institute of Standards and Technology to create guidelines and standards for “provenance information, watermarking, and synthetic content detection.”
It would also prohibit the unauthorized use of content from journalists, artists and musicians to train AI models or generate AI content. The proposed law would give individuals the right to sue violators and authorize the Federal Trade Commission and state attorneys general to enforce its provisions.
The law would also prohibit tampering with or disabling AI provenance information.
Content provenance information refers to “state-of-the-art, machine-readable information documenting the origin and history of a piece of digital content, such as an image, a video, audio, or text,” according to the bill.
“Deepfakes are a real threat to our democracy and to Americans’ safety and well-being,” Heinrich said. “I’m proud to support Senator Cantwell’s COPIED Act that will provide the technical tools needed to help crack down on harmful and deceptive AI-generated content and better protect professional journalists and artists from having their content used by AI systems without their consent.”
A wide array of organizations endorsed the bill, including SAG-AFTRA, Nashville Songwriters Association International, Recording Academy, National Music Publishers’ Association, Recording Industry Association of America, News/Media Alliance and the National Newspaper Association, to name a few.
Microsoft-backed (NASDAQ:MSFT) OpenAI has formed partnerships with several major media companies to use their archives to train AI models without infringing on copyrighted material.
In May, OpenAI stopped using the voice “Sky” in an audio version of ChatGPT after observers pointed out that it sounded nearly identical to the voice of actress Scarlett Johansson. Last year, OpenAI was sued by a number of authors, including John Grisham and George R.R. Martin, for using their work to train AI models.
This is not the first U.S. Senate bill targeting the use of AI-generated deepfakes. Last month, U.S. Senator John Hickenlooper (D-Colo.) co-sponsored another bipartisan bill known as the Take It Down Act.
This bill specifically targeted the use of AI-generated deepfake pornography.
“AI innovation is going to change so much about our world, but it can’t come at the cost of our children’s privacy and safety,” said Hickenlooper. “We have a narrow window to get out in front of this technology. We can’t miss it.”