ROP on the Rise: Right of Publicity Claims Will Rise as States Address AI-Generated Deepfakes and Voice Cloning
Authors
Myriah V. Jaworski , Chirag H. Patel , Nicolas V. Dolce
Laws prohibiting the use of a person’s likeness for commercial gain have been in effect for some time, testing everything from the value of an influencer’s endorsement to “freemium” reports by people search companies.
Right of Publicity (ROP) claims, like the state statutes and common law upon which they rely, are varied in substance and outcome. But in general, they are brought to remedy some alleged misuse of a person’s actual likeness – their image in a photograph or voice in a recording. But what happens if the alleged likeness is entirely AI-generated?
To the extent AI-generated deepfakes and voice imagery did not already fit within existing right of publicity laws, states are amending those laws or enacting new ones specifically to address AI-derived content. This legislation follows advocacy by musicians and entertainers, including an open letter issued by the Artists Rights Alliance on behalf of 200+ songwriters and musicians, calling the commercial use of AI vocals and imagery, including in training data for AI models, “predatory” and a practice that “must be stopped.”
For example, in March 2024, Tennessee enacted the Ensuring Likeness, Voice, and Image Security (ELVIS) Act of 2024 (effective July 1, 2024), becoming one of the first states to enact legislation that directly regulates the misappropriation of an individual’s identity through use of generative AI (e.g., using AI to create deepfakes or clone an artist’s voice). This right, as one may imagine from a law with the acronym ELVIS, extends to provide postmortem rights to the estates of deceased individuals.
And more is on the way. Currently, there are at least seven proposed state and federal laws that seek to regulate the use of AI to create deepfakes or otherwise impersonate unique attributes of an individual’s identity (i.e., their image, voice, or likeness). Federal regulators are getting involved too – the FTC and US Copyright Office both have active initiatives around AI replication of appearances, voices, or other unique aspects of an individual’s identity.
Importantly, artificial intelligence services, internet platforms, and other technology companies should remain vigilant of this active and proposed regulation, as they could be subject to liability based on content available through their services or platforms.
This article discusses Tennessee’s new ELVIS law and other proposed legislation and regulations relating to AI replication of appearances, voices, or other unique aspects of an individual’s identity.
Tennessee’s ELVIS Act
Liability
Prior to the amendment, Tennessee’s existing right of publicity law, the Personal Rights Protection Act of 1984, Tenn. Code § 47-25-1101 et seq. (TPRPA), prohibited the “unauthorized use” of an individual’s “name, photograph, or likeness” for a “commercial purpose.” The ELVIS Act amended the TPRPA to similarly prohibit the unauthorized commercial use of an individual’s voice.
“Voice” is broadly defined as “a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice of the individual.” Generative AI is already powerful enough to use artists’ audio samples to create new music that duplicates the artists’ voice and sound. This broad definition is aimed at regulating this use of audio samples to create AI-generated deepfakes or audio cloning.
The ELVIS Act created potential liability for not only those who create AI deepfakes or audio clones, but also those who publish and distribute unauthorized AI-generated content and create algorithms, software, or tools that allow for the creation of AI-generated content. Additionally, while the former TPRPA prohibited unauthorized use that was both “knowingly” and “commercial,” these liability provisions are not expressly subject to such restrictions.
Thus, these provisions have massive implications for artificial intelligence services, internet platforms, and other technology companies. The ELVIS Act’s lack of knowledge and commercial use requirements could be read as indirectly imposing content moderation obligations on platforms whose users share content. The ELVIS Act could also impose liability on creators of AI tools, even where the tool’s creator does not itself create the infringing content (or derive a profit from it).
Exemptions
The ELVIS Act narrows the exemptions under the TPRPA in two ways. First, the ELVIS Act narrows the fair use exemption. The TPRPA formerly identified “fair use” as any use “in connection with any news, public affairs, or sports broadcast or account.” Now, fair use is coextensive with any “use [that] is protected by the First Amendment.” Facially, this is not a significant change (as the former “fair use” definition was intended to embody First Amendment protections). However, the shift in language potentially raises the burden on the party invoking the defense, because it must prove a protected use under the First Amendment rather than relying on the statutory language.
Second, the ELVIS Act narrows the TPRPA’s advertising exemption under Tenn. Code § 47-25-1107(c). The former version of the TPRPA did not hold an advertising publisher (e.g., newspapers, magazines, and radio stations) liable for publishing an advertisement with an unauthorized use unless it knew of the unauthorized use. Under the ELVIS Act, the advertising publisher is now subject to liability if it knew or “reasonably should have known” of the unauthorized use.
Remedies and Enforcement
The ELVIS Act’s remedies are the same as those existing under the TPRPA. The Act allows injunctive relief to prevent further unauthorized use and/or destruction of materials, and actual damages (including profits derived from the unauthorized use), plus attorneys’ fees and costs.
However, the ELVIS Act makes one important change as to who has standing to bring claims under the TPRPA. The ELVIS Act grants standing to a licensee (e.g., a record company) when it has either “a contract for an individual’s exclusive personal services as a recording artist” or “an exclusive license to distribute [the artist’s] sound recordings.”
Other Proposed Legislation and Regulation
Artificial intelligence services, internet platforms, and other technology companies should remain aware of other proposed laws and regulations in this space:
Federal Activity
- The Congressional Research Service has suggested that “uses of artificial intelligence (AI) to create realistic images, videos, replicas, or voice simulations of real people” may require a Federal Right of Publicity Law.
- Nurture Originals, Foster Art, and Keep Entertainment Safe (“NO FAKES”) Act of 2023: This Senate Bill seeks to create liability for distribution of AI-generated deepfakes and sound impersonations.
- No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act (“No AI FRAUD Act”): This House Bill seeks to create liability for creation of AI-generated deepfakes and sound clones.
- AI Labeling Act of 2023: This House Bill seeks to require disclosure of all AI-generated content (including interactions with AI chatbots).
State Activity (Proposed Law or Amendments)
- California (AB 1836): This proposed law would create civil liability for the use of “digital replicas” (i.e., AI-generated deepfakes or voice replication) of deceased celebrities. The bill defines “digital replica” as a “simulation of the voice or likeness of an individual that is readily identifiable as the individual and is created using digital technology.”
- Illinois (SB 3325): The proposed legislation would amend Illinois’ Right of Publicity Act to expand the definition of “identity” to include a “simulation of the attribute of an individual, or is created through the use of artificial intelligence.” The amendment would also give standing to third parties who have exclusive distribution rights.
- Kentucky (SB 317): The proposed legislation, much like the ELVIS Act, regulates the unauthorized commercial use of an individual’s identity through AI-generated content.
- Louisiana (SB 217): This proposed legislation amends the Louisiana Election Code to require disclosure of AI-generated deepfakes or digital replication of political candidates.
Regulatory Activity
- The Federal Trade Commission: The FTC issued proposed rulemaking relating to companies that create technologies that are or could be used for creating AI deepfakes or clones. The proposed rule is currently in the public comment phase.
- The US Copyright Office: In early 2023, the US Copyright Office “launched a comprehensive initiative to examine the impact of generative Artificial Intelligence (AI) on copyright law and policy.” The US Copyright Office expects to publish a report on its findings in mid-2024. The first report will focus on “digital replicas, or the use of AI to digitally replicate individuals’ appearances, voices, or other aspects of their identities.” The reports could lead to regulatory or legislative activity.
Increase in ROP Litigation to Address AI
To date, courts have repeatedly beaten back efforts by artists, comedians, and authors to address the use of their content as training data for AI models through copyright or other IP protections. But where the stringent standards of copyright law may be an imperfect fit, the flexibility of right of publicity, or ROP, claims may provide a better avenue to address the same or similar harms.
For example, in Andersen v. Stability AI Ltd., the District Court for the Northern District of California dismissed the right of publicity claims of the plaintiff, a visual artist who alleged her “artist identity” was misappropriated through use in Stability’s training models. But the court did so with leave to amend, specifically directing that the plaintiffs “clarify their right of publicity theories as well as alleged plausible facts in support regarding each defendants’ use of each plaintiffs name in connection with advertising specifically and any other commercial interests of the defendants.”
And, in In re Clearview AI, Inc. Consumer Privacy Litigation, the District Court for the Northern District of Illinois held that the plaintiffs, a class of consumers who alleged that their online information was scraped by Clearview AI for use in its AI model, had adequately stated ROP claims under both California and New York statutory and common law. The plaintiffs had alleged the scraping and use of their photographs and photographic images online, and the court found that their use in the Clearview AI database was sufficiently commercial that the plaintiffs did not need to plead a separate and specific advertising use of their names or likenesses.
With the ELVIS Act and other state and federal proposals, we expect to see a continued increase in ROP claims to address AI-generated content.
This publication is intended for general informational purposes only and does not constitute legal advice or a solicitation to provide legal services. The information in this publication is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking professional legal counsel. The views and opinions expressed herein represent those of the individual author only and are not necessarily the views of Clark Hill PLC. Although we attempt to ensure that postings on our website are complete, accurate, and up to date, we assume no responsibility for their completeness, accuracy, or timeliness.