One of the best ways to pass the time on a drive to the mountains or the beach is a good podcast.
Podcasting has created commercial and creative opportunities for individuals to capitalize on their distinctive voices. From journalists and comedians to thought leaders and content creators, podcasting has allowed individuals to build recognizable audio brands. These voices become deeply associated with their creators and serve as powerful tools of influence and engagement.
In many cases, these individuals may wish to assert control over their voice as it becomes part of their personal brand. This also means that if their voice is cloned or imitated using AI, they may need to seek legal recourse.
Podcasters are not alone in the community of folks looking to protect their distinctive voices as a brand. If you heard the voice of certain actors or singers, would you know it was them? Probably, yes.
Some newer or more experimental applications use AI voice cloning to replicate the sound of a distinctive voice. Unfortunately, these "soundalikes" can launch into the market without the involvement of the individual whose voice is featured. Unless the owner of the so-called "voice rights" has explicitly licensed their voice for this use, this could violate their right of publicity or constitute unauthorized commercial use of their likeness. That growing possibility is pushing the boundaries of how courts view voice in the intellectual property legal landscape.
Ultimately, even if the voice itself is not literally copied but merely imitated, the commercial use of someone’s voice raises serious legal and ethical concerns.
So, how is voice protected by law?
The Voice as a Protectable Right
In many states, voice is explicitly recognized as a protectable aspect of a person’s identity or likeness. For example, California’s right of publicity statute (Cal. Civ. Code § 3344) is among the most expansive, stating that “any person who knowingly uses another’s name, voice, signature, photograph, or likeness… [for commercial purposes] without such person’s prior consent” may be held liable. This law reflects a growing understanding that voice, like one’s image or name, can serve as a personal identifier in commercial contexts.
Voice has also been explicitly recognized as a protectable right in case law for decades. In Midler v. Ford Motor Co. (849 F.2d 460 (9th Cir. 1988)), the Ninth Circuit recognized the “distinctive voice of a professional singer” as part of a person’s identity that deserves legal protection. Bette Midler famously sued Ford, asserting a common law right to control the use of her voice against intentional imitation. Id. (The imitator turned out to be one of her own backup singers.) What gave the voice its commercial value was its recognizable tie to Ms. Midler, so it makes sense that she should be compensated for such commercial use.
North Carolina does not have a distinctive right of publicity statute. In North Carolina, name, image, and likeness (NIL) rights – which can include voice – are protectable as an invasion of an individual's right of privacy. Specifically, courts have confirmed that when an individual’s identity is used to promote a product or service without permission, nominal damages or injunctive relief may be appropriate (see, for example, Flake v. Greensboro News Co., 195 S.E. 55 (N.C. 1938); Barr v. Southern Bell, 185 S.E.2d 714 (N.C. Ct. App. 1972)).
Copyright and Competing Claims
What about protecting voice through copyright law?
In the United States, a person's name, likeness, image, or voice is not considered a work of authorship as required to be copyrightable (see Downing v. Abercrombie & Fitch, 265 F.3d 994, 1004 (9th Cir. 2001)). However, a person's name, likeness, image, or voice could be embodied in a copyrightable work. Put another way, a recording of a voice is copyrightable, but the voice alone may not be.
What makes this more complicated is whether the person whose identity is featured in that work is the same as the person who holds the copyright to that work. That is usually not the case. In Laws v. Sony Music Entm’t, Inc. (448 F.3d 1134 (9th Cir. 2006)), for example, the Court held that because the disputed vocal performance was fixed in a copyrighted recording, the singer’s misappropriation claim under state law was preempted by the Copyright Act. Put another way, the copyright holder's rights superseded the performer's rights of privacy and publicity.
Ms. Midler's claim discussed above was not preempted by copyright law because she sought damages for the imitation of her voice itself, not for the use of the copyrighted song.
This creates competing interests: On one hand, the performer has an interest in their voice and identity; on the other, the copyright owner of the sound recording may control the underlying media. As AI-generated voices become more prevalent, courts will likely be asked to draw clearer lines between what is protected by copyright (the recorded performance) and what is protected by NIL laws (the voice as a unique personal trait).
Yet, we see those lines being blurred. In Denmark, for example, copyright law is set to be updated to extend protection to an individual's face, likeness, and voice as a means of enforcement against AI-generated deepfakes.
Will other jurisdictions follow by extending copyright protection in this way?
False Association and the Lanham Act
What about protecting voice through trademark law?
In addition to right-of-publicity, misappropriation, and copyright claims, trademark law may offer another tool for those seeking to protect their voice. The Lanham Act (the federal trademark statute) prohibits false endorsement or association, particularly when a commercial use implies that an individual has approved of or is affiliated with a product or service.
This legal argument was successfully employed in Waits v. Frito-Lay, Inc., 978 F.2d 1093 (9th Cir. 1992), where singer Tom Waits sued after a soundalike imitated his distinctive gravelly voice in a commercial without his permission. The Court held that the use of the imitation voice created a likelihood of consumer confusion, giving rise to a valid Lanham Act false endorsement claim, in addition to a California right of publicity violation.
The Waits case demonstrates that even when a voice is merely imitated, there can still be liability if that imitation of voice causes consumers to falsely believe the person endorsed the product. As with Midler v. Ford, the Court recognized that a unique voice can serve as a strong personal identifier, one worthy of legal protection even under federal trademark law.
The Lanham Act focuses less on the ownership of the voice itself and more on the consumer perception of endorsement, making it especially effective in cases involving celebrities or public figures whose voices are tied to their commercial personas.
Perhaps Voice is Best Protected by State Laws
In Lehrman v. Lovo, professional voice actors Linnea Sage and Paul Lehrman sued AI startup Lovo Inc., alleging the unauthorized use of their voices to create and sell AI-generated voice clones. Lehrman v. Lovo, Inc., No. 23-cv-08269, 2025 WL 1902547 (S.D.N.Y. July 10, 2025).
The actors had previously been hired through another platform and were told that their recordings would be used for academic research. Yet Lovo eventually accessed the recordings and used them to train its AI voice generator.
The case addresses emerging legal questions at the intersection of AI and intellectual property, involving claims under the Lanham Act, the Copyright Act, and various state laws. In July 2025, the Court dismissed the federal trademark and copyright claims, holding that Sage and Lehrman's voices were neither protectable as source identifiers (under trademark law) nor protectable under copyright (because while the original sound recordings are copyrightable, the voices alone are not).
Interestingly, the Court allowed several state law claims to proceed, including breach of contract, claims under New York’s right of publicity statute, and consumer protection claims, recognizing that the actors plausibly alleged misuse of their identities and misleading commercial practices. This decision suggests how courts may evaluate the use of third-party content to train AI models, AI-generated content generally, and voice cloning under existing legal frameworks.
A Legal System Struggling to Keep Pace
As artificial intelligence technology enables new forms of content creation, the law is under increasing pressure to adapt. While NIL laws, misappropriation torts, copyright law, and trademark law offer a patchwork of safeguards, it is unclear whether these laws can fully anticipate the complexity of digital identity replication.
The growing demand for legal clarity around so-called "voice rights" reflects a broader truth: In a world where technology allows near-limitless creation and imitation, the law should continue to protect individuals and the integrity of human expression.