For the better part of the last three years, AI avatar technology has been operating in what legal scholars politely call a gray area. The rest of us might call it the Wild West. Faces duplicated without permission. Voices cloned for ads. Digital likenesses put to uses their owners never imagined and never approved.
That is starting to change. And the changes matter a lot for anyone who has a digital presence they care about.
Until recently, if someone used your AI-generated likeness without permission, your legal options depended almost entirely on which state you lived in. Some states had strong right-of-publicity protections. Others had almost nothing. And even in the strongest states, the laws were written for a world before AI could generate a convincing replica of your face from a single photograph.
The result was an uneven, unpredictable patchwork that mostly benefited the people doing the misusing, because the effort and cost of pursuing a claim almost never made sense relative to the harm.
Tennessee moved first in a meaningful way. In 2024, the state enacted what became known as the ELVIS Act, short for the Ensuring Likeness, Voice, and Image Security Act. The law prohibits knowingly using an individual's voice or likeness without consent and specifically extends protection to AI-generated digital recreations of voices. Not just recordings. Recreations. That distinction matters enormously.
Other states followed quickly. Arkansas enacted legislation in early 2025 expanding publicity rights to cover AI-generated images, videos, and voice simulations. New York and California have their own protections. Oregon moved to regulate AI avatars specifically in healthcare settings. The list keeps growing.
The biggest development is happening at the federal level. The NO FAKES Act was reintroduced in the Senate in April 2025, representing the most serious attempt yet to create a national framework for protecting digital likenesses.
The bill would establish what it calls a digital replication right, essentially a federal right of publicity allowing individuals to authorize or refuse the use of their voice or visual likeness in AI-generated content. The right would survive death for up to 70 years and would be transferable to heirs. It includes a notice-and-takedown procedure similar to the DMCA, so platforms could be required to remove unauthorized content upon notification.
The coalition behind the bill is unusually broad. SAG-AFTRA, major studios, recording industry associations, and individual artists have all lined up in support, alongside major technology companies. When Hollywood unions and technology interests find common ground on legislation, it tends to signal real momentum.
A few things are worth understanding clearly.
First, the rights being discussed are yours, not the platform's and not the AI company's. The laws being written are designed to give individuals control over how their digital identity is used, even after it has been created with AI assistance.
Second, consent is becoming infrastructure. Legal experts tracking this space make the point that companies relying on AI likenesses must treat publicity rights as core infrastructure, not as an afterthought. That applies to businesses using employee or spokesperson avatars as much as it applies to individuals.
Third, state-by-state compliance is not optional if you are operating at scale. If your content or your business reaches people in New York, California, Tennessee, and Arkansas, you are subject to four different sets of rules. That complexity is only going to increase as more states act.
Even if the NO FAKES Act passes, legal frameworks are reactive. They give you tools to fight back after something has gone wrong. They do not prevent the deepfake from being made. That is why the community forming around InsureMyAvatar matters. We are the people asking what real protection looks like before the answers are fully built, because waiting until everything is figured out is its own kind of risk.
The most important takeaway from all the legislative activity is this: lawmakers at the state and federal level have concluded that AI avatars create real harm that requires real protection. They are right. The question is what you are doing, beyond waiting for the law, to protect yourself today.