Major Hollywood organizations, including the performers' union SAG-AFTRA, have issued strong statements expressing serious concern over OpenAI's latest video generation model, Sora 2. The unions, studios, and talent agencies warn that the technology could be used to exploit artists' work, likenesses, and intellectual property without consent or compensation.
The collective response from SAG-AFTRA, the Motion Picture Association (MPA), and top talent agencies like UTA and CAA signals a growing conflict between the creative industries and artificial intelligence developers. The core of the dispute revolves around how AI models are trained and whether artists must explicitly agree to have their work, image, and likeness used.
Key Takeaways
- SAG-AFTRA argues that artistic performance must remain human-centered and that AI use requires transparency, consent, and fair compensation.
- Major talent agencies, including UTA and CAA, have described the unauthorized use of their clients' intellectual property by AI models as exploitation.
- OpenAI has acknowledged some concerns, suggesting future controls for rightsholders and a potential revenue-sharing model, though details remain vague.
- A key point of contention is OpenAI's initial "opt-out" policy for training data, which creative guilds argue is not a substitute for informed, "opt-in" consent.
SAG-AFTRA Demands Protections for Human Artistry
The Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA) has taken a firm stand against the unregulated use of AI in media. In a joint statement, President Sean Astin and National Executive Director Duncan Crabtree-Ireland emphasized the importance of human connection in art.
"The world must be reminded that what moves us isn’t synthetic. It’s human," they stated. Their message directly challenges the narrative surrounding AI-generated characters, such as the widely discussed synthetic figure "Tilly Norwood," which they argue distracts from the underlying issues of data sourcing and authorship.
The "Tilly Norwood" Controversy
"Tilly Norwood" is a name given to a synthetically generated character that gained media attention for its realism. SAG-AFTRA argues that publicizing such creations as "breakthroughs" or potential "star signings" misses the point that these models are trained on the work of countless real performers, often without their knowledge or permission.
The union leaders criticized the media and tech companies for creating what they called "a sensationalized narrative, designed to manipulate the public and make space for continued exploitation."
"This story of creating synthetic characters is not about novelty. It’s about authorship, consent and the value of human artistry," Astin and Crabtree-Ireland wrote.
Consent Model at the Core of the Debate
A central issue for SAG-AFTRA is OpenAI's policy regarding the data used to train its models. The union strongly opposes an "opt-out" system, where creators bear the burden of requesting their work be removed from training datasets. They insist on an "opt-in" model, which requires explicit permission before any work is used.
"Opt-out isn’t consent — let alone informed consent," the statement declared. "No one’s creative work, image, likeness or voice should be used without affirmative, informed consent."
Despite this criticism, SAG-AFTRA did acknowledge a positive step in Sora 2. The model's "cameo" function, which allows individuals to create and control a digital replica of themselves, is based on an opt-in system. The union noted this feature reflects months of dialogue with OpenAI and hopes other AI companies will adopt similar principles of informed consent.
Talent Agencies and MPA Echo Concerns
The concerns raised by SAG-AFTRA are shared across the entertainment industry. Major talent agencies have released statements defending their clients' rights against potential infringement by AI platforms.
The United Talent Agency (UTA) stated, "There is no substitute for human talent in our business, and we will continue to fight tirelessly for our clients to ensure that they are protected." The agency labeled the use of intellectual property without consent, credit, or compensation as "exploitation, not innovation."
Similarly, the Creative Artists Agency (CAA) affirmed its commitment to protecting its clients, warning that Sora "exposes our clients and their intellectual property to significant risk." CAA noted the potential for misuse extends beyond entertainment, posing "serious and harmful risks to individuals, businesses, and societies globally."
MPA Cites Widespread Infringement
Charles Rivkin, chairman and CEO of the Motion Picture Association (MPA), also condemned the AI tool. He stated that since Sora 2's release, "videos that infringe our members’ films, shows, and characters have proliferated on OpenAI’s service and across social media."
OpenAI's Response and Future Plans
In the face of widespread industry backlash, OpenAI CEO Sam Altman has indicated the company is re-evaluating its approach. He acknowledged the need for better controls for intellectual property owners, moving away from the initial opt-out model.
Altman wrote that OpenAI plans to give rightsholders "more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls." This suggests a shift toward a system that requires permission rather than assuming it.
He also raised the possibility of a financial model to compensate creators. "We are going to try sharing some of this revenue with rightsholders who want their characters generated by users," Altman explained, though he admitted the exact model would require "trial and error." Hollywood remains skeptical of these vague promises, awaiting concrete policies and actions.
The Path Forward: Regulation and Guiding Principles
The rapid advancement of generative AI has outpaced legislation, creating what SAG-AFTRA calls an "unregulated environment." The union is actively lobbying for stronger legal protections to supplement the gains made during its 2023 strike, which secured the first contractual AI protections for performers.
SAG-AFTRA outlined three core principles guiding its advocacy:
- Performance must remain human-centered. Technology should serve human expression, not replace it.
- AI can enhance creativity, but it must never replace it. AI should be a tool for artists, not a substitute for them.
- AI use must be transparent, consensual, and compensated. These are non-negotiable requirements for ethical AI implementation.
The union is supporting several pieces of federal legislation, including the No FAKES Act, which would prohibit unauthorized digital replicas, and the TRAIN Act, which would mandate transparency in AI training datasets. The collective push from Hollywood's most powerful institutions indicates that the debate over AI's role in creative industries is just beginning, with significant legal and ethical challenges ahead.