OpenAI's text-to-video application, Sora, has rapidly gained popularity, reaching over one million downloads on the iOS App Store. However, its ability to generate realistic videos of deceased public figures has ignited a significant ethical debate, drawing criticism from family members and experts concerned about misinformation and the use of a person's likeness without consent.
Key Takeaways
- OpenAI's Sora app can create realistic videos of deceased celebrities and historical figures, raising ethical and legal questions.
- Family members of Robin Williams, Martin Luther King Jr., and George Carlin have publicly requested users to stop creating these AI-generated videos.
- Experts warn that such technology could lead to the spread of historical misinformation and a general erosion of trust in digital media.
- OpenAI has stated it will provide more control to rights holders, but current detection methods like watermarks are considered easily removable.
The Rise of AI-Generated Likenesses
Initially launched as a platform for creative expression, Sora allows users to generate short videos from simple text prompts. The application can produce a wide array of scenarios, including placing well-known individuals in situations they never experienced. Examples include videos of singer Aretha Franklin making candles or President John F. Kennedy announcing that the moon landing was faked.
The speed and ease with which these videos can be created have put the app at the center of a debate about digital consent. While the app includes features that let living users control how their likeness is used, it offers no such protection for those who are deceased and cannot opt out.
A New Form of Digital Content
Text-to-video generators like Sora and Google's Veo represent a significant leap in AI capabilities. These models are trained on vast datasets of images and videos, allowing them to create novel content that is increasingly difficult to distinguish from real footage. This realism is the primary driver of both their potential and the concerns surrounding their use.
Families and Estates Voice Opposition
The creation of AI-generated videos featuring deceased celebrities has prompted public outcry from their families. Zelda Williams, daughter of actor Robin Williams, expressed her disapproval on social media, urging people to stop creating such content of her late father.
"If you've got any decency, just stop doing this to him and to me, to everyone even, full stop," Zelda Williams wrote. "It's dumb, it's a waste of time and energy, and believe me, it's NOT what he'd want."
Similarly, Bernice King, daughter of civil rights leader Dr. Martin Luther King Jr., requested a halt to the manipulation of her father's image and speeches. The family of comedian George Carlin has also stated they are actively working to combat deepfakes of him.
Legal and Ethical Gray Areas
The legal framework for protecting a person's likeness after death varies, but the speed of AI development presents new challenges. Adam Streisand, an attorney who has represented celebrity estates, noted that while laws exist to protect against such reproductions, the legal system struggles to keep pace with the technology.
Streisand described the situation as an "almost 5th dimensional game of whack-a-mole," where the slow pace of human-dependent judicial processes cannot effectively manage the rapid creation and distribution of AI-generated content.
Managing Legacies in the AI Era
Mark Roesler, chairman of CMG Worldwide, has managed the intellectual property for over 3,000 deceased personalities. He acknowledges that while new technology brings risks of abuse, it can also play a role in keeping the legacies of historical figures alive for new generations.
The Broader Societal Impact
Experts in media studies and computer science have raised alarms about the potential long-term effects of realistic deepfake technology. Liam Mayes, a lecturer at Rice University, identified two primary risks associated with the proliferation of AI-generated video.
First, Mayes warns of the potential for "nefarious actors undermining democratic processes" and the use of deepfakes in scams. Second, he suggests that the inability to distinguish real from fake could lead to a significant erosion of trust in media and public institutions.
This concern is amplified by the ability to create convincing videos of historical figures making false statements. An experiment by NBC News successfully generated videos of President Dwight Eisenhower admitting to bribery and U.K. Prime Minister Margaret Thatcher dismissing the D-Day landings, highlighting the tool's potential for spreading historical misinformation.
OpenAI's Response and Detection Challenges
In response to the growing concerns, OpenAI has acknowledged the need for better controls. A company spokesperson stated that public figures and their families should have control over the use of their likeness. The company allows authorized representatives of recently deceased individuals to request the removal of their likeness from the app.
OpenAI CEO Sam Altman has also indicated that the company plans to give rights holders "more granular control over generation of characters." He mentioned that some rights holders are interested in the concept of "interactive fan fiction" but want to specify how their characters can be used.
The Problem of Identification
To help identify AI-generated content, OpenAI has implemented several measures, including visible watermarks, metadata, and invisible signals within the video files. However, experts are skeptical about their effectiveness.
- Easily Removable: According to Sid Srinivasan, a computer scientist at Harvard University, visible watermarks and metadata can be removed by determined individuals.
- Limited Access to Tools: Wenting Zheng, a professor at Carnegie Mellon University, argued that for detection to be effective on a large scale, OpenAI would need to share its detection tools with social media platforms. OpenAI has not confirmed if it has done so.
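Srinivasan's point about removability comes down to where provenance labels live: in the file's container metadata, not in the pixels themselves, so anyone who rewrites the file can simply drop them. The sketch below illustrates the general idea with a tiny PNG carrying a hypothetical "provenance" text chunk; rewriting the file while keeping only the chunks needed to render the image discards the label and leaves the pixels untouched. This is a generic illustration of metadata stripping, not OpenAI's actual watermarking or C2PA metadata format.

```python
import struct
import zlib

def make_png_with_label(keyword: bytes, text: bytes) -> bytes:
    """Build a minimal 1x1 grayscale PNG containing a tEXt metadata chunk."""
    def chunk(ctype: bytes, data: bytes) -> bytes:
        # PNG chunk: 4-byte length, 4-byte type, data, CRC over type + data
        return (struct.pack(">I", len(data)) + ctype + data
                + struct.pack(">I", zlib.crc32(ctype + data)))
    signature = b"\x89PNG\r\n\x1a\n"
    ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + one pixel
    label = chunk(b"tEXt", keyword + b"\x00" + text)   # the "provenance" marker
    iend = chunk(b"IEND", b"")
    return signature + ihdr + label + idat + iend

def strip_metadata(png_bytes: bytes) -> bytes:
    """Rewrite a PNG, keeping only the chunks required to render it.

    Ancillary chunks such as tEXt (where provenance labels often live)
    are silently dropped; the pixel data passes through unchanged."""
    keep = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}
    out = bytearray(png_bytes[:8])  # preserve the signature
    pos = 8
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        end = pos + 12 + length  # length + type + data + CRC
        if ctype in keep:
            out += png_bytes[pos:end]
        pos = end
    return bytes(out)

original = make_png_with_label(b"provenance", b"generated-by: some-ai-model")
cleaned = strip_metadata(original)
print(b"provenance" in original)  # True
print(b"provenance" in cleaned)   # False
```

The same logic applies to video containers: re-encoding or remuxing a file without copying its metadata streams discards embedded credentials, which is why experts argue that detection cannot rely on metadata alone.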
Some companies are now developing AI systems specifically to detect AI-generated content. Ben Colman, CEO of Reality Defender, explained that his company uses AI to find patterns that are imperceptible to humans. Similarly, McAfee's Scam Detector software analyzes audio for "AI fingerprints." Despite these efforts, technology is evolving rapidly, and according to McAfee, 1 in 5 people report having been a victim of a deepfake scam.