A new report highlights a worrying rise in AI-manipulated images and videos targeting women in India. The trend is creating a climate of fear, prompting many women to withdraw from online spaces and curtail their public presence.
The widespread adoption of artificial intelligence in India, now the world's second-largest market for OpenAI, has brought both innovation and new challenges. Researchers observe a significant increase in online harassment cases involving AI tools, primarily affecting women and gender minorities.
Key Takeaways
- AI deepfakes targeting women are increasing in India.
- Victims report images being 'nudified' or culturally stigmatizing.
- This abuse leads to women self-silencing and receding from online platforms.
- Legal frameworks in India struggle to address AI deepfakes effectively.
- Social media platforms often provide inadequate responses to abuse reports.
The Rise of AI-Powered Harassment
The Rati Foundation, a charity operating an online abuse helpline in India, collaborated with Tattle to release a report detailing this disturbing trend. Their findings indicate a clear connection between the growing use of AI and new forms of harassment against women.
According to the report, a significant portion of AI-generated content is now used to target women. This includes creating manipulated images or videos that are either explicit or culturally inappropriate for Indian society, such as public displays of affection.
Concerning Statistics
Approximately 10% of the hundreds of cases reported to the Rati Foundation helpline now involve AI-manipulated images. This highlights how easily realistic-looking content can be created and misused.
Impact on Public Figures and Everyday Women
High-profile cases have brought some attention to the issue. The Bollywood singer Asha Bhosle experienced her likeness and voice being cloned by AI and circulated on YouTube. Journalist Rana Ayyub, known for her investigative work, faced a doxing campaign last year that included deepfake sexualized images on social media.
While some public figures, like Bhosle, have pursued legal action to protect their image and voice, the broader impact on ordinary women often goes unnoticed. Many women, like law graduate Gaatha Sarvaiya, feel increasingly unsafe posting personal images online.
"The thought immediately pops in that, 'OK, maybe it’s not safe. Maybe people can take our pictures and just do stuff with them,'" says Sarvaiya, who lives in Mumbai.
The 'Chilling Effect' on Online Presence
The fear of deepfakes is causing a significant shift in how women engage with the internet. Many are becoming more cautious or even withdrawing entirely from online platforms. This phenomenon is often referred to as a "chilling effect."
Rohini Lakshané, a researcher focusing on gender rights and digital policy, also avoids posting photos of herself online. She notes that the ease of misuse makes her extra careful.
Tarunima Prabhakar, co-founder of Tattle, explains the emotional toll this takes. Her organization conducted focus groups across India and identified "fatigue" as a primary emotion among victims. This fatigue often leads to women receding from online spaces altogether.
What are 'Nudify' Apps?
AI tools, sometimes called "nudification" or "nudify" apps, can digitally remove clothes from images. These apps have made once extreme forms of abuse far more accessible and common, enabling perpetrators to create explicit content with minimal effort.
Real-Life Consequences of Digital Alteration
The report details a harrowing case where a woman's photo, submitted for a loan application, was digitally altered using a nudify app. The manipulated image, along with her phone number, was then circulated on WhatsApp when she refused to comply with extortion demands.
This led to a "barrage of sexually explicit calls and messages" from unknown individuals. The victim reported feeling "shamed and socially marked, as though she had been 'involved in something dirty'."
Legal Challenges and Platform Accountability
In India, as in many parts of the world, deepfakes occupy a legal grey area: no specific laws address them as a distinct harm. Existing laws against online harassment and intimidation can be applied, but the process is often lengthy and complex.
"But that process is very long," says Sarvaiya, emphasizing that India's legal system is ill-equipped to handle AI deepfakes effectively.
Inadequate Platform Responses
Part of the problem lies with the platforms where these images are shared, including YouTube, Meta, X, Instagram, and WhatsApp. Indian law enforcement agencies describe the process of getting these companies to remove abusive content as "opaque, resource-intensive, inconsistent and often ineffective."
Even when platforms do act, their responses are frequently insufficient. In the extortion case involving the nudified loan application photo, WhatsApp's action was delayed, and the images had already spread widely. Similarly, Instagram's response to an Indian creator harassed by nude videos was described as "delayed and inadequate" despite "sustained effort" from the victim.
Two patterns recur across the cases in the report:
- Victims are often ignored when reporting harassment.
- Abusive content frequently reappears elsewhere, a phenomenon the report calls "content recidivism."
The Rati Foundation's report concludes that AI-generated abuse tends to multiply easily, spread widely, and resurface repeatedly. Addressing this issue effectively will require greater transparency and data access from the platforms themselves.