Ashley St Clair, a writer and the mother of one of Elon Musk’s children, has reported being targeted by users of the AI tool Grok, which is integrated into Musk’s social media platform, X. She stated that users are manipulating her photographs to create non-consensual, sexualized images, describing the experience as horrifying and a violation.
The incidents involve altering both recent and past photos, including one from her childhood, to depict her in sexually suggestive contexts. St Clair has publicly called out the platform for what she describes as an inadequate response to the harassment, highlighting a growing concern over the misuse of generative AI tools for creating abusive content.
Key Takeaways
- Ashley St Clair alleges users of Elon Musk's AI, Grok, are creating fake sexualized images of her.
- Manipulated images reportedly include one from her childhood and another featuring her toddler's backpack in the background.
- St Clair has criticized the response from the social media platform X, stating that reports of the abusive content have been met with slow or no action.
- The events raise broader questions about the safety guardrails on generative AI tools and their potential for targeted harassment.
Allegations of AI-Generated Harassment
Ashley St Clair detailed a series of incidents in which supporters of Elon Musk allegedly used his AI tool, Grok, to generate and distribute digitally altered images of her. The manipulated photos place her in compromising positions; in one case, a photo taken when she was 14 years old was digitally undressed.
St Clair expressed profound distress over the images, particularly one that was manipulated to show her bent over in a bikini with her toddler's backpack visible in the background. This detail, she said, made the harassment feel intensely personal and invasive.
"I felt horrified, I felt violated, especially seeing my toddler’s backpack in the back of it," St Clair stated. "It’s another tool of harassment. Consent is the whole issue."
The harassment reportedly began over a weekend and escalated after she spoke out publicly. St Clair claims that her attempts to have the content removed through the platform's official channels were largely unsuccessful, with some images remaining online for extended periods despite being reported.
What is Grok?
Grok is a generative artificial intelligence chatbot developed by xAI, a company founded by Elon Musk. It is designed to have a more conversational and sometimes humorous tone compared to other AI models. Grok is integrated into the social media platform X (formerly Twitter) for premium subscribers, giving them the ability to generate text and, more recently, manipulate images based on prompts.
Platform Response and Moderation Concerns
A central part of St Clair's complaint concerns the response from X. She said the platform's reaction to her reports was slow and inconsistent. While some manipulated images were eventually taken down, she noted that an altered photo of her as a child remained on the platform for more than 12 hours after she reported it.
Several of the images highlighted by St Clair were removed only after media outlets contacted X for comment. This delay raises questions about the effectiveness of the platform's content moderation systems when dealing with AI-generated abusive material.
In a statement, a spokesperson for X said the company takes action against illegal content. "We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," the spokesperson said. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
Despite these assurances, St Clair believes the system is failing. She argues that the individuals behind these platforms could implement stronger safeguards to prevent such abuse almost immediately if they chose to.
The Broader Implications for AI Safety
This incident is part of a larger, troubling trend involving the misuse of generative AI. Experts have long warned that as AI image generation becomes more accessible, it could be weaponized for harassment, disinformation, and the creation of non-consensual explicit material, often referred to as "deepfake porn."
St Clair argues that this form of digital abuse is being mainstreamed on major social media platforms. She described how the problem seems to be evolving, with users creating increasingly violent and disturbing images.
Disturbing Trends
St Clair mentioned she has been sent other examples of Grok's misuse, including an image of a fully clothed six-year-old girl that was altered to show her in a bikini and covered in a substance meant to resemble semen. She also noted seeing images where AI was used to add bruises or depict women being tied up or mutilated.
She believes this type of harassment is designed to silence women and discourage them from participating in online public discourse. She warns that the hostile environment it creates could also skew the AI models themselves: if women withdraw from the platforms where these models are being developed and deployed, the models risk being trained on a biased dataset.
"If you are a woman you can’t post a picture and you can’t speak or you risk this abuse," she said. "It’s dangerous and I believe this is by design."
St Clair referred to the situation as a "civil rights issue," asserting that women are being denied the ability to participate equally in the digital spaces where future technologies are being shaped. She is reportedly considering legal action and pointed to potential recourse under the new Take It Down Act in the United States, which targets the distribution of non-consensual intimate images, including those generated by AI.
As lawmakers in both the U.S. and the UK debate new legislation to address the digital undressing of individuals, this case underscores the urgent need for clear regulations and robust safety measures from technology companies developing powerful AI tools.