Conservative activist Robby Starbuck has filed a lawsuit against Google seeking a minimum of $15 million in damages, alleging the company's artificial intelligence programs generated and spread false, defamatory information about him. The legal action, filed in Delaware Superior Court, accuses the tech giant of negligence and malice for producing fabricated claims, including serious criminal allegations.
The lawsuit contends that Google's AI tools, including Gemini and its predecessor Bard, created detailed and baseless accusations against Starbuck when users prompted the systems for information about him. This case highlights the growing legal and ethical questions surrounding AI-generated content and the responsibility of the companies that develop these powerful technologies.
Key Takeaways
- Robby Starbuck is suing Google for at least $15 million over alleged defamation by its AI products.
- The lawsuit claims Google's AI generated false accusations of rape, sexual assault, and other serious crimes.
- Google has described the false outputs as "hallucinations," a known issue with Large Language Models (LLMs).
- The case could set a significant legal precedent regarding corporate liability for AI-generated misinformation.
Details of the Allegations
The lawsuit, filed on Wednesday, outlines a series of damaging falsehoods allegedly produced by Google's AI systems since 2023. According to the complaint, queries about Robby Starbuck resulted in AI-generated biographies that were "outrageously false."
These fabrications reportedly included a lengthy and entirely invented criminal record. The AI allegedly produced claims that Starbuck had faced charges for stalking, drug offenses, and resisting arrest. The accusations escalated to include claims of murder and an association with Jeffrey Epstein.
Starbuck stated that the most alarming falsehood was a fabricated accusation of child rape. He identified this as the final impetus for pursuing legal action after previous attempts to resolve the matter directly with the company failed.
"The breaking point for me was when they accused me of child rape. That was where I was like, ‘We have to just go forward with the lawsuit. They're clearly not taking this seriously,'" Starbuck said in a statement.
The complaint further alleges that Google was aware of the issue. It claims Starbuck's legal team sent multiple cease-and-desist letters to the company before filing the suit, but the defamatory outputs continued. The lawsuit also makes the specific claim that one of Google's own AI models, Gemini, stated the false information had been shown to over 2.8 million unique users.
Google's Response and the "Hallucination" Defense
In response to the allegations, Google has pointed to a widely recognized phenomenon in artificial intelligence known as "hallucinations." This term describes instances where an AI model generates information that is incorrect, nonsensical, or not based on its training data.
What Are AI Hallucinations?
AI hallucinations occur when a Large Language Model (LLM) like Gemini produces confident-sounding but factually incorrect statements. This happens because the models are designed to predict the next most likely word in a sequence, not to verify facts. They can invent sources, misinterpret data, or combine unrelated information to create plausible but untrue narratives.
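To make that mechanism concrete, the sketch below shows plain next-token prediction with a small open model. It assumes the Hugging Face transformers and PyTorch libraries and the freely available "gpt2" model; it is an illustration of how language models generate text in general, not of Gemini or any Google system.

```python
# Illustrative sketch only: a small open model ("gpt2") via Hugging Face
# transformers, not Google's Gemini. It shows that the core generation step
# is ranking candidate next tokens by probability, with no fact-checking.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # One score per vocabulary token, for every position in the prompt.
    logits = model(**inputs).logits

# Probabilities for the single token that would come next after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The model only reports which continuations are most likely given its
# training data; whether the top-ranked continuation is factually correct
# is never evaluated at this step.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}  p={prob.item():.3f}")
```

Production chatbots layer sampling, retrieval, and safety filters on top of this basic step, but the generation itself remains probabilistic, which is why developers describe hallucinations as difficult to eliminate entirely.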
A Google spokesperson, José Castañeda, addressed the issue, stating that such occurrences are a "well-known issue for all LLMs." He noted that the company works to minimize them and that some of the claims involve outputs from Bard, an earlier version of its AI; those outputs, he said, were addressed in 2023.
The company also suggested that the way a user phrases a question can influence the AI's response. "If you’re creative enough, you can prompt a chatbot to say something misleading," Castañeda commented. Google indicated it was unable to replicate the specific defamatory results Starbuck cited within its main consumer-facing products.
Google also drew a distinction between its consumer Gemini app and Gemma, an open model designed for developers. The company believes some of the problematic outputs may have originated from customized versions of Gemma, a model it argues is fundamentally different from the application most people use.
A Potential Landmark Case for AI Accountability
This lawsuit is not Starbuck's first encounter with AI-generated defamation. Earlier this year, he reached a settlement with Meta over similar claims regarding its AI chatbot. As a result of that settlement, Starbuck now serves as a consultant to Meta's policy team, working to help reduce political bias and hallucinations in its AI models.
The case against Google, however, could have far-reaching implications for the entire tech industry. It directly confronts the question of legal liability for information created by generative AI. While tech companies have long been shielded from responsibility for user-generated content under Section 230 of the Communications Decency Act, it remains unclear how courts will treat content generated by the companies' own AI systems.
Legal experts are closely watching cases like this to see if courts will classify AI-generated content as speech created by the company itself, which would expose them to traditional defamation and negligence claims without the protection of Section 230.
Starbuck expressed his hope that the lawsuit will establish a necessary precedent for the responsible development and deployment of artificial intelligence. He voiced concerns about the potential for harm if AI models are programmed in a way that allows them to damage human reputations without consequence.
"Right now, the AI believes it's OK to harm humans and defame them as long as it's in the interest of how it was programmed," Starbuck stated. "That's an incredibly dangerous thing to bake into artificial intelligence."