The writing assistance company Grammarly is now facing a class-action lawsuit seeking over $5 million in damages. The legal action centers on an artificial intelligence feature that used the names of prominent authors and journalists to provide editing suggestions without their consent.
The lawsuit was filed in the Southern District of New York by award-winning investigative journalist Julia Angwin. It alleges that Grammarly and its parent company, Superhuman, unlawfully misappropriated the names and identities of hundreds of professionals for commercial gain.
Key Takeaways
- A class-action lawsuit has been filed against Grammarly seeking more than $5 million in damages.
- The suit alleges the company used the names of writers, including Julia Angwin and Stephen King, in an AI feature without permission.
- Grammarly has since disabled the controversial "Expert Review" feature following public criticism.
- The company has issued an apology, stating it "missed the mark" and will change its approach.
Details of the Lawsuit Emerge
The federal lawsuit, led by journalist Julia Angwin, claims that Grammarly's "Expert Review" feature illegally capitalized on the reputations of established professionals. The complaint argues that the company engaged in the "misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors to earn profits."
Filed on behalf of Angwin and others similarly situated, the suit contends that this practice violates long-standing laws in New York and California that prohibit using a person's name and likeness for commercial purposes without their explicit permission. The legal team representing the plaintiffs believes the case is straightforward.
"Contrary to the apparent belief of some tech companies, it is unlawful to appropriate peoples’ names and identities for commercial purposes, whether those people are famous or not," the lawsuit states.
The action seeks to prevent Grammarly from using these individuals' names and from attributing advice to them that they never provided.
The Feature at the Center of the Controversy
The tool in question, named "Expert Review," was part of a suite of AI-powered widgets added to the Grammarly platform last year. It allowed users to select from a list of well-known figures, such as Stephen King or Neil deGrasse Tyson, to receive AI-generated critiques of their writing styled after that person.
While Grammarly included a disclaimer stating that the individuals had not endorsed or participated in the tool's development, many writers expressed frustration over the use of their likenesses. The feature leveraged a large language model to simulate the writing advice of these experts, effectively creating digital versions of them without their knowledge or approval.
What is Misappropriation of Likeness?
Misappropriation of likeness, also known as the right of publicity, is a legal principle that protects an individual's right to control the commercial use of their name, image, or identity. Laws in states like New York and California are particularly strong in this area, preventing companies from using a person's identity to endorse or promote a product without consent and often compensation.
Grammarly's Response and Apology
In response to significant public backlash, Superhuman, Grammarly's parent company, has already discontinued the feature. The decision was announced shortly before the lawsuit was formally filed.
Ailian Gan, Superhuman’s director of product management, issued a statement acknowledging the company's misstep. "Based on the feedback we’ve received, we clearly missed the mark," Gan said. "We are sorry and will do things differently going forward."
The company explained that its intent was to help users access insights from thought leaders and to give experts a new way to reach audiences. It now plans to reimagine the feature to give experts control over their representation.
Superhuman CEO Shishir Mehrotra also addressed the criticism on LinkedIn. "Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices," Mehrotra wrote. "This kind of scrutiny improves our products, and we take it seriously."
Plaintiff's Perspective: A Poor Digital Clone
Julia Angwin, the named plaintiff, expressed her surprise upon learning from the tech newsletter Platformer that a digital version of her was offering writing advice. "You know, deepfakes are something I always think celebrities are getting caught up in, not regular journalists," she stated.
Her criticism extended beyond the unauthorized use of her name to the quality of the advice itself. Angwin found that the AI-generated suggestions attributed to her were not just unhelpful, but often detrimental.
In one instance Angwin cited, the AI version of her suggested revising a simple sentence into something longer and more complex, which she said "actually made it harder to understand." In another case, it advised expanding on a theme that was irrelevant to the text.
"It wasn't even just anodyne," Angwin commented on the advice. "It was actually kind of actively making it worse... It felt very scattershot to me. I was surprised at how bad it was."
Broader Implications for AI and Identity
This lawsuit highlights a growing area of legal and ethical conflict in the age of generative AI. As technology makes it easier to simulate human skills and personas, questions about consent, intellectual property, and personal identity are becoming more urgent.
Peter Romer-Friedman, Angwin's attorney, framed the case within this larger context. He noted a trend where professionals who have spent decades honing their skills see their names and expertise "appropriated by others without their consent."
The case against Grammarly could set an important precedent for how tech companies are allowed to use the likenesses and perceived styles of public figures in AI products. As the technology continues to evolve, the legal system will likely face more challenges that test the boundaries of digital identity and commercial rights.