A federal lawsuit filed in San Francisco accuses Meta of violating user privacy through its popular AI-powered smart glasses. The complaint alleges that highly sensitive, personal footage captured by the devices is being viewed by human contractors, directly contradicting the company's privacy assurances.
The legal action, which seeks class-action status, follows a detailed investigation that raised questions about how data from the more than 7 million pairs of glasses sold is handled. The lawsuit claims false advertising, fraud, and breach of contract.
Key Takeaways
- A new lawsuit accuses Meta of misleading consumers about the privacy of its AI smart glasses.
- The complaint alleges that sensitive user footage is reviewed by human contractors in Kenya.
- Meta states that media remains on the device unless users choose to share it for AI features.
- The lawsuit seeks to halt Meta's current advertising practices and requests punitive damages.
Details of the Federal Complaint
The lawsuit, filed on Wednesday in San Francisco, centers on Meta's marketing of its AI smart glasses, which are promoted with the slogan, “Designed for privacy, controlled by you.” Plaintiffs argue that this representation is deceptive. They contend that consumers would not have purchased the devices had they known their private moments could be exposed to strangers.
“Consumers purchased these Glasses believing Meta’s privacy assurances,” the complaint states. “They did not, and could not reasonably, understand that their bedrooms, bathrooms, families, bodies, and more would be exposed to strangers around the world.”
The filing demands an injunction to stop Meta's current advertising methods. It also seeks punitive damages for the alleged violations, aiming to hold the company accountable for its privacy promises.
By the Numbers
- 7 million+: Pairs of Meta's AI smart glasses sold in 2025.
- 30: Employees of a Meta contractor in Nairobi interviewed for the initial investigation.
The Investigation That Sparked the Lawsuit
The lawsuit heavily references a late February investigation by the Swedish newspaper Svenska Dagbladet. The report, a collaboration with other journalists, uncovered claims from data labelers in Nairobi, Kenya, who work for Sama, a Meta contractor.
These workers are tasked with reviewing and annotating data to train Meta's artificial intelligence systems. According to the investigation, the data they reviewed included extremely private footage captured by the smart glasses.
Disturbing Content Described
The Kenyan data labelers described seeing a wide range of sensitive content. This included people who were naked or changing clothes, footage of individuals watching pornography, and conversations about criminal activities and protests. They noted that while faces were often blurred, they were sometimes visible depending on the lighting conditions.
One annotator told the reporters: “You think that if they knew about the extent of the data collection, no one would dare to use the glasses.”
This testimony forms a core part of the argument that users were not adequately informed about how their data would be used when interacting with the glasses' AI features.
Meta's Position on Data Handling
In response to the allegations, Meta has stated it is analyzing the lawsuit. A company spokesperson, Chris Sgro, explained the company's policy on data captured by the glasses.
“Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device,” Sgro said. This suggests that data transmission is an opt-in process for users who want to use the device's AI capabilities.
What is Data Labeling?
Data labeling, or annotation, is a critical process for training artificial intelligence. It involves humans reviewing raw data (like images or videos) and adding labels or tags to help machine learning models understand and recognize patterns. For a device like smart glasses, this could mean identifying objects, transcribing speech, or categorizing scenes.
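As a purely illustrative sketch, the annotation step described above can be modeled in a few lines of Python. Every name here (Annotation, annotate) is hypothetical and has no relation to Meta's or Sama's actual tooling; it simply shows the shape of a human-applied label attached to a piece of raw media:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A human-applied set of labels on one frame of raw media (hypothetical)."""
    frame_id: str
    labels: list = field(default_factory=list)

def annotate(frame_id: str, observed: list) -> Annotation:
    """Record what an annotator identifies in a frame, e.g. objects or scene types."""
    ann = Annotation(frame_id)
    ann.labels.extend(observed)
    return ann

# A labeled frame like this becomes one training example for a vision model.
clip = annotate("frame_0042", ["coffee cup", "laptop"])
print(clip.labels)
```

In a real pipeline, thousands of such labeled examples are fed to a machine learning model so it learns to recognize the tagged objects or scenes on its own.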
The spokesperson acknowledged the use of contractors for data review but emphasized that measures are in place to safeguard user information.
“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do,” Sgro’s statement continued. “We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”
Questions Remain on User Consent
A central point of contention is what it means to “share content” with Meta AI. The company's Terms of Service state that Meta may review interactions with its AIs, and this review can be either automated or manual (human).
As part of the investigation, the Swedish newspaper's reporters also ran a technical test. They found that the AI tool on the glasses would not function without an internet connection. Once connected, the device contacted multiple Meta servers, which the lawsuit interprets as evidence that footage is transmitted for processing.
This raises a critical question for consumers: is using a core AI feature of the glasses implicitly considered consent for potential human review of the captured data? The lawsuit argues that this process is not transparent and that a “central function” of the glasses undermines the promise of user-controlled privacy.
The outcome of this case could have significant implications for the entire wearable technology industry, setting a new precedent for transparency and user consent in the age of ambient computing.