Smart glasses promised hands-free convenience, not voyeurism training for overseas workers. Yet that’s exactly what happened when Meta contractors in Kenya reported viewing Ray-Ban Meta footage of users having sex, changing clothes, and using toilets—allegedly without users knowing despite the glasses’ recording light.
Corporate Cleanup or Retaliation?
Meta terminated its contract with Sama after workers spoke out about processing intimate footage.
In February 2026, Sama employees—tasked with annotating Ray-Ban Meta videos and images to improve AI—began speaking to the Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, and to Kenyan journalist Naipanoi Lepapa, about the disturbing content crossing their screens. An anonymous worker told reporters they were expected to process such footage regardless of what it contained.
Less than two months later, Meta ended its contract with Sama, affecting 1,108 workers. Meta attributes the termination to Sama "not meeting standards," and points to its user-consent protocols and filtering measures, such as face blurring. The company stated: "We take them seriously. Photos and videos are private… with clear user consent."
Sama maintains it "consistently met all standards" and stands behind the integrity of its work, adding that it received no prior notice of any performance issues. The timing raises an uncomfortable question: was this performance management, or retaliation for speaking out?
Workers report seeing footage that suggests users weren’t aware their most private moments were being recorded for AI training—despite Meta’s claims about consent and filtering.
Courts and Regulators Circle Meta
Multiple investigations target Meta’s data practices and worker treatment.
March 2026 brought a class-action lawsuit in the U.S. District Court for the Northern District of California against both Meta and EssilorLuxottica, Ray-Ban's parent company. Plaintiffs seek damages for consumer-protection violations and an injunction against the companies' current practices.
Regulators aren’t waiting. The UK’s Information Commissioner’s Office called the reports “concerning” and plans to question Meta about transparency. Kenya’s Data Protection Commissioner launched its own investigation into AI training data privacy.
This isn't Sama's first scandal. The company's contract with OpenAI ended in 2022 after workers suffered trauma from classifying child sexual abuse material. The pattern suggests broader issues with how tech giants outsource their darkest AI training tasks to vulnerable global workforces.
Ray-Ban Meta glasses may capture memories effortlessly, but they're also feeding an algorithm trained on humanity's most private moments. The real question isn't whether the technology works—it's whether you can trust the companies building it.