Meta’s Ray-Ban Smart Glasses Expose Your Private Moments & Data to Offshore Workers

Kenyan contractors review intimate bathroom and bedroom footage from millions of unsuspecting Ray-Ban users

By C. da Costa

Image: Meta

Key Takeaways

  • Meta routes Ray-Ban smart glasses footage to Kenyan contractors reviewing bathroom recordings
  • Workers view intimate moments including undressing and sexual activity without user knowledge
  • Users permanently lose control over private footage once it enters training pipelines

You probably thought your Ray-Ban smart glasses kept your data relatively private, processed by algorithms in some distant server farm. The reality is far more disturbing: human contractors in Kenya are reviewing footage of people in bathrooms, bedrooms, and other intimate settings—all captured by glasses worn by users who remain largely unaware their most private moments are being watched by strangers.

This surveillance pipeline affects millions of smart glasses owners who believed they understood the privacy trade-offs of AI-enabled devices. Swedish news investigations revealed that Meta routes user footage to Sama, a Kenyan data annotation company, where workers spend their days watching intimate recordings to train AI models.

The Scale of Exposure Is Staggering

Workers describe viewing deeply personal content that would horrify the people being recorded.

Contractors report seeing footage of individuals:

  • Using bathrooms
  • Changing clothes
  • Engaging in sexual activity

One worker described watching a man place his glasses on a bedside table, then witnessing his wife enter and undress. Others encounter credit card numbers, private medical information, and countless moments that would mortify their subjects if they knew strangers were watching.

The consent gap reveals itself in contractor observations: “I don’t think they know, because if they knew they wouldn’t be recording.” Meta buries disclosure language deep within AI terms of service, warning users not to share sensitive information—guidance that’s clearly being ignored by people who don’t realize their AI interactions potentially route footage to human reviewers overseas.

Your Data Becomes Permanently Beyond Your Control

Once footage enters Meta’s training pipeline, users lose practical control over how it’s used.

The working conditions compound these privacy violations. Contractors describe feeling pressured to process disturbing content without questioning its ethical implications. “You are not supposed to question it. If you start asking questions, you are gone,” one worker explained. The result is an environment where the very people witnessing legitimate privacy violations cannot raise concerns about them.

Technical safeguards prove inadequate. Former Meta employees indicate that anonymization protocols fail under certain lighting conditions, meaning faces remain identifiable despite supposed privacy protections. Data protection lawyer Kleanthi Sardeli summarizes the permanence problem: “Once the material has been fed into the models, the user in practice loses control over how it is used.”

The implications extend beyond current privacy violations. Internal Meta documents suggest the company views the current moment as opportune for launching facial recognition features on these same devices—capabilities previously avoided on ethical grounds. For the millions who purchased these glasses expecting reasonable privacy protections, the revelation suggests a fundamental misalignment between marketing promises and actual data handling practices.

At Gadget Review, our guides, reviews, and news are driven by thorough human expertise and use our Trust Rating system and the True Score. AI assists in refining our editorial process, ensuring that every article is engaging, clear and succinct.