Why Meta Is Now Scanning Your Skeleton to “Protect” You

Meta introduces AI technology that analyzes bone structure and height in photos to identify underage users across Instagram, Facebook and WhatsApp

By Nikshep Myle
Image: Meta

Key Takeaways

  • Meta launches AI bone structure analysis to detect underage users on social platforms
  • Technology scans photos for height and developmental markers without facial recognition capabilities
  • Comparable medical bone-age AI reaches 88-96% accuracy, but casual, uncontrolled social media photos are a much harder target

Underage accounts slip through social media’s age barriers daily, exposing children to content they shouldn’t see. Meta’s new AI bone structure analysis aims to close those gaps by scanning uploaded photos for height and developmental markers. This isn’t your typical content moderation update—it’s proactive age detection that could reshape how platforms verify users. Your teenager’s selfies now undergo algorithmic scrutiny designed to protect younger kids from accessing adult-oriented spaces.

Beyond Facial Recognition

The technology deliberately avoids facial recognition, instead focusing on general visual cues like height and bone structure to estimate age ranges. “Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age; it does not identify the specific person,” Meta states in its official announcement. The company combines these physical markers with contextual clues from posts, such as school mentions, birthday references, and friend interactions, to build age profiles. When the AI flags a potential underage account, the user faces deactivation until they provide verification proving they’re 13 or older. No proof means permanent deletion.
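Meta hasn't published how these signals are weighted or combined, so the sketch below is purely illustrative: a toy Python stand-in for the kind of multi-signal flagging the announcement describes. Every class, field name, and threshold here is an assumption invented for the example, not Meta's actual system.

```python
from dataclasses import dataclass

# Hypothetical signals only; Meta has not disclosed its real model or thresholds.
@dataclass
class AgeSignals:
    visual_age_estimate: float      # age inferred from height/build cues in photos
    mentions_school: bool           # e.g. an "8th grade" reference in a caption
    stated_birth_year: int | None   # birthday post, if one exists
    friend_median_age: float        # typical age of connected accounts

MIN_AGE = 13  # platform minimum age

def flag_for_verification(signals: AgeSignals, current_year: int = 2025) -> bool:
    """Return True if the account should be asked to verify its age.

    Toy scoring rule: each signal pointing below the minimum age adds weight,
    and crossing a threshold triggers a verification request (the user can
    still prove they are 13 or older) rather than an immediate deletion.
    """
    score = 0.0
    if signals.visual_age_estimate < MIN_AGE:
        score += 0.5
    if signals.mentions_school:
        score += 0.2
    if signals.stated_birth_year is not None and \
            current_year - signals.stated_birth_year < MIN_AGE:
        score += 0.5
    if signals.friend_median_age < MIN_AGE:
        score += 0.2
    return score >= 0.6

# Example: a visually young-looking account that also mentions school.
account = AgeSignals(visual_age_estimate=11.5, mentions_school=True,
                     stated_birth_year=None, friend_median_age=12.0)
print(flag_for_verification(account))  # True -> prompt the user for age verification
```

The point of the sketch is the workflow the announcement implies: no single signal bans an account outright; flagged users get a chance to verify before any deletion happens.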

Rolling Out Across Platforms

Meta’s deployment begins with Instagram teen accounts in Brazil and 27 EU countries, where 13-to-15-year-olds get shifted into heavily restricted modes with parental controls. Facebook implementation launches in the US first, then spreads to EU and UK markets. WhatsApp adds parent-managed accounts for under-13 users. The staggered rollout reflects regulatory pressures rather than technical limitations—different countries demand different compliance approaches.

Privacy Concerns Meet Safety Goals

Privacy advocates worry this represents another step toward comprehensive image scanning, even though Meta insists the analysis focuses on developmental markers rather than personal identification. False positives could lock out legitimate users who look younger than their age—think baby-faced college students or shorter adults. Yet parents exhausted by platform safety failures might welcome any tool that keeps their 10-year-old from stumbling onto inappropriate content through fake accounts.

The real test comes at scale. Medical AI achieves 88-96% accuracy in bone age analysis, but social media photos aren’t controlled medical images. Your blurry bathroom mirror selfie presents different challenges than clinical radiographs. As other platforms face similar regulatory pressure, companies like TikTok and Snapchat will likely enhance their existing age detection systems. The question isn’t whether AI age verification spreads industry-wide—it’s whether we’re comfortable with algorithms analyzing our physical development every time we post.

