Investigation Finds AI YouTube Videos Teaching Children Unsafe and Toxic Behaviors

Mass-produced channels upload 50 videos daily with dangerous lessons like traffic play and toxic food consumption

By Rex Freiberger

Image: YouTube

Key Takeaways


  • AI channels upload 50 videos daily teaching toddlers dangerous traffic play behaviors
  • AI-generated content makes up roughly 20% of YouTube, spreading unchecked hazardous misinformation to children
  • YouTube removes billion-view channels but harmful videos persist despite platform safety measures

When toddlers request “educational” videos during dinner prep, some now show children walking into busy streets with cars approaching—presented as normal playtime. These aren’t isolated incidents but mass-produced AI content flooding YouTube’s kids’ section with potentially lethal misinformation.

The Industrial Scale of Danger

AI channels upload 50 videos daily, teaching toxic food consumption and traffic play as educational content.

One AI channel produced 10,000 videos in seven months, according to recent investigations. These videos masquerade as trusted educational content but depict hazardous behaviors like kids eating whole grapes (choking hazards), honey (infant botulism risk), and raw elderberries (toxic when uncooked).

Others show distorted learning content—a U.S. states sing-along featuring “Ribio Island” and “Conmecticut,” or traffic rules claiming “green means right” instead of “go.”

The Kapwing study reveals AI-generated content comprises roughly 20% of YouTube. While Sesame Street tests every educational message, these automated channels mass-produce unchecked content that exploits how children learn through repetition and visual cues. Futurism highlights a growing number of such channels aimed at children.

Former PBS Kids executives call the phenomenon “downright dangerous” as millions of views spread misinformation.

“The more content I find, the more horrified I get… downright dangerous,” warns Carla Engelbrecht, former executive at Sesame Street and PBS Kids. Dr. Dana Suskind from the University of Chicago describes it as “toddler AI misinformation at an industrial scale. It’s very risky for the developing brain.”

These experts understand critical development windows—when repeated exposure to incorrect information can literally rewire young brains. The scale mirrors other content trust breakdowns parents have navigated, but with stakes involving physical safety rather than just screen time concerns.

Platform Responses Fall Short

YouTube deletes billion-view channels and surveys users about “AI slop,” but harmful videos continue slipping through.

YouTube has removed some channels and implemented stricter kids’ content principles. The platform now surveys users about “AI slop” and is previewing features to curb clickbait. Yet harmful videos persist, prompting urgent calls from child safety experts for better moderation systems.

The broader AI-child safety landscape grows more complex daily. UN bodies warn about deepfakes and grooming risks, while countries like Australia ban social media for under-16s entirely, citing harm data that keeps expanding.

Parents can protect kids immediately:

  • Preview videos before allowing independent viewing
  • Stick to verified educational channels
  • Report suspicious content showing dangerous behaviors


At Gadget Review, our guides, reviews, and news are driven by thorough human expertise and use our Trust Rating system and the True Score. AI assists in refining our editorial process, ensuring that every article is engaging, clear and succinct.