EU Bans AI Systems Deemed ‘Unacceptable Risk’ in Landmark Regulation

EU implements first phase of AI Act, banning systems deemed to pose unacceptable risks to fundamental rights and safety, with penalties up to €35 million.

By Al Landes


Image credit: Wikimedia

Key Takeaways

  • First major global regulation banning specific AI applications takes effect across European Union
  • Companies must immediately cease using prohibited AI systems or face significant penalties
  • Additional regulations for high-risk AI systems to follow in coming months

The European Union began enforcing its first major artificial intelligence restrictions today, prohibiting AI systems that pose “unacceptable risks” to fundamental rights and safety, marking a significant milestone in global AI regulation.

Why it matters: The ban fundamentally changes how companies can deploy AI in Europe by establishing clear boundaries around technologies like emotion recognition in workplaces and social scoring systems, with violations carrying fines of up to €35 million or 7% of global annual revenue, whichever is higher.

Technical Details: The restrictions target specific AI applications considered too dangerous for deployment:

  • Subliminal manipulation systems
  • Social scoring and classification tools
  • Workplace emotion recognition software

Industry Impact: Companies must immediately cease using prohibited AI systems or face severe penalties. The ban affects various sectors:

  • Employment screening tools
  • Surveillance systems
  • Educational monitoring

“This is a watershed moment, as it will tell us a lot about the phased implementation of the EU AI Act,” said Nitish Mittal, a partner at Everest Group.

Looking Forward: While today’s ban targets the most dangerous AI applications, further rules for high-risk AI systems will follow in August 2025, requiring companies to implement comprehensive compliance programs.

