AI Enters the Operating Room — Are Patients Paying the Price?

FDA reports show AI-enhanced surgical tools logged over 100 malfunctions since 2021, with injuries including strokes and skull punctures


By Annemarije de Boer


Image: Wikimedia

Key Takeaways

  • TruDi Navigation System’s AI upgrade was followed by a 1,250% increase in FDA injury reports
  • FDA-authorized AI medical devices have doubled to 1,357 while malfunction reports span multiple manufacturers
  • FDA verification gaps leave patients exposed to unproven AI surgical technology

AI promised safer surgeries and smarter medical devices, but FDA incident reports tell a different story. Since 2021, AI-enhanced surgical tools have logged a disturbing spike in malfunctions and injuries—turning operating rooms into testing grounds for rushed technology.

Navigation Systems Lead Patients Astray

The TruDi system’s AI upgrade coincided with a more than tenfold increase in FDA reports, along with serious surgical injuries.

The numbers paint a stark picture. Integra LifeSciences’ TruDi Navigation System—used to guide sinus surgeries—saw FDA reports jump from 8 pre-AI to over 100 after adding artificial intelligence. At least 10 injuries followed between late 2021 and November 2025, including:

  • cerebrospinal fluid leaks
  • skull punctures
  • strokes from alleged instrument mislocation

Two Texas lawsuits claim TruDi’s AI misled surgeons near carotid arteries, causing blood clots and strokes. The suits allege the AI was rushed to market with only an 80% accuracy goal, despite surgeon warnings about the risks.

Integra denies any causal link between its AI and the injuries, but the timing raises uncomfortable questions. You’re witnessing beta testing of life-critical software on actual patients.

Pattern Recognition Across Multiple Devices

Samsung’s ultrasound AI and Medtronic’s heart monitors join the growing list of problematic AI medical gadgets.

TruDi isn’t alone. Samsung’s Sonio Detect AI ultrasound reportedly misidentified fetal body parts in June 2025, though Samsung insists there’s no safety issue. Medtronic’s heart monitoring AI allegedly missed abnormal heartbeats in 16 cases, though the company disputes most reports and claims no patient harm occurred.

The broader context is unsettling: FDA-authorized AI medical devices doubled to 1,357 by late 2025. It echoes the “move fast and break things” mentality that gave us social media’s disasters, except now it’s happening in operating rooms.

The Verification Gap

FDA reports remain unverified and incomplete, creating a blind spot in AI medical device safety.

The FDA acknowledges these reports don’t prove causation and often lack crucial details. Companies test their AI for “hallucinations” and degradation, but the verification process feels like asking the fox to guard the henhouse.

You’re left wondering whether AI integration genuinely improves surgical outcomes or simply creates new categories of preventable errors. The central tension remains: AI’s surgical potential is real, but current safety protocols seem calibrated for a slower, more cautious era of medical innovation.
