Google’s AI Called a Fiddler a Sex Offender. Now He’s Suing for $1.5 Million.

Canadian fiddler Ashley MacIsaac sues Google after AI wrongly labeled him a sex offender, costing him concerts and reputation

By Alex Barrientos
Image: Tabercil – Wikimedia Commons

Key Takeaways

  • Google’s AI falsely labeled Canadian fiddler Ashley MacIsaac a sex offender
  • MacIsaac sues Google for $1.5 million after concert cancellation and reputation damage
  • Lawsuit could establish AI companies’ liability for defamatory search results

You trust Google’s AI Overview the same way you trust autocorrect—until it spectacularly fails you. For Canadian fiddler Ashley MacIsaac, that trust cost him a concert, his reputation, and potentially his career. Now he’s fighting back with a $1.5 million lawsuit that could determine whether AI companies bear responsibility when their algorithms destroy lives.

When AI Confidence Meets Human Catastrophe

Google’s search summary confused a three-time Juno winner with a convicted criminal sharing his last name.

MacIsaac discovered the error when Sipekne’katik First Nation cancelled his December concert after public complaints. Google’s AI Overview had confidently declared the Cape Breton musician a convicted sex offender, complete with fabricated details about sexual assault, internet luring, and lifetime registry placement.

The algorithm had apparently confused MacIsaac with a different man in Atlantic Canada who shared his surname—a mistake that spread with the authoritative tone AI users have learned to trust. The AI generated false claims about:

  • Convictions for sexual assault of a woman
  • Internet luring of a child
  • Assault causing bodily harm
  • Placement on Canada’s national sex offender registry

The First Nation later issued a public apology, acknowledging harm from “incorrect AI-assisted search results.” But the damage was done. MacIsaac reported “tangible fear” about performing safely, watching his livelihood crumble because an algorithm couldn’t tell two people apart.

The $1.5 Million Question About AI Accountability

MacIsaac’s lawsuit argues AI creators should face the same responsibility as human publishers.

Filed in February 2026 in Ontario Superior Court, the lawsuit seeks $500,000 each in general, aggravated, and punitive damages. MacIsaac’s legal team argues Google bears liability for “foreseeable republication” of defamatory content, pointing to defective AI design and the company’s indifferent response: no apology, no direct contact, just standard corporate messaging about improving AI through feedback.

This case could set a crucial precedent. Canadian defamation law requires no proof of intent, which could render Google’s “the algorithm did it” defense worthless. Google’s December 2025 response emphasized its investments in AI quality and its handling of policy violations, but MacIsaac’s lawyers argue that the tech giant’s AI expertise makes the negligence obvious.

The lawsuit lands as users increasingly rely on AI-generated summaries that present misinformation with the same confidence as facts. Your next Google search might confidently tell you something completely wrong—and until now, nobody’s been held accountable for the consequences.

