Scientists Just Developed AI That Can Read Your Mind (Without Implants)

Berkeley and NTT researchers use fMRI and AI to decode brain activity into text with 50% accuracy

By Al Landes

Image credit: Wikimedia

Key Takeaways

  • Scientists decode brain activity into readable English using fMRI and AI
  • System achieves 50% accuracy identifying specific video content from 100 options
  • Technology could help aphasia and ALS patients communicate through thoughts

Your thoughts just became slightly less private. Researchers at UC Berkeley and Japan’s NTT Communication Science Laboratories have developed AI that translates brain activity into readable English—no surgical implants required. Before you start panicking about Minority Report scenarios, though, the reality involves expensive MRI machines and hours of calibration that make unauthorized thought-snooping about as practical as using a Tesla Cybertruck for grocery runs.

The Neural Translation Breakthrough

The technology combines fMRI brain imaging with deep learning models that identify semantic patterns in visual and associative brain regions—areas that store meaning rather than process language. When someone watched a video of a person jumping off a waterfall, the system initially interpreted the neural activity as “spring flow,” then refined it to “a person jumps over a deep water fall on a mountain ridge.”
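The pipeline described above can be sketched in miniature: learn a linear map from voxel responses to caption embeddings, then identify a mental image by its nearest candidate embedding. Everything below, from the ridge-regression mapping to the array sizes and simulated data, is an illustrative assumption, not the researchers' actual method or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; real fMRI data has tens of thousands of voxels.
n_train, n_voxels, n_dims = 500, 50, 32
true_W = rng.normal(size=(n_voxels, n_dims))      # hidden voxel-to-semantics map

X_train = rng.normal(size=(n_train, n_voxels))    # simulated voxel responses
Y_train = X_train @ true_W + rng.normal(scale=2.0, size=(n_train, n_dims))

# Ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 10.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_voxels),
                    X_train.T @ Y_train)

def decode(voxels, candidate_embeddings):
    """Pick the candidate caption embedding closest (cosine) to the prediction."""
    pred = voxels @ W
    sims = candidate_embeddings @ pred / (
        np.linalg.norm(candidate_embeddings, axis=1) * np.linalg.norm(pred))
    return int(np.argmax(sims))
```

The key design point the sketch captures is that the system never generates words from scratch here; it ranks candidate meanings by similarity, which is why outputs land "semantically close" rather than word-perfect.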

Not word-perfect, but capturing the semantic essence with surprising accuracy. The system works on both what you’re actively watching and what you’re remembering or imagining, functioning as an interpretive bridge between mental representations and text.

Performance Reality Check

Testing shows the decoder correctly identifies which of 100 candidate videos a person watched about half the time, impressive for mind-reading, less so for practical communication. A separate system from the University of Texas achieved similar results: when a participant heard “I don’t have my driver’s license yet,” the decoder rendered their brain activity as “she has not even started to learn to drive yet.”

Semantically equivalent, but your private thoughts aren’t getting livestreamed to Twitter anytime soon. The focus remains on capturing meaning rather than exact wording.
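For context, a 50% hit rate against 100 candidates is fifty times the chance baseline. A quick simulation (purely illustrative) confirms random guessing lands near 1%:

```python
import random

random.seed(1)
trials = 100_000
# Random guessing: pick one of 100 candidates; success means hitting the target.
hits = sum(random.randrange(100) == 0 for _ in range(trials))
print(f"chance baseline: {hits / trials:.3f}")  # close to 0.010
```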

Medical Game-Changer Potential

The real breakthrough lies in helping people who’ve lost the ability to speak but retain cognitive function. Individuals with severe aphasia, locked-in syndrome, or advanced ALS could potentially communicate by thinking, with the system translating neural patterns into text.

On the prospect of unauthorized thought-reading, UC Berkeley’s Alex Huth is blunt about current limitations: “Nobody has shown you can do that, yet.” The medical applications, however, remain genuinely revolutionary.

Your cognitive privacy remains intact for now. The technology requires cooperative participation, specialized MRI equipment, and extensive individual calibration that prevents casual deployment. Still, this represents a genuine step toward the sci-fi future where thoughts become text—just with more lab coats and fewer dystopian implications than Hollywood predicted.
