Mind-reading device can tell the story you're imagining

Publicly released:
International

International researchers have developed a non-invasive language decoder – essentially a mind-reading device – that can reconstruct perceived or imagined speech from functional MRI data. To train the model, the team recorded fMRI data from three participants as they listened to 16 hours of stories, then tested the decoder on the participants’ brain responses as they listened to new stories that weren’t used during training. While it wasn’t perfect, the decoder could capture the meanings of the new stories, and even generated some exact words and phrases from them. The decoder was also able to predict the meaning of a participant’s imagined story, or the contents of a silent movie the participant watched. Additionally, when a participant actively listened to one story while ignoring another, the decoder could identify the meaning of the story being attended to. While the study is small and heavily reliant on participant cooperation, the team says that policies may be needed to protect mental privacy as the technology develops.

Media release

From: Springer Nature

Neuroscience: Language decoder can reconstruct meaning from brain scans *PRESS BRIEFING* *VIDEOS* 


A non-invasive language decoder that can reconstruct the meaning of perceived or imagined speech from functional MRI (fMRI) data is described in a paper published in Nature Neuroscience.

Previous speech decoders have been applied to neural activity recorded after invasive neurosurgery, which limits their use. Other decoders based on non-invasive recordings of brain activity have been limited to decoding single words or short phrases, and it was unclear whether such decoders could work with continuous, natural language.

Alexander Huth and colleagues developed a decoder that reconstructs continuous language from brain patterns recorded with fMRI. The authors collected fMRI data from three participants as they listened to 16 hours of narrative stories, and used these recordings to train the model to map between brain activity and semantic features capturing the meanings of phrases. The decoder was then tested on the participants’ brain responses as they listened to new stories that were not part of the training dataset. From this brain activity, the decoder could generate word sequences that captured the meanings of the new stories, and it also reproduced some exact words and phrases from them. The authors found that the decoder could infer continuous language from activity in most brain regions and networks known to process language.
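To make the approach concrete, the sketch below is a hypothetical toy example in Python, not the authors' code. It fits a ridge-regression "encoding model" on synthetic data to predict brain activity from semantic features, then ranks candidate phrases by how well their predicted activity matches a new recording; the semantic_features() helper is a made-up stand-in for the language-model embeddings used in the study.

```python
# Hypothetical sketch of the general approach (not the authors' code): fit an
# encoding model that predicts fMRI responses from semantic features of phrases,
# then score candidate phrases by how well their predicted brain activity matches
# the recorded activity. All data here are synthetic.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

N_TRS = 400        # fMRI time points in the (synthetic) training stories
N_VOXELS = 200     # voxels (far fewer than a real scan)
N_FEATURES = 50    # dimensionality of the semantic feature vectors

def semantic_features(phrase: str) -> np.ndarray:
    """Stand-in for a semantic embedding: a fixed pseudo-random vector per phrase.
    (Real embeddings place similar meanings near each other; this toy does not.)"""
    seed = abs(hash(phrase)) % (2**32)
    return np.random.default_rng(seed).standard_normal(N_FEATURES)

# Training: learn voxel-wise weights mapping semantic features -> brain activity.
train_features = rng.standard_normal((N_TRS, N_FEATURES))
true_weights = rng.standard_normal((N_FEATURES, N_VOXELS))   # unknown in reality
train_bold = train_features @ true_weights + rng.standard_normal((N_TRS, N_VOXELS))

encoding_model = Ridge(alpha=10.0).fit(train_features, train_bold)

# Decoding: given new brain activity, rank candidate phrases by the correlation
# between the activity they *predict* and the activity that was *observed*.
def score(candidate: str, observed_bold: np.ndarray) -> float:
    predicted = encoding_model.predict(semantic_features(candidate)[None, :])[0]
    return float(np.corrcoef(predicted, observed_bold)[0, 1])

heard = "the dog ran across the field"
observed = semantic_features(heard) @ true_weights + 0.5 * rng.standard_normal(N_VOXELS)

for candidate in [heard, "she signed the contract at noon", "rain fell on the old roof"]:
    print(f"{score(candidate, observed):+.2f}  {candidate}")
```

In the published study, a language model proposes candidate word sequences and the encoding model scores them against the recorded activity, which is how continuous stories, rather than single phrases, are reconstructed.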

The authors also found that the decoder, which was trained on perceived speech, was able to predict the meaning of a participant’s imagined story, or the contents of a silent movie the participant watched, from fMRI data. When a participant actively listened to one story while ignoring another story played at the same time, the decoder could identify the meaning of the story being attended to.

Huth and co-authors conducted a privacy analysis for the decoder and found that when it was trained on one participant’s fMRI data it did not perform well at predicting the semantic contents from another participant’s data. The authors conclude that participant cooperation is crucial for the training and application of these non-invasive decoders. They note that depending on the future development of these technologies, policies to protect mental privacy may be needed.



Journal/conference: Nature Neuroscience
Research: Paper
Organisation/s: The University of Texas, USA
Funder: This work was supported by the National Institute on Deafness and Other Communication Disorders under award number 1R01DC020088-001, the Whitehall Foundation, the Alfred P. Sloan Foundation and the Burroughs Wellcome Fund.