The system has the potential to translate brain activity into text, offering hope to people who are mentally conscious but unable to physically speak due to conditions such as strokes. By providing a means of communication, the semantic decoder could significantly improve their quality of life.
The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The research team developed a noninvasive system that uses functional magnetic resonance imaging (fMRI) to measure brain activity. The decoder requires extensive training, during which the participant listens to hours of podcasts while in the scanner.
This approach represents a significant leap forward compared to previous systems, as it can decode continuous language for extended periods and handle complex ideas. Although the system does not produce a word-for-word transcript, it captures the gist of what is being said or thought: about half the time, the decoded text closely matches the intended meaning of the original words.
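The gist-level decoding described above can be pictured as a scoring problem: rather than reading out words directly, a decoder can compare candidate phrases against the measured brain activity and keep the best match. The toy sketch below is not the authors' code; `predict_response` is a hypothetical stand-in for a trained encoding model that maps a phrase to a predicted activity vector, and the candidate phrases are invented for illustration.

```python
# Toy sketch of decoding-by-candidate-scoring (illustrative only, not the
# published method): pick the candidate phrase whose predicted brain
# response best matches the measured activity.

def predict_response(phrase):
    """Hypothetical stand-in for a trained encoding model.
    Here it just returns a deterministic toy feature vector
    (vowel counts) so the example is runnable."""
    return [phrase.count(c) for c in "aeiou"]

def decode(measured, candidates):
    """Return the candidate whose predicted response has the
    smallest squared error against the measured activity."""
    def err(phrase):
        pred = predict_response(phrase)
        return sum((p - m) ** 2 for p, m in zip(pred, measured))
    return min(candidates, key=err)

candidates = ["she opened the door", "he closed the window"]
measured = predict_response("she opened the door")  # simulate a scan
print(decode(measured, candidates))  # → she opened the door
```

Because the score rewards semantic-feature similarity rather than exact word identity, a decoder built this way naturally recovers the gist of a thought instead of a verbatim transcript.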
A key feature of the semantic decoder is that it works only with cooperative participants who have willingly taken part in the training process. Results for individuals who have not trained the decoder, or who actively resist it, are unintelligible.
The researchers are now exploring whether this work can be transferred to more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS). This would make the technology practical for use outside the laboratory and enable broader applications.