
Stanford Develops Brain-Computer Interface to Decode Inner Speech

Saralnama

Stanford researchers have developed a brain-computer interface (BCI) that decodes inner speech, the silent internal monologue, in patients with severe paralysis. Unlike earlier BCIs that relied on attempted speech and therefore on residual muscle movement, the system reads motor-cortex signals associated with purely silent speech, offering a less fatiguing way to communicate. Working with four participants implanted with microelectrode arrays, the team achieved up to 86% accuracy on a limited vocabulary and 74% on a vocabulary of 125,000 words.

To address privacy concerns, the researchers added safeguards: the decoder was trained to distinguish inner speech from attempted speech, and a mental password gates the device. Users imagine a phrase such as “Chitty chitty bang bang” to activate it, which the system recognizes with 98% accuracy. Decoding unstructured inner speech remains challenging, however, and currently produces mostly unintelligible output.

The study, published in Cell in August 2025, is a proof of concept; the authors attribute current limitations to the hardware and to the brain regions targeted. Ongoing projects aim to improve decoding speed and to explore applications for conditions such as aphasia. The work marks a significant step toward assistive communication technology for people unable to speak.[2][3][4] (Updated 23 Aug 2025, 19:12 IST)
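The mental-password safeguard amounts to a gate in front of the decoder: nothing is emitted until the imagined unlock phrase is recognized with high confidence. The Python sketch below illustrates that flow only; it is not the Stanford implementation, and every name in it (`Decoded`, `gate`, the 0.9 threshold) is hypothetical, with scripted frames standing in for a real neural decoder.

```python
from dataclasses import dataclass

# Hypothetical constants. The article reports ~98% recognition of the
# imagined unlock phrase, so a high confidence threshold is plausible.
PASSWORD = "chitty chitty bang bang"
UNLOCK_THRESHOLD = 0.9


@dataclass
class Decoded:
    text: str          # decoder's best-guess transcription of one frame
    confidence: float  # decoder's confidence in [0, 1]


def gate(decoded_stream):
    """Yield decoded text only after the mental password is recognized.

    `decoded_stream` is an iterable of Decoded frames from an upstream
    neural decoder (not implemented here).
    """
    unlocked = False
    for frame in decoded_stream:
        if not unlocked:
            # Stay silent until the imagined password is detected with
            # high confidence; nothing decoded before that is emitted.
            if frame.text == PASSWORD and frame.confidence >= UNLOCK_THRESHOLD:
                unlocked = True
        else:
            yield frame.text


# Toy demonstration with scripted frames standing in for neural decoding.
if __name__ == "__main__":
    frames = [
        Decoded("private musing", 0.80),           # ignored: device locked
        Decoded(PASSWORD, 0.97),                   # unlocks the interface
        Decoded("i would like some water", 0.86),  # now emitted
    ]
    for text in gate(frames):
        print(text)
```

The design point the sketch captures is that the gate sits before any output stage, so stray inner speech is discarded rather than merely hidden, which is the privacy property the researchers describe.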