Eddie Chang (right), a neuroscientist at the University of California at San Francisco, discusses findings with postdoctoral researcher David Moses. (UCSF Photo / Noah Berger)

Neuroscientists have demonstrated a computerized system that can determine in real time what’s being said, based on brain activity rather than actual speech.

The technology is being supported in part by Facebook Reality Labs, which is aiming to create a non-invasive, wearable brain-to-text translator. But in the nearer term, the research is more likely to help locked-in patients communicate through thought.

“They can imagine speaking, and then these electrodes could maybe pick this up,” said Christof Koch, chief scientist and president of the Seattle-based Allen Institute for Brain Science, who was not involved in the study.

The latest experiments, reported today in the open-access journal Nature Communications, were conducted by a team at the University of California at San Francisco on three epilepsy patients who volunteered to take part. The work built on earlier experiments that decoded brain patterns into speech, but not in real time.

“Real-time processing of brain activity has been used to decode simple speech sounds, but this is the first time this approach has been used to identify spoken words and phrases,” UCSF postdoctoral researcher David Moses, the study’s lead author, said in a news release.

The technique, known as high-density electrocorticography or ECoG, required invasive brain surgery to implant electrodes onto the surface of the cortex. The primary goal of the operation was to identify the sources of the patients’ epileptic seizures, but Moses and his colleagues also monitored brain activity that was associated with speaking and listening to speech.

Electrodes like this one were temporarily placed on the surface of patients’ brains for a week or more to map the origins of their seizures in preparation for neurosurgery. (UCSF Photo / Noah Berger)

Researchers trained a computer model to link patterns of electrical activity to the pronunciation of spoken sounds. Once the model was trained, the team recorded highly structured conversations and had the computer try to determine what was being said, based only on the brain patterns. The conversations included such questions as “What’s your favorite musical instrument?” … “How is your room currently?” … and “From zero to 10, how comfortable are you?”
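To make that decoding step concrete, here is a minimal sketch of how a classifier could map windows of ECoG features to a small, fixed set of utterances. Everything in it is an illustrative assumption rather than the team’s actual model: the channel counts, feature shapes, synthetic data and logistic-regression classifier are stand-ins chosen only to show the shape of the problem.

```python
# Minimal sketch: classify utterances from synthetic ECoG-like features.
# Assumptions (not from the study): 128 electrode channels, 100 time bins of
# high-gamma power per utterance, a small fixed vocabulary, and a simple
# logistic-regression classifier in place of the paper's decoding pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_channels, n_bins = 128, 100
utterances = ["piano", "violin", "drums", "synthesizer", "fine", "five"]

# Fake dataset: 40 trials per utterance, each a flattened channels-by-time
# matrix of high-gamma power with a class-specific mean pattern plus noise.
X, y = [], []
for label, word in enumerate(utterances):
    class_mean = rng.normal(0, 1, size=n_channels * n_bins)
    trials = class_mean + rng.normal(0, 3, size=(40, n_channels * n_bins))
    X.append(trials)
    y.extend([label] * 40)
X = np.vstack(X)
y = np.array(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy: {clf.score(X_test, y_test):.2f}")
print(f"chance level: {1 / len(utterances):.2f}")
```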

The computer model achieved accuracy rates as high as 61 percent for what the patient was saying, and 76 percent for what the patient was hearing. That’s much higher than what would be expected by chance (7 and 20 percent, respectively).

These experiments were an improvement over the team’s previous work, in part because the computer model was fine-tuned to take account of the context in which questions were asked and answered. That helped the model distinguish between similar-sounding words, such as “fine” and “five.”
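As a toy illustration of how that context weighting can break a tie between similar-sounding answers, the sketch below combines a near-ambiguous decoder score with a prior based on which question was just asked. The probabilities and the simple Bayesian combination are assumptions made for illustration, not the study’s actual model or numbers.

```python
# Illustrative context weighting: combine a (hypothetical) neural likelihood
# for each candidate answer with a prior derived from the decoded question.
import numpy as np

candidates = ["fine", "five"]

# Hypothetical decoder output: the two words are nearly indistinguishable.
neural_likelihood = np.array([0.52, 0.48])

# Hypothetical priors over answers, given which question was just decoded.
priors = {
    "How is your room currently?": np.array([0.90, 0.10]),
    "From zero to 10, how comfortable are you?": np.array([0.10, 0.90]),
}

for question, prior in priors.items():
    posterior = neural_likelihood * prior
    posterior /= posterior.sum()
    best = candidates[int(np.argmax(posterior))]
    scores = dict(zip(candidates, posterior.round(2)))
    print(f"{question} -> {best}  (posterior: {scores})")
```

The same ambiguous neural evidence yields different answers depending on the question, which is the intuition behind the model’s improved handling of words like “fine” and “five.”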

Brain-computer interfaces, or BCIs, have been in the news lately, thanks to ambitious projects such as Elon Musk’s Neuralink venture and Facebook’s efforts to develop a brainwave-to-text translator that doesn’t require surgery.

But Koch said that the approach taken by the UCSF team goes in a different direction. “It’s cool, no question. It’s promising, no question,” he said. “But day-to-day, it doesn’t really help. That’s the problem with all these wonderful technologies: It’s slow going.”

The senior author of the UCSF study, Edward Chang, emphasized future applications for patients who can’t speak for themselves — for example, people who have the type of neurodegenerative disease that afflicted the late physicist Stephen Hawking.

“Currently, patients with speech loss due to paralysis are limited to spelling words out very slowly using residual eye movements or muscle twitches to control a computer interface,” Chang said in today’s news release. “But in many cases, information needed to produce fluent speech is still there in their brains. We just need the technology to allow them to express it.”

Chang and his colleagues have launched a research project known as BRAVO (BCI Restoration of Arm and Voice) to determine whether the types of implants used in the newly reported study can restore movement and speech to patients with paralysis caused by stroke, neurodegenerative disease or traumatic brain injury.

Could the technology lead to precise mind-reading tools for healthy people as well? Koch said it depends on how close researchers can get to tracking brain activity on a neuron-by-neuron basis.

“It’s like you’re over a stadium,” he explained. “You’re in one of those blimps, and you’re trying to pick up individual people talking.”

One option would be to lower a microphone from the airship on the end of a boom and put it right in front of a speaker’s face. That’s analogous to what future brain implants might be able to do. But if you can’t get the microphone down into the stands, probably the best you could do is pick up the roar of the crowd after a touchdown, or the swirl of sound that rises as the crowd does the wave.

“That’s the problem we’re facing, trying to decode the human brain,” Koch said.
