Brain experiment
Subjects viewed a random sequence of images of faces and houses and were asked to look for an inverted house like the one at bottom left. “That was a distractor,” Jeff Ojemann said. “We were interested in what the brain was doing at the other times.” (Credit: Kai Miller / Brian Donohue / UW)

University of Washington neuroscientists and their colleagues have developed a system that uses electrodes implanted in the human brain’s temporal lobe to decode brain signals at nearly the speed of perception.

“Clinically, you could think of our result as a proof of concept toward building a communication mechanism for patients who are paralyzed or have had a stroke and are completely locked-in,” Rajesh Rao, a UW professor who directs the Center for Sensorimotor Engineering, said in a news release.

The study was published Jan. 28 in PLOS Computational Biology.

Rao and his colleagues inserted the electrodes into the brains of epilepsy patients undergoing care at Seattle’s Harborview Medical Center. The patients’ seizures couldn’t be relieved by medication alone, so they were given the implants temporarily in an attempt to locate the seizures’ focal points.

“They were going to get the electrodes no matter what,” said Jeff Ojemann, a neurosurgeon at UW Medicine. “We were just giving them additional tasks to do during their hospital stay while they are otherwise just waiting around.”

In the experiment, the patients were shown pictures of houses and human faces, randomly interspersed among blank gray screens. Each picture was flashed for 400 milliseconds. Computer software analyzed the patterns of neural activity picked up by the electrodes, and that analysis led to an algorithm for distinguishing between the patterns for houses vs. faces.

Datastream of brain signals
The numbers 1-4 denote electrode placements in the temporal lobe, along with the neural responses for the two signal types being measured. (Credit: Kai Miller / Stanford / UW)

“Traditionally scientists have looked at single neurons,” Rao said. “Our study gives a more global picture, at the level of very large networks of neurons, of how a person who is awake and paying attention perceives a complex visual object.”

When the algorithm was left on its own to decide whether the patient was looking at a house, a face or a blank screen, it came up with the right answer 96 percent of the time, pinpointing the timing of the stimulus to within 20 milliseconds on average.
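The study's actual decoder combined event-related potentials and broadband spectral changes from the implanted electrodes, but the general idea of classifying a stimulus from a short window of recorded signal can be sketched with a much simpler stand-in. The example below is purely illustrative: it simulates two invented, class-specific 400-millisecond responses plus noise, builds per-class average templates from "training" trials, and labels new trials by which template they correlate with best. None of the waveforms, noise levels, or the nearest-template method come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 400 ms response sampled at 1 kHz for two stimulus classes.
# These waveforms are invented for illustration; the real study decoded
# event-related potentials and broadband power from ECoG recordings.
t = np.linspace(0, 0.4, 400)
templates = {
    "face": np.sin(2 * np.pi * 5 * t),            # hypothetical face response
    "house": np.sin(2 * np.pi * 5 * t + np.pi),   # opposite-phase house response
}

def make_trials(label, n=50, noise=0.8):
    """Return n noisy simulated trials of the given stimulus class."""
    return templates[label] + noise * rng.standard_normal((n, t.size))

# "Training": average trials of each class into a template (nearest-mean classifier).
train = {lbl: make_trials(lbl).mean(axis=0) for lbl in templates}

def classify(trial):
    # Assign the label whose training template correlates best with the trial.
    return max(train, key=lambda lbl: np.corrcoef(trial, train[lbl])[0, 1])

# Evaluate on fresh simulated trials the classifier has never seen.
tests = [(lbl, trial) for lbl in templates for trial in make_trials(lbl, n=20)]
accuracy = np.mean([classify(trial) == lbl for lbl, trial in tests])
print(f"accuracy: {accuracy:.2f}")
```

Even this toy version shows why the timing result matters: because each trial is only 400 milliseconds of signal, a decision is available almost as fast as the stimulus itself, which is the property the researchers highlight for real-time communication systems.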

A more complex network of brain electrodes could theoretically make it easier for patients with neurodegenerative disease, such as physicist Stephen Hawking, to communicate.

This is only the latest in a series of studies on this theme conducted at the university. In one experiment, Rao and UW’s Andrea Stocco created a system that allowed Rao to move Stocco’s finger merely by thinking about it. In a follow-up study, Stocco and his colleagues linked up two brains in such a way that one person could guess what was on the other person’s mind.

The lead author of the study in PLOS Computational Biology, titled “Spontaneous Decoding of the Timing and Content of Human Object Perception from Cortical Surface Recordings Reveals Complementary Information in the Event-Related Potential and Broadband Spectral Change,” is Kai Miller, a neurosurgery resident and physicist at Stanford University who obtained his M.D. and Ph.D. at UW. In addition to Miller, Rao and Ojemann, the authors include Dora Hermes and Gerwin Schalk.


