Solving paralysis with a brain-computer interface raises ethical questions. (Photo courtesy of Wyss Center)

Losing control of your own mind seems like something out of a dystopian science-fiction novel, but some researchers aren’t taking any chances.

They’re worried about the ethical implications of advances in mind-reading. And no, they don’t mean the telepathic ability that Professor X has in the X-Men comics and movies, but rather the capacity of a brain-machine interface, or BMI, to classify thoughts, emotions and intentions.

In this context, mind-reading means “decoding the activity of the brain to determine what the person is thinking or planning,” University of Washington neuroscientist Eberhard Fetz told GeekWire.

Fetz is one of the contributors to a report published today by the journal Science, titled “Help, Hope and Hype: Ethical Dimensions of Neuroprosthetics.” He said that while the technology to decode a person’s mental state isn’t very advanced yet, it should definitely be on our ethical radar.

A BMI is a system that links a device, such as a robotic arm, to an implant in the brain. It can give paralyzed people or amputees the ability to control their limbs or prosthetics using signals transmitted from their brains.

Fetz and the report’s other authors say we should regard advancements in machine learning and artificial intelligence with the same measure of caution we use when we consider accountability for self-driving cars and privacy for smartphones.

Fetz recalled the time security researchers proved they could hack into a Jeep Cherokee over the internet and disable it as it drove on the freeway. He said that in the world of prosthetics, a hacker could conceivably take over someone’s arm.

“The hack could override the signals,” he said. It could even override a veto, and that’s the danger. Heading off that scenario would mean making sure the system can’t be influenced from the outside.

Study co-author John Donoghue, a director of the Wyss Center for Bio and Neuroengineering in Geneva, said these are just a few things we would have to think about if these mechanisms became the norm.

“We must carefully consider the consequences of living alongside semi-intelligent, brain-controlled machines, and we should be ready with mechanisms to ensure their safe and ethical use,” he said in a news release.

Donoghue said that as technology advances, we need to be ready to think about how our current laws would apply. “Our aim is to ensure that appropriate legislation keeps pace with this rapidly progressing field,” he said.

Researchers urge medical professionals to tell their patients of the ethical risks of having a semiautonomous system in their brain. (Photo courtesy of Wyss Center)

For example, the team asks who would be responsible for an accident: The human, the semi-autonomous robot in his or her brain, or the manufacturer? It can be hard to tell, because something as simple as moving an arm happens almost unconsciously.

“We propose that any semi-autonomous system should include a form of veto control,” the report’s authors say. “This could be a useful adjunct to address some current weaknesses of direct brain-machine interaction.”

This way, if a person with a BMI does cause an accident – for example, dropping a baby from their robotic arms – the liability would depend on whether or not the human exercised an emergency stop option. If not, the human could be considered as culpable as someone who caused a car crash by not pressing the brakes.
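The “veto control” the authors propose can be pictured as a gate that sits between the decoder and the actuator: every decoded intent passes through it, and the user (or a safety monitor) can halt the device at any time. The sketch below is purely illustrative; the class names, the confidence threshold, and the status strings are all assumptions, not anything specified in the report.

```python
# Illustrative sketch of a veto gate for a semi-autonomous prosthetic.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass


@dataclass
class DecodedCommand:
    action: str        # e.g. "close_grip", decoded from neural activity
    confidence: float  # decoder's confidence in the intent, 0.0-1.0


class VetoGate:
    """Holds an emergency-stop flag the user or a safety monitor can set."""

    def __init__(self) -> None:
        self.vetoed = False

    def veto(self) -> None:
        self.vetoed = True

    def clear(self) -> None:
        self.vetoed = False


def execute(cmd: DecodedCommand, gate: VetoGate,
            min_confidence: float = 0.8) -> str:
    # A raised veto halts the actuator regardless of what was decoded;
    # low-confidence decodes are also refused rather than acted on.
    if gate.vetoed:
        return "halted: user veto"
    if cmd.confidence < min_confidence:
        return "halted: low decoder confidence"
    return f"executing {cmd.action}"
```

Under this design, the liability question in the example above becomes concrete: the logs would show whether the veto was available and whether the user raised it before the accident.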

The research team is also worried about “brainjacking” – that is, the outside manipulation of a brain implant. The BMI devices could store valuable data about the host’s neural activity, and researchers are worried that since the devices usually use WiFi or Bluetooth to communicate, the connection may not be secure.

While an everyday person’s brain data may not sound hack-worthy, the report points out that paralyzed politicians or people using other devices may be at risk. “The potential for hacking biomedical devices with possibly fatal consequences has been demonstrated for insulin pumps and implantable cardiac defibrillators,” the report noted.
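One standard defense against the kind of wireless tampering the report warns about is to require every command to carry a message authentication code computed with a secret key shared between the implant and an authorized programmer, plus a counter so intercepted messages can’t be replayed. The sketch below shows the idea using Python’s standard `hmac` module; the key, wire format, and function names are assumptions for illustration, not a description of any real device’s protocol.

```python
# Illustrative sketch: authenticating commands to an implanted device
# with HMAC-SHA256 and a monotonic counter. The shared key and message
# format are hypothetical placeholders.

import hashlib
import hmac

SHARED_KEY = b"provisioned-at-implant-time"  # placeholder secret


def sign_command(command: bytes, counter: int) -> bytes:
    # The counter is included in the MAC so each message is unique,
    # which blocks replay of previously captured commands.
    msg = counter.to_bytes(8, "big") + command
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()


def verify_command(command: bytes, counter: int, tag: bytes,
                   last_counter: int) -> bool:
    if counter <= last_counter:  # stale or replayed message
        return False
    msg = counter.to_bytes(8, "big") + command
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(expected, tag)
```

A forged or replayed command fails verification, so even an attacker who can reach the device over Bluetooth or WiFi cannot inject movements without the key. Real devices would need key provisioning and revocation on top of this.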

Fetz noted that IBM’s Watson provides an example of artificial intelligence outsmarting people, but the world of medicine is becoming smarter too.

“For prosthetics, you want to have as intelligent of a device as possible so it isn’t exclusively dependent on the brain for controlling every aspect of the prosthetic device,” he said. “You want to have some kind of intelligent control for higher-order signals.”

Elon Musk, the billionaire CEO of SpaceX and Tesla, is also taking an interest in brain technology. He recently backed a new venture called Neuralink, which plans to develop implantable brain-computer interfaces to help those who are blind, deaf, or have severe brain injuries.

In addition to addressing accountability and security, the report’s authors encourage health literacy: even though brain-machine interfaces are currently used almost exclusively to treat paralysis, they could one day be used to improve attention and concentration.

Before agreeing to an implant, people should be aware of all the risks, including the ethical ones.

“Every citizen should be provided the basic understanding necessary for an informed choice,” the report says.

Fetz says that ultimately, doctors are the ones best informed about the risks and benefits of brain-machine interfaces for their patients. That gives them the responsibility to stay up to date as new technology emerges.
