Image: Bomb-disposal robot
A Northrop Grumman Andros bomb-disposal robot is used in a training exercise in Uruguay. A more advanced Andros robot is thought to have played a role in ending the Dallas shooting standoff. (Credit: U.S. Navy file)

Thursday night’s horrific Dallas shooting ordeal may well mark the first time police ended a standoff with a suspect by sending in a killer robot, but it almost certainly won’t be the last time.

Dallas Police Chief David Brown said that after hours of negotiations stalled, the robot was jury-rigged to carry an explosive into the parking garage at El Centro College where the suspect was holed up.

“We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was,” Brown told reporters today. “Other options would have exposed our officers to grave danger. The suspect is deceased as a result of detonating the bomb.”

The circumstances are still unclear. For example, did the suspect see this coming? How much control did the authorities exert over the robot? How close did it get? What is clear is that this isn’t a routine strategy for domestic police. Peter W. Singer, a strategist and senior fellow at the New America Foundation and the author of “Wired for War,” said in a series of tweets that the tactic appears to be unprecedented.

Bomb-disposal robots like the one used in Dallas are designed to neutralize bombs and improvised explosive devices (IEDs), of course, not set them off. But for better or worse, the trends point toward using robots for offensive as well as defensive purposes. Singer pointed to the case of an Army unit in Iraq that rigged MARCbot scouting robots with remotely detonated Claymore mines to kill hiding insurgents.

“Each insurgent killed in this fashion has meant $5,000 worth of blown-up parts, but so far the Army hasn’t billed the soldiers,” Singer wrote in his 2009 essay on the subject.

In 2014, Albuquerque police said they used a bomb-squad robot to “deploy chemical munitions” in a motel room where an armed suspect had been barricaded. The police said the maneuver led to the suspect’s surrender.

Death by robot is already part of U.S. military operations overseas, thanks to missile-equipped drones such as the Predator and Reaper. Those drone operations have played an essential role in the fight against terrorists, but they’ve also stirred up protests in locales ranging from Pakistan to Nevada. Even in Seattle, a plan for police to use surveillance drones caused a public outcry.

Weaponized robots may well become part of the arsenal for police as well as the military in the years ahead, but that’s sure to add a new dimension to the already divisive debate over policing. And when you add artificial intelligence to the mix, it’s exactly the kind of development that British physicist Stephen Hawking has warned about.

Update for 1:27 p.m. PT July 8: The robot maneuver in Dallas came in for strong criticism from Marjorie Cohn, professor emerita at the Thomas Jefferson School of Law in San Diego.

“The fact that the police have a weapon like this, and other weapons like drones and tanks, is an example of the militarization of the police and law enforcement — and goes in the wrong direction,” she told Common Dreams, a progressive news site. “We should see the police using humane techniques, interacting on a more humane level with the community, and although certainly the police officers did not deserve to die, this is an indication of something much deeper in the society, and that’s the racism that permeates the police departments across the country.”

Meanwhile, Matt Blaze, director of the Distributed Systems Lab at the University of Pennsylvania, voiced concern over the prospect of a killer robot being hacked or disrupted.

Hat tip to Popular Science and NBC News.
