As if the mere phrase “killer robots” weren’t scary enough, AI researchers and policy advocates have put together a video that combines present-tense AI and drone technologies with future-tense nightmares.
The disturbing seven-minute film is being released to coincide with a pitch being made Monday in Geneva, during talks on the U.N. Convention on Certain Conventional Weapons, or CCW.
Diplomats will be discussing the prospects for a global ban on lethal autonomous weapons, and an advocacy group known as the Campaign to Stop Killer Robots is pressing for quick action. The campaign’s video is meant to show how quickly the threat could move from TED talks to mass killings.
Most depictions of killer robots show Terminator-type androids and giant battle machines treading heavily over a “War of the Worlds” landscape. The video illustrates how palm-sized drones packed with just a few grams of explosives are more likely to blaze the trail for attacks guided by artificial intelligence.
The narrative is intercut with fake-news dramatizations that make it sound as if the debate over autonomous weapons has been ripped from the headlines. And that’s just the point, Berkeley AI researcher Stuart Russell says at the end of the video.
Russell acknowledges that AI isn’t all bad.
“Its potential to benefit humanity is enormous, even in defense,” he says. “But allowing machines to choose to kill humans will be devastating to our security and freedom. Thousands of my fellow researchers agree. We have an opportunity to prevent the future you just saw, but the window to act is closing fast.”
This week’s campaign is supported by the California-based Future of Life Institute, a nonprofit organization focusing on the risks posed by AI. The institute has MIT physicist Max Tegmark as its president and has drawn backing from tech superstar Elon Musk.
In January, the institute organized a conference that issued the 23 Asilomar AI Principles for the beneficial use of artificial intelligence, including a call to avoid an arms race in lethal autonomous weapons. The likeliest avenue to such weapons would involve automating drone attacks that are currently remote-controlled.
Previously: Elon Musk, Stephen Hawking and Steve Wozniak sign letter calling for ban on autonomous weapons
Not everyone is convinced that AI-enabled robots would be inherently more dangerous than remote-controlled robots.
“If I ever find myself in a war zone, and there’s a drone in front of me, deciding whether or not to shoot me, I couldn’t care less whether the decision is being made by a human or a machine,” University of Washington computer scientist Pedro Domingos said last year during a White House workshop on AI. “I want the correct decision to be made, so that I live. I’d rather have a machine making that decision.”
And Indian ambassador Amandeep Gill, who is chairing this week’s U.N. meeting in Geneva, said the world isn’t likely to move quickly toward a treaty or a total ban on autonomous weapons.
“It would be very easy to just legislate a ban, but I think … rushing ahead in a very complex subject is not wise,” Agence France-Presse quoted him as saying. “We are just at the starting line.”
The video released today shows where the finish line could fall.