He promised he’d be back — and 35 years after Arnold Schwarzenegger created what’s now a cliché for artificial intelligence gone wrong in the first “Terminator” movie, the cinematic nightmares about time-traveling killer robots have returned to the big screen.
“Terminator: Dark Fate” also marks the return of writer/producer James Cameron — who directed the first two movies in the franchise, but wasn’t involved in the three sequels that followed. Cameron skips over those films and reboots the saga with an alternate timeline for the robo-apocalypse.
Although monstrous machines have figured in movie plots since Fritz Lang’s “Metropolis” in 1927, Schwarzenegger’s performance in “The Terminator” set the stage for worries about out-of-control intelligent machines.
Billionaire techie Elon Musk is among the best-known doomsayers. “I keep sounding the alarm bell, but until people see robots going down the street, killing people, they don’t know how to react because it seems so ethereal,” Musk said in 2017.
On the other side of the debate, Oren Etzioni, the CEO of Seattle’s Allen Institute for Artificial Intelligence, or AI2, keeps telling people to calm down. “Even though AI is more and more being used, I want to reassure people that Skynet and Terminator are not around the corner for many, many reasons,” he told GeekWire in 2016.
Does the new “Terminator” movie update the saga with all the developments in AI, automation and robotics since 1984? How does “Dark Fate” stack up against the realities of our AI age? To get some informed perspectives, I invited two folks who work in the field to watch the movie with me — and share their thoughts afterward.
Carissa Schoenick is senior program manager and communications director for AI2, and works with the Project Aristo and ReViz teams. Ryan Calo is a law professor at the University of Washington and co-director of the UW’s Tech Policy Lab. I’ve edited the transcript of our post-movie chat for brevity, clarity — and to reduce spoilers — but if you don’t want to hear a single thing about the plot before you see the movie, stop right here, and come back after you’ve watched “Terminator: Dark Fate.”
GeekWire: Did the producers and writers and the rest of the team behind “Terminator” learn anything since the earlier films?
Carissa Schoenick: “Yeah … Since we’ve already taken for granted that these AIs seem to have motivations of their own, now they are trying to depict an AI that shows the flip side of that — that a Terminator could begin to understand human motivations. Like, why bother having a family? Or why bother taking care of someone? It was refreshing to see them trying to work in an angle of, since we’re in a future where robots can be evil, why not have robots that can also aspire to be good? But the idea of having robots with intrinsic motivation at all is something that’s pretty silly. AI is a tool or an expression of the humans that build it.”
Ryan Calo: “What I thought they did with this one is that they brought in a number of actually credible technologies. Somebody comes back from the future with some terrible motivation and these horrible capabilities, right? But it enters a world that is full of cameras and completely networked, with widespread availability of weapons.
“Like the previous films, this one seems to be commenting on the present-day technological capabilities that we’ve built that are dangerous, if you had somebody with the motivation to leverage them. Facial recognition, drones, cameras everywhere, our phones being tracking devices … these are all interesting contemporary issues. It doesn’t matter if it’s a robot from the future or an alien from beyond the stars. We have created a world where something that can take advantage of these affordances would be all the more perilous and deadly.”
GeekWire: The movie does touch upon other themes relating to AI and automation — for example, robots displacing humans on the factory floor, and the idea of augmented humans. That’s something that Elon Musk has talked about: If you can’t beat them, join them. If AI is going to take over, we should augment ourselves first so that we have the AI integrated into our brains.
Schoenick: “Why would anyone want to do that? We don’t even understand how our own bodies work, for the most part. Augmenting our bodies meaningfully with AI is pretty insane. I also wonder about saying ‘if you can’t beat them, join them.’ Who is ‘them’? If we put AI in our bodies, wouldn’t they just use the same magic integration hacking to just take over our bodies?”
GeekWire: I think you’re saying that it makes for an interesting plot twist, but not much more than that.
Calo: “What I love is that the Terminator is this hyper-sophisticated AI that can do all of this planning and chasing, but then he makes these really erratic, terrible decisions. Like, in every vehicle that I’m in, I’m going to rip the windscreen out and start shooting a gun, and then I’ll end up destroying what I’m pursuing and making rash, ridiculous decisions that no well-trained neural network would make.”
Schoenick: “Maybe that’s a nice illustration of how when AI isn’t working with humans, it goes off the rails and sucks. You know, like you actually need the human element to keep AI as a tool on the path that you need it to follow.”
Calo: “The Terminator’s strategies are relatively simple: Find and destroy. But if you think about social robotics and the capacity of robots, especially when they are embodied to tap into our tendency to anthropomorphize, there are times when the Terminator uses that strategically to get what he wants.”
Schoenick: “That’s a good example of AI impersonation — like the Google Duplex program that can make haircut appointments. At AI2, we say, ‘Hey, if there’s an AI system that’s impersonating a human, or is built to impersonate a human, we feel that it should disclose itself to that human, because you’re going to make all kinds of assumptions and feel a certain way about interacting with a creature that you think is another human versus an AI.’ It brings up into the social consciousness that if AI does start to mimic humans, we need to be educated about that.
“The ability of humans to think ahead and plan, and have intrinsic hope, is kinda the theme of this movie. Keep planning, keep doing something, keep trying to survive — whereas AI doesn’t have that life force. It’s willing to throw itself away or blow itself up for its mission. Right now, and in the foreseeable future, AI is not something that we ‘found.’ It’s not some evil force that came out of nowhere and is fighting us. It’s a tool that we’re building, and we have all the agency there. So just as much as we can build AI for nefarious purposes, we can build it for positive reasons.”
Calo: “I think that a truly sophisticated AI sent from the future to kill somebody would go about it in a much more efficient and methodical way. This AI was optimized for a James Cameron movie. That’s why it’s ripping things apart. That’s why it’s causing chaos, again and again.”
Schoenick: “It was trained on action films.”
GeekWire: Does it worry you that the movie might revive a lot of misconceptions about AI? Or do you feel as if this is a teachable moment?
Schoenick: “There’s plenty of fear-mongering going on around AI. I do think that ‘Terminator’ is a classic. We actually point to that as a canonical example of AI fear. But I think movies like this — and the media, too — can raise counterpoints, like, ‘Let’s think about how a surveillance society might actually be a problem, or how automation might affect everyday people.’
“Just getting people to think about that and be aware of all the facets of AI in general is a better way to diffuse this fearful, unknown oracle thing that we have going on with AI. ‘Terminator’ is not an educational film, but at least they’re touching on some of these ideas. And these ideas are what we as a society are going to have to think about, to have a sane approach to handling AI in the future.”
Calo: “One thing that was unfortunate: At one point, they talk about frying the Terminator’s ‘neural net,’ but it’s not a neural net. A neural network is a trained model. The idea that current neural nets are going to evolve into that thing …”
Schoenick: “Yes, ‘the neural net is bad.’ It’s a buzzword, and when people hear that in the movie, they’ll be like, ‘Oh, I know what that is.’ It’s one of those tricks.”
GeekWire: So how many stars would you give ‘Terminator: Dark Fate,’ as an action movie and as a teachable moment for AI, with 4 stars being the top?
Schoenick: “I would give it 3 stars as an action movie, because it had a plane chase, which is pretty cool. I haven’t seen an actual giant military plane chase before. They worked in lots of types of crashes — helicopters, a dam. You’re going to get what you want out of an action film, and some nostalgia.
“But as a useful AI film, I’m going to give it a 1. It really doesn’t inspire much in the way of advancing the conversation about AI.”
Calo: “I also would give it a 3 for an action film. I would give it a 2 for technology. Everybody knows that time travel is a fantasy, right? Same with nanotechnology, to reach a stage where you could just arbitrarily reconstitute something. A spontaneously evil super-intelligence is also probably a fantasy, at least in the lifetime of the actors portrayed.
“I did like that the movie was calling attention to the perils of facial recognition, drone surveillance and interconnectivity. I think it’s doing a lot of good in terms of sounding alarm bells. But I think it failed, because it’s purposely trying to put super-evil super-intelligence in the same category as the surveillance state, rather than where the proper category is, which is alongside time travel. In fact, I’m more confident that we’re going to get time travel, personally.”