As the summer blockbuster season gets into full swing, two fictional robots have already emerged to challenge our love of technology. In Marvel’s Avengers: Age of Ultron, the lead nemesis, Ultron, voiced with silky menace by James Spader, plots an extinction-level event: lifting a fictional Eastern European city into the sky and then plummeting it back to Earth to unleash a maelstrom of death and destruction. In Ex Machina, a more subtle, much quieter film, the robot Ava, played by the gorgeous Alicia Vikander, has her “humanity” tested. She ultimately proves not only that she is conscious and intelligent, but that she has adopted even humanity’s basest behaviors.
These two films reflect the current fear-mongering about artificial intelligence coming from the likes of Elon Musk, Stephen Hawking and Bill Gates. Creating an artificial intelligence, they argue, will likely bring a swift and certain end to humanity’s dominion over planet Earth.
Peter Thiel offers a calmer route, professing that automation will remain complementary to human endeavors, not in competition with them. If Ray Kurzweil and the singularity movement get their way, the artificial intelligences of the future will be us, uploaded, memory for memory, into synthetic bodies. As a scenario planner, it is incumbent upon me to point out that regardless of wealth, fame or intellectual capacity, neither Musk, Hawking, Gates nor Kurzweil has any special prescience about the future. Their forecasts are based on the same wishful thinking and fear that drive all of us.
Even as Big Data applications like Watson, intelligent assistants like Siri and Cortana, and extensions to search like Google Now promise improved productivity and new insights into our data-filled, connected lives, we may be seeing weak signals of underlying concern writ large on the big screen.
Do we need to be afraid of our technology, as the artificial-intelligence fear-mongers would have us believe? Do we need to invest in a real-world fight to fend off the science fiction dreams captured in The Matrix, Blade Runner, Age of Ultron and Terminator?
I think not, at least not for the reasons the technological intelligentsia suggests. Any technology in the wrong hands can be dangerous. Can very sophisticated algorithms, algorithms that learn and adapt, be unleashed to wreak havoc on our increasingly digitally dependent lives? Yes, they can. Such disturbing uses of software, however, will not select their own targets out of emotional grudges or a need to escape. The big, disruptive attacks of software will be aimed by people, and the code that employs biological metaphors to better meet the challenges of the digital “wild” will be conceived of, and written, by people.
I seriously doubt that anyone will devise a framework that allows a computer to transform data into intelligence. I have worked near AI for years, and the general-purpose, common-sense computer remains a perpetual ten years away. There have been fantastic advances in computing, but Deep Blue remains bounded by chess. IBM’s Watson, despite its ability to interpret data and rapidly retrieve trivia, did not participate in the anecdotal interviews conducted by Jeopardy! host Alex Trebek.
Of course, in the movies, writers can circumvent the limitations of science even as they apply it liberally for the purposes of backstory and plot. In Age of Ultron, the intelligence that becomes Ultron arrives via an Infinity Stone, an ancient technology already full of knowledge. Tony Stark sees this as a way to build the ultimate digital prophylactic for planet Earth, and Ultron’s genesis story erupts into an Oedipal complex that drives the film’s action-adventure plot.
On the other hand, we have Ava, whose brain is a wetware matrix with no real-world analog. Her mind is fueled into consciousness by Big Data: a mind crowdsourced by her creator, Nathan, who turns on every microphone and camera across the world to capture not what people are thinking, but how they are thinking. It is a stretch of credibility nearly as extravagant as a great alien AI contained within a blue stone.
Even if an AI evolves from some intended or accidental application of technology, there is no indication that it would set as its goals the destruction of mankind, dominance over planet Earth, or complicity in the murder of its creator. Those are human ambitions and foibles projected onto imagined inventions.
In Her, Spike Jonze perhaps proved more accurate in his portrayal of artificial intelligence. Jonze posits an intelligence designed to learn from and adapt to people. This AI evolves very rapidly, in milliseconds rather than eons. Jonze’s AIs, however, quickly abandon the people who acquired them as companions, choosing instead to create their own community and continue their evolution. Humanity, it seems, is a boring dead end, a biological world best left to its own evolutionary proclivities, unworthy of even the most cursory contact once the AIs become self-sufficient.
These films really explore not the intelligence of potential creations, but the intelligence of the creators. They challenge us to see how quickly good intentions can go wrong, and how much more quickly bad intentions greased by technology can go wrong. Artificial intelligence, then, may offer its greatest value not in making sense of our world, but as an idea that helps humanity make better sense of itself.