Workplace automation and dwindling jobs are the real danger, says Daniel Weld. (Photo via Shutterstock)

Many people find recent advances in artificial intelligence (AI) alarming. Indeed, luminaries ranging from physicist Stephen Hawking to technology pioneers Elon Musk and Bill Gates have warned that artificial intelligence technology might be more dangerous to humankind than the atomic bomb.

Oxford philosopher Nick Bostrom has argued that an “intelligence explosion” may lead to the extinction of humanity at the hands of rampant robots. These arguments distract us from a larger and more imminent threat: seismic loss of jobs, surging unemployment, and potentially calamitous social strife. This week, as the White House launches a series of workshops studying the future of AI, it should focus on the real dangers, not imaginary ones.

The possibility that future AI systems might autonomously turn on humanity is far-fetched. This notion, popularized by movies like The Terminator, Avengers: Age of Ultron and Ex Machina, stems from a simplistic notion of intelligence. Superhuman computers are nothing new.

Computers already exceed our performance at many cognitive tasks, from multiplying large numbers to playing games such as chess and Go. The number of such tasks will continue to grow in coming years. But computers have no hidden goals or secret motivations; they loyally follow the instructions of their programmers.

Go champion Lee Sedol looks around after losing a second game to the AlphaGo artificial intelligence program in Seoul. (Credit: Google DeepMind via YouTube)

Any harm done to humanity by computers will stem from our own directives and our failure to anticipate the societal change that looms over billions of workers in the U.S. and around the world. We’ve already witnessed the kind of damage computers can inflict with the stock market “flash crashes” resulting from unexpected feedback interactions between high-frequency automated trading systems. Flawed programming logic can also lead to disasters.

In August 2012, Knight Capital lost $440 million when a new automated trading system executed 4 million trades in 154 stocks in just 45 minutes. In 2003, an error in General Electric’s power monitoring software led to a massive blackout, depriving 50 million people of power. None of these systems were intelligent. They had no intention to harm humans. But they certainly did so.

While computers won’t try to harm us of their own volition, they are often used by people who intend harm. Cyber-weapons are a serious and increasingly widespread threat. Software tools can be used to steal information, as in the case of North Korea’s attack on Sony, or the 2014 intrusion at Anthem, which hemorrhaged the personal and health data of some 80 million Americans. Because computers now control our electrical grid and other infrastructure, malevolent software can cause serious physical harm. In 2010, Iranian nuclear centrifuges were destroyed by the Stuxnet computer worm in an alleged attack by U.S. government and Israeli hackers.

Recently, James Clapper, the U.S. Director of National Intelligence, ranked cyberattacks as our top national security threat, ahead of terrorism and weapons of mass destruction. Again, none of these threats involve intelligent computers deliberately seeking to assault humanity. The malice is human, and the computer merely the weapon of choice.

While the probability that an AI computer will autonomously decide to assault humanity is remote, the chances are high, approaching 100 percent, that a terrorist will try to direct an AI system to do so. But the difference in agency is moot. The presence of malicious humans means that we must expect an onslaught of increasingly sophisticated attacks. Furthermore, regardless of who (or what) initiates the incursion, our response is the same: improved cyber security and software defenses. AI, data mining, and machine learning methods will be a crucial part of these safeguards. Overall, we need to keep pushing the positive aspects of AI, not retreat from them.

The real AI threat stems not from nefarious actions, but rather from the opposite direction. As AI systems become more capable and more common, they will displace innumerable workers. Robots and intelligent software are outperforming humans at an increasing number of jobs. Mid-career education and retraining may slow this displacement, but digital innovation accelerates exponentially, virtually guaranteeing that social disruption will be faster and more extensive than ever before in history. Consider the example of self-driving cars. Currently, six percent of U.S. jobs are in trucking and transportation. What will these workers do when drivers become obsolete in 15 years?

We are already living the contradiction of automation: it increases prosperity and economic output on the one hand, and inequality on the other. Political conservatives lament the laziness of today’s welfare recipients, but what should a population do, and how should society respond, when jobs disappear en masse? Is capitalism sustainable when labor becomes unnecessary?

In short, the biggest threat posed by AI is not the advent of autonomous machines, but the prospect of human beings losing their autonomy by being driven out of the production process. We must confront the potential for serious social unrest, perhaps even revolt or revolution.

As Erik Brynjolfsson and Andrew McAfee conclude in their provocative article “Will Humans Go the Way of Horses?,” “It’s time to start discussing what kind of society we should construct around a labor-light economy. How should the abundance of such an economy be shared? How can the tendency of modern capitalism to produce high levels of inequality be muted while preserving its ability to allocate resources efficiently and reward initiative and effort?”

Our worries should not be about malevolent AI, however disappointing that might be to Hollywood, but about the real problem of equitably dividing the spoils of the new AI economy.

Daniel S. Weld is the WRF/TJ Cable Professor of Computer Science & Engineering at the University of Washington, and a Venture Partner at the Madrona Venture Group.

