John Markoff

John Markoff is a New York Times science reporter and author of the new book, Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. I interviewed him recently at Seattle’s Town Hall, and we’re replaying the conversation in this special edition of the GeekWire Podcast.

Markoff is also one of the featured speakers next week at the GeekWire Summit in Seattle, appearing in a session with Microsoft Research chief Peter Lee. Tickets are available here, and there’s a special offer for listeners in this week’s podcast.

Listen to the conversation from Town Hall below, and continue reading for an edited transcript of the interview.

Todd Bishop: We’re going to have a great conversation and I know that this is a topic that many of you are intensely interested in. Welcome to Seattle, John. Welcome back.

John Markoff will be speaking at the GeekWire Summit in a session with Microsoft Research chief Peter Lee. Click here for tickets.

John Markoff:  Thank you. Great to be here.

Todd Bishop: We’re going to get into the future of robotics in a lot of ways and the coming wave of automation that John writes about in his book. I wanted to start with something very simple and something that really gets to the heart of the current state of artificial intelligence and that’s something that a lot of us already have in our pockets. I wanted to use this as a baseline, John, to basically get a sense for where we are and where we’re headed. I’m curious how many people in the crowd currently talk regularly to Siri or Alexa or maybe even Cortana? Okay, a few hands. Do any of you who raised your hands feel like you have a personal relationship with any of those personalities?

John, when you look at what Apple and Microsoft and Amazon and companies like that are doing currently with digital assistants, what does the current state of this type of technology say about where we’re headed? Can you paint a picture from today into the future?

John Markoff: Yeah. First, how many of you saw the movie “Her” from a little over a year ago? Yeah, OK. The question is how close are we to “Her?”, I think you’re asking. A lonely young man has a relationship with an operating system, I guess, is the simple idea of the plot.

It’s not in the book, but just recently I wrote a piece about a Microsoft experiment in China with another program called Xiaoice, which stands for Little Bing. Xiaoice is fascinating to me because unlike Cortana, which is Microsoft’s program, or Siri, Xiaoice is intended to be a conversationalist. A couple of things are interesting about Xiaoice. The interaction is mostly typed on cell phones, because there are many more cell phones than there are personal computers in China. There are 20 million users, and 10 million intense users of Xiaoice, meaning they have multiple conversations every day and the conversations consist of multiple interactions, a dozen or more. They have even defined a conversation time, which is called toilet time. The young demographic interacts with the program late at night.

Todd Bishop and John Markoff at Town Hall

25% of the Xiaoice users have typed “I love you” to the program. 50% have said “Thank you” at some point. It even creeped out the Microsoft designers a little bit, and I felt this is strange, brave new world stuff. Then I had a conversation with an IBM programmer who’s Chinese and we were talking about cultural stuff. She said, “When we come to your country, it feels like it’s quiet.” She said, “In China our relationships are so densely interwoven that you don’t have time by yourself.” Her view of Xiaoice was that it was a private space where you could be alone with your thoughts. If you look at it from that point of view, maybe there really is a culturally relative way of using these programs.

From a technical point of view, and to respond to your question of where we are with these programs, what Microsoft did that was different is they strip-mined the entire Chinese social web for all the QA pairs. They found questions and answers. Many of you may have played with earlier chatbot programs, and they break down very quickly. Even the best programs in the annual Turing test competitions don’t do that well; you can very easily tell that they’re not very good. Xiaoice has a relatively real answer to almost anything you can say to it, because somebody’s had the same interaction before somewhere on the web. The quality of the conversation changes and it’s topical. It’s in Chinese so I haven’t been able to try it, but the Times reporter in Hong Kong, who wrote this piece with me, had some fairly interesting conversations with it. I think you could see a step function in the quality of interaction.
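
For readers curious how this kind of mined-QA approach can work, here is a minimal, hypothetical sketch of retrieval-based response selection over question-answer pairs. It is not Microsoft’s actual system; the tiny corpus, the similarity measure, and the function names are invented purely for illustration.

```python
# Hypothetical sketch: match an incoming message against a corpus of mined
# question-answer (QA) pairs and return the answer attached to the most
# similar stored question. Purely illustrative, not Xiaoice's real pipeline.
from difflib import SequenceMatcher

QA_PAIRS = [  # a tiny stand-in for a web-scale mined corpus
    ("what should I eat tonight", "Noodles are always a safe bet."),
    ("I had a rough day at work", "That sounds hard. Want to talk about it?"),
    ("do you ever sleep", "I'm always here when you want to chat."),
]

def reply(message: str) -> str:
    """Return the answer paired with the most similar stored question."""
    def score(question: str) -> float:
        return SequenceMatcher(None, message.lower(), question.lower()).ratio()
    _best_question, best_answer = max(QA_PAIRS, key=lambda qa: score(qa[0]))
    return best_answer

print(reply("I had such a rough day at work today"))
```

A production system would use far richer similarity and ranking models, but the basic move is the same: reuse answers that humans have already given to similar questions somewhere on the web.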

You haven’t asked this question but what does it mean about the future?

Todd Bishop: Absolutely, yes. What does it mean?

John Markoff: It’s one of the areas that I’m most interested in because I think that increasingly many of our social interactions that are commercial are going to be with machines instead of humans. If you think about where our economy has grown since the Second World War, a lot of it has been people who do things via the telephone: everything from technical support to sales to operator assistance. Machines are going to be doing that reasonably effectively, and I think that’s one of the areas where they’ll sweep into the economy first.

Todd Bishop: The core question in this book, if I could attempt to recast it, is are the robots here to complement us or to conquer us? Are they here to enhance us or to extinguish us? If not literally kill us then at least displace us and our current value in society. One of the things you make clear is that the people designing these systems, many of them don’t think of things on that level, they’re thinking what can we achieve technically? What can we achieve economically in pursuit of automation? This sounds like a recipe for the Robot Armageddon, doesn’t it?

John Markoff: That’s certainly a possible path to go down. What I was trying to say is that it’s a designer’s choice. There is a broad deterministic view in society that the machines are designing themselves, and that’s just not what’s happening. The machines are not designing themselves and they’re not going to design themselves in our lifetime. All of these questions about what our relationship to the machines will be are things that are designed by humans, so it’s a human question. The book is trying to answer the question of what the human responsibility is, and I identified these two communities. There is the AI community, and then there is what is now referred to as the human-computer interaction community, which tends to see the world from the human out. The answer is obviously somewhere in squaring that circle, using AI technology for humans rather than to replace humans.

Google’s self-driving car.

Todd Bishop: Let’s shift to another current technology, and you’ve had some first-hand experience with this: let’s talk about self-driving cars. I know this is one of your favorite topics. This is Google’s, one of their variations of the self-driving car. Tell us about the “handoff problem” and the significance of that challenge.

John Markoff: Google has done a spectacular job in their self-driving car program, which emerged publicly around 2010, and they now have driven more than a million miles without a robot-caused accident. They’ve had about 14 accidents and in each case a human driver has done something wrong and hit the robot. That is an engineering miracle. They really set out to do this in a very precise way. Initially they were driving Priuses, they had a fleet of Priuses, they drove I think close to half a million miles before I stumbled across the program and it became public. They were driving cars on the public roads, largely at night, with two human drivers in the car overseeing the robotic control. They pushed that about as far as they could and they really were able to do a tremendous number of things.

They began to find things out about the problems facing self-driving cars. One of them, that I think hasn’t gotten a lot of publicity, is how dynamic the roadways are. Roads move all the time, whole bridges move, and if you’re building a fleet of self-driving cars you have to know what’s going on on the roadway all the time and that’s a huge challenge going forward.

The other is this hand-off problem, and they ran across it when, in the early part of last year, they began to basically give the cars to their employees and let them commute in them. Before, they had professional drivers; now they gave the keys to some Google software engineer and he took it home at night. They instrumented the cars so they could see exactly what people were doing in them, and they found that there was a lot of distracted behavior, as you might expect, up to and including falling asleep. Falling asleep is a real problem for a self-driving car that basically, at a certain point, has to hand off control because it encounters a situation that it can’t deal with. At that point it passes control back to the human and says, “You take the wheel.” If the human is asleep, or playing World of Warcraft, or whatever else, it isn’t going to happen in a quarter of a second. They realized that that was something they couldn’t solve, and they switched to these cars.

This car doesn’t have a steering wheel, it doesn’t have an accelerator, it doesn’t have a brake. It has a little display in it so they can advertise to you while you’re on the road. The other thing you’ve got to know about this car is that it has a soft foam plastic front and a plastic windshield. What could possibly go wrong? If they do hit a pedestrian, the car is limited to 25 miles an hour and it probably won’t kill them, which is pretty amazing. Nothing’s perfect. I actually think that this might be a workable transportation solution in a defined urban core, limited to 25 miles an hour. The average speed of vehicles in San Francisco and New York is 17 and 18 miles an hour, so 25 miles an hour is just fine. There have been studies that show that the cost of these kinds of fleets compared to conventional human-driven taxi fleets is tremendously lower. Then you have to ask the deeper social question: if it’s a short trip, should you take this car or should you walk?

Todd Bishop: I have a 4-year-old daughter. She’ll be starting to drive in 12 years. What are the chances that she will be driving, or riding in, a fully autonomous vehicle at that point?

John Markoff: This is controversial but my sense is zero. Toyota came into the equation by saying we’re funding Stanford and MIT with $50 million to do research not on self-driving cars but on what they call intelligent cars. Their model, their view, is they want to keep the human in the loop and they want to give you a little guardian angel or a driver’s ed teacher that will sit over your shoulder and when you screw up, they’ll make the car do the right thing. Which is also a very hard problem but it’s probably a more solvable problem than the other one.

The question is when will we see trucks driving where they’re being tested in Nevada without human drivers? When will you be able to take the human driver completely out of the loop? There are just so many edge cases and challenging problems for them to solve, let alone regulatory problems, let alone liability problems, that I think there’s almost no chance that … There will be a lot of research, Uber will continue to invest a lot of money, Google has their car but … they have a car that might work in a campus setting or in a retirement community or some place like that. I think that the Toyota model is a much more realistic one. Once again, this is my dichotomy between AI, taking the human out of the loop, and IA, intelligence augmentation, augmenting the human. I think that that’s a reasonable solution.

Todd Bishop: You have Google with their self-driving cars, Apple has Siri, Amazon has Alexa, Microsoft has …

John Markoff: Cortana.

Todd Bishop:  I guess Clippy, too, at the very beginning.

John Markoff: Clippy hasn’t come back, they had a funeral for Clippy.

Todd Bishop: I’m sure there’s a Terminator movie in there somewhere with Clippy coming back. Do you feel like any existing tech company has the edge right now in artificial intelligence? Who could be the giant of AI in terms of the next big platform?

John Markoff: In Silicon Valley right now there’s tremendous competition for that talent. You can see who’s acquiring it, and Google has made a vast investment; particularly in the fields where they have dramatic impact, the machine learning-related fields, Google has really cornered the market. They’ve also cornered the market on some of the best roboticists in the world. Andy Rubin, who was a Google executive and roboticist, went around the world for almost a year and bought at least 13 companies before leaving the company. They’ve got this amazing base of talent. I can’t tell you how many AI/robotics startups there are in Silicon Valley, and it’s so hard to handicap them. You can see that there’s a race.

Todd Bishop: It seems Google has more access to the collective human consciousness than any other company. When you take that and you add it to their work in robotics and self-driving cars, it feels like they would be the company that would take the lead here.

John Markoff: We’ve seen this movie before. It was Microsoft and before that it was IBM. They have mindshare, and they clearly have that particular monopoly that gives them this immense fire hose of capital. I believe Google’s problem is where to put $1 billion a week; that’s what Alphabet is about. They recently split up into all these companies, perhaps for legal reasons, in terms of antitrust challenges they may face in the future. They’ve got this immense war chest to invest and they can basically do whatever they want in terms of technology, although the vast majority of their revenue still comes through that one choke point. We’ve seen this movie. Microsoft had a monopoly, and as the center of gravity in the computing universe moved they weren’t able to follow it quickly enough, and the center is now in Silicon Valley.

Todd Bishop: What about Microsoft’s chances of becoming a big player in artificial intelligence and machine learning? You look at what they’re doing with things like Cortana, but also the research that you cited, and then blended reality, holograms with HoloLens.

John Markoff: Two different things. They have some very interesting strategies to deploy AI technologies in the cloud. The operating system, in the days when Microsoft created their monopoly, was about a file system that talked to the information on your disk. Okay, let’s translate that and go up a level in the stack: instead of giving you calls to the information on the disk, we’ll give you calls to AI functionality. I know that they’re doing that, they’re pursuing that, and that’s an interesting strategy.
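
To make the analogy concrete, here is a purely illustrative sketch of the difference between “calls to the information on the disk” and “calls to AI functionality” in the cloud. The service endpoint, payload, and function names below are invented for the analogy; they are not an actual Microsoft API.

```python
# Illustrative only: contrast a classic file-system call with a hypothetical
# cloud AI call. The endpoint and payload are invented for the analogy.
import json
import urllib.request

def read_report(path: str) -> str:
    """Old model: the operating system gives you calls to bytes on your disk."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

def summarize(text: str) -> str:
    """New model: the platform gives you calls to AI functionality in the cloud."""
    req = urllib.request.Request(
        "https://example-ai-cloud.invalid/v1/summarize",  # hypothetical endpoint
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["summary"]
```

The design point is simply that the primitive the developer calls moves up a level in the stack, from raw storage to a learned capability hosted elsewhere.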

The HoloLens thing is, I think, even more deeply interesting because … I walk around the streets of San Francisco and I swear to God half the population is walking around looking down at the palm of their hand. That can’t be the final state of human evolution, there has to be something else. Augmented reality is something that interests me greatly. Whether it’s going to come in three years or 23 years is another question but clearly if you could basically overlay the physical world with computer generated imagery in some compelling way that’s probably a possible direction. I know Google thinks that’s true, Microsoft thinks that’s true, I don’t quite know what Apple thinks about that. I would find that an interesting direction.

Todd Bishop:  Just closing off the discussion here about the current landscape, how worried should we be about all the robots, drones, and spaceships that Jeff Bezos has at his disposal?

John Markoff: I was at NASA Ames probably five months ago, or four months ago, and right now the air traffic control system deals with the world above about 400 feet off the ground, or maybe it’s higher, I’m not quite sure where the lower bound is. They’re actively working on research in which they described to me a world in which the sky is dark with drones. They want to be able to do air traffic control in that space between the current world and one level down. From a technological point of view it might be possible, but what kind of a world would that be to live in?

Todd Bishop: The regulatory hurdles that Amazon is facing just to test these things, it speaks to a real challenge. You’re not just facing technological and economic challenges, you’re facing a big regulatory regime that is going to prevent this stuff from happening, if you want it to happen.

Andy Rubin

John Markoff: Yeah. The challenges in getting there are immense. I thought it was so interesting that, after Jeff Bezos rolled out the drone on “60 Minutes,” two days later the Google robot strategy came out. This was never public, but Andy Rubin, when he was buying robot companies, was sketching this vision. Google is competing directly in some parts of the country with Amazon. They have their own warehouses, they have their own delivery fleet, and their strategy so far has been to ally themselves with Amazon’s enemies. They’ll deliver the goods for the brick-and-mortar companies that are in competition with Amazon. The vision that Andy was sketching out is that the Google car drives up to your door and the Google robot hops off the back bumper and runs the package up to your door. Actually, I think that’s a slightly more plausible vision than dealing with a drone that’s going to fall out of the sky. I just can’t make this work in my mind even if you could physically do it. Then they start talking about face recognition so it can see the person it’s delivering to … I can’t see society going for that in a big way.

That said, the stuff that’s going on in the warehouse world … Amazon is literally trying to give you something before you think you want it. Their goal is that the package will be there and you’ll think, yeah, I wanted to get that. The logistics supply chain that stretches all the way back to China is really quite fascinating. Not only do they have these 5 million square foot warehouses outside of urban areas, but increasingly they’re putting warehouses right in the urban areas, small, tiny warehouses that can bring the products right next to you so you can get them in a couple of hours rather than in a day. Then it becomes this complicated logistics equation where energy and transport are all part of it. My guess is that the small warehouse that’s really close to you will also be highly automated.

Todd Bishop: Speaking of warehouses, you actually shared some videos with me, and we wanted to watch one of them here, a brief clip that shows where some of this is headed. Tell us about this particular robot.

https://www.youtube.com/watch?v=BCO244wouRs

John Markoff: The problem is to be able to load and unload boxes from trucks, which is not a huge job category but it is something …

Todd Bishop: I worked at UPS in college and I think I worked with the guy they modeled this after.

John Markoff: Okay. Low light, unstructured objects of different sizes. The really cool thing about this demo is that the thing that makes it possible is the sensor that was developed for the Xbox. It’s called structured light, an imaging technique, and it basically allows them to take a six-sided object and reliably … When the price of computer vision fell from thousands of dollars to hundreds of dollars, all of a sudden you could do different things. There’s some AI here and there are some sensors.

Then you go and you look at the world out there, and you see that there aren’t a lot of people who do this and it’s a pretty crummy job; you have to move packages of up to 70 pounds, and humans basically move one package about every six seconds. This was a little company that spun out of a research laboratory called Willow Garage. Industrial Perception was started by some roboticists and some AI and machine learning people about three years ago. When they were bought by Google, I think two years ago now, they had gotten from six seconds down to four seconds, and they thought they could get to moving a box every two seconds.

I’m mixed about it. As long as you can retrain the workers and give them decent jobs I’m not sure it’s bad if that job category goes away.

A Kiva robot in action

Todd Bishop: To be clear, this is not Amazon, but this is representative of the challenge that companies such as Amazon face. Right now Amazon has their Kiva robots going around the warehouse; they’re essentially shelves on robotic platforms, and they deliver them to the warehouse workers, the fulfillment center workers, to use Amazon’s phrase, and then the workers themselves pick the packages. The key challenge right now for Amazon is: can they figure out how to get robots to identify and pick oddly shaped boxes?

John Markoff: In warehouses broadly, you divide the task into case pick and piece pick. In piece pick you actually have to identify the product and be able to manipulate it; in case pick you’re dealing with large cases. I’ve been in warehouses that are doing case picking, and the old model is you have forklifts and pallet jacks, and you have a guy who drives them and wears a set of headphones that speak to him in up to five languages, and these guys just wail around and pick out these cases and move them.

I was in a food warehouse, a grocery warehouse, where half of it was the old model and the other half was a giant Pachinko machine. This was probably a 10-story building, but they had this array of aisles that were probably 20 levels high and 21 wide, and the packages were almost organized like data in a computer. The cases, rather than being stacked on top of each other, were laid out along aisles. In each aisle they had this little go-kart, and it would scream down the aisle at up to 30 or 40 miles an hour, and it had a little set of forks that would come out when it got to the right case, and it would race back to this front loading area and put the case into this Rube Goldberg-like loader. This is actually technology that’s been around for a long time. They would stack the cases in exactly the right way, all organized by computer, so that when the load got to the grocery store the guy who was unloading it would unload the cases in the right order and they would go directly into the right part of the store. Then they wrap them up, and they automated that part. Half of it was people-intensive, the other half no people at all. That’s case pick.

Piece pick: Kiva is one of a number of startups that tried to deal with the fact that what we can’t do yet in the computing world, in the AI world, is dexterous manipulation and vision. They can’t reliably recognize what the item is and they can’t reliably grab it. What Kiva did was decide that rather than having the people run through the warehouse, as they do in the old-style Amazon, and gather things for the box, they would bring the objects to the people at just the right time, and the people would put them in the box and the box would go off. The joke was, of course, that as they moved to the next stage of automation and got rid of the people, once the machines could actually do those things, they’d finally bring air conditioning to the warehouses, because the robots, the computers, needed air conditioning, as opposed to the people. I don’t know if that’s true; that’s a joke.

Todd Bishop: They could get rid of the toilets.

John Markoff: Get rid of the toilets, yeah, that’s true.

Todd Bishop: You write in the book, and this is an important example, that Amazon’s Kiva robots are clearly an interim solution toward the ultimate goal of building completely automated warehouses.  I actually did some reporting on this and wrote about that particular angle when your book came out. Amazon’s response, in part, was, “Our fulfillment centers are a symphony of robotics, software of people and of high-tech computer science algorithms — machine learning everywhere, and our employees are key to the process.” Is that BS?

John Markoff: No, it’s not. I’m sure that the economics of delivering this supply chain will ultimately push people out of the process. I just think the whole question of job replacement is much more nuanced than this current wave of anxiety realizes. Before I came over here I was playing with one of these wonderful websites that’s taken that data: there was a group at Oxford that did this study about which jobs could go away, and you can go through and look at a job, and they have a percentage for the likelihood of that particular job being automated. I looked and it’s such a narrow view — actually it’s just BS. For example, one of the jobs that they claimed was 98% likely to be automated was manicurists and pedicurists. I looked at it and I thought, are you out of your mind? First of all, I don’t believe that robots will easily do that.

Let’s do a thought experiment. One job that could be easily automated today is the job of a barista. You could conceive of a Starbucks without people. How many of you would go into that Starbucks? It’s called the automat. (Several people raise their hands.) OK. It would be a smaller fraction of the population.

Todd Bishop: Is that Howard Schultz back there?

John Markoff: You can make a coffee machine, and it would probably make a latte as good as a barista’s, but it’s just not going to happen. The whole model is built around human contact. People go into the stores for that human contact, and I think that’s a much more important factor than the people who say there will be no human jobs in 45 years are dealing with.

Todd Bishop: Give us your basic take on jobs overall. Will there be a net gain or loss? This is one of the fundamental questions about artificial intelligence.

John Markoff: I’ve been reading widely, as widely as I can, from the economic literature and I’ve come to the conclusion that nobody has a clue. This is a reporter’s field day. You can go from the International Federation of Robotics on one side who argues that we are on the cusp of the biggest job renaissance in history, to Moshe Vardi, who’s a Rice computer scientist, who argues that all human jobs will be obsolete by 2045. Which group is right?

I have to admit I started on this side, on the anxiety side, because I began reporting in 2010 on white-collar automation that was moving up the stack in terms of skills. One of the most interesting early examples I saw was the impact on $35-an-hour paralegals and $400-an-hour attorneys who were being displaced by e-discovery software that could demonstrably do a better job of reading documents than humans. I thought that’s really interesting. You can look at doctors and lawyers and all of that. I had my hair on fire for a long time saying okay, jobpocalypse, it’s real, it’s happening.

Then I had a conversation in particular with Danny Kahneman, the psychologist who won the Nobel Prize in economics, and I was sketching out this view of China and the coming of robotics and social disruption. He said, “You don’t get it. In China,” he said, “the robots are going to come just in time.”

I said, “Excuse me? What?”

He said, “Yeah, China is an aging population, they have a one-child policy, and at a certain point the Chinese workforce is going to start to shrink. They’re going to need robots.” If you look at Japan, the population is imploding; they’re going to lose a third of their population over the next 30 or 40 years. Korea is aging. Europe is aging very quickly; they’re spending a billion to try to develop an elder care robot. By the year 2020, for the first time in history, there will be more people over 65 years of age than under five.

Everybody’s looking at this as a snapshot rather than a dynamic picture. It’s incredibly risky to make these snapshot projections. Poor Jeremy Rifkin: in 1995 he wrote a book called “The End of Work.” From 1995 to 2005, in America, the economy grew as strongly as it ever has. It’s funny, because now Jeremy is running around claiming victory because now we’re anxious again. The fact is there are 140 million people working in America today, more than ever in history. That’s not to say there’s nothing to worry about. One of the things I’ve been trying to explore is the relationship between inequality and the ends of the workforce, and there is a fair amount of evidence that … Perhaps not this new wave of technologies, but existing automation technologies have taken out the middle of the economy. When you read the economic literature there’s even a debate about polarization. It’s really a fascinating time.

I’ve been looking at the wave of new hires. Now the economy is growing again, and if this polarization trend were true, you’d think it would show up in the new jobs that are emerging, and it’s a much murkier picture. It’s not easy to pull things apart.

Todd Bishop: Another core question in all of this is essentially whether machines will one day become sentient, self-aware beings. Will they experience human emotions and achieve a human level of consciousness? After all of the reporting that you did on this book, what’s your take on that question?

John Markoff: What timeframe?

Todd Bishop: 2045, how about that one?

John Markoff: 2045? Zero.

Todd Bishop: Zero in that timeframe.

John Markoff: Absolutely zero. This has been brought up by people like Elon Musk, who talked about the risk of summoning the demon. Stephen Hawking has raised the possibility that AI technology is an existential threat to humanity. Bill Gates has chimed in with concern about AI. My reaction: if you go to the people in the field, real AI researchers, and you say, “How are we doing?”, the reality is we’re doing terribly. We’ve made some real progress in perception. With machine learning, computers are beginning to see, they’re beginning to speak and listen, but they are not beginning to think. There has not been a lot of progress made in cognition. The striking thing, and I look at this as a sociologist, is that the AI field has overpromised and underdelivered historically since its very inception. What’s different now? They’re overpromising again.

1956, John McCarthy coins the term artificial intelligence, they considered it a summer project. 1958, the Perceptron, the first neural net, was invented. They thought there would be thinking machines in a year. 1963, McCarthy writes the first DARPA proposal to create the Stanford AI Lab. He thinks that building a thinking machine is a 10-year project. There’s a pattern here that people should recognize.

I was just, as I told you, at the Allen Institute for AI, where they’re much more grounded. They’re working on building machines that can actually take tests, SAT tests and things like that. What I was seeing is the challenge. If you’re going to take a test, even at the fourth or fifth grade level, something like 40 or 50% of the questions include an image. It’s not just a natural language processing question; you actually have to be able to understand what that diagram says, interpret it, and relate it to the question. We’ve just made the tiniest bit of progress in doing that. What was so funny is they gave me a demonstration of how poorly we’re doing at recognizing arrows. In a lot of these diagrams there’s a pointer, and the machines have trouble recognizing the arrow. The science fiction has gotten so far ahead of what the reality is.

When you start to see whether there are going to be self-driving cars … I think at the same time the fact that they’ve raised the issue is actually a good thing because there’s another issue besides self-aware machines. There’s a question of autonomous machines. We are beginning to give machines decision making powers. That’s a shift.

Todd Bishop: In some cases, in very dangerous situations like missiles.

John Markoff: Yes. If machines are making killing decisions, you’ve got some very deep questions you need to answer. All throughout … As we take decision making authority and delegate it to software, there are going to be consequences. People aren’t thinking about the consequences. The discussion about autonomy I think is a great discussion for society to have and to think about and we by and large don’t.

It’s come up in the self-driving car discussion in interesting ways. The philosophers have been talking about this thing called the trolley problem since the 1960s. The trolley problem was originally stated by a philosopher, and I’ll butcher it, but you’re coming down a railroad track and you’re about to run into another train and kill 100 people. If you throw the lever you can go down this other track and you’ll kill three people. What should you do? The designers of self-driving cars are actually going to have to work on that problem. Cars are going to have to make decisions where different consequences will happen.

Todd Bishop: Was anybody killed in this one here?

Photo courtesy John Markoff

John Markoff: No, I was in this car. I actually crashed in a robot, which, for a reporter, when you can be in a crash like this and walk away from it, is one of the best things that can happen to you. That’s Sebastian Thrun, and that’s the Stanford Stanley that won the second DARPA Grand Challenge. We were driving along a gravel road in Arizona. Actually, we all had crash helmets on; he’s taken his crash helmet off there. It was starting to get boring, and all of a sudden we went over a swale, went down into a valley and came back up, and there was a branch. The LIDAR, the sensors on the front of the car, swept over the branch. Sebastian Thrun, who ran the Stanley project and later went to Google and started the Google self-driving car effort, had this big red button, and the car was off the road before he could hit it. What’s amazing about this is that on either side there were these two gigantic boulder piles, and we managed to go into this bush and didn’t even set off the airbags. They straightened out the LIDARs and we took off again.

Todd Bishop: You alluded to “Her” earlier, the movie. Do you feel like any of these movies, or any other movie out there, has most accurately captured the future of our society with artificial intelligence? Is there one movie that’s a favorite of yours?


John Markoff: “Blade Runner.” Actually, this comes up because people talk about whether AI is an existential threat. My glib answer is that if I were going to pick an existential threat it wouldn’t be AI, it would be genetic engineering. Tools like gene drives and CRISPR/Cas9, which are going to make editing the germline easy, are a real existential threat. There’s one out there; I just don’t think it’s AI. “Ex Machina,” fembots with pleasure centers, I don’t know.

Todd Bishop:  That was the Google proxy in many ways.

John Markoff: Yeah, that’s true. Hal. The book is largely a collection of connected stories about people who design systems. What was so striking to me is how many people saw “Space Odyssey” and decided to go into AI because they wanted to build Hal.

Todd Bishop:  Really?

John Markoff: Yes. Really well known roboticists basically saw it and said, “Hey, I want to do that.” Science fiction has a huge influence on the design of these systems.

Todd Bishop: If you look at movies and science fiction, this would seem to be preordained that this would become our world someday with machines, if not running it, at least playing an integral role. In some ways is it a self-fulfilling prophecy in that way?

John Markoff: Yeah, I think there’s some element of that. The two designers of Siri, Adam Cheyer and Tom Gruber, were both deeply influenced by the Apple vision video called “Knowledge Navigator,” that was introduced in the mid-80s.

Todd Bishop: Which had its roots in John Sculley. Am I remembering that right?

John Markoff: No, Sculley commissioned it. He asked Alan Kay to come up with a modern version of the Dynabook. The book traces it all the way back to this crazy cyberneticist Gordon Pask, but that’s a long story.

Audience Q&A

Audience member: I have a couple of dystopian-type questions. One is about China. I’m curious if you think that there’s a bottleneck on the production of our magical devices at the end of whatever polluted scenario we have in China? I’ve seen some just horrific pictures and stuff.

John Markoff: Like we might lose our ability to get our gadgets? Is that what you’re saying?

Audience member: Yeah, if they run out of places to throw the toxicity. Then the other question is what you think of the Ashley Madison stuff and particularly the engineers that were creating the bots that were communicating with real individuals.

Todd Bishop: What was that stat? It was some astronomical number.

John Markoff: Yeah, there were no women there.

Todd Bishop: People who were signing up to pay for the service, in large part, were doing it because they were lured by a chatbot.

John Markoff: It’s depressing, that’s what I think about Ashley Madison. The whole thing is depressing. It was an IQ test too. About China, I tried very hard in 2012 to get to China and because there’s this quarrel between the Chinese government and my newspaper I couldn’t get a visa to go there. I really wanted to see what was going on there. The whole …

Todd Bishop: Wait, you’re kidding me. They wouldn’t let you in because you’re the New York Times?

John Markoff: Yeah. The Times did some reporting that really pissed off the Chinese government. I might be able to get in as a tourist, I might be able to get in on a business visa, but I can’t get in on a journalistic visa.

Todd Bishop: Sorry, continue.

John Markoff: I was just going to say, to the point about dystopian futures, a guy I know pretty well actually built Apple’s iPhone manufacturing operation in China. It’s horrific. That’s what we reported on in that series. Part of it was on labor, but part of it was also on the environmental practices that go on there. If you take the big assembly operations, like Foxconn, they’re pretty well regulated and organized; there aren’t many horrible things going on there. But they have this just-in-time supply chain all around them of mom-and-pop corporations, and there the practices are just abominable. That’s just the way the system is. I don’t know what will change it.

Todd Bishop: To the question, is it putting a ceiling on the potential for hardware innovations?

John Markoff: Silicon Valley has become a design center. The system works, but there is a cost, a really grim cost. When I first started working as a reporter in Silicon Valley, they were making things there, they were making semiconductors. Initially the Valley had the same terrible environmental practices; however, it was invisible pollution, you couldn’t see it. There were stories of taking these ketone sticks and waving them in the air in trailer parks downwind from semiconductor factories, and they would turn color because there was so much pollutant. Basically we moved all of that manufacturing to Asia, or to Oregon or New Mexico. Sorry.

Audience member: Hi John, this is a topic you raised before, about the impact on unemployment. Up until this point it’s been largely brawn that’s been replaced rather than brains. Even writers are getting replaced, and so forth. It’s also easy to come up with examples of jobs which one thought 10 years ago would never be replaced now being replaced. Now, your counterargument, the more optimistic side, was a demographic argument. However, for the countries which are still growing in population and don’t have the European or Japanese situation, where are the jobs going to come … Where are the replacement jobs going to come from?

John Markoff: Let me be direct: I don’t have a good answer. There could be this jobpocalypse scenario. However, it’s so funny to see these Silicon Valley guys, who foresee this big scenario, basically recapitulating Marx. Marx laid all of this out well over a century ago. Underconsumption, overproduction, that was the model; that’s what we’re talking about again. I started as a Marxist; I’ve come around to Keynes. Keynes in the 1930s said technology destroys jobs, it doesn’t destroy work. The pie gets bigger. I can give you specific examples and I’m not …

Audience member: Keynes actually predicted a 15-hour work week.

John Markoff: It’s true, Keynes hasn’t been perfect. On the other hand the pie has grown and, by and large, more people are working than ever before. Technology has not eliminated all labor. The classic example: three of the books that have been written wringing their hands all cite this dichotomy between Instagram, the 13 programmers who created a digital photo sharing site, and Kodak, this chemical, old-world, 140,000-worker bankrupt company. When you look at it, that’s not what happened. First of all, Instagram did not kill Kodak. Kodak killed Kodak. Kodak put a gun to its head and pulled the trigger many times by making strategic errors, and then it died. The proof of that is Fuji, which was Kodak’s chemical competitor, made it across the chasm just fine, thank you.

It’s even more complicated than that. Instagram could not exist until there was a mature internet. How many jobs were created by the rise of the mature internet? The numbers are probably over 2.5 million, many of them good jobs. I go back to what job wasn’t around 10 years ago and the first one that comes to mind is search engine optimization. Not a particularly good example but there are tens of thousands of people doing it.

I’m waiting to see what happens but I don’t feel that the people who are predicting a dark end have any monopoly on being accurate either. I just think that we’re going into virgin territory and we’re going to find out. The horror stories have no root in what I see just practically.

Audience member: There’s a really great article that came out a couple of months ago called “Life Below the API.” If you haven’t read it, it talks about … For example, take an Uber driver. In a traditional business a couple of years ago, someone could start off on the factory floor, work their way up to supervisor, work their way up to general manager. There’s a career path for people in those traditional industries. If you look at the modern day, take your everyday Uber driver: Uber isn’t going to say, “Wow, you’re a great Uber driver, you’re now a software engineer, write this API.” There’s a disconnect, at least in the modern economy, between the people who provide the service and the people who build the service that provides the service.

There are two ways of looking at it, particularly in terms of how AI can help here, but it can also be hurtful. One is the question of whether AI makes the world better because individual actors within these systems are suddenly magnificently more productive; the systems know who you are, and there’s this futuristic Star Trek, “technology is friendly” attitude toward it. The other is that we have become the robots; we are being told by software what to do, what to think and where to go.

John Markoff: You will be assimilated. … What gives me some hope … I keep ending up sounding optimistic; I don’t know if I’m actually as optimistic as I sound. On several levels … Let me talk about this startup in San Francisco called Momentum Machines, started by a young computer scientist whose name is Alex Vardakostas, and Alex is building a machine to make the perfect hamburger. When it first came out that he was doing this, there was a lot of reaction that not only was the person at the front of the shop at McDonald’s going to be automated, but the fry cook at the back would be automated too. That’s not what he wants to do. You guys don’t have Blue Bottle up here, but in San Francisco it’s a really high-end, perfectly crafted latte. He wants to do the same thing for hamburgers. He wants to create an automated factory where you’ll order your perfect hamburger and get it. It will be served by a human, a concierge. He realizes the concierge job is not a particularly good one.

He, as a young entrepreneurial capitalist, has this notion that he’s going to create a social bargain. You will only work as a concierge for him for two years. His contract with you, for coming to work for him, will be to allow you to basically go to school while you’re working for him. He’ll support your schooling so you’ll go on to something that will be more skilled. It’s a small example but I see it happening with Starbucks and Peet’s too. As we’re moving from an economy where a person like me might work for 30 years in the same job to an economy where you’ll work for six months and then you’ll do something else, we obviously have to change the educational system.

That’s what Sebastian Thrun, the guy who crashed that car, or was in that crashed car, went on to do at Udacity. He’s found an interesting little market niche with these things he’s calling nanodegrees. I think the model is constant retraining, and I think that society should support that. We should basically socialize education, because a skilled, flexible workforce is the feedstock of a capitalist economy. I start to see bits and pieces of this happening, but I can also see the dark vision that you paint, I have to confess.

Todd Bishop: John Markoff, if people in 50 years were to look back on your book, if somebody were to find it on the bookshelf of their grandpa at the summer house, how would you hope they would look at it from that perspective?

John Markoff: I would hope they would see it and think that it was not obviously wrong. That’s always my goal. I was really interested in pulling apart the threads of why people do what they do. I was looking at the decisions that the designers make and that’s where my hope lies, really. These are tools. My sense is that even if you put a couple billion transistors into a hammer it’s still a hammer and it’s a human designed tool. It has all of those qualities that we put into these things that we create. In a sense it’s a reason for great optimism.
