Which object would fit through the doorway? The elephant vs. basketball choice is an example of the common-sense questions that pose a challenge for artificial intelligence programs. (AI2 Illustration)

Microsoft co-founder Paul Allen’s new $125 million initiative to give artificial intelligence programs more common sense has another goal that’s closer to home: making AI safer for humans.

That’s the way Oren Etzioni, the CEO of the Seattle-based Allen Institute for Artificial Intelligence, explained it in an exclusive interview with GeekWire about Project Alexandria.

Read more: Paul Allen to invest $125M for new ‘common-sense AI’ project

Alexandria is a long-term research effort to set benchmarks for common-sense AI, develop crowdsourcing methods to learn how humans use common sense, build a repository of common-sense knowledge, and use that knowledge to build better AI tools for tasks ranging from machine translation to computer vision.

The project is named after the ancient Egyptian library of Alexandria, which was the premier repository of human knowledge in its day. That library went up in flames — a fate that Etzioni’s institute, known as AI2, hopes won’t befall its AI Alexandria.

Most AI projects have focused on narrow slices of knowledge — such as how to master chess, poker or the game of Go, or how to keep a car on the road. Project Alexandria, in contrast, will focus on more general questions: “Could an elephant fit through this doorway?” … “What would you typically find in a trash can?” … “Will this action be harmful to humans?”
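To give a rough sense of what a common-sense benchmark question could look like in practice, here is a minimal sketch in Python. The item format, field names and scoring function are hypothetical illustrations for this article, not AI2's actual Alexandria benchmark, which at the time of the interview was still being defined.

```python
# Hypothetical sketch of a multiple-choice common-sense benchmark item and a
# simple accuracy metric. The schema and names are assumptions, not AI2's
# published Alexandria format.
from dataclasses import dataclass

@dataclass
class CommonSenseItem:
    question: str       # natural-language question
    choices: list[str]  # candidate answers
    answer: int         # index of the correct choice

# Items in the spirit of the examples quoted in the article.
ITEMS = [
    CommonSenseItem(
        question="Which object would fit through a standard doorway?",
        choices=["an elephant", "a basketball"],
        answer=1,
    ),
    CommonSenseItem(
        question="What would you typically find in a trash can?",
        choices=["discarded food wrappers", "a live goldfish"],
        answer=0,
    ),
]

def accuracy(predict, items):
    """Score a predict(question, choices) -> index function against the items."""
    correct = sum(1 for it in items if predict(it.question, it.choices) == it.answer)
    return correct / len(items)

if __name__ == "__main__":
    # Trivial baseline: always pick the first choice.
    print(f"baseline accuracy: {accuracy(lambda q, c: 0, ITEMS):.2f}")
```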

Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence.

The idea of making AI systems smarter often raises the specter of robots taking over the world, as famously shown in the “Terminator” movies. But Etzioni said developing common-sense AI is a requirement for avoiding that kind of Hollywood nightmare.

“How can you expect an AI system not to harm, like Asimov’s First Law of ‘Do No Harm,’ if it doesn’t know what harm is?” he said.

Etzioni said the common-sense benchmarks established through Project Alexandria will be open-source, but he also expects the project to spawn ideas for profitable AI startups and collaborations.

One of the early opportunities is likely to open up at the Pentagon’s Defense Advanced Research Projects Agency, or DARPA, which has proposed spending $6.2 million during the next fiscal year on a program it calls Machine Common Sense.

“We want to be a part of that,” Etzioni said. And that only makes sense.

Here are a couple of edited excerpts from today’s Q&A. For the full interview, check out our SoundCloud audio clip.

GeekWire: Tell us what’s up with Project Alexandria. Why is this a new initiative for AI2? 

Oren Etzioni: “It’s really our broadest and most ambitious undertaking. As you know, we have a variety of projects like Semantic Scholar, which is scientific search; and our incubator, which is about AI startups. You know the drill. One of the things that we’ve learned from a lot of these projects is that many of the systems are brittle, due to a lack of common sense.

“One question you may ask is, ‘What exactly do you mean by common-sense knowledge?’ I think the best definition that we’ve been able to come up with is that common-sense knowledge is what virtually every person has and virtually every machine does not. What we have realized is that it’s actually essential for building the next level of AI systems.”

Q:  You’ve spoken in the past about artificial general intelligence, or AGI, and the idea that your average third-grader could do things that we couldn’t dream of having an AI system do. Is this what you’re talking about? And if so, is this something we should be concerned about – because in the past, you’ve said that we don’t need to worry about Terminator coming to get us because we don’t know how to do AGI very well.

A: “First of all, yes, this is a step in that direction. It’s far from the whole story about AGI. Where we are today is that we have these very narrow intelligences. I like to call them AI savants. They can do something like Go extremely well, but each one has to be rebuilt and retrained from scratch.

“To go to systems that are less brittle and more robust, but also just broader, we do need this background knowledge, this common-sense knowledge. This does not mean that we’ll be 50 or 80 percent of the way to AGI.

“This is a very ambitious long-term research project. In fact, what we’re starting with is just building a benchmark so we can assess progress on this front empirically. I do think it’s an exciting project, and I think it shows Paul Allen’s vision, engaging in this, where a lot of the community is still thinking about the next iteration for supervised deep-learning methods. But it doesn’t really change the picture as far as AGI.

“If we think about this topic of AI safety – how do we make sure that these systems that I do think we’ll eventually build don’t harm us as people – one of the concerns is, do they even know what ‘harm’ is? How can you expect an AI system not to harm, like Asimov’s First Law of ‘Do No Harm,’ if it doesn’t know what harm is?

“Well, the notion of harm is very complex and ambiguous, and requires a lot of common sense to figure out. It’s not an ‘on-off’ thing, but if we’re able to build more and more common-sense capabilities into our AI systems, that’s a step toward making them safer, because they can make sense of what’s harmful and what’s not.”

