WSU Professor Diane Cook is one of the university’s researchers working to create a test for measuring AI as part of a project funded by DARPA. (WSU Photo)

Artificial intelligence can do a lot of impressive things, like spot snow leopards in remote-camera images of Himalayan grasslands, maneuver self-driving cars through traffic, and defeat world-class opponents in the game Go.

But are these systems actually intelligent, as humans perceive the concept?

Researchers at Washington State University in Pullman are developing an IQ test to challenge AI systems to see what they really know.

“We have AI systems out there that are getting really good at a variety of tasks,” said WSU Regents Professor Diane Cook. But those feats tend to be narrow within each system. “Is it really intelligent because it’s just learned to do that one task?”

Cook and Larry Holder, both of whom are professors in WSU’s School of Electrical Engineering and Computer Science, recently received a $1 million grant that will run up to five years to tackle the question. The money comes from the U.S. military’s Defense Advanced Research Projects Agency, or DARPA.

The video game ViZDoom is one of the tools that Washington State University professors are using to test artificial intelligence. (WSU Image)

The funding began a month ago, and the researchers are starting with basic questions about the scope of intelligence, which could include recognizing images, understanding and generating natural language, reasoning, and using planning in problem solving. The scientists want to use rigorous measures, such as the ability to respond to novel experiences and transfer knowledge to different situations. They also want to test for bias in a machine’s knowledge; bias can lead to racial, gender and other forms of discrimination, depending on the algorithm’s application.

It’s a difficult task to define and measure intelligence. Just look at how hard it has been to come up with effective standardized tests to measure the full range of smarts for students or job applicants.

“If you’re trying to see if your machine has general intelligence, you have to define what you mean by general intelligence and make sure your test is really testing that,” said Melanie Mitchell, a Portland State University professor in the Department of Computer Science who is not part of the DARPA project.

One of the challenges in the field is the way in which machines learn. Mitchell gave an example of a student in her lab who was teaching a program to recognize photos that contain animals. It appeared to be learning the skill until the researchers realized that the algorithm was keying in on the background blurriness rather than the image of the creature. The animals were typically in focus against blurry backgrounds, whereas landscape-only scenes were crisp throughout.
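The failure mode Mitchell describes can be sketched with a toy example. This is purely illustrative, using synthetic numbers rather than real photos or the lab's actual code: a "classifier" that looks only at background sharpness can ace a biased dataset and still fail on a perfectly ordinary novel image.

```python
# Toy illustration (synthetic data, not the lab's code): a classifier that
# "solves" an animal-vs-landscape task using only background sharpness,
# the spurious cue described above.
import random

random.seed(0)

def make_example(has_animal):
    # Assumed bias: animal photos have blurry backgrounds (low sharpness),
    # landscape-only photos are crisp (high sharpness).
    sharpness = random.gauss(0.3 if has_animal else 0.8, 0.05)
    return sharpness, has_animal

train = [make_example(i % 2 == 0) for i in range(200)]

# "Training": use the mean sharpness as a decision threshold.
threshold = sum(s for s, _ in train) / len(train)

def predict(sharpness):
    return sharpness < threshold  # blurry background => "animal"

# The shortcut scores perfectly on data that shares the bias...
accuracy = sum(predict(s) == y for s, y in train) / len(train)
print(accuracy)

# ...but a crisp, in-focus animal photo (sharpness 0.8) fools it entirely.
print(predict(0.8))
```

The point is that high accuracy on the training distribution says nothing about what was actually learned, which is why a transfer-to-novelty test of the kind the WSU team proposes is revealing.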

“A lot of misunderstanding is that the machine learned to do a certain thing like playing Go or recognizing objects, so we assume it learned it in the same way we do,” Mitchell said. “We’re surprised when it didn’t learn in the way we do, and it can’t transfer its knowledge.”

The WSU project is part of DARPA’s Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program.

“For AI systems to effectively partner with humans across a spectrum of military applications, intelligent machines need to graduate from closed-world problem solving within confined boundaries to open-world challenges characterized by fluid and novel situations,” says the SAIL-ON website. The program’s goal is to create and test high-performing AI systems to meet the military’s needs.

WSU Professor Larry Holder is working on a project to test AI systems. (WSU Photo)

There are other organizations working to expand and understand AI abilities. In the Northwest that includes Seattle’s Allen Institute for Artificial Intelligence (AI2) and the AI group at the University of Washington’s Paul G. Allen School of Computer Science and Engineering. In September, AI2 announced that it had built an AI program called Aristo that is smart enough to pass an eighth-grade, multiple-choice science test.

WSU’s Holder has an Artificial Intelligence Quotient or “AIQ” website with some initial tests for AI developers to quiz their systems. The site is a publicly available tool that will also provide data to the researchers.

“We are focused on testing and improving systems that can be more general-purpose, like a robot assistant that can help you with many of your day-to-day tasks,” Holder said in a prepared release.

The WSU scientists aim to create a test that will grade AI technology according to the difficulty of the problems it can solve. Initial plans for tests include video games, answering multiple-choice questions and solving a Rubik’s cube.

“It’s an opportunity,” said Cook, “to get back to the grassroots and say what AI is.”

Editor's Note: Funding for GeekWire's Impact Series is provided by the Singh Family Foundation in support of public service journalism. GeekWire editors and reporters operate independently and maintain full editorial control over the content.
