The Paul G. Allen Family Foundation today awarded $5.7 million to seven researchers in the artificial intelligence field as part of the latest round of Allen Distinguished Investigator (ADI) Program grants.

The researchers, who are working on machine reading, diagram interpretation and reasoning, and spatial and temporal reasoning, hail from four universities around the globe — four of them work at the University of Washington.

“The Allen Distinguished Investigator program has become a platform for scientists and researchers to push the boundaries on the conventional and test the limits of how we think about our existence and the world as we know it,” Dune Ives, co-manager of The Paul G. Allen Family Foundation, said in a statement. “We are only beginning to grasp how deep intelligence works. We hope these grants serve as a valuable catalyst for one day making artificial intelligence a reality.”

[Related: The next battleground for Amazon, Microsoft, Facebook and Google: Artificial Intelligence]

The ADI program started in 2010, and this marks its first commitment to researchers in the artificial intelligence field. The focus on AI topics for 2014 is related to the vision of the new Allen Institute for AI, a multimillion-dollar effort created by Allen and led by CEO Oren Etzioni that could have huge implications for the region’s tech industry and, more importantly, society as a whole. Etzioni, a former UW computer science professor and veteran entrepreneur, began work at the institute in September 2013.

However, the ADI program is distinct from the new Allen Institute for AI and is fully funded and operated by Allen’s foundation.

Here are the recipients, with descriptions from the foundation:

Devi Parikh, Virginia Tech

The vast majority of human interaction with the world is guided by common sense. We use common sense to understand objects in our visual world – such as birds flying and balls moving after being kicked. How do we impart this common sense to machines? Machines today cannot learn common sense directly from the visual world because they cannot accurately perform detailed visual recognition in images and video. In this project, Parikh proposes to simplify the visual world for machines by leveraging abstract scenes to teach machines common sense.

Maneesh Agrawala, University of California, Berkeley, and Jeffrey Heer, University of Washington

For hundreds of years, humans have communicated through visualizations. While the world has changed, we continue to communicate complex ideas and tell stories through visuals. Today, charts and graphs are ubiquitous forms of graphics, appearing in scientific papers, textbooks, reports, news articles and webpages. While people can easily interpret data from charts and graphs, machines do not have the same ability. Agrawala and Heer will develop computational models for interpreting these visualizations and diagrams. Once machines are better able to “read” these diagrams, they can extract useful data and relationships to drive improved information applications.

Sebastian Riedel, University College London

Machines have two ways to store knowledge and reason with it. The first is logic, which uses symbols and rules; the second is vectors, which are sequences of real numbers. Both have benefits and limitations: logic is very expressive and a good tool for proving statements, while vectors are highly scalable. Riedel will investigate an approach in which machines convert symbolic knowledge, read from text and other sources, into vector form, and then approximate the behavior of logic through algebraic operations. Ultimately, this approach could enable machines to pass high-school science exams or perform automatic fact checking.
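To make that idea concrete, here is a minimal Python sketch of scoring a symbolic fact with vector algebra. It illustrates the general approach rather than Riedel's system; the relation names, entities, and embedding dimension are all invented for demonstration.

```python
# Minimal sketch (not Riedel's system): symbolic facts are stored as vectors,
# and "inference" becomes algebra over those vectors. All values are made up.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Each relation and each entity pair gets an embedding vector (learned in practice).
relations = {"professor_at": rng.normal(size=dim),
             "works_for":    rng.normal(size=dim)}
entity_pairs = {("alice", "acme_university"): rng.normal(size=dim)}

def truth_score(relation, pair):
    """Approximate the truth of relation(pair) as a dot product squashed to (0, 1)."""
    z = relations[relation] @ entity_pairs[pair]
    return 1.0 / (1.0 + np.exp(-z))

# Instead of applying the symbolic rule
#     professor_at(x, y)  =>  works_for(x, y)
# training would nudge the vectors so that whenever the premise scores high,
# the conclusion does too -- an algebraic stand-in for logical inference.
print(truth_score("professor_at", ("alice", "acme_university")))
print(truth_score("works_for", ("alice", "acme_university")))
```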

Ali Farhadi and Hannaneh Hajishirzi, University of Washington

Farhadi and Hajishirzi’s project seeks to teach computers to interpret diagrams the same way children are taught in school. Diagram understanding is an essential skill for children since textbooks and exam questions use diagrams to convey important information that is otherwise difficult to convey in text. Children gradually learn to interpret diagrams and extend their knowledge and reasoning skills as they proceed to higher grades. For computers, diagram interpretation is an essential element in automatically understanding textbooks and answering science questions. The cornerstone of this project is its Spoon Feed Learning framework (SPEL), which marries principles of child education and machine learning. SPEL gradually learns diagrammatic and relevant real-world knowledge from textbooks (starting from pre-school) and uses what it’s learned at each grade to learn and collect new knowledge in the next, more complex grade. SPEL takes advantage of coupling automatic visual identification, textual alignment, and reasoning across different levels of complexity.
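As a rough sketch of that grade-by-grade progression, the Python loop below shows a generic curriculum-style learning setup, not the SPEL system itself; the Model class and load_grade_examples helper are hypothetical placeholders.

```python
# Hypothetical sketch of curriculum-style learning: train on the easiest
# material first and carry the learned state forward into each harder grade.

class Model:
    def __init__(self):
        self.knowledge = {}          # facts and diagram patterns learned so far

    def train(self, examples):
        for concept, facts in examples:
            # New material accumulates on top of what is already known.
            self.knowledge.setdefault(concept, []).extend(facts)

def load_grade_examples(grade):
    # Placeholder: in practice this would return aligned diagram/text data
    # extracted from grade-level textbooks.
    return [(f"grade-{grade}-concept", [f"fact from grade {grade}"])]

model = Model()
for grade in ["pre-school", "1", "2", "3"]:   # ordered from easiest to hardest
    model.train(load_grade_examples(grade))   # knowledge carries over to the next grade
```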

Luke Zettlemoyer, University of Washington

The vast majority of knowledge and information we as humans have accumulated is in text form. Computers currently are not able to figure out how to translate that data into action. Zettlemoyer is building a new class of semantic parsing algorithms for the extraction of scientific knowledge in STEM domains, such as biology and chemistry. This knowledge will support the design of next-generation, automated question-answering (QA) systems. While existing QA systems, including IBM’s Watson system for Jeopardy, have been very successful, they are typically limited to factual question answering. In contrast, Zettlemoyer’s work aims to, in the long term, enable a machine to automatically read any textbook, extract all of the knowledge it contains, and then use this information to pass a college-level exam on the subject matter.
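For a sense of what semantic parsing means in practice, the toy example below maps a question onto a small logical form and executes it against a hand-built knowledge base. It is only an illustration of the general technique; the regular-expression pattern, predicates, and facts are invented and are not Zettlemoyer's learned parsers.

```python
import re

# Toy knowledge base of extracted facts: predicate -> {subject: value}
KB = {"boiling_point": {"water": "100 C"},
      "atomic_number": {"oxygen": 8}}

def parse(question):
    """Map a question to a (predicate, argument) logical form, if a pattern matches."""
    m = re.match(r"what is the (boiling point|atomic number) of (\w+)\??", question.lower())
    if not m:
        return None
    predicate = m.group(1).replace(" ", "_")
    return predicate, m.group(2)

def answer(question):
    logical_form = parse(question)
    if logical_form is None:
        return "unknown"
    predicate, arg = logical_form
    return KB.get(predicate, {}).get(arg, "unknown")

print(answer("What is the boiling point of water?"))   # -> 100 C
print(answer("What is the atomic number of oxygen?"))  # -> 8
```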
