Google CEO Sundar Pichai speaks at Google I/O 2017. (Screenshot)

When Google first promised that it wouldn’t be evil, the world was a simpler place. On Thursday, Google released guidelines for how it would use artificial intelligence technology in its own applications and for customers of its cloud products, disavowing the use of its technology in weapons designed primarily to injure human beings.

The guidelines come after an internal and external backlash to the use of artificial intelligence technology in a contract Google signed last year with the Department of Defense, known as Project Maven. Google continued to defend that contract Thursday, with Google Cloud CEO Diane Greene noting that the “contract involved drone video footage and low-res object identification using AI, saving lives was the overarching intent.” But the company confirmed that it will not pursue another contract under Project Maven, though it intends to honor the current deal: “I would like to be unequivocal that Google Cloud honors its contracts,” Greene wrote.

In the broader scope of its AI research and applications, CEO Sundar Pichai laid out seven principles that Google said it would follow when creating AI technology, promising among other things that it would be “socially beneficial” and would “be built and tested for safety.” “We recognize that these same technologies also raise important challenges that we need to address clearly, thoughtfully, and affirmatively,” he wrote.

Pichai listed four areas in which he said Google would “not design or deploy AI.” Here are the guidelines on that list, some of which have caveats large enough to drive an autonomous tank through:

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

In her post, Greene said Google would continue to work with the government in certain areas, such as cybersecurity. But if the Pentagon continues to insist that it wants a single cloud vendor to build it a next-generation cloud-computing system that would also cover troops in battle, Google would now appear to be out of the running for the JEDI contract, which could be worth as much as $10 billion over a decade.

Artificial intelligence has been the hottest area of cloud computing over the last year or so, and we’ll explore several AI-related topics at our GeekWire Cloud Tech Summit on June 27th, including a session led by Carlos Guestrin, Apple’s senior director of AI and machine learning, as well as five AI-specific tech talks on this evolving area. But the race among big cloud vendors to position themselves as having the most capable AI technology hasn’t exactly helped their prospects in certain quarters this year.

Google’s guidelines come two weeks after Amazon Web Services defended its own AI-powered image-recognition technology, sold to law enforcement agencies around the country, which also raised eyebrows among privacy advocates. Earlier this year, a Microsoft researcher said the company was turning down business from companies that wanted to use its AI in certain ways, but declined to say what guidelines governed those decisions.
