Google’s principles on AI

(link)

This is a post from Google’s official blog, authored by Google CEO Sundar Pichai on 7 June 2018, a month after the release of Google Duplex and several months before Google’s ethics board disaster.

The post begins with a non-exhaustive list of how Google uses AI:

Commercial

  • Gmail’s spam filter and autocomplete
  • Google Assistant’s voice-to-text and virtual assistant
  • Google Photos’ tagging/classification and highlights

Non-commercial

The first two items below refer to how others use Google’s TensorFlow library, rather than to Google’s direct efforts.

  • Prediction of wildfire risk
  • Livestock monitoring on farms
  • Google’s work on cancer and ocular diagnosis

Pichai then follows with a set of objectives, taboo pursuits, and a long-term vision.

Objectives

  1. Be socially beneficial. Here, Pichai talks about “transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment”. He also refers back to Google’s core function as a search engine and to how often Google has open-sourced its technologies.
  2. Avoid creating or reinforcing unfair bias. “We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”
  3. Be built and tested for safety.
  4. Be accountable to people. This includes notions of explainability and human-in-the-loop, with AI as assistants rather than decision-makers, “subject to appropriate human direction and control”.
  5. Incorporate privacy design principles. This refers to privacy safeguards, transparency and control over data.
  6. Uphold high standards of scientific excellence. Here Pichai uses the phrase “responsibly share”, which is interesting in light of the debate over OpenAI’s GPT-2 release.
  7. Be made available for uses that accord with these principles. This refers to the development and release of technology, taking into consideration the following factors: primary use, adaptability for harm, uniqueness, scale, and Google’s involvement.

AI applications we will not pursue, i.e. the taboo list

  1. Technologies likely to cause harm, except where the benefits substantially outweigh the risks
  2. Weapons
  3. Surveillance violating international norms
  4. Technologies violating international law and human rights

Pichai is careful to add: “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas.”

This comes on the heels of Google’s decision not to renew its Project Maven contract.

Long-term

This final section is more of a conclusion, but it is worth noting that Pichai explicitly mentions “room for many voices in this conversation” and “multidisciplinary approaches”.