FLI’s Asilomar AI Principles

(link)

This is one of the earlier publications on AI ethics after the advent of deep learning. Conceived by the Future of Life Institute (FLI), these 23 principles were developed at FLI's 2017 Beneficial AI conference. Of note, the principles have been translated into Chinese, German, Japanese and Russian, implying an international target audience. The principles should be understood in context: FLI was founded to mitigate existential risks, in particular existential risks posed by artificial intelligence. As such, these principles generally take a long-term, big-picture perspective.

The 23 principles are divided into three categories: research, ethics and values, and longer-term issues.

Research

  1. Research Goal - emphasis on beneficial, as opposed to undirected, intelligence
  2. Research Funding - this refers to funding research on multidisciplinary issues, including:
    • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
    • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
    • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
    • What set of values should AI be aligned with, and what legal and ethical status should it have?
  3. Science-Policy Link - exchange between AI researchers and policymakers
  4. Research Culture - exchange, trust and transparency between AI practitioners
  5. Race Avoidance - active cooperation to avoid compromising safety

Ethics and Values

  1. Safety
  2. Failure Transparency - traceability in case of failure or harm
  3. Judicial Transparency - explainability when used in judicial systems
  4. Responsibility - recognizing AI practitioners as stakeholders who are accountable for their systems
  5. Value Alignment
  6. Human Values
  7. Personal Privacy - “People should have the right to access, manage and control the data they generate”
  8. Liberty and Privacy - autonomy and freedom
  9. Shared Benefit
  10. Shared Prosperity
  11. Human Control - humans choose how and whether to delegate decisions to AI systems, in pursuit of human-chosen objectives
  12. Non-subversion
  13. AI Arms Race

Longer-term Issues

These refer primarily to existential and catastrophic risks posed by AI.

  1. Capability Caution
  2. Importance
  3. Risks
  4. Recursive Self-Improvement
  5. Common Good