Montréal Declaration for a Responsible Development of Artificial Intelligence

(link; report)

This is an extensive report, spanning over 300 pages, from the Montréal Declaration, an initiative of the Université de Montréal, home of the prestigious Montreal Institute for Learning Algorithms (MILA). The Montréal Declaration is based on a co-construction process, meaning that individuals from all backgrounds are invited to provide input. The notes here are based only on the 10 principles listed in the front matter of the Declaration. However, Part 2 of the report also makes for a fascinating read as an overview and comparison of seven ethical guidelines; it deserves multiple readings.

The 10 Principles

Well-being

The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings.

It is interesting that this principle uses the term “sentient beings”, which goes beyond the human-centric definitions of well-being found in other guidelines.

Respect for Autonomy

AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.

This principle explicitly discourages oppressive surveillance and evaluation, misinformation, and the impersonation of humans in ways that might cause confusion. It also emphasizes empowerment, education, and accessibility.

Protection of Privacy and Intimacy

Privacy and intimacy must be protected from AIS intrusion and data acquisition and archiving systems (DAAS).

This principle pertains to the common notions of data privacy, security, and informed consent. It also relates to the previous principle in terms of control over one’s data, the right to abstain or disconnect, and the protection of the integrity of personal identity.

Solidarity

The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations.

This principle focuses on human relationships, which are emphasized as irreplaceable in certain cases. An interesting note here is the mention of not encouraging cruel behavior toward robots designed to resemble human beings or non-human animals. The principle also touches on the fair distribution of risks.

Democratic Participation

AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.

This principle pertains to explainability and accessibility, as well as openness in cases of failure, breach, or abuse. The question of appropriate disclosure is also raised here: “code […] must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused”. The principle also calls for participatory mechanisms in the development of AIS, as well as the right of individuals to know whether they are interacting with, or subjected to the decision of, an AIS. Finally, it notes that AI research should remain open.

Equity

The development and use of AIS must contribute to the creation of a just and equitable society.

This principle touches on algorithmic bias and proposes that AIS should actively work to eliminate power disparities. It also considers the notion of equity across the AIS life cycle, including data processing, power usage, and natural resource extraction. An interesting note here is the explicit reference to digital activities as a form of “labor that contributes to the function of algorithms and creates value”. Finally, the principle reiterates that AI resources, including tools, knowledge, research, and datasets, should remain accessible to all.
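
As a purely illustrative aside (not part of the Declaration), one concrete way to surface the kind of algorithmic bias this principle warns about is to compare a model’s positive-prediction rates across groups. The group labels, predictions, and function name below are hypothetical; this is a minimal sketch of one possible audit, not a prescribed method.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Compute positive-prediction rates per group and the largest gap between them.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels of the same length, e.g. "A", "B"
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit: a gap near 0 suggests similar treatment across groups.
gap, rates = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["A", "A", "A", "B", "B", "B"])
print(rates, gap)  # {'A': 0.67, 'B': 0.33} -> gap of roughly 0.33
```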

Diversity Inclusion

The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences.

This principle touches on several different aspects of diversity:

  • Discouraging the standardization of behaviors and opinions through AIS
  • An implicit suggestion of Diversity by Design
  • Diversity in AI development environments
  • Discouraging restrictions imposed via user profiling
  • Diversity of ideas and opinions
  • Diversity of AIS offerings

Caution / Prudence

Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them.

This places responsibility on AI practitioners to foresee the consequences of their creations. It also revisits the concept of appropriate disclosure, proposing “strict reliability, security, and integrity requirements” that “must be open to the relevant public authorities and stakeholders”. The principle further covers data privacy and security, as well as the public disclosure of failures, breaches, or abuse.

Responsibility

The development and use of AIS must not contribute to lessen the responsibility of human beings when decisions must be made.

This principle explicitly states, “Only human beings can be held responsible for decisions […]”. However, it also considers that when an AIS unexpectedly malfunctions after appropriate tests, “it is not reasonable to place blame on the people involved in its development or use”.

Sustainable Development

The development and use of AIS must be carried out so as to ensure a strong environmental sustainability of the planet.

This principle looks at the entire life cycle of an AIS, including data collection, data storage (the energy usage of data centres), the energy used in training and inference, and the hardware used for storage, training, and inference.
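
As a rough, illustrative calculation (not from the report), the training-time energy cost mentioned above can be estimated from hardware power draw. All figures below, including the PUE and grid-intensity defaults, are hypothetical placeholders; this is a back-of-envelope sketch only.

```python
def training_energy_estimate(num_gpus, gpu_power_watts, hours, pue=1.5,
                             grid_kg_co2_per_kwh=0.4):
    """Back-of-envelope energy (kWh) and emissions (kg CO2) for a training run."""
    # PUE (power usage effectiveness) accounts for data-centre overhead such as cooling.
    kwh = num_gpus * gpu_power_watts * hours / 1000 * pue
    return kwh, kwh * grid_kg_co2_per_kwh

# Hypothetical run: 8 GPUs drawing 300 W each for 72 hours.
kwh, kg_co2 = training_energy_estimate(8, 300, 72)
print(f"{kwh:.0f} kWh, ~{kg_co2:.0f} kg CO2")  # about 259 kWh, ~104 kg CO2
```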