Three Laws of Robotics


This is perhaps the first formal articulation of laws for AIs and robots, explicitly introduced in Isaac Asimov’s 1942 short story “Runaround”. Some guidelines have referenced the Three Laws, often observing that they are inadequate in light of current trends in AI.

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In his later works, Asimov also added a Zeroth Law, which takes precedence over the other three and pertains to humanity as a whole rather than to individual humans.

Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
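The four Laws form a strict priority ordering: the Zeroth Law outranks the First, which outranks the Second, which outranks the Third. As a minimal illustrative sketch (not anything found in Asimov's text; all names and structures here are invented for illustration), that hierarchy could be modelled as follows:

```python
from dataclasses import dataclass

# Laws listed from highest to lowest priority; the Zeroth Law outranks the First.
LAWS = [
    "Zeroth: may not harm humanity, or by inaction allow humanity to come to harm",
    "First: may not injure a human, or by inaction allow a human to come to harm",
    "Second: must obey orders from humans, unless they conflict with a higher law",
    "Third: must protect its own existence, unless that conflicts with a higher law",
]

@dataclass
class Action:
    """A candidate action, flagged with the laws it would violate."""
    name: str
    violates: set  # indices into LAWS (0 = Zeroth, 1 = First, ...)

def permissible(action: Action) -> bool:
    """An action is fully permissible only if it violates no law at all."""
    return not action.violates

def prefer(a: Action, b: Action) -> Action:
    """Choose the action whose worst violation is lowest-priority.
    An action with no violations outranks any violating action."""
    rank = lambda act: min(act.violates) if act.violates else len(LAWS)
    return a if rank(a) >= rank(b) else b

# Example: an order to endanger a human pits the Second Law against the First.
obey = Action("obey order to endanger a human", violates={1})
refuse = Action("refuse the order", violates={2})
print(prefer(obey, refuse).name)  # prints "refuse the order"
```

Because the First Law (index 1) outranks the Second (index 2), the robot prefers the action that violates only the lower-priority law, and so refuses the order.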

In several ways, the Laws do align with current principles.

The First Law aligns with most principles in its rejection of developing AI for lethal autonomous weapons systems. In fact, the First Law’s use of the word “injure” suggests a rejection of even non-lethal weapons. Furthermore, the second half of the First Law, which forbids allowing harm through inaction, suggests that AI could legitimately be used in defensive applications.

As further explored in other short stories such as “Liar!”, the First Law can also be taken to cover psychological harm, which brings in notions of autonomy, bias, discrimination, privacy protection, data security and misinformation.

The Zeroth Law also considers the social good and well-being of humanity as a whole, which can be taken to include concepts of sustainability and environmental well-being.

On the other hand, some modern principles are missing, such as robustness, transparency, explainability and accountability. This could be partly attributed to the implied consciousness of robots and AIs in Asimov’s works, which would allow them to be held accountable for their actions and to be responsible for explaining their own decisions.