Often, when we first fall in love, the object of our affection seems perfect. But the happy honeymoon is cut short when we realize they are not so perfect after all. Turns out, they've got annoying habits. They wake up with bad breath. They burp. And oh my god, their farts smell just as bad as ours.


When you realize the love of your life is human.

In the same way, our honeymoon with artificial intelligence (AI) is quickly giving way to the realization that AI is not perfect. Turns out, AI is not neutral. It is not necessarily right or fair. The recommendations of AI systems can be just as sexist or racist as any human's.

When you realize the AI in your life is biased.

There are so many ways that AI can go wrong. There are so many guidelines from governments, companies, and non-governmental organizations (NGOs). There are so many new algorithms, datasets, and papers on ethical AI. It can all be a bit hard to take in, so this guide is here to help.

Send. Help. Please.

At the moment, the guide is targeted at AI practitioners, mainly researchers and engineers, and assumes some understanding of AI technologies. But it may also be useful for anyone who helps implement or recommend AI solutions.

The current version of the guide focuses on algorithmic bias. Future versions will cover other AI-related problems such as black-box models, privacy violations, ghost work, and misinformation.

Here are some questions this guide tries to answer at different stages of the AI system lifecycle.


Taking Up the Project
  • What are the different ways to define fairness?
  • When is AI not the answer?
Collecting Data
  • What are possible sources of bias in datasets?
  • What are some open-source datasets that are more diverse?
Training
  • What are possible sources of bias in the training process?
  • What's wrong with using pre-trained models and external datasets?
Deployment and Maintenance
  • What should our client and users know?
  • What are bias-related concerns when deploying an AI system?