Research Proposal
An outline of Project Asimov, including a brief review of existing literature, methodology, potential challenges and timeline.
Contents
- Research Background
- Research Angle
- Research Design
- Potential Challenges
- Timeline
- TL;DR
- Appendix - List of possible case studies
- References
Research Background
Several artificial intelligence (AI) ethics guidelines and frameworks have been published recently, amidst concerns over misuse of the technology and neglect of potentially adverse consequences. Examples include the European Commission’s Ethics Guidelines for Trustworthy AI (European Commission, 2019), the Organisation for Economic Co-operation and Development’s (OECD) Recommendation of the Council on Artificial Intelligence (OECD, 2019), USA’s Preparing for the Future of Artificial Intelligence (NSTC, 2016), China’s Beijing AI Principles (人工智能北京共识) (BAAI, 2019) and Singapore’s AI Governance Framework (PDPC, 2019).
These documents establish the foundations for future legislative action. However, they are often dry and frequently lack illustrative examples. Policymakers may be accustomed to such formal texts, but they are unrelatable for many AI practitioners and general consumers. Given the pervasive nature of AI technologies, however, it is critical for both AI practitioners and general users to be aware of the related ethical problems. As such, there is a need for a layperson's guide to AI and its ethical pitfalls, one that translates the formal guidelines into simple, relatable terms.
Research Angle
Technology and social science are often perceived as mutually exclusive fields. This poses a challenge in the case of AI ethics, where the two are deeply entwined. The technical nature of AI discourages social scientists from engaging further, while AI researchers and engineers are often more comfortable with formulas and code than with the social ramifications of their work. This research project aims to bridge that gap by crafting a relatable guide to AI ethics, targeted at AI practitioners in the Singapore context.
This project will draw on my previous experience as an AI researcher and build on two main categories of literature.
- Popular non-fiction about AI ethics This includes books such as Cathy O'Neil's Weapons of Math Destruction (2016) and Virginia Eubanks's Automating Inequality (2018), as well as popular articles such as ProPublica's Machine Bias (Angwin et al., 2016). The popularity of these works is important, as it attests to the relatability of their writing.
- Formal guidelines for ethical AI This includes the guidelines published by international organizations and governments (see Research Background), as well as by commercial and non-governmental entities such as Google, Microsoft, the Future of Life Institute and the AI Now Institute. Reference to these official publications helps ground the project and ensures alignment and compliance with existing values, guidelines and laws.
Refer to the Appendix for a more complete working bibliography.
This guide aims to differ from existing works (popular non-fiction and formal guidelines) in the following ways:
- Simple man-on-the-street language for both technical and ethical concepts
- Extensive use of real-world case studies
- Interactive examples to illustrate key concepts (see for example Bret Victor’s Explorable Explanations (2011))
- Documentation of useful tools, datasets and other resources
I intend to begin with the AI ethics subtopic of algorithmic bias and, depending on the time available, extend the guide to other ethical concerns in AI systems, such as privacy, explainability, accountability and media fabrication. If time permits, a second guide will target general consumers.
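To make "algorithmic bias" concrete for a practitioner audience, one common operationalization is a gap in error rates across demographic subgroups, in the spirit of audits such as Gender Shades (Buolamwini, 2018). The sketch below is purely illustrative: the function names and the toy classifier outputs are my own hypothetical constructions, not taken from any of the cited works.

```python
# Illustrative sketch: quantifying algorithmic bias as the gap in
# error rates between demographic subgroups. All data is hypothetical.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

def subgroup_error_gap(records):
    """records: list of (group, prediction, label) tuples.

    Returns per-group error rates and the largest gap between any
    two groups, which serves as a simple bias measure."""
    groups = {}
    for group, pred, label in records:
        groups.setdefault(group, []).append((pred, label))
    rates = {g: error_rate([p for p, _ in pairs], [y for _, y in pairs])
             for g, pairs in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical classifier outputs: (demographic group, predicted, actual)
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 0, 1), ("B", 1, 1),
]
rates, gap = subgroup_error_gap(records)
# Group A is classified perfectly while group B is wrong half the time,
# yielding a large error-rate gap between the two subgroups.
```

An interactive version of exactly this kind of calculation, where readers adjust the toy data and watch the gap change, is the sort of explorable example the guide aims to include.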
Research Design - Case Studies
The research methodology of this project will be primarily based on the multiple inductive case study method (Eisenhardt and Graebner, 2007). I will analyze case studies of:
- AI technologies that have demonstrated algorithmic bias and their attempted solutions,
- Formal guidelines (international, national, commercial, NGO) related to AI ethics.
The specific list of case studies remains a work in progress, but some main themes are outlined below.
- Use of facial recognition technology in law enforcement
- Gender Shades (Buolamwini, 2018), notflawless.ai (Algorithmic Justice League, 2018), America Under Watch (Garvie and Moy, 2019), Garbage In Garbage Out (Garvie, 2019), Google’s Inclusive Images (Doshi, 2018)
- Singapore’s AI Governance Framework (PDPC, 2019) compared to other nations’ guidelines
- China’s Beijing AI Principles (人工智能北京共识) (BAAI, 2019), France’s Villani Report (Villani et al., 2018), USA’s Preparing for the Future of Artificial Intelligence (NSTC, 2016)
- Singapore’s AI Governance Framework (PDPC, 2019) compared to other sectors
- Commercial - Google’s Principles on AI (Pichai, 2018), Tencent’s ARCC (Si, 2018)
- Non-governmental - FLI’s Asilomar AI Principles (FLI, 2017), AI Now’s 2018 report (Whittaker et al., 2018)
- International - OECD’s Recommendation of the Council on Artificial Intelligence (OECD, 2019), European Commission’s Ethics Guidelines for Trustworthy AI (European Commission, 2019)
These case studies are meant to help craft the guide by:
- Collecting illustrative examples of algorithmic bias
- Discussing lessons learned from problems and attempted solutions
- Collecting best practices and tools, datasets and resources
- Comparing Singapore’s frameworks to that of other nations and organizations to better shape the guide for the Singapore context
Potential Challenges
Staying Up-to-date
AI ethics is very much the topic of the moment, with a constant stream of new articles about recent guidelines, new ethics boards and committees, misuse of AI systems, unexpected problems and novel research. This poses a challenge for any researcher interested in characterizing the phenomenon, given the daily flood of news and the rapidly changing landscape.
Mitigation - I will make use of the collection of AI-related newsletters that I have curated in the course of my work as an AI researcher, which highlight pertinent AI-related stories across different fields. I will also focus on well-established cases (such as facial recognition) to avoid being overwhelmed.
Subjectivity of Ethical Concepts
The concept of ethics is notoriously contested, with positions ranging from the nihilistic to the deeply conservative. Furthermore, ethics in the context of AI may shift with new developments in AI research and applications.
Mitigation - I will be relying primarily on the formal definitions provided in reports by official bodies such as the UN, the European Commission, the OECD or national governments. I will attempt to integrate the multiple definitions from these sources and identify common ground to build from.
Timeline
Weeks 1 to 2 - Groundwork
- 29 May - Research project description
- Literature review
- AI ethics
- Technology adoption
- Technology design
- Explainers, playbooks and primers
- Scoping out deliverables
- Outline contents for guides
- How do we want the target audience to use the materials?
- What specific lessons do we want them to take away?
Weeks 3 to 4
- 10 June - IRP proposal (3-4 pages, double-spaced)
- 12 June - Class presentation (10 min max)
- 14 June - Comparisons of frameworks for definitions, key guidelines and recommendations
- Draft outline of guide including
- Key definitions
- Case studies to use
- Demos to build
- Tools, datasets and resources
- Refine timeline after more detailed outline is complete
Weeks 5 to 8
- 5 July - Literature review (for peer review)
- Complete key definitions (with input from framework review)
- Writing up case studies and building of demos on a per subtopic basis
Weeks 9 to 12
- 12 July - Summary of findings (for peer review)
- 19 July - Summary of contributions (for peer review)
- 26 July - Report introduction (for peer review)
- 2/9 August - Report conclusion (for peer review)
- Complete case studies and demos
- Review of present tools, datasets and resources
- Iterate draft guide with AI practitioners
Weeks 13 to 14
- 14 August - Class presentation
- 18 August - Final submission
TL;DR
In short, AI technologies can cause complex and dangerous ethical problems that are not immediately obvious. This project is an attempt to demystify these problems for AI practitioners, beginning with algorithmic bias.
Appendix - Working Bibliography
Italicized titles indicate literature that I have finished reading.
Popular Non-fiction
- Weapons of Math Destruction (O’Neil, 2016)
- Artificial Unintelligence (Broussard, 2018)
- Automating Inequality (Eubanks, 2018)
- Algorithms of Oppression (Noble, 2018)
- ProPublica’s Machine Bias (Angwin et al., 2016)
Case Studies of Frameworks
International
- European Commission’s Ethics Guidelines for Trustworthy AI (European Commission, 2019)
- OECD’s Recommendation of the Council on Artificial Intelligence (OECD, 2019)
- WEF’s AI Governance: A Holistic Approach to Implement Ethics into AI (WEF, 2019)
- UNU’s The New Geopolitics of Converging Risks (Pauwels, 2019)
National
- Singapore’s AI Governance Framework (PDPC, 2019)
- France’s Villani Report (Villani et al., 2018)
- China’s Beijing AI Principles (人工智能北京共识) (BAAI, 2019)
- Canada’s Algorithmic Impact Assessment questionnaire (Government of Canada, 2019)
- USA’s Preparing for the Future of Artificial Intelligence (NSTC, 2016)
- UK’s Ready, Willing and Able? - Select Committee on Artificial Intelligence Report 2017-19 (House of Lords, 2018)
Commercial
- Google’s Principles on AI (Pichai, 2018)
- Tencent’s ARCC (Si, 2018)
- Microsoft’s The Future Computed (Microsoft, 2018)
Non-governmental Non-commercial
- Montréal Declaration for a Responsible Development of Artificial Intelligence (Montréal Declaration, 2018)
- FLI’s Asilomar AI Principles (FLI, 2017)
- ERLC’s Artificial Intelligence: An Evangelical Statement of Principles (ERLC, 2019)
- Berkman Klein Center’s Artificial Intelligence & Human Rights (Raso et al., 2018)
- Access Now’s Human Rights in the Age of Artificial Intelligence (Access Now, 2018)
- AI Now’s 2018 report (Whittaker et al., 2018)
References
Access Now. (2018). Human Rights in the Age of Artificial Intelligence. Retrieved from https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf
Algorithmic Justice League. (2018). notflawless.ai. Retrieved from https://www.notflawless.ai/
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Beijing Academy of Artificial Intelligence (BAAI). (2019). Beijing AI Principles (人工智能北京共识). Retrieved from https://www.baai.ac.cn/blog/beijing-ai-principles
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press.
Buolamwini, J. (2018). Gender Shades. Retrieved from http://gendershades.org/
Doshi, T. (2018). Introducing the Inclusive Images Competition. Google AI Blog. Retrieved from https://ai.googleblog.com/2018/09/introducing-inclusive-images-competition.html
Eisenhardt, K. M., & Graebner, M. E. (2007). Theory building from cases: Opportunities and challenges. Academy of management journal, 50(1), 25-32.
Ethics & Religious Liberty Commission (ERLC). (2019). Artificial Intelligence: An Evangelical Statement of Principles. Retrieved from https://erlc.com/resource-library/statements/artificial-intelligence-an-evangelical-statement-of-principles
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
European Commission. (2019). Ethics Guidelines for Trustworthy AI. Retrieved from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
Future of Life Institute (FLI). (2017). Asilomar AI Principles. Retrieved from https://futureoflife.org/ai-principles/
Garvie, C. & Moy, L. M. (2019). America Under Watch. Georgetown Law Center on Privacy & Technology. Retrieved from https://www.americaunderwatch.com/
Garvie, C. (2019). Garbage In, Garbage Out. Georgetown Law Center on Privacy & Technology. Retrieved from https://www.flawedfacedata.com/
Government of Canada. (2019). Algorithmic Impact Assessment. Retrieved from https://canada-ca.github.io/aia-eia-js/
House of Lords. (2018). AI in the UK: ready, willing and able?. The Authority of the House of Lords. Retrieved from https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
Microsoft. (2018). The Future Computed: Artificial Intelligence and its role in society. Retrieved from https://news.microsoft.com/uploads/2018/01/The-Future-Computed.pdf
Montréal Declaration. (2018). Montréal Declaration for a Responsible Development of Artificial Intelligence. Retrieved from https://www.montrealdeclaration-responsibleai.com/
National Science and Technology Council (NSTC). (2016). Preparing for the Future of Artificial Intelligence. Retrieved from https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
Organisation for Economic Co-operation and Development (OECD). (2019). Recommendation of the Council on Artificial Intelligence. Retrieved from https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
Pauwels, E. (2019). The New Geopolitics of Converging Risks: The UN and Prevention in the Era of AI. United Nations University Centre for Policy Research. Retrieved from https://i.unu.edu/media/cpr.unu.edu/attachment/3472/PauwelsAIGeopolitics.pdf
Personal Data Protection Commission (PDPC). (2019). A Proposed Model Artificial Intelligence Governance Framework. Retrieved from https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/A-Proposed-Model-AI-Governance-Framework-January-2019.pdf
Pichai, S. (2018). AI at Google: our principles. Google Blog. Retrieved from https://www.blog.google/technology/ai/ai-principles/
Raso, F., Hilligoss, H., Krishnamurthy, V., Bavitz, C. & Kim, L. (2018). Artificial Intelligence & Human Rights: Opportunities and Risks. Berkman Klein Center. Retrieved from https://cyber.harvard.edu/publication/2018/artificial-intelligence-human-rights
Si, J. (2018). Towards an Ethical Framework for Artificial Intelligence. Retrieved from https://mp.weixin.qq.com/s/_CbBsrjrTbRkKjUNdmhuqQ
Victor, B. (2011). Explorable Explanations. Retrieved from http://worrydream.com/ExplorableExplanations/
Villani, C., Schoenauer, M., Bonnet, Y., Berthet, C., Cornut, A. C., Levin, F., Rondepierre, B., & Biabiany-Rosier, S. (2018). For a Meaningful Artificial Intelligence. The Villani Mission. Retrieved from https://www.aiforhumanity.fr/en/
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S. M., Richardson, R., Schultz, J. & Schwartz, O. (2018). AI Now Report 2018. AI Now Institute. Retrieved from https://ainowinstitute.org/AI_Now_2018_Report.pdf
World Economic Forum (WEF). (2019). AI Governance: A Holistic Approach to Implement Ethics into AI. Retrieved from https://weforum.my.salesforce.com/sfc/p/#b0000000GycE/a/0X000000cPl1/i.8ZWL2HIR_kAnvckyqVA.nVVgrWIS4LCM1ueGy.gBc
This is a final research project for the MSc. Urban Science, Policy and Planning (MUSPP) degree at the Singapore University of Technology and Design (SUTD).