Fairness in design: a tool for guidance for ethical artificial intelligence design
Main Author:
Other Authors:
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/154153
Institution: Nanyang Technological University
Summary: As artificial intelligence (AI) becomes increasingly widely applied, societies have recognized the need for proper governance to ensure its responsible use. An important dimension of responsible AI is fairness. AI systems were once thought to be impartial and fair in their decisions, but studies have shown that biases and discrimination can creep into the data and models, affecting outcomes and even causing harm. Because the notion of fairness is multi-faceted, it is challenging for AI solution designers to envision potential fairness issues at the design stage. Furthermore, there are currently few methodologies available to help them incorporate fairness values into their designs.
In this thesis, we present the Fairness in Design (FID) methodology and tool, which aim to address this gap. The tool is available in both physical and online formats. It provides AI solution designers with a workflow that allows them to surface fairness concerns, navigate complex ethical choices around fairness, and overcome blind spots and team biases. We tested the methodology on 10 AI design teams (n = 24), and the results support our hypotheses: 67% of the participants would recommend our physical methodology tool to a friend or colleague, and 79% indicated that they are interested in using the tool in their future projects. This tool has the potential to add value to the ethical AI field and can be expanded to support other ethical AI dimensions such as privacy preservation and explainability.