Web development for AI fairness tool


Overview

Bibliographic Details
Main Author: Chen, Jiaying
Other Authors: Yu Han
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access: https://hdl.handle.net/10356/148100
Institution: Nanyang Technological University
Physical Description
Summary: As Artificial Intelligence (AI) progresses rapidly and AI systems increasingly impact our lives, we begin to realise that these systems are not as impartial as we thought them to be. Even though AI systems are machines that are supposed to make logical decisions, biases and unfairness can still creep into their algorithms, resulting in harmful outcomes. This pushes us to re-evaluate the design metrics for creating such systems and to focus more on integrating human values into the system to ensure fairness. However, even though awareness of the need for ethical AI systems is high, there are currently few systematic methodologies that help engineers understand the human value of fairness, creating a barrier to implementing it in their designs. This project focuses on developing a methodology application to bridge that research gap by helping product teams raise awareness of fairness concerns, navigate complex ethical choices around fairness, and overcome blind spots and potential team biases. It can also help them stimulate perspective-taking among multiple parties and stakeholders. The application aims to lower the bar for bringing fairness into the design discussion so that more design teams can make better and more informed decisions about fairness in their application scenarios.