An ecosystem approach to ethical AI and data use: Experimental reflections

Bibliographic Details
Main Authors: FINDLAY, Mark, SEAH, Josephine
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2020
Subjects:
Online Access:https://ink.library.smu.edu.sg/sol_research/3262
https://ink.library.smu.edu.sg/context/sol_research/article/5220/viewcontent/Ecosystem_Approach_to_Ethical_AI_and_Data_Use_av.pdf
Institution: Singapore Management University
Description
Summary: While we have witnessed a rapid growth of ethics documents meant to guide artificial intelligence (AI) development, the promotion of AI ethics has nonetheless proceeded with little input from AI practitioners themselves. Given the proliferation of AI for Social Good initiatives, this is an emerging gap that needs to be addressed in order to develop more meaningful ethical approaches to AI use and development. This paper offers a methodology, a 'shared fairness' approach, aimed at identifying AI practitioners' needs when it comes to confronting and resolving ethical challenges, and at finding a third space where their operational language can be married with that of the more abstract principles that presently remain at the periphery of their work experiences. We offer a grassroots approach to operational ethics based on dialogue and mutualised responsibility: this methodology centres on conversations intended to elicit practitioners' perceived ethical attribution and distribution over key value-laden operational decisions, to identify when these decisions arise and what ethical challenges they confront, and to engage in a language of ethics and responsibility which enables practitioners to internalise ethical responsibility. The methodology bridges responsibility imbalances that rest in structural decision-making power and elite technical knowledge by commencing with personal, facilitated conversations, returning the ethical discourse to those meant to give it meaning at the sharp end of the ecosystem. Our primary contribution is to add to the recent literature seeking to bring AI practitioners' experiences to the fore by offering a methodology for understanding how ethics manifests as a relational and interdependent sociotechnical practice in their work.