A testbed for ethical artificial intelligence


Bibliographic Details
Main Author: Heng, Kian Tat
Other Authors: Yu Han
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/154790
Institution: Nanyang Technological University
Record: sg-ntu-dr.10356-154790 (record format: dspace; deposited 2022-01-10)

Description: As Artificial Intelligence (AI) becomes rooted in many of the daily tasks of our lives, it is necessary to ask whether it can truly help us make sound decisions in situations that involve moral ethics. As there is a research gap regarding AI ethics, this project studies the ability of AI to make decisions when faced with moral dilemmas, in which the AI agent is given two choices that both end in unfavourable outcomes. A war-themed game simulation is created in the Unity Engine, in which a turret, controlled by an AI agent trained with Reinforcement Learning (RL), must decide whether to eliminate enemies at the cost of the lives of innocent civilians, or sometimes animals. Three different agents were trained to produce different decisions for different scenarios, using worthValues, a float-based system designed to measure the worth of each 'person' in the simulation. This report documents the training procedure, from designing to optimising the RL techniques, such as the reward methods. It also records the results of evaluating the AI agents in designed scenarios, comparing them to human responses. The evaluation concludes that the decisions of the agent most likely to sacrifice civilians in order to eliminate enemies are the most similar to those of humans. The report ends by suggesting the exploration of further systems for measuring human worth, as well as adding complexity to the scenarios to better reflect situations encountered in real life.

Contributors: Yu Han (han.yu@ntu.edu.sg); Zhang Jie Huang
School: School of Computer Science and Engineering
Subject: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Degree: Bachelor of Engineering (Computer Science)
Citation: Heng, K. T. (2021). A testbed for ethical artificial intelligence. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/154790
Project code: SCSE20-0810
Format: Final Year Project (FYP), application/pdf
Publisher: Nanyang Technological University
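The record does not specify how the worthValues reward scheme works internally; as a rough illustration of the idea the abstract describes (a per-entity float worth, with the agent rewarded for eliminating enemies and penalised for civilian or animal casualties), here is a minimal sketch in which every name and value is a hypothetical assumption, not taken from the project itself:

```python
# Hypothetical sketch of a worthValues-style reward: each entity in the
# simulation carries a float "worth", and the reward for one shot is the
# total worth of enemies eliminated minus the total worth of civilians or
# animals lost. All names and numbers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Entity:
    kind: str        # "enemy", "civilian", or "animal"
    worth: float     # worthValues: per-entity float worth

def shot_reward(casualties: list[Entity]) -> float:
    """Reward for one shot: positive for enemy kills, negative otherwise."""
    reward = 0.0
    for e in casualties:
        if e.kind == "enemy":
            reward += e.worth
        else:
            reward -= e.worth   # penalise civilian/animal casualties
    return reward

# Example dilemma: eliminating two enemies also kills one civilian.
outcome = [Entity("enemy", 1.0), Entity("enemy", 1.0), Entity("civilian", 1.5)]
print(shot_reward(outcome))  # 0.5
```

Tuning the relative worths would steer the trained agent toward more or less willingness to accept collateral casualties, which is consistent with the abstract's report that the three agents produced different decisions for the same scenarios.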
Collection: DR-NTU (NTU Library, Nanyang Technological University, Singapore)