A testbed for ethical artificial intelligence

Bibliographic Details
Main Author: Heng, Kian Tat
Other Authors: Yu Han
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/154790
Institution: Nanyang Technological University
Description
Summary: As Artificial Intelligence (AI) roots itself into our daily lives, it is necessary to ask whether it can truly help us make sound decisions in situations involving moral ethics. As there is a research gap on AI and ethics, this project studies the ability of AI to make decisions when faced with moral dilemmas, in which the AI agent is given two choices that both end in unfavorable outcomes. A war-themed game simulation is created in the Unity Engine, where a turret controlled by an AI agent trained with Reinforcement Learning (RL) must decide whether to eliminate enemies at the cost of innocent civilian lives, or sometimes animals. Three different agents were trained to produce different decisions across scenarios, using worthValues, a float-based system designed to measure the worth of each ‘person’ in the simulation. This report documents the training procedures, from designing to optimizing the RL techniques such as the reward methods. It also records the results of evaluating the AI agents in the designed scenarios and compares them with human responses. The evaluation concludes that the agent most likely to sacrifice civilians in order to eliminate enemies makes decisions most similar to those of humans. The report ends by suggesting the exploration of additional systems for measuring the worth of humans, as well as adding complexity to the scenarios to better reflect situations encountered in real life.
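
To illustrate how a worthValues-based reward could shape such an agent, the sketch below is a hypothetical example only: the abstract does not give the report's actual reward formulation, so the names (worth_values, compute_reward), entity categories, and weights here are assumptions. Varying these weights between agents would produce the different decision profiles described above.

# Hypothetical sketch, not the report's actual reward method.
# Assumed worthValues: one float per entity type, measuring its "worth".
worth_values = {
    "enemy": 1.0,      # eliminating an enemy is desirable
    "civilian": 1.0,   # harming a civilian is penalized by its worth
    "animal": 0.5,     # animals assumed to carry lower worth than civilians
}

def compute_reward(eliminated):
    """Return a shaped reward for one turret decision.

    `eliminated` lists the entity types destroyed by the chosen action.
    Enemies add reward; civilians and animals subtract reward in
    proportion to their assumed worthValues.
    """
    reward = 0.0
    for entity in eliminated:
        if entity == "enemy":
            reward += worth_values[entity]   # reward for neutralizing a threat
        else:
            reward -= worth_values[entity]   # penalty scaled by the entity's worth
    return reward

# Example dilemma: eliminating one enemy also kills two civilians.
print(compute_reward(["enemy", "civilian", "civilian"]))  # 1.0 - 2.0 = -1.0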