Problems on the implementations of artificial moral agents and the singularity: The is-ought problem as a case study
Main Author:
Format: text
Language: English
Published: Animo Repository, 2015
Online Access: https://animorepository.dlsu.edu.ph/etd_doctoral/1054
Institution: De La Salle University
Summary: This work evaluates the ongoing efforts toward the creation and development of artificial moral agents (AMAs), and how these relate to the singularity. Artificial moral agents are artificially intelligent systems capable of moral reasoning, judgment, and decision-making. They form part and parcel of what artificial intelligence theorists have called artificial general intelligence (AGI): systems that are intelligent in most, if not all, aspects of human cognition. Given that one of the central human cognitive abilities is the capability to reason about moral issues, AGIs should therefore include the intellectual activity of moral reasoning. Many conceptions of AMAs have been proffered, and some have identified three possible routes to modeling AMAs, namely: the top-down or direct-programming track, bottom-up or developmental approaches, and a hybrid of the two. This work examines the philosophical tenability of these routes in light of how they account for moral reasoning. It argues that these approaches are challenged by one of the most enduring problems in moral philosophy, a problem dubbed the is-ought problem in moral reasoning.