Goal adoption, preference generation and commonsense reasoning in autonomous intelligent agents


Bibliographic Details
Main Author: Rafique, Umair
Other Authors: Huang Shell Ying
Format: Theses and Dissertations (Doctor of Philosophy; 198 p.)
Language: English
Published: 2013
Subjects: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: http://hdl.handle.net/10356/51340
Institution: Nanyang Technological University, School of Computer Engineering, Centre for Computational Intelligence
Full description

An autonomous intelligent agent situated in an environment needs to know what the preferred states of its environment should be so that it can work towards achieving them. I refer to these preferred states as the “preferences” of the agent. The preferences an agent selects to achieve at a given time are called its “goals”, and this selection process is called “goal adoption”. In essence, goal adoption for an autonomous intelligent agent is about determining 1) which preference to adopt as a goal at a given time and 2) when to adopt a goal. The popular approach to goal adoption is to assign static utility values to preferences and then adopt the one with the highest utility as the goal. There are, however, two problems with this approach. First, a preference can be useful to a varying degree depending on the situation the agent is in; for example, the preference to play your favorite game is more attractive if you have not played it for a while than if you have just played it, and a static utility value cannot represent this difference. Second, such an approach does not specify “when” an agent should adopt a goal.

In this thesis I propose an approach to goal adoption in which the preferences of an agent are modelled as the means to satisfy its motivations, where a motivation represents the internal state of the agent with respect to a certain feeling. The satisfaction level of an agent’s motivations is affected by changes in its situation and by its own activities, and the collective state of its motivations at a given time determines how satisfied the agent is at that time. The utility of a preference is then the increase in the agent’s satisfaction that achieving it would bring at that time. This approach adjusts the utility of a preference to the agent’s situation, because the utility depends on the state of the motivations the preference contributes towards satisfying, which in turn is affected by changes in the situation. It also answers the question of “when to adopt a goal”: when the agent needs an increase in its satisfaction. I propose a complete framework for modelling preferences and motivations, and a goal adoption strategy based on this framework. Using a simulation involving a few intelligent agents, I show that this approach properly represents the utility of a preference in any given situation and provides an adequate solution to the problem of goal adoption.
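The abstract states the motivation-based utility model only in prose. The following minimal Python sketch is not the thesis’s framework; every class name, numeric value, and the fixed satisfaction threshold are assumptions made purely to illustrate how a preference’s utility can depend on the current state of the motivations it satisfies, and how “when to adopt” can be tied to a drop in overall satisfaction.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names and numbers are assumptions, not the thesis's framework.

@dataclass
class Motivation:
    """Internal state of the agent with respect to one feeling (e.g. boredom, tiredness)."""
    name: str
    satisfaction: float = 1.0   # 1.0 = fully satisfied, 0.0 = fully unsatisfied
    decay: float = 0.05         # how much the changing situation erodes satisfaction per step

    def step(self) -> None:
        # Changes in the agent's situation (and its own activities) move the satisfaction level.
        self.satisfaction = max(0.0, self.satisfaction - self.decay)


@dataclass
class Preference:
    """A preferred state of the environment, described by the satisfaction it restores
    to each motivation it contributes towards satisfying."""
    name: str
    contributions: dict[str, float]   # motivation name -> satisfaction gained if achieved

    def utility(self, motivations: dict[str, Motivation]) -> float:
        # Dynamic utility: the increase in overall satisfaction this preference would
        # bring right now (capped so no motivation exceeds full satisfaction).
        return sum(min(gain, 1.0 - motivations[m].satisfaction)
                   for m, gain in self.contributions.items())


def adopt_goal(motivations, preferences, threshold=0.7):
    """Adopt a goal only when the agent needs an increase in satisfaction, i.e. when
    average satisfaction falls below a threshold; then pick the preference whose
    current, situation-dependent utility is highest."""
    average = sum(m.satisfaction for m in motivations.values()) / len(motivations)
    if average >= threshold:
        return None   # satisfied enough: no goal adopted at this moment
    return max(preferences, key=lambda p: p.utility(motivations))


# Toy run: playing a favorite game is attractive only while the "fun" motivation is unsatisfied.
motivations = {"fun": Motivation("fun", satisfaction=0.2),
               "rest": Motivation("rest", satisfaction=0.9)}
preferences = [Preference("play favorite game", {"fun": 0.8}),
               Preference("take a nap", {"rest": 0.5})]
print(adopt_goal(motivations, preferences).name)   # -> "play favorite game"
```

In this toy setup, the same game-playing preference would score close to zero once the “fun” motivation is nearly satisfied, which is the situation-dependence the abstract attributes to the proposed model.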
Another important skill for an autonomous intelligent agent is to generate new goals for itself beyond its programmed goals, referred to as “goal generation” in the existing literature. Goal generation is generally taken to be the same as goal adoption, on the grounds that goal adoption “generates” goals from preferences. However, since goal adoption is merely the adoption of already existing preferences, it does not serve the basic purpose behind goal generation, i.e. increasing the autonomy of the agent by making it less dependent on its programmed knowledge. To achieve greater autonomy, an agent must instead be able to generate new preferences. In this thesis I propose and evaluate an approach called “Preference Learning”, with which an agent can learn preferences for new (previously unknown) objects based on analogy between these new objects and the objects for which it has known preferences.

The goals of an autonomous intelligent agent only tell it “what” to do; reasoning about “how” to do it is what is referred to as commonsense reasoning in the existing literature. The main issue in this regard is representing actions and reasoning about them. In logic-based approaches to automated commonsense reasoning, actions are represented as state-transition functions that take the agent from the state in which they are executed to the state in which their effects hold. The “fluent preconditions” of an action specify the “context”: executing the action in a state where its fluent preconditions hold leads to its effects. Hence, to deduce whether executing an action would lead to its effects, it must be known whether its fluent preconditions hold. Knowing when a fluent precondition based on a fluent representing some quantity holds requires knowing the “appropriate” value of the quantity at which the fluent holds. Although the general practice is to assume that this “appropriate” value is somehow known, in many cases there is no practical way to find it, and consequently no deduction can be made from an action that involves such a fluent precondition. One way to determine the “appropriate” value is to try different values for the quantity and look for the effects of the action. Such learning, however, is not supported by logic-based approaches for two reasons: 1) there is no relationship between the fluent preconditions and the effects of an action, so it cannot be told what will happen to the effects if a certain quantity in the conditions is changed; and 2) actions are not grounded in the physical environment of the acting agent, so the agent cannot “perform” these actions to learn the “appropriate” value of a quantity by trying.

In this thesis I propose a new approach that represents actions as “events” and allows such unknown conditions to be specified within the event representation. During execution these unknown conditions are learnt systematically by trying. Such learning is possible only because events are grounded in the environment of the agent, allowing it to try, observe the effects of its actions, and thereby learn the unknown conditions. Using the well-known, non-trivial commonsense reasoning problem of “cracking an egg” as a running example and providing a complete solution to it, I show that the proposed approach handles all the cases of this problem that existing logic-based approaches fail to cope with. The results from a simulation, in which an artificial agent using this approach is deployed in a simulated physical environment to solve the “cracking an egg” problem, confirm that the agent can correctly learn all unknown conditions and perform its actions successfully.
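Two small, hypothetical sketches may help make these ideas concrete: first the analogy-based “Preference Learning”, then the trial-based learning of an unknown event condition. Neither is the thesis’s actual method; the feature encoding, similarity measure, and all names and numbers below are assumptions made for illustration.

```python
# Hypothetical sketch of "Preference Learning" by analogy: names, features, and the
# similarity measure are assumptions for illustration, not the thesis's method.

def similarity(a: dict, b: dict) -> float:
    """Fraction of shared feature values between two object descriptions."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def learn_preference(new_object: dict, known: dict) -> float:
    """Estimate a preference value for a new object from the most analogous known object."""
    best = max(known, key=lambda name: similarity(new_object, known[name]["features"]))
    return similarity(new_object, known[best]["features"]) * known[best]["preference"]

known = {
    "apple":  {"features": {"edible": True,  "sweet": True,  "drink": False}, "preference": 0.8},
    "hammer": {"features": {"edible": False, "sweet": False, "drink": False}, "preference": 0.1},
}
pear = {"edible": True, "sweet": True, "drink": False}
print(learn_preference(pear, known))   # high value, by analogy with the apple
```

For the event-based learning of unknown conditions, the sketch below invents a SimulatedEgg environment, a force band, and a linear search; the thesis defines its own event representation and learning procedure. The point is the loop: because the event is grounded in the environment, the agent can execute it with trial values for the unknown quantity and keep the value that produces the intended effect.

```python
class SimulatedEgg:
    """Hypothetical stand-in environment: the egg cracks cleanly only within a force band."""
    MIN_FORCE = 4.0   # below this nothing happens
    MAX_FORCE = 7.0   # above this the egg is smashed

    def hit(self, force: float) -> str:
        if force < self.MIN_FORCE:
            return "intact"
        if force > self.MAX_FORCE:
            return "smashed"
        return "cracked"


class CrackEggEvent:
    """An 'event' whose quantitative precondition (the appropriate hitting force)
    is initially unknown and must be learnt by trying in the environment."""

    def __init__(self):
        self.learned_force = None   # the unknown condition, once learnt

    def learn_by_trying(self, make_egg, step=0.5, limit=10.0):
        """Sweep trial values from weak to strong on fresh eggs until the
        intended effect ('cracked') is observed, then remember that value."""
        force = step
        while force <= limit:
            if make_egg().hit(force) == "cracked":
                self.learned_force = force
                return force
            force += step   # no crack yet: try a larger force on a fresh egg
        raise RuntimeError("no appropriate force found in the tried range")

    def execute(self, egg):
        if self.learned_force is None:
            raise RuntimeError("the unknown condition has not been learnt yet")
        return egg.hit(self.learned_force)


event = CrackEggEvent()
print("learnt force:", event.learn_by_trying(SimulatedEgg))   # -> 4.0 in this toy environment
print("outcome:", event.execute(SimulatedEgg()))              # -> "cracked"
```

These snippets only sketch the learning-by-trying and analogy ideas; the thesis’s complete solution to the egg-cracking problem covers many more cases than a few lines can capture.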