Multi-level probabilistic uniqueness reasoning of autonomous robots based on spatial-semantic fusion
Saved in:
Main Author: | Yang, Chule |
---|---|
Other Authors: | Wang Dan Wei |
Format: | Theses and Dissertations |
Language: | English |
Published: | 2019 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Pattern recognition |
Online Access: | https://hdl.handle.net/10356/84521 http://hdl.handle.net/10220/50120 |
Institution: | Nanyang Technological University |
Advanced robots are expected to improve their reasoning ability in increasingly practical situations so that they can respond quickly to urgent challenges such as search and rescue, public security, and autonomous driving. Advances in sensor technology and intelligent algorithms are accelerating the development of autonomous systems, allowing them to interact with dynamic environments more independently and intelligently and to complete relatively complicated tasks. Nevertheless, safety and security problems are yet to be comprehensively addressed. Many efforts have been made to solve this problem by improving the unimodal perception ability of intelligent systems: some methods use surveillance cameras to collect large amounts of visual data and process them with deep networks, while various types of range scanners sense and reconstruct the surrounding environment through precise distance measurements. However, training data may be scarce, fixed cameras cannot adequately and dynamically model the surroundings, and unimodal analysis struggles to capture both appearance and depth information. Therefore, higher levels of perception and reasoning are required to handle unexpected conditions and scenarios efficiently. Mobile robots should be fully aware of the dynamic environment, prioritize and continuously analyze critical situations, and share information and collaborate under limited computing power and resources. Furthermore, because these robots operate alongside humans, reliable and interpretable robot behavior is highly preferred for more natural interactions.
This study aims to develop reasoning strategies for autonomous robots to improve robotic intelligence under different assumptions and conditions. The framework allows one or several robots to address safety or security problems in a targeted manner in partially known environments, where time, dynamics, and interactions play a significant role. The objective is to identify the unique individual whose activity, in space and time, appears different from that of the other people in the scene. The activities considered include not only the motion pattern but also the object associated with the person. Unlike traditional anomaly detection, the uniqueness reasoning in this work does not require a predefined expected pattern. A modular analysis is also applied to decompose the entire reasoning process into submodules such as perception, analysis, and inference, achieving algorithmic flexibility and interpretability. Probabilistic modeling is used to associate spatial relationships between semantic labels and their temporal changes. The relationships are described in either instantaneous spatial spaces (semantic-interaction feature) or spatiotemporal spaces (spatiotemporal feature), and they provide essential information for robot navigation, situation awareness, and task planning. Unlike existing research, this is the first time that such a mission has been implemented on mobile robot platforms using the probabilistic fusion of multimodal information.
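The abstract does not spell out the fusion model. As a minimal illustrative sketch (the likelihood values, the uniform prior, and the naive-Bayes independence assumption below are hypothetical, not taken from the thesis), evidence from several modalities about one person could be combined into a uniqueness posterior through a log-odds update:

```python
import math

def fuse_uniqueness(likelihoods_per_modality, prior=0.5):
    """Naive-Bayes fusion of per-modality evidence about one person.

    likelihoods_per_modality: list of (p_obs_given_unique, p_obs_given_normal)
    pairs, one per sensing modality (e.g. camera appearance, lidar geometry).
    Returns the posterior probability that the person is the unique individual.
    """
    log_odds = math.log(prior / (1.0 - prior))
    for p_unique, p_normal in likelihoods_per_modality:
        # Each modality contributes its log likelihood ratio.
        log_odds += math.log(p_unique) - math.log(p_normal)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Two hypothetical modalities both weakly favour "unique".
posterior = fuse_uniqueness([(0.8, 0.4), (0.7, 0.5)])
```

With these made-up numbers the two likelihood ratios (2.0 and 1.4) multiply to 2.8, giving a posterior of 2.8/3.8, i.e. both modalities together push the belief well above the 0.5 prior.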
Considering how evidence information is obtained and linked during reasoning, we study the problem of uniqueness reasoning at three levels. The first level, knowledge-based single-robot uniqueness reasoning, runs on a single robot platform. At this level, we focus on a knowledge-based approach in which knowledge, determined by human expertise, specifies how the motion of a person and the object associated with that person can be used to infer his or her uniqueness. Such a method has advantages in dealing with structured missions and with limited training data. Qualitative rules are applied to extract human activities from observations, and the uniqueness of each person is then judged by associating the prior knowledge with the performed activities. Experiments were conducted in indoor environments with various degrees of occlusion, and the results showed the rationality and accuracy of the reasoning.
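A minimal sketch of this knowledge-based level, under assumed rules (the rule conditions, weights, and activity labels below are hypothetical and do not come from the thesis): each person's observed motion and associated object are scored against qualitative rules encoding human expertise, and the highest-scoring person is judged unique.

```python
def uniqueness_score(activity, rules):
    """Sum the weights of all expert rules the observed activity satisfies.

    activity: dict describing one person's motion pattern and carried object.
    rules: list of (condition_fn, weight) pairs encoding prior knowledge.
    """
    return sum(weight for condition, weight in rules if condition(activity))

# Hypothetical expert rules: loitering and carrying an abandoned bag are suspicious.
rules = [
    (lambda a: a["motion"] == "loitering", 0.6),
    (lambda a: a["object"] == "abandoned_bag", 0.8),
]

people = [
    {"id": 1, "motion": "walking", "object": "none"},
    {"id": 2, "motion": "loitering", "object": "abandoned_bag"},
]
scores = {p["id"]: uniqueness_score(p, rules) for p in people}
unique_id = max(scores, key=scores.get)
```

The hard match on string labels mirrors the qualitative (rather than quantitative) nature of this level: a person either satisfies a rule or does not.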
However, although the knowledge-based method can successfully address specific tasks, it lacks the flexibility and generality to adapt to different situations; the assumption of prior knowledge should be replaced by general analytical modeling. Thus, the second level is analytics-based single-robot uniqueness reasoning. First, it applies quantitative analysis, rather than qualitative rules, to model human activities from observations. Second, instead of leveraging prior knowledge, the distribution of the features characterizing human activity is constructed, and the activity of an individual is analyzed within that distribution to determine whether it is unique. Experiments were conducted in both indoor and urban outdoor environments, and the results demonstrated the rationality, accuracy, and efficiency of the reasoning.
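A rough sketch of the analytics-based idea, under the simplifying assumption of a single scalar activity feature (the thesis works with richer semantic-interaction and spatiotemporal features): construct the empirical distribution of the feature over all observed people and flag the individual who deviates from it the most.

```python
import statistics

def most_unique(features):
    """Index of the person whose feature deviates most from the
    empirical distribution over all observed people (largest |z-score|)."""
    mu = statistics.mean(features)
    sigma = statistics.pstdev(features) or 1.0  # guard against zero spread
    z_scores = [abs(x - mu) / sigma for x in features]
    return max(range(len(features)), key=lambda i: z_scores[i])

# Hypothetical scalar feature (e.g. walking speed in m/s);
# the fourth person moves much faster than the crowd.
speeds = [1.1, 1.2, 1.0, 3.5, 1.15]
idx = most_unique(speeds)
```

Because the distribution is rebuilt from the current observations, no predefined "expected pattern" is needed, which is the key difference from traditional anomaly detection noted above.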
Moreover, since many applications require operation in unstructured, large-scale environments over long periods, and the sensing and reasoning capabilities of a single robot are limited by occlusion, sensor failure, and a narrow field of view, robots should share information and perform collaborative reasoning. Whereas the first two levels mainly use sensory data to implement the algorithm, the third level focuses on applying the algorithm to the robot platform. The third level is analytics-based multi-robot uniqueness reasoning, in which a multi-robot system is adopted as the platform. By obtaining the relative poses between robots, local observations can be transformed and transmitted within the multi-robot system. Because the system is distributed, each robot still performs its local uniqueness reasoning first, to cope with long distances and poor communication. Meanwhile, the system performs a re-judgment based on the shared global observation and then sends the judgment results back to each robot. Experiments were carried out in various unstructured environments, including both day and night conditions, and the results showed the rationality, accuracy, and robustness of the reasoning. This is the first time that reasoning has been categorized by considering how evidence information is obtained and linked. This study has potential applications in a variety of situations and can serve as a guide for further robotic motion planning and human-robot interaction.
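A minimal sketch of the two multi-robot ingredients described above, assuming planar (x, y, heading) robot poses and a simple majority vote as the re-judgment rule (both are illustrative assumptions, not details given in the thesis): a local observation is transformed into a shared frame using the observing robot's pose, and the robots' local decisions are then re-judged globally.

```python
import math

def to_global(obs_local, robot_pose):
    """Transform a 2D observation from a robot's local frame into the
    shared global frame, given the robot's pose (x, y, theta)."""
    x, y, theta = robot_pose
    ox, oy = obs_local
    gx = x + ox * math.cos(theta) - oy * math.sin(theta)
    gy = y + ox * math.sin(theta) + oy * math.cos(theta)
    return gx, gy

def rejudge(local_votes):
    """Re-judgment over shared observations: majority vote on which
    person each robot locally flagged as unique."""
    return max(set(local_votes), key=local_votes.count)

# A robot at (2, 0) facing 90 degrees sees a person 1 m ahead along its x-axis.
g = to_global((1.0, 0.0), (2.0, 0.0, math.pi / 2))
# Two of three robots flag person 2; the global re-judgment agrees.
decision = rejudge([2, 2, 1])
```

Running local reasoning first and only then fusing votes matches the distributed design above: each robot keeps a usable answer even when long distances or poor communication delay the global re-judgment.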
Supervisor: | Wang Dan Wei |
---|---|
School: | School of Electrical and Electronic Engineering |
Degree: | Doctor of Philosophy |
Citation: | Yang, C. (2019). Multi-level probabilistic uniqueness reasoning of autonomous robots based on spatial-semantic fusion. Doctoral thesis, Nanyang Technological University, Singapore. |
DOI: | 10.32657/10356/84521 |
Extent: | 143 p. |