Human-robot teaming and coordination in dynamic environments

Bibliographic Details
Main Author: Liu, Xiangyu
Other Authors: Wang Dan Wei
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2020
Subjects:
Online Access: https://hdl.handle.net/10356/141172
Description
Summary: As robots that share environments with humans proliferate, human-robot teamwork is becoming increasingly important. It is foreseeable that more and more teams composed of humans and robots will be engaged in daily work. Integrating an appropriate decision-making process is an essential part of designing and developing an autonomous robot [1]. However, simple decision trees do not allow robots to make the complex decisions that human-robot teaming requires. Having humans and robots work as a team is not simply a matter of humans controlling robots directly: when a robot's actions do not match human expectations, trust degrades and work efficiency suffers. If robots can understand human activities and intents, a person may be able to cooperate with them naturally, much as he or she works with a human team. This thesis proposes a system that lets robots interpret a human's pose and act on the information conveyed by that pose in real time. The system offers two options targeting different hardware: the first model is more accurate but slower and is suited to computationally powerful computers; the second is a compact model that is faster but less accurate and is suited to general-purpose computers. To broaden the application scenarios, we propose a method that extracts human poses from thermal images. In addition, we collected a large amount of training data and trained an MLP neural network to classify several poses used to interact with robots. The MLP classifier performs well in many test environments.
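
The abstract describes an MLP that maps detected human poses to interaction commands but does not include any code here. As a rough sketch only, assuming keypoint input in an OpenPose-style 18-joint layout, scikit-learn's MLPClassifier, and hypothetical class names, such a classifier could look like the following (layer sizes, joint conventions, and all names are illustrative, not the author's implementation):

# Minimal sketch (assumed, not the thesis implementation): classify a pose
# from 2D keypoint coordinates with a small MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_KEYPOINTS = 18                                               # assumed OpenPose-style skeleton
POSE_CLASSES = ["stop", "come_here", "point_left", "point_right"]   # hypothetical pose labels

def keypoints_to_features(keypoints):
    """Flatten (x, y) keypoints and normalise them relative to the neck joint,
    so the feature vector is roughly invariant to where the person stands."""
    kp = np.asarray(keypoints, dtype=float).reshape(N_KEYPOINTS, 2)
    kp = kp - kp[1]                       # joint 1 taken as the neck (OpenPose convention)
    scale = np.abs(kp).max() + 1e-6       # avoid division by zero for degenerate detections
    return (kp / scale).ravel()

# Placeholder training data: in practice these would be labelled skeletons
# produced by the pose estimator described in the thesis.
X_train = np.stack([keypoints_to_features(np.random.rand(N_KEYPOINTS, 2)) for _ in range(200)])
y_train = np.random.randint(len(POSE_CLASSES), size=200)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu", max_iter=500)
clf.fit(X_train, y_train)

# At run time, each detected skeleton is classified frame by frame:
frame_keypoints = np.random.rand(N_KEYPOINTS, 2)               # would come from the pose estimator
pose = POSE_CLASSES[clf.predict([keypoints_to_features(frame_keypoints)])[0]]
print("Detected pose:", pose)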