Understanding human-object interaction in RGB-D videos for human robot interaction
Detecting small hand-held objects plays a critical role in human-robot interaction, because hand-held objects often reveal the intention of the human, e.g., using a cell phone to make a call or a cup to drink, and thus help the robot understand the human's behavior and respond accordingly. Existing solutions that rely on wearable sensors to detect hand-held objects often compromise the user experience and thus may not be preferred. With the development of commodity RGB-D sensors, e.g., the Microsoft Kinect II, RGB and depth information have been used to understand human actions and recognize objects. Motivated by this previous success, we propose to detect hand-held objects using an RGB-D sensor. However, instead of performing object detection alone, we leverage human body pose as context to achieve robust hand-held object detection in RGB-D videos. Our system demonstrates that a person can interact with a humanoid social robot using a hand-held object such as a cell phone or a cup. Experimental evaluations validate the effectiveness of the proposed method.
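The record does not include the paper's implementation, but the idea of using body pose as context can be illustrated with a minimal sketch: take a hand joint from a pose estimator, crop a window around it in the RGB frame, and use the depth channel to suppress background pixels before classifying the patch. Everything below is an assumption for illustration — `hand_xy`, `depth_band`, and the `classify_patch` stub are hypothetical names and parameters, not the authors' method.

```python
# Minimal sketch of pose-guided hand-held object localization in an RGB-D
# frame. NOT the paper's implementation (the record gives no code); it only
# illustrates using a pose-estimated hand joint as detection context.
import numpy as np

def crop_hand_region(rgb, depth, hand_xy, size=96, depth_band=0.25):
    """Crop a window around the hand joint and keep only pixels whose
    depth is close to the hand's depth, suppressing background clutter."""
    x, y = hand_xy
    h, w = depth.shape
    half = size // 2
    y0, y1 = max(0, y - half), min(h, y + half)
    x0, x1 = max(0, x - half), min(w, x + half)
    patch = rgb[y0:y1, x0:x1].copy()
    hand_z = depth[y, x]
    # Zero out pixels far (in depth) from the hand; the held object sits
    # within a narrow depth band around the hand joint.
    far = np.abs(depth[y0:y1, x0:x1] - hand_z) > depth_band
    patch[far] = 0
    return patch

def classify_patch(patch):
    """Placeholder for an object classifier (e.g., a small CNN)."""
    return "cell phone" if patch.mean() > 0 else "none"

# Usage with synthetic data standing in for a Kinect II frame:
rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(0.5, 4.0, (480, 640)).astype(np.float32)
hand_xy = (320, 240)  # would come from a body-pose estimator
print(classify_patch(crop_hand_region(rgb, depth, hand_xy)))
```

Depth gating around the hand is a common heuristic for isolating hand-held objects; the paper may well use a learned detector instead, which the record does not specify.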
Main Authors: Fang, Zhiwen; Yuan, Junsong; Thalmann, Nadia Magnenat
Other Authors: CGI 2018: Computer Graphics International 2018; Institute for Media Innovation (IMI)
Format: Conference or Workshop Item
Language: English
Published: 2018 (deposited 2020)
Subjects: Engineering::Computer science and engineering; Human-robot Interaction; Handheld Object Detection
DOI: 10.1145/3208159.3208192
Online Access: https://hdl.handle.net/10356/142068
Citation: Fang, Z., Yuan, J., & Thalmann, N. M. (2018). Understanding human-object interaction in RGB-D videos for human robot interaction. CGI 2018: Proceedings of Computer Graphics International 2018, 163-167. doi:10.1145/3208159.3208192
Rights: © 2018 Association for Computing Machinery. All rights reserved.
Funding: NRF (National Research Foundation, Singapore)
Institution: Nanyang Technological University (NTU Library, DR-NTU)