Understanding human-object interaction in RGB-D videos for human robot interaction
| Field | Value |
| --- | --- |
| Main Authors | |
| Other Authors | |
| Format | Conference or Workshop Item |
| Language | English |
| Published | 2020 |
| Subjects | |
| Online Access | https://hdl.handle.net/10356/142068 |
| Institution | Nanyang Technological University |
Summary: Detecting small hand-held objects plays a critical role in human-robot interaction, because hand-held objects often reveal a person's intention, e.g., using a cell phone to make a call or a cup to drink, and thus help the robot understand the human's behavior and respond accordingly. Existing solutions that rely on wearable sensors to detect hand-held objects often compromise the user experience and thus may not be preferred. With the development of commodity RGB-D sensors, e.g., the Microsoft Kinect II, RGB and depth information have been used to understand human actions and recognize objects. Motivated by this previous success, we propose to detect hand-held objects using an RGB-D sensor. However, instead of performing object detection alone, we leverage human body pose as context to achieve robust hand-held object detection in RGB-D videos. Our system demonstrates that a person can interact with a humanoid social robot using a hand-held object such as a cell phone or a cup. Experimental evaluations validate the effectiveness of the proposed method.
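The central idea in the abstract, using body pose as context to localize a hand-held object, can be illustrated with a minimal sketch. The code below is not the authors' pipeline: it assumes a pose estimator has already supplied a wrist keypoint, that depth comes from a Kinect-style sensor in millimeters, and that a hypothetical `classify_crop` callable labels the pose-guided crop. Window sizes and function names are illustrative assumptions only.

```python
"""Illustrative sketch (assumptions, not the paper's method): restrict
hand-held object detection to a depth-scaled window around the wrist
keypoint of an RGB-D frame."""
import numpy as np


def hand_region_from_pose(wrist_xy, depth_m, frame_shape,
                          base_size_px=220, ref_depth_m=1.0):
    """Return a square crop (x0, y0, x1, y1) centred on the wrist.

    The window shrinks as the person moves away from the camera:
    apparent object size is roughly inversely proportional to depth,
    so a reference window is scaled by ref_depth_m / depth_m.
    """
    h, w = frame_shape[:2]
    size = int(base_size_px * ref_depth_m / max(depth_m, 1e-3))
    x, y = wrist_xy
    x0, y0 = max(0, x - size // 2), max(0, y - size // 2)
    x1, y1 = min(w, x + size // 2), min(h, y + size // 2)
    return x0, y0, x1, y1


def detect_handheld_object(rgb, depth, wrist_xy, classify_crop):
    """Crop the pose-guided region and pass it to a classifier.

    `classify_crop` is a hypothetical callable (e.g. a small CNN) that
    maps an RGB-D crop to a label such as 'cell phone', 'cup' or 'none'.
    """
    wx, wy = wrist_xy
    depth_m = float(depth[wy, wx]) / 1000.0          # Kinect depth is in mm
    x0, y0, x1, y1 = hand_region_from_pose((wx, wy), depth_m, rgb.shape)
    crop_rgb = rgb[y0:y1, x0:x1]
    crop_depth = depth[y0:y1, x0:x1]
    return classify_crop(crop_rgb, crop_depth)


if __name__ == "__main__":
    # Dummy frames and a dummy classifier, just to show the call pattern.
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)
    depth = np.full((480, 640), 1200, dtype=np.uint16)   # ~1.2 m everywhere
    label = detect_handheld_object(rgb, depth, (320, 240),
                                   lambda c_rgb, c_d: "none")
    print(label)
```

Scaling the crop by depth is one plausible way depth information could keep the search window robust to the person's distance from the camera; the paper's actual mechanism for combining pose and depth may differ.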