Robust 3D hand pose estimation from single depth images using multi-view CNNs

Articulated hand pose estimation is one of the core technologies in human-computer interaction. Despite recent progress, most existing methods still cannot achieve satisfactory performance, partly due to the difficulty of the underlying high-dimensional nonlinear regression problem. Most existing data-driven methods regress the 3D hand pose directly from the 2D depth image and therefore cannot fully utilize the depth information. In this paper, we propose a novel multi-view convolutional neural network (CNN)-based approach for 3D hand pose estimation. To better exploit the 3D information in the depth image, we project the point cloud generated from the query depth image onto multiple views under two projection settings and integrate them for more robust estimation. Multi-view CNNs are trained to learn the mapping from the projected images to heat-maps, which reflect the probability distributions of the joints on each view. These multi-view heat-maps are then fused to estimate the optimal 3D hand pose with learned pose priors, and unreliable information in the heat-maps is suppressed using a view selection method. Experimental results show that the proposed method outperforms state-of-the-art methods on two challenging data sets, and a cross-data-set experiment validates its good generalization ability.
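The abstract outlines a two-stage pipeline: project the point cloud recovered from the depth image onto multiple view planes, then regress per-view heat-maps and fuse them into a 3D pose. Below is a minimal sketch (not the authors' code) of what the projection and fusion steps could look like, assuming the hand point cloud is already segmented and normalized into [-1, 1]^3; the choice of three orthogonal planes, the 96x96 resolution, and all function names are illustrative assumptions rather than the paper's exact settings.

```python
# Hedged sketch of multi-view projection: render a normalized point cloud
# onto three orthogonal planes (x-y, y-z, z-x), z-buffer style, so each
# view can be fed to a per-view CNN that predicts joint heat-maps.
import numpy as np

def project_to_views(points, res=96):
    """points: (N, 3) array in [-1, 1]^3. Returns (3, res, res) images."""
    views = []
    # For each view, axes (u, v) span the image plane and axis d supplies
    # the depth value stored at the projected pixel.
    for u, v, d in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        img = np.full((res, res), np.inf)
        # Map coordinates from [-1, 1] to pixel indices [0, res - 1].
        px = np.clip(((points[:, u] + 1) * 0.5 * (res - 1)).astype(int), 0, res - 1)
        py = np.clip(((points[:, v] + 1) * 0.5 * (res - 1)).astype(int), 0, res - 1)
        # Keep the nearest point per pixel (a simple z-buffer).
        np.minimum.at(img, (py, px), points[:, d])
        img[np.isinf(img)] = 1.0  # empty pixels fall back to the far plane
        views.append(img)
    return np.stack(views)

# A candidate 3D joint position can then be scored by reading each view's
# heat-map at the joint's 2D projection and taking a confidence-weighted
# sum, loosely mimicking the fusion-with-view-selection step described in
# the abstract; the weights stand in for the view selection method.
def fuse_score(p3d, heatmaps, weights, res=96):
    """p3d: (3,) in [-1, 1]^3; heatmaps: (3, res, res); weights: (3,)."""
    score = 0.0
    for k, (u, v) in enumerate([(0, 1), (1, 2), (2, 0)]):
        px = int(np.clip((p3d[u] + 1) * 0.5 * (res - 1), 0, res - 1))
        py = int(np.clip((p3d[v] + 1) * 0.5 * (res - 1), 0, res - 1))
        score += weights[k] * heatmaps[k, py, px]
    return score
```

In the paper itself, the fusion additionally constrains the estimate with learned pose priors rather than scoring each joint independently as above.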

Bibliographic Details
Main Authors: Ge, Liuhao, Liang, Hui, Yuan, Junsong, Thalmann, Daniel
Other Authors: Interdisciplinary Graduate School (IGS); Institute for Media Innovation (IMI)
Format: Journal Article
Language: English
Published: 2018 (deposited in DR-NTU: 2020)
Subjects: Engineering::Computer science and engineering; Three-dimensional Displays; Heating Systems
Online Access:https://hdl.handle.net/10356/140529
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-140529 (DSpace)
Citation: Ge, L., Liang, H., Yuan, J., & Thalmann, D. (2018). Robust 3D hand pose estimation from single depth images using multi-view CNNs. IEEE Transactions on Image Processing, 27(9), 4422-4436. doi:10.1109/TIP.2018.2834824
ISSN: 1057-7149
DOI: 10.1109/TIP.2018.2834824
Funding: NRF (National Research Foundation, Singapore); MOE (Ministry of Education, Singapore), grant MOE2015-T2-2-114
Version: Accepted version (application/pdf)
Rights: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TIP.2018.2834824
Collection: DR-NTU, NTU Library, Nanyang Technological University, Singapore