3D hand estimation under egocentric vision
This dissertation introduces a novel two-stage transformer-based model for 3D hand pose estimation, designed specifically for egocentric conditions. In the first stage, the proposed architecture uses a FastViT-ma36 backbone to efficiently extract features from monocular RGB images. In the...
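The two-stage pipeline described above can be sketched roughly as a feature-extraction backbone followed by a transformer head that regresses 3D joint coordinates. The sketch below is purely illustrative and is not the thesis's actual implementation: the FastViT-ma36 backbone is stubbed as a random linear projection, the second stage is a single self-attention pass, and all shapes (a downscaled 32x32 input, 16 tokens, 21 hand joints) are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the attention weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def backbone_stub(image, n_tokens=16, dim=32, seed=0):
    """Stand-in for the stage-1 feature extractor (FastViT-ma36 in the
    thesis): flattens the RGB image and projects it to patch tokens."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((image.size, n_tokens * dim)) * 0.01
    return (image.ravel() @ W).reshape(n_tokens, dim)

def pose_head(tokens, n_joints=21, seed=1):
    """Stand-in second stage: one self-attention layer, then a linear map
    from pooled token features to n_joints x 3 joint coordinates."""
    rng = np.random.default_rng(seed)
    d = tokens.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.01 for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d)) @ v       # self-attention output
    Wout = rng.standard_normal((d, n_joints * 3)) * 0.01
    return (attn.mean(axis=0) @ Wout).reshape(n_joints, 3)

# Monocular RGB input (downscaled for this sketch) -> 21 joints in 3D.
image = np.zeros((32, 32, 3))
joints = pose_head(backbone_stub(image))
print(joints.shape)  # (21, 3)
```

The split mirrors the abstract's description only in structure: stage 1 produces image features, stage 2 attends over them to predict the hand pose.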
Saved in:
Main Author: Zhu, Yixiang
Other Authors: Yap Kim Hui
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2025
Online Access: https://hdl.handle.net/10356/182401
Institution: Nanyang Technological University
Similar Items
- Egocentric hand pose estimation and distance recovery in a single RGB image
  by: Liang, Hui, et al.
  Published: (2016)
- Hand PointNet: 3D hand pose estimation using point sets
  by: Ge, Liuhao, et al.
  Published: (2018)
- Weakly-supervised 3D hand pose estimation from monocular RGB images
  by: Cai, Yujun, et al.
  Published: (2020)
- Non-parametric 3D hand shape reconstruction from monocular image
  by: Yu Ziwei
  Published: (2024)
- Real-time 3D hand pose estimation with 3D convolutional neural networks
  by: Ge, Liuhao, et al.
  Published: (2019)