Towards a model-based 3D marker-less human motion capture
This research proposes a novel framework for capturing 3D human motion from video images using a model-based approach. Existing commercial motion capture systems place markers on the performer, which hinders natural movement; our approach requires no markers. Our contribution consists of two main phases: (1) constructing a 3D human puppet model that closely resembles the subject, and (2) tracking the subject's motion using this 3D model. The human model and the recovered movements have to be accurate so that we can obtain quantitative data for applications such as biomechanical analysis. A substantial amount of work is devoted to building the 3D human model, as the accuracy and reliability of the motion tracking depend heavily on it. The reconstruction of the 3D human model is guided by a generic geometrical human model consisting of an external skin and an internal skeleton, and the output is an accurate external skin of the subject together with its estimated internal skeleton. This approach uses several cameras and does not require prior camera calibration. First, camera calibration and 3D reconstruction take place simultaneously to produce an intermediate 3D model once the characteristic points of the generic model are registered to those of the real subject. Then, the silhouette curves of the intermediate model and the real subject are matched automatically to yield a better 3D human model. Our setup requires no prior calibration and only moderate human interaction; its operation is simple, inexpensive and efficient compared with existing 3D laser body scanners and computer imaging methods. Our human motion tracking algorithm starts by automatically learning the colour/texture of the puppet model from its initial pre-positioned posture. The computation then synthesizes the 3D puppet movements so as to minimize the image differences between the synthesized movements and the real athlete's motion. This is realized with a simulated annealing algorithm that iteratively searches for the optimal posture, represented by the joint kinematics over the various degrees of freedom. The joint kinematics drive the skin of the puppet model to produce the synthesized image, which is compared with the real image. The image rendering for the motion synthesis is the most computationally intensive module and is sped up using a graphics processing unit (GPU). Our results demonstrate that we are able to track the motion of the arms, which are highly articulated and occupy only a small part of the images. The advantages of our method are: (1) it does not require image segmentation, (2) it copes with occlusion, and (3) it operates in highly cluttered environments.
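The posture search described in the abstract (render a view of the textured puppet, compare it with the camera image, and let simulated annealing adjust the joint angles) can be illustrated with a minimal sketch. The code below is not from the thesis: `render_model` and `real_image` are hypothetical placeholders for the GPU renderer and the captured frame, and the cooling schedule, step size and cost function are illustrative assumptions only.

```python
# Minimal sketch (not the author's code) of a simulated-annealing posture search:
# each candidate posture is scored by the pixel-wise difference between a
# synthesized view of the textured model and the real camera image.
import numpy as np

def image_difference(pose, render_model, real_image):
    """Sum of absolute pixel differences between the synthesized and real images."""
    synthetic = render_model(pose)              # render the textured 3D puppet at this posture
    return np.abs(synthetic.astype(float) - real_image.astype(float)).sum()

def anneal_posture(init_pose, render_model, real_image,
                   t_start=1.0, t_end=1e-3, cooling=0.95, steps_per_t=50,
                   step_size=0.05, rng=np.random.default_rng(0)):
    """Search joint-angle space for the posture that best explains the image."""
    pose = np.asarray(init_pose, dtype=float)   # one value per joint degree of freedom
    cost = image_difference(pose, render_model, real_image)
    best_pose, best_cost = pose.copy(), cost
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            # perturb one randomly chosen joint angle
            candidate = pose.copy()
            j = rng.integers(len(pose))
            candidate[j] += rng.normal(0.0, step_size)
            c = image_difference(candidate, render_model, real_image)
            # accept improvements always, worse moves with Boltzmann probability
            if c < cost or rng.random() < np.exp((cost - c) / t):
                pose, cost = candidate, c
                if cost < best_cost:
                    best_pose, best_cost = pose.copy(), cost
        t *= cooling                            # geometric cooling schedule
    return best_pose, best_cost
```

As the abstract notes, the expensive step is the rendering inside the cost evaluation rather than the annealing loop itself, which is why the thesis offloads the image synthesis to the GPU.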
Saved in:
Main Author: Quah, Chee Kwang
Other Authors: Seah Hock Soon; Andre Gagalowicz; School of Computer Engineering
Format: Theses and Dissertations
Language: English
Published: 2008
Subjects: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access: https://hdl.handle.net/10356/13603
DOI: 10.32657/10356/13603
Degree: Doctor of Philosophy (SCE)
Physical Description: 195 p., application/pdf
Citation: Quah, C. K. (2008). Towards a model-based 3D marker-less human motion capture. Doctoral thesis, Nanyang Technological University, Singapore.
Institution: Nanyang Technological University