Animate your avatar: learning conditional human motion prior
This research project addresses virtual humans: it aims to enable natural-language control of 3D avatars so that they perform human-like movements coherent with their surrounding environment. To achieve this goal, the project proposes to learn a "conditional"...
| Field | Value |
|---|---|
| Main Author | Singh, Ananya |
| Other Authors | Liu, Ziwei |
| Format | Final Year Project |
| Language | English |
| Published | Nanyang Technological University, 2023 |
| Online Access | https://hdl.handle.net/10356/165971 |
| Institution | Nanyang Technological University |
Similar Items

- Further development of a human tracking and motion interaction capability for a robotic avatar
  by: Lee, Yong Liang
  Published: (2014)
- Human animation from motion recognition, analysis and optimization
  by: Zhao, Jianhui
  Published: (2008)
- Facial expression retargeting from human to avatar made easy
  by: Zhang, Juyong, et al.
  Published: (2022)
- Facial motion prior networks for facial expression recognition
  by: Chen, Yuedong, et al.
  Published: (2020)
- 3D motion editing for animation
  by: Chee, Yi Yong
  Published: (2017)