A unified 3D human motion synthesis model via conditional variational auto-encoder

We present a unified and flexible framework to address the generalized problem of 3D motion synthesis that covers the tasks of motion prediction, completion, interpolation, and spatial-temporal recovery. Since these tasks have different input constraints and various fidelity and diversity requirements, most existing approaches only cater to a specific task or use different architectures to address various tasks. Here we propose a unified framework based on Conditional Variational Auto-Encoder (CVAE), where we treat any arbitrary input as a masked motion series. Notably, by considering this problem as a conditional generation process, we estimate a parametric distribution of the missing regions based on the input conditions, from which to sample and synthesize the full motion series. To further allow the flexibility of manipulating the motion style of the generated series, we design an Action-Adaptive Modulation (AAM) to propagate the given semantic guidance through the whole sequence. We also introduce a cross-attention mechanism to exploit distant relations among decoder and encoder features for better realism and global consistency. We conducted extensive experiments on Human 3.6M and CMU-Mocap. The results show that our method produces coherent and realistic results for various motion synthesis tasks, with the synthesized motions distinctly adapted by the given action labels.

Bibliographic Details
Main Authors: Cai, Yujun, Wang, Yiwei, Zhu, Yiheng, Cham, Tat-Jen, Cai, Jianfei, Yuan, Junsong, Liu, Jun, Zheng, Chuanxia, Yan, Sijie, Ding, Henghui, Shen, Xiaohui, Liu, Ding, Thalmann, Nadia Magnenat
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language:English
Published: 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; Gestures and Body Pose; Image and Video Synthesis
Online Access:https://hdl.handle.net/10356/172651
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-172651
Record Format: dspace
Conference: 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Affiliations: School of Computer Science and Engineering; Institute for Media Innovation (IMI)
Funding: Nanyang Technological University; National Research Foundation (NRF). This research is supported by the Institute for Media Innovation, Nanyang Technological University (IMI-NTU) and the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative. It is also supported in part by a Monash FIT Start-up Grant, a SenseTime Gift Fund, National Science Foundation Grant CNS1951952, and SUTD project PIE-SGP-Al-2020-02.
Citation: Cai, Y., Wang, Y., Zhu, Y., Cham, T., Cai, J., Yuan, J., Liu, J., Zheng, C., Yan, S., Ding, H., Shen, X., Liu, D. & Thalmann, N. M. (2022). A unified 3D human motion synthesis model via conditional variational auto-encoder. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 11625-11635.
DOI: 10.1109/ICCV48922.2021.01144
ISBN: 9781665428125
Scopus ID: 2-s2.0-85113641917
Rights: © 2021 IEEE. All rights reserved.
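The abstract describes casting every synthesis task (prediction, completion, interpolation) as filling in a masked motion series with a conditional VAE. Purely as a rough illustration of that framing, not the authors' implementation, here is a minimal sketch; all layer sizes, weights, and function names are hypothetical, and real models would use trained deep networks rather than single random layers.

```python
# Illustrative sketch: treat any input as a masked motion series and let a
# CVAE-style model sample the missing frames. Hypothetical shapes/weights.
import numpy as np

rng = np.random.default_rng(0)

T, J, D = 16, 17, 3          # frames, joints, coordinates (hypothetical)
feat = T * J * D
latent = 8

def dense(x, w, b):
    # Single tanh layer standing in for a deep encoder/decoder.
    return np.tanh(x @ w + b)

# Random stand-in weights for the sketch.
We = rng.normal(0, 0.1, (feat + T, 2 * latent))  # motion + mask -> (mu, logvar)
be = np.zeros(2 * latent)
Wd = rng.normal(0, 0.1, (latent + T, feat))      # z + mask condition -> motion
bd = np.zeros(feat)

def cvae_synthesize(motion, frame_mask):
    """motion: (T, J, D); frame_mask: (T,) with 1 = observed, 0 = missing."""
    observed = motion * frame_mask[:, None, None]        # zero out missing frames
    h = np.concatenate([observed.ravel(), frame_mask])
    stats = dense(h, We, be)
    mu, logvar = stats[:latent], stats[latent:]
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=latent)  # reparameterization
    full = dense(np.concatenate([z, frame_mask]), Wd, bd).reshape(T, J, D)
    # Keep the observed frames; fill in only the missing regions.
    return np.where(frame_mask[:, None, None] > 0, motion, full)

# Example: motion prediction, i.e. the second half of the sequence is masked.
mask = np.ones(T)
mask[T // 2:] = 0
motion = rng.normal(size=(T, J, D))
out = cvae_synthesize(motion, mask)
print(out.shape)  # (16, 17, 3)
```

Changing only the mask turns the same model into a completion or interpolation sampler, which is the unification the paper argues for; sampling different `z` values yields diverse plausible motions for the same conditions.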