Highly controllable motion generation model

Text-to-motion generation has emerged as a promising area of research in deep learning, with potential applications in video games, animation and virtual reality systems. However, adoption of these technologies remains limited because generated motion is bound to a predefined skeletal prior, so manual effort is required to rig the desired target meshes with a compatible skeleton. Meanwhile, recent advances in 3D human generation have demonstrated the capability to produce detailed and realistic 3D character models from textual input. Bridging the gap between motion generation and 3D human generation is therefore a compelling area of research: a pipeline that automates the transfer of generated motion onto a generated 3D human model would significantly simplify the workflow of creating 3D animations for laypersons. This project reviews state-of-the-art (SOTA) approaches in motion and 3D human generation, as well as methods for ensuring seamless compatibility between them. We propose a pipeline that integrates both classes of models to enable an automated, user-friendly workflow for creating 3D animations while remaining compatible with popular 3D software platforms such as Unreal Engine and Blender.
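To make the proposed workflow concrete, below is a minimal, illustrative sketch of such a pipeline in Python. Every name in it (generate_motion, generate_human_mesh, retarget, export_fbx, text_to_animation) is a hypothetical placeholder standing in for a component the project would supply; none refers to an existing library API.

from dataclasses import dataclass

@dataclass
class Motion:
    # Skeletal motion: one dict per frame mapping joint name -> rotation.
    frames: list

@dataclass
class HumanMesh:
    # A generated 3D human character with its own skeleton (joint -> parent joint).
    vertices: list
    skeleton: dict

def generate_motion(prompt: str) -> Motion:
    # Placeholder for a text-to-motion model.
    raise NotImplementedError

def generate_human_mesh(prompt: str) -> HumanMesh:
    # Placeholder for a text-to-3D-human model.
    raise NotImplementedError

def retarget(motion: Motion, mesh: HumanMesh) -> Motion:
    # Placeholder for motion retargeting: map the generated motion from its
    # source skeleton onto the generated character's skeleton.
    raise NotImplementedError

def export_fbx(mesh: HumanMesh, motion: Motion, path: str) -> None:
    # Placeholder export step so the result can be opened in Blender or Unreal Engine.
    raise NotImplementedError

def text_to_animation(motion_prompt: str, character_prompt: str, out_path: str) -> None:
    # End-to-end flow described in the abstract: text -> motion, text -> character,
    # retarget the motion onto the character, then export for downstream 3D software.
    motion = generate_motion(motion_prompt)
    mesh = generate_human_mesh(character_prompt)
    export_fbx(mesh, retarget(motion, mesh), out_path)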

Bibliographic Details
Main Author: Alviento, Adrian Nicolas Belleza
Other Authors: Liu Ziwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2025
Subjects: Computer and Information Science; Text-to-motion generation; Text-to-3D human generation; Motion retargeting; Automated animation pipeline
Online Access: https://hdl.handle.net/10356/184400
Institution: Nanyang Technological University