Behavior imitation for manipulator control and grasping with deep reinforcement learning
Existing Motion Imitation models typically require expert data obtained through MoCap devices, but the vast amount of training data needed is difficult to acquire, demanding substantial investments of money, manpower, and time. This project combines 3D human pose estimation with reinforcement learning, proposing a novel model that reduces Motion Imitation to a prediction problem over joint-angle values in reinforcement learning. This significantly lowers the reliance on large training datasets, enabling the agent to learn an imitation policy from just a few seconds of video and to generalize strongly: the learned policy quickly transfers to imitating human arm motions in unfamiliar videos. The model first extracts skeletal motions of human arms from a given video using 3D human pose estimation. These extracted arm motions are then morphologically retargeted onto a robotic manipulator. Subsequently, the retargeted motions are used to generate reference motions. Finally, these reference motions are used to formulate a reinforcement learning problem, enabling the agent to learn a policy for imitating human arm motions. The model excels at imitation tasks and demonstrates robust transferability, accurately imitating human arm motions from other, unfamiliar videos. The result is a lightweight, convenient, efficient, and accurate Motion Imitation model that simplifies the complex Motion Imitation pipeline while achieving notably strong performance.
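As a concrete illustration of the retargeting and reference-motion steps, the following minimal PyBullet sketch maps a Cartesian wrist trajectory onto a stock 7-DoF arm via inverse kinematics. The wrist path here is synthetic stand-in data (in the project it would come from the 3D pose estimator), and the KUKA iiwa model and workspace scaling are assumptions for illustration, not the thesis setup.

```python
import numpy as np
import pybullet as p
import pybullet_data

# Connect to a headless physics server and load a stock 7-DoF arm.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
robot = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)
END_EFFECTOR_LINK = 6                      # wrist link of the iiwa model
num_joints = p.getNumJoints(robot)         # 7 revolute joints

# Synthetic stand-in for an estimated wrist trajectory: a small arc in
# front of the robot, already scaled to its workspace.
t = np.linspace(0.0, np.pi, 60)
wrist_xyz = np.stack(
    [0.5 + 0.1 * np.cos(t), 0.1 * np.sin(t), 0.6 + 0.05 * t / np.pi], axis=1
)

# Morphological retargeting via inverse kinematics: each Cartesian wrist
# target becomes a vector of manipulator joint angles; the stacked result
# is the reference motion the RL problem is built on.
reference_motion = np.array(
    [
        p.calculateInverseKinematics(robot, END_EFFECTOR_LINK, tgt.tolist())[:num_joints]
        for tgt in wrist_xyz
    ]
)
print(reference_motion.shape)              # (60, 7): frames x joint angles
p.disconnect()
```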
Saved in:

| Main Author: | Liu, Qiyuan |
|---|---|
| Other Authors: | Lyu Chen, Wen Bihan |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Subjects: | Engineering; Motion Imitation; Imitation Learning; Deep Reinforcement Learning; 3D Human Pose Estimation; Motion Retargeting; Inverse Kinematics; PyBullet Simulation |
| Online Access: | https://hdl.handle.net/10356/177492 |
| Institution: | Nanyang Technological University |
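The abstract frames imitation as a reinforcement-learning problem in which the agent predicts joint-angle values that track the reference motion. Below is a minimal Gymnasium-style sketch of such a formulation; the observation layout, the exponential tracking reward, and the class name are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ArmImitationEnv(gym.Env):
    """Imitation as joint-angle prediction: the agent outputs joint angles
    and is rewarded for tracking a reference motion frame by frame."""

    def __init__(self, reference_motion: np.ndarray):
        self.reference = reference_motion.astype(np.float32)  # (frames, joints)
        n = self.reference.shape[1]
        self.action_space = spaces.Box(-np.pi, np.pi, shape=(n,), dtype=np.float32)
        # Observation: current joint angles plus the current reference frame.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2 * n,), dtype=np.float32)
        self.t = 0
        self.q = np.zeros(n, dtype=np.float32)

    def _obs(self):
        return np.concatenate([self.q, self.reference[self.t]])

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.q = self.reference[0].copy()
        return self._obs(), {}

    def step(self, action):
        self.q = np.asarray(action, dtype=np.float32)     # predicted joint angles
        err = self.q - self.reference[self.t]
        reward = float(np.exp(-2.0 * err @ err))          # tracking reward in (0, 1]
        self.t += 1
        terminated = self.t >= len(self.reference)
        if terminated:
            self.t -= 1                                   # keep the final observation valid
        return self._obs(), reward, terminated, False, {}
```

With the `reference_motion` array from the inverse-kinematics sketch above, `ArmImitationEnv(reference_motion)` could be trained with any standard policy-gradient learner (e.g., PPO in Stable-Baselines3); a few seconds of video yields only a few hundred reference frames, consistent with the data efficiency the abstract describes.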
| id | sg-ntu-dr.10356-177492 |
|---|---|
| record_format | dspace |
| spelling | sg-ntu-dr.10356-177492 2024-06-01T16:52:32Z Behavior imitation for manipulator control and grasping with deep reinforcement learning Liu, Qiyuan; Lyu Chen; Wen Bihan; School of Mechanical and Aerospace Engineering; lyuchen@ntu.edu.sg, bihan.wen@ntu.edu.sg; Engineering; Motion Imitation; Imitation Learning; Deep Reinforcement Learning; 3D Human Pose Estimation; Motion Retargeting; Inverse Kinematics; PyBullet Simulation. Existing Motion Imitation models typically require expert data obtained through MoCap devices, but the vast amount of training data needed is difficult to acquire, demanding substantial investments of money, manpower, and time. This project combines 3D human pose estimation with reinforcement learning, proposing a novel model that reduces Motion Imitation to a prediction problem over joint-angle values in reinforcement learning. This significantly lowers the reliance on large training datasets, enabling the agent to learn an imitation policy from just a few seconds of video and to generalize strongly: the learned policy quickly transfers to imitating human arm motions in unfamiliar videos. The model first extracts skeletal motions of human arms from a given video using 3D human pose estimation. These extracted arm motions are then morphologically retargeted onto a robotic manipulator. Subsequently, the retargeted motions are used to generate reference motions. Finally, these reference motions are used to formulate a reinforcement learning problem, enabling the agent to learn a policy for imitating human arm motions. The model excels at imitation tasks and demonstrates robust transferability, accurately imitating human arm motions from other, unfamiliar videos. The result is a lightweight, convenient, efficient, and accurate Motion Imitation model that simplifies the complex Motion Imitation pipeline while achieving notably strong performance. Bachelor's degree. 2024-05-29T02:01:19Z 2024-05-29T02:01:19Z 2024 Final Year Project (FYP) Liu, Q. (2024). Behavior imitation for manipulator control and grasping with deep reinforcement learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/177492 https://hdl.handle.net/10356/177492 en C141 application/pdf Nanyang Technological University |
| institution | Nanyang Technological University |
| building | NTU Library |
| continent | Asia |
| country | Singapore |
| content_provider | NTU Library |
| collection | DR-NTU |
| language | English |
| topic | Engineering; Motion Imitation; Imitation Learning; Deep Reinforcement Learning; 3D Human Pose Estimation; Motion Retargeting; Inverse Kinematics; PyBullet Simulation |
| spellingShingle | Engineering; Motion Imitation; Imitation Learning; Deep Reinforcement Learning; 3D Human Pose Estimation; Motion Retargeting; Inverse Kinematics; PyBullet Simulation; Liu, Qiyuan; Behavior imitation for manipulator control and grasping with deep reinforcement learning |
| description | Existing Motion Imitation models typically require expert data obtained through MoCap devices, but the vast amount of training data needed is difficult to acquire, demanding substantial investments of money, manpower, and time. This project combines 3D human pose estimation with reinforcement learning, proposing a novel model that reduces Motion Imitation to a prediction problem over joint-angle values in reinforcement learning. This significantly lowers the reliance on large training datasets, enabling the agent to learn an imitation policy from just a few seconds of video and to generalize strongly: the learned policy quickly transfers to imitating human arm motions in unfamiliar videos. The model first extracts skeletal motions of human arms from a given video using 3D human pose estimation. These extracted arm motions are then morphologically retargeted onto a robotic manipulator. Subsequently, the retargeted motions are used to generate reference motions. Finally, these reference motions are used to formulate a reinforcement learning problem, enabling the agent to learn a policy for imitating human arm motions. The model excels at imitation tasks and demonstrates robust transferability, accurately imitating human arm motions from other, unfamiliar videos. The result is a lightweight, convenient, efficient, and accurate Motion Imitation model that simplifies the complex Motion Imitation pipeline while achieving notably strong performance. |
| author2 | Lyu Chen |
| author_facet | Lyu Chen; Liu, Qiyuan |
| format | Final Year Project |
| author | Liu, Qiyuan |
| author_sort | Liu, Qiyuan |
| title | Behavior imitation for manipulator control and grasping with deep reinforcement learning |
| title_short | Behavior imitation for manipulator control and grasping with deep reinforcement learning |
| title_full | Behavior imitation for manipulator control and grasping with deep reinforcement learning |
| title_fullStr | Behavior imitation for manipulator control and grasping with deep reinforcement learning |
| title_full_unstemmed | Behavior imitation for manipulator control and grasping with deep reinforcement learning |
| title_sort | behavior imitation for manipulator control and grasping with deep reinforcement learning |
| publisher | Nanyang Technological University |
| publishDate | 2024 |
| url | https://hdl.handle.net/10356/177492 |
| _version_ | 1800916370033999872 |