Learning-based motion-intention prediction for end-point control of upper-limb-assistive robots
The lack of intuitive and active human-robot interaction makes it difficult to use upper-limb-assistive devices. In this paper, we propose a novel learning-based controller that intuitively uses onset motion to predict the desired end-point position for an assistive robot. A multi-modal sensing system comprising inertial measurement units (IMUs), electromyographic (EMG) sensors, and mechanomyography (MMG) sensors was implemented. This system was used to acquire kinematic and physiological signals during reaching and placing tasks performed by five healthy subjects. The onset motion data of each motion trial were extracted and used as input to traditional regression models and deep learning models for training and testing. The models predict the position of the hand in planar space, which serves as the reference position for low-level position controllers. The results show that using IMU sensors with the proposed prediction model is sufficient for motion-intention detection, providing almost the same prediction performance as when EMG or MMG is added. Additionally, recurrent neural network (RNN)-based models can predict target positions over a short onset time window for reaching motions and are suitable for predicting targets over a longer horizon for placing tasks. This study's detailed analysis can improve the usability of assistive/rehabilitation robots.
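As a rough illustration of the approach the abstract describes (and not the authors' implementation), the sketch below shows how an RNN-based regressor could map a short onset-motion window of IMU samples to a planar end-point position for a low-level position controller to track. The window length, sampling rate, IMU feature count, and layer sizes are assumptions made for this example only.

```python
# Minimal sketch, assuming a single IMU feature vector per time step and a GRU
# backbone; none of these architectural details are taken from the paper.
import torch
import torch.nn as nn

class OnsetToEndpointGRU(nn.Module):
    def __init__(self, n_features: int = 9, hidden_size: int = 64):
        super().__init__()
        # n_features: e.g. 3-axis accelerometer + gyroscope + orientation (assumed)
        self.rnn = nn.GRU(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # planar (x, y) end-point position

    def forward(self, onset_window: torch.Tensor) -> torch.Tensor:
        # onset_window: (batch, time_steps, n_features) of onset-motion samples
        _, h_last = self.rnn(onset_window)
        return self.head(h_last[-1])  # (batch, 2) predicted hand position

# Example: a batch of hypothetical 0.3 s onset windows sampled at 100 Hz (assumed)
model = OnsetToEndpointGRU()
dummy_onset = torch.randn(8, 30, 9)
predicted_xy = model(dummy_onset)  # reference positions for a low-level controller
print(predicted_xy.shape)  # torch.Size([8, 2])
```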
Main Authors: Yang, Sibo; Garg, Neha P.; Gao, Ruobin; Yuan, Meng; Noronha, Bernardo; Ang, Wei Tech; Accoto, Dino
Other Authors: School of Mechanical and Aerospace Engineering; School of Computer Science and Engineering; Rehabilitation Research Institute of Singapore (RRIS)
Format: Article
Language: English
Published: 2023
Subjects: Engineering::Mechanical engineering; Engineering::Computer science and engineering; Upper Limb Assistive Robots; Wearable Sensors
Online Access: https://hdl.handle.net/10356/169537
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-169537
Citation: Yang, S., Garg, N. P., Gao, R., Yuan, M., Noronha, B., Ang, W. T. & Accoto, D. (2023). Learning-based motion-intention prediction for end-point control of upper-limb-assistive robots. Sensors, 23(6), 2998. https://dx.doi.org/10.3390/s23062998
ISSN: 1424-8220
DOI: 10.3390/s23062998
PubMed ID: 36991709
Scopus ID: 2-s2.0-85151204948
Funding: This work was partially supported by the grant "Intelligent Human-robot interface for upper limb wearable robots" (Award Number SERC 1922500046, Agency for Science, Technology and Research (A*STAR), Singapore).
Rights: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Published version.