Controlling high DoF robotic arm with 2 DoF joystick and environment context
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/158905
Institution: Nanyang Technological University
Summary: A robotic arm can help patients with motor disabilities carry out tasks such as picking up objects or feeding, which can be difficult to perform unaided when confined to a wheelchair. However, it is difficult to teleoperate a high degree-of-freedom (DoF) robotic arm using a low-DoF joystick, a common and intuitive tool for controlling wheelchairs. Recent works have proposed approaches that predict human intention from joystick input and convert the low-DoF joystick commands into high-DoF robot arm actions. Through human intention prediction and environment context understanding, such approaches have shown significant improvements in task completion time and user satisfaction. However, the experiments in these works were conducted in very simplistic scenarios: three objects that can be grasped from the side are placed at a distance from each other, and the gripper poses to grasp them are pre-determined.
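The record does not include implementation details, but the intention-prediction-and-blending idea the summary describes can be illustrated with a minimal sketch. All names, the planar goal-alignment heuristic, and the confidence-based arbitration below are assumptions for illustration, not the thesis's actual method:

```python
import numpy as np

def predict_goal(joystick_xy, ee_pos, goals):
    """Hypothetical intention prediction: pick the candidate goal whose
    planar direction from the end effector best aligns with the 2-DoF
    joystick deflection."""
    u = np.asarray(joystick_xy, dtype=float)
    scores = []
    for g in goals:
        d = np.asarray(g, dtype=float)[:2] - np.asarray(ee_pos, dtype=float)[:2]
        d = d / (np.linalg.norm(d) + 1e-9)  # unit direction toward the goal
        scores.append(np.dot(u, d))         # alignment with joystick input
    return int(np.argmax(scores))

def blend(u_user, u_auto, confidence):
    """Shared-control arbitration: mix the user's command with the
    autonomous command, weighted by prediction confidence in [0, 1]."""
    a = np.clip(confidence, 0.0, 1.0)
    return a * np.asarray(u_auto, dtype=float) + (1 - a) * np.asarray(u_user, dtype=float)
```

With two candidate goals along the x- and y-axes and the joystick pushed along +x, `predict_goal` selects the x-axis goal; `blend` then interpolates between manual and autonomous control as confidence grows.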
In this work, we first tested the existing approach in more realistic settings, with objects placed close to each other. The grasp poses are generated from the point cloud of the scene instead of being pre-determined, thus relaxing the requirement of knowing the objects to be grasped in advance. Through testing in these practical settings, we identified that the assistance provided is not sufficient when an object can be grasped with both top and side grasps. We proposed and implemented two solutions, with varying degrees of automation, based on the idea of differentiating between top and side grasps.
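One plausible way to differentiate top and side grasps is by the angle between a grasp pose's approach direction and the vertical. The function and threshold below are illustrative assumptions; the summary does not say how the thesis implements the distinction:

```python
import numpy as np

def classify_grasp(approach_vec, angle_threshold_deg=45.0):
    """Label a grasp pose as 'top' or 'side' from its approach direction.

    approach_vec: 3-vector pointing from the gripper toward the object,
    in a world frame with the z-axis up. A grasp approaching mostly
    downward is a top grasp; one approaching mostly horizontally is a
    side grasp.
    """
    v = np.asarray(approach_vec, dtype=float)
    v = v / np.linalg.norm(v)
    # Angle between the approach direction and straight down (-z).
    cos_to_down = np.dot(v, np.array([0.0, 0.0, -1.0]))
    angle_deg = np.degrees(np.arccos(np.clip(cos_to_down, -1.0, 1.0)))
    return "top" if angle_deg <= angle_threshold_deg else "side"
```

A grasp approaching straight down, `classify_grasp([0, 0, -1])`, is labeled `"top"`; a horizontal approach, `classify_grasp([1, 0, 0])`, is labeled `"side"`.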
To investigate which solution users prefer, and whether the proposed solutions provide more intuitive teleoperation than the existing work and manual teleoperation, we conducted a user study. The experiment showed that with our proposed solutions users are generally able to grasp objects more quickly and easily using top grasps. Surprisingly, however, the solution with more automation was less preferred by the users, especially in the cluttered environment. We discuss various insights gained while developing this system and conducting the human-subject experiments, and suggest ways for further improvement.