Towards haptic intelligence in robots by learning from demonstration

Bibliographic Details
Main Author: Turlapati Sri Harsha
Other Authors: Domenico Campolo
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2022
Subjects: Engineering::Mechanical engineering::Robots
Online Access: https://hdl.handle.net/10356/158704

Description:
Programming robots to perform contact tasks as fluidly and intelligently as humans do is difficult. The interaction forces applied and sensed by the robot, i.e., the haptics of the task, combined with the kinematics of the task, fully describe a successful strategy. However, these kinematic-haptic correspondences vary with the materials of the surfaces in contact, their geometry, the making or breaking of contact, and several other task-specific variables, which makes pre-programmed, feedforward robotic approaches hard to develop. Recording haptic data from human demonstrations addresses the quantitative aspect of this problem; the research challenge then becomes understanding the contact task from the collected data. In pursuit of this larger goal, this thesis proposes frameworks and algorithms that use data collected from human demonstrations of (i) haptic exploration and (ii) dual-arm manipulation tasks to improve spatial perception in robots and to facilitate the learning of contact tasks.

Spatial perception through the sense of touch is closely tied to geometry. This is evident from how we can search for objects in the dark, wield them intelligently without knowing exactly where we are holding them, and even form a mental image of an object by running our hands along its surface. In the first part of this thesis, we build on this latter idea of haptic exploration of surfaces, with a focus on extracting the geometry of fine features. We propose a novel quantity, the haptic mismatch, which identifies and refines missing geometric features based on discrepancies in force feedback during actual interactions with an object. Specifically, the contribution is to view the triangular mesh of an object, i.e., the internal model extracted from its 3D scan, not only as a source of geometric information but also as a source of predicted haptic feedback. The haptic mismatch, computed as the difference between the actual haptic feedback and that predicted by the internal model, is key to refining the internal model locally. The application of this method to detecting and refining missing geometric features such as holes, edges and slots in object scans is presented.
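
As a rough illustration of the idea (a minimal sketch, not code from the thesis), the mismatch can be computed by comparing the sensed contact force against a force predicted from the mesh internal model. The nearest-vertex spring-contact model, the stiffness value and all names below are assumptions made only for illustration.

import numpy as np

def predicted_force(probe, vertices, normals, k=500.0):
    # Treat the mesh internal model as a haptic predictor (nearest-vertex
    # approximation): if the probe penetrates the surface, measured along
    # the outward vertex normal, predict a spring-like normal force;
    # otherwise the probe is in free space and no force is predicted.
    i = np.argmin(np.linalg.norm(vertices - probe, axis=1))
    depth = float(np.dot(normals[i], vertices[i] - probe))  # > 0: penetration
    return k * max(depth, 0.0) * normals[i]

def haptic_mismatch(probe, f_measured, vertices, normals):
    # Discrepancy between the actual haptic feedback and that predicted
    # by the internal model. A large value flags a local modelling error,
    # e.g. a hole or slot missing from the 3D scan: the model predicts
    # contact, but none is felt (or vice versa).
    return float(np.linalg.norm(
        f_measured - predicted_force(probe, vertices, normals)))

Thresholding this quantity along an exploration trajectory would localise the regions where the internal model needs refinement.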
Another aspect of haptic intelligence is the learning of contact tasks from human demonstrations. The paradigm of Learning from Demonstration (LfD) avoids complex programming on the user end and encodes the human skill needed both to complete the task and to generalise to new actions. In this context, we introduce haptic demonstrations via tele-operation to study the internal forces in dual-arm manipulation tasks. In contrast to the kinaesthetic teaching prevalent in LfD research, we propose tele-operated systems that provide haptic feedback to the user while collecting demonstration data. This is motivated by the observation that classical kinaesthetic teaching, in which the human physically guides the robot through a task, is not sufficient for learning the haptics of the task: the forces the human applies to the robot are recorded along with the task forces, thus "corrupting" any useful strategy. With a tele-operated system, we separate the human forces from the task forces while recording both. We build a dual-arm tele-robotic system and record several demonstrations of dual-arm manipulation tasks such as reaching for an object, grasping it, moving and rotating it, and performing assembly tasks with it. We furthermore perform repeated demonstrations of an assembly and disassembly task to statistically encode the haptics of these tasks through the master kinematics. This is validated by the successful autonomous execution of an assembly task by the slave robot, controlled in an open-loop replay of the averaged master kinematics.
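
As a minimal sketch of what averaging the master kinematics could look like, the snippet below resamples each demonstration onto a common normalised time base and takes the per-sample mean and standard deviation. The thesis may align demonstrations differently; the function and variable names here are hypothetical.

import numpy as np

def average_master_kinematics(demos, n_samples=200):
    # demos: list of arrays, each (T_i, D) -- master poses over time,
    # with possibly different durations T_i per demonstration.
    # Returns the mean trajectory (n_samples, D) and per-sample std,
    # after resampling every demo onto a common normalised time base.
    t_common = np.linspace(0.0, 1.0, n_samples)
    resampled = []
    for demo in demos:
        t_demo = np.linspace(0.0, 1.0, len(demo))
        resampled.append(np.column_stack(
            [np.interp(t_common, t_demo, demo[:, d])
             for d in range(demo.shape[1])]))
    stack = np.stack(resampled)  # (n_demos, n_samples, D)
    return stack.mean(axis=0), stack.std(axis=0)

The mean trajectory is what an open-loop replay on the slave robot would follow; the per-sample spread indicates where the repeated demonstrations agree.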
Finally, the geometric aspects behind the haptic signals useful in contact tasks are studied through a quasi-static analysis of a dual-arm manipulation task in which the grasped object is moved and rotated without losing the grasp. Traditional grasping approaches build grasp matrices from geometric knowledge and rely on visual feedback for dexterous manipulation. To the best of our knowledge, no approach has been presented that infers grasp status from the sensed grasp forces alone. This thesis presents a novel approach that uses geometric aspects of the grasp configuration to infer object orientation from the sensed forces. Through the quasi-static equilibrium condition, the grasping force and the contact locations are used to compute a noisy estimate of the object orientation. This further motivates the possibility of using haptic signals in the absence of vision, which is especially useful where manipulation without vision is required, e.g., when the object is occluded by the robot or the environment. The object orientation computed by the proposed method is validated against motion-capture measurements of the object orientation. Future work on robustly learning dexterous manipulation from human demonstrations is laid out.
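
To make the quasi-static argument concrete, here is a planar toy version (the thesis treats the full dual-arm case): given the contact locations in the object frame, the measured contact forces and the object's weight, the orientation is the angle at which the net moment about the object origin vanishes. All quantities and names below are illustrative assumptions.

import numpy as np

def cross2(a, b):
    # Scalar 2-D cross product.
    return a[0] * b[1] - a[1] * b[0]

def estimate_orientation(p_contacts, f_contacts, com, mass, g=9.81):
    # Planar quasi-static orientation estimate from sensed grasp forces.
    #   p_contacts: (N, 2) contact points in the *object* frame
    #   f_contacts: (N, 2) measured contact forces in the *world* frame
    #   com:        (2,)   centre of mass in the object frame
    # Finds the angle theta minimising the residual net moment about the
    # object origin: sum_i R(theta) p_i x f_i + R(theta) c x m g = 0.
    weight = np.array([0.0, -mass * g])

    def residual(theta):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        m = sum(cross2(R @ p, f) for p, f in zip(p_contacts, f_contacts))
        return m + cross2(R @ com, weight)

    thetas = np.linspace(-np.pi, np.pi, 3601)
    return thetas[np.argmin([abs(residual(t)) for t in thetas])]

Because the sensed forces are noisy, the resulting estimate is noisy too, consistent with the thesis's description; it is validated against motion capture.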

Degree: Doctor of Philosophy
School: School of Mechanical and Aerospace Engineering
Research Centre: Robotics Research Center
Supervisor: Domenico Campolo (d.campolo@ntu.edu.sg)
Citation: Turlapati Sri Harsha (2022). Towards haptic intelligence in robots by learning from demonstration. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158704
Funding: MOE Tier 1 grant (RG48/17), Singapore
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).