Towards haptic intelligence in robots by learning from demonstration
Main Author: | |
---|---|
Other Authors: | |
Format: | Thesis-Doctor of Philosophy |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Subjects: | |
Online Access: | https://hdl.handle.net/10356/158704 |
Institution: | Nanyang Technological University |
Summary:

Programming robots to perform contact tasks as fluidly and intelligently as humans do is difficult. The interaction forces applied and sensed by the robot, i.e., the haptics of the task, combined with the kinematics of the task, fully describe a successful strategy. However, these kinematic-haptic correspondences vary with the materials of the surfaces in contact, their geometry, the making or breaking of contact, and several other task-specific variables, which makes it hard to develop pre-programmed, feedforward robotic approaches. Recording haptic data from human demonstrations addresses the quantitative aspect of this problem; the research challenge then becomes understanding the contact task from the collected data. In pursuit of this larger goal, this thesis proposes frameworks and algorithms that use data collected from human demonstrations of (i) haptic exploration and (ii) dual-arm manipulation tasks to improve spatial perception in robots and facilitate the learning of contact tasks.
Spatial perception using the sense of touch is closely tied to geometry. This is evident from how we are able to look for objects in the dark, wield them intelligently without having to know where we are holding them, and even form a mental image of an object by running our hands along its surface. In the first part of this thesis, we build on this latter idea of performing haptic exploration of surfaces, with a focus on extracting the geometry of fine features. We propose a novel quantity, the haptic mismatch, which identifies and refines missing geometric features based on the discrepancy in force feedback from actual interactions with an object. Specifically, the contribution here is to view the triangular mesh of an object, i.e., the internal model extracted from its 3D scan, not only as a source of geometric information but also of haptic feedback. The haptic mismatch, computed as the difference between the actual haptic feedback and that predicted by the internal model, is key to refining the internal model locally. We present the application of this method to refining and then detecting missing geometric features such as holes, edges, and slots in object scans.
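The summary does not give the exact formulation, but the core idea — comparing the sensed contact force against the force predicted from the triangular mesh — can be sketched as below. All names here are hypothetical, and the choice of the nearest face normal as the predicted force direction is an illustrative assumption, not the thesis's actual algorithm:

```python
import numpy as np

def predicted_force(vertices, faces, contact_point, force_magnitude):
    """Predict the contact force from the internal mesh model.

    Illustrative assumption: the predicted force acts along the normal
    of the mesh face whose centroid is nearest to the probed contact
    point. A real implementation would use a proper point-to-surface
    query on the mesh.
    """
    centroids = vertices[faces].mean(axis=1)          # (F, 3) face centroids
    nearest = np.argmin(np.linalg.norm(centroids - contact_point, axis=1))
    v0, v1, v2 = vertices[faces[nearest]]
    normal = np.cross(v1 - v0, v2 - v0)
    normal /= np.linalg.norm(normal)
    return force_magnitude * normal

def haptic_mismatch(f_sensed, f_predicted):
    """Scalar discrepancy between sensed and model-predicted forces."""
    return np.linalg.norm(np.asarray(f_sensed) - f_predicted)
```

Under this reading, contact points where the mismatch stays high would flag candidate regions — holes, edges, slots missing from the scan — for local refinement of the mesh.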
Another aspect of haptic intelligence is the learning of contact tasks from human demonstrations. The paradigm of Learning from Demonstration (LfD) avoids complex coding on the user end and encodes human skill that is beneficial for completing the task and generalising to new actions. In this context, we introduce our approach of haptic demonstrations via tele-operation to study the internal forces in dual-arm manipulation tasks. As opposed to the kinaesthetic teaching that is prevalent in LfD research, we propose to use tele-operated systems endowed with the ability to provide haptic feedback to the user while collecting demonstration data. This is motivated by the observation that classical kinaesthetic teaching, in which the robot is physically guided through a task, is not sufficient for learning the haptics of the task: the forces the human applies to the robot are recorded along with the task forces, thus "corrupting" any useful strategy. With a tele-operated system, we separate the human forces from the task forces while recording both. We build a dual-arm tele-robotic system and record several demonstrations of dual-arm manipulation tasks such as reaching for an object, grasping it, moving and rotating it, and performing assembly tasks with it. We furthermore perform repeated demonstrations of an assembly and disassembly task to statistically encode the haptics of these tasks through the master kinematics. This is validated by a successful autonomous execution of an assembly task by the slave robot, controlled in open-loop replay of the averaged master kinematics.
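A minimal sketch of how repeated master-side demonstrations could be statistically encoded for open-loop replay: time-normalise each recorded trajectory to a common length, then take the per-sample mean and standard deviation. The thesis's actual encoding is not specified in this summary; the function names and the linear-resampling choice are assumptions:

```python
import numpy as np

def resample(traj, n_samples):
    """Linearly resample a (T, D) trajectory onto n_samples time steps."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_samples)
    return np.stack(
        [np.interp(t_new, t_old, traj[:, d]) for d in range(traj.shape[1])],
        axis=1)

def encode_demonstrations(demos, n_samples=200):
    """Statistically encode repeated demonstrations of one task.

    demos: list of (T_i, D) arrays of recorded master kinematics.
    Returns the per-sample mean trajectory (usable for open-loop
    replay) and the per-sample standard deviation (a measure of
    demonstration consistency).
    """
    aligned = np.stack([resample(d, n_samples) for d in demos])  # (N, n, D)
    return aligned.mean(axis=0), aligned.std(axis=0)
```

Note that linear interpolation and per-coordinate averaging are only appropriate for positions or joint angles; orientations represented as quaternions would need slerp and a proper rotation average.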
Finally, the geometric aspects behind the haptic signals useful in contact tasks are studied through a quasi-static analysis of a dual-arm manipulation task in which the grasped object is moved and rotated without losing the grasp. Traditional grasping approaches build grasp matrices using knowledge of the geometry and use visual feedback to perform dexterous manipulation. To the best of my knowledge, no approach has been presented that infers grasp status from the sensed forces alone. In this thesis, a novel approach is presented that uses the geometric aspects of a grasp configuration to infer object orientation from the sensed forces. Through the quasi-static equilibrium condition, the grasping forces and the contact locations are used to compute a noisy estimate of the object orientation. This further motivates the possibility of using haptic signals in the absence of vision, which is especially useful in scenarios where manipulation without vision is required, e.g., when the object is occluded by the robot or the environment. The object orientation computed by the proposed method is validated against motion-capture data of the object orientation. Future work towards robustly learning dexterous manipulation from human demonstrations is laid out.
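The flavour of the quasi-static argument can be sketched as follows. The thesis uses both the grasping forces and the contact locations; this simplified sketch drops the torque balance and keeps only the force balance, so it recovers just the gravity-referenced part of the orientation. The function names, frame conventions, and this simplification are all assumptions for illustration:

```python
import numpy as np

def up_direction_in_object_frame(contact_forces, mass, g=9.81):
    """Quasi-static force balance for a stably grasped object.

    contact_forces: (k, 3) contact forces expressed in the object frame
    (assumes known sensor-to-object transforms). In quasi-static
    equilibrium the contact forces jointly support the object's weight,
    so their sum points along the world 'up' axis as seen from the
    object frame.
    """
    f_net = np.sum(np.asarray(contact_forces), axis=0)
    up = f_net / (mass * g)
    return up / np.linalg.norm(up)

def roll_pitch_from_up(up_obj):
    """Partial orientation (roll, pitch) of the object w.r.t. gravity.

    Rotation about the gravity axis (yaw) is unobservable from the
    force balance alone, and sensor noise propagates directly into the
    result -- hence a noisy, partial orientation estimate.
    """
    ux, uy, uz = up_obj
    roll = np.arctan2(uy, uz)
    pitch = np.arctan2(-ux, np.hypot(uy, uz))
    return roll, pitch
```

Incorporating the contact locations, as the thesis does, adds the torque-balance constraints and tightens this estimate; the sketch above only illustrates why sensed forces carry orientation information at all.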