Human–robot co-manipulation during surface tooling: a general framework based on impedance control, haptic rendering and discrete geometry
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/159676
Institution: Nanyang Technological University
Summary: Despite the advancements in machine learning and artificial intelligence, many tooling tasks have cognitive aspects that remain challenging for robots to handle in full autonomy, and thus still require a certain degree of interaction with a human operator. In this paper, we propose a theoretical framework for both planning and execution of robot–surface contact tasks in which interaction with a human operator can be accommodated to a variable degree.

The starting point is the geometry of the surface, which we assume known and available in a discretized format, e.g. obtained through scanning technologies. To allow for real-time computation, rather than interacting with thousands of vertices, the robot interacts only with a single proxy, i.e. a massless virtual object constrained to 'live on' the surface and subject to first-order viscous dynamics. The proxy and an impedance-controlled robot are then connected through a tuneable and possibly viscoelastic coupling, i.e. (virtual) springs and dampers. On the one hand, the proxy slides along discrete geodesics of the surface in response both to the viscoelastic coupling with the robot and to a possible external force (a virtual force which can be used to induce autonomous behaviours). On the other hand, the robot is free to move in 3D in reaction to the same viscoelastic coupling as well as to a possible external force, which includes an actual force exerted by a human operator.

The proposed approach is multi-objective in the sense that different operational (autonomous/collaborative) and interactive (contact/non-contact) modalities can be realized simply by modulating the viscoelastic coupling as well as the virtual and physical external forces. We believe that the proposed framework may lead to a more intuitive interface for robot programming, as opposed to standard coding. To this end, we also present numerical and experimental studies demonstrating path planning as well as autonomous and collaborative interaction for contact tasks with a free-form surface.
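The coupling scheme described in the summary can be sketched numerically. The snippet below is a minimal, illustrative simulation only, not the authors' implementation: a flat surface z = 0 stands in for the discretized free-form surface (so tangential projection replaces sliding along discrete geodesics), the gains are made-up values, and a constant force substitutes for a human operator. It shows the two coupled dynamics: the massless proxy with first-order viscous behaviour constrained to the surface, and the impedance-controlled robot free in 3D, linked by a virtual spring–damper.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper)
dt = 1e-3          # integration step [s]
k, d = 500.0, 20.0  # virtual spring-damper coupling robot <-> proxy
b_proxy = 5.0       # first-order viscous coefficient of the proxy
M, D = 2.0, 30.0    # impedance parameters (mass, damping) of the robot

def project_to_surface(f):
    """Keep only the tangential component; a flat surface z = 0
    stands in for motion along discrete geodesics."""
    f = f.copy()
    f[2] = 0.0
    return f

x_r = np.array([0.0, 0.0, 0.1])   # robot position, free in 3D
v_r = np.zeros(3)
x_p = np.zeros(3)                  # proxy position, constrained to z = 0

f_human = np.array([1.0, 0.0, 0.0])  # stand-in for an operator's force

for _ in range(2000):  # simulate 2 seconds
    # Proxy: massless, first-order viscous dynamics on the surface,
    # driven by the elastic pull of the robot (plus any virtual force).
    f_on_proxy = k * (x_r - x_p)
    v_p = project_to_surface(f_on_proxy) / b_proxy
    x_p += v_p * dt

    # Robot: impedance dynamics driven by the viscoelastic coupling
    # with the proxy and by the external (human) force.
    f_couple = k * (x_p - x_r) + d * (v_p - v_r)
    a_r = (f_couple + f_human - D * v_r) / M
    v_r += a_r * dt
    x_r += v_r * dt
```

With these values the robot is drawn onto the surface by the spring while the proxy slides along it under the tangential pull, tracking the robot as the external force drags both in the x direction; modulating `k`, `d`, and the external forces switches between the contact/non-contact and autonomous/collaborative modalities described above.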