Vision-based distributed formation tracking control and coordination of multi-mobile robot systems
Saved in:
Main Author:
Other Authors:
Format: Thesis - Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175478
Institution: Nanyang Technological University
Summary: Multi-mobile robot coordination has vast applications in various fields, including industry, agriculture, rescue, and exploration. The collaborative efforts of multiple robots can greatly enhance work efficiency, increase system resilience, and handle complicated missions that would be challenging for a single robot, such as transporting larger or heavier objects. Formation tracking control, a key part of multi-robot coordination, focuses on synchronizing the robots' motion while preserving their relative positions and orientations. It has attracted considerable research attention for its effectiveness in addressing collaborative tasks. Vision sensors, particularly cameras, are widely used in formation tracking, enabling robots to identify target robots, measure relative positions, and observe the surrounding environment. These factors have spurred research momentum in the field of vision-based robot formation control and coordination.
Despite their numerous advantages, vision sensors face several challenges in real-world formation tracking that prevent robots from accurately observing their target robots and surrounding environments, resulting in tracking failures. One of the main challenges stems from the camera's constrained field-of-view (FOV), which most existing studies overlook. To close this research gap, our first endeavor is a thorough investigation of the influence of limited FOV on leader-follower formation tracking of mobile robots among obstacles. Here, we mount only one RGB-D camera on the follower, whereas most existing research employs at least two sensors, typically a camera to recognize the leader and a LiDAR to observe obstacles. The limited FOV of RGB-D cameras introduces three significant challenges: intermittent observation of the leader, partial detection of obstacles, and conflicting observations of the leader and obstacles. To overcome these challenges, we first propose a controller that enables the follower to maintain continuous observation and formation tracking of the leader in obstacle-free settings without relying on depth images. To address the partial detection of obstacles, we design a rotating device that allows the camera to actively extend its observation range and resolve conflicts with leader detection. Building on the camera feedback and the proposed controller, a multi-objective controller is presented that enables the follower to achieve formation tracking and obstacle avoidance simultaneously, without requiring communication or knowledge of the leader's velocity.
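The FOV-constrained leader-follower idea described above can be illustrated with a minimal sketch. This is not the thesis's controller: the FOV half-angle, the gains, and the recovery rotation rate are all assumed values, and the real method additionally handles obstacles and the rotating camera device.

```python
import math

# Illustrative sketch (assumed parameters, not the thesis controller):
# a unicycle follower keeps the leader inside the camera's limited FOV
# while regulating the leader's relative range and bearing.

FOV_HALF_ANGLE = math.radians(35.0)   # assumed horizontal half-FOV of an RGB-D camera
K_RANGE, K_BEARING = 0.8, 2.0         # assumed proportional control gains

def leader_in_fov(bearing):
    """True if the leader's bearing (rad, in the robot frame) lies inside the FOV."""
    return abs(bearing) <= FOV_HALF_ANGLE

def follower_cmd(rel_range, bearing, desired_range=1.0):
    """Return (v, omega): drive the range error to zero and center the leader.

    Turning toward the leader is prioritized so the leader does not
    leave the field of view (no depth/obstacle handling in this sketch).
    """
    if not leader_in_fov(bearing):
        # Leader lost: stop translating and rotate toward the last bearing.
        return 0.0, 1.5 * math.copysign(1.0, bearing)
    v = K_RANGE * (rel_range - desired_range)   # close the range error
    omega = K_BEARING * bearing                 # center the leader in the image
    return v, omega
```

In this sketch the rotation-only recovery mode stands in for the thesis's continuous-observation guarantee; the actual controller achieves this without switching behaviors.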
Moving forward, we delve into applications of multi-robot formation control in cooperative object transportation. Beyond vision-sensor constraints, actual transportation tasks introduce additional practical constraints, such as the absence of environment maps, global localization, and inter-robot communication, as well as obstacle avoidance, payload protection, and bounded robot velocities. These practical constraints have likewise received little attention in existing work. To address them, we start with a relatively simple scenario involving two robots in a leader-follower configuration. First, a visual servo device is installed on the follower robot to overcome the limited-FOV constraint. Then, the leader employs a bounded navigation method to direct the system to the destination without maps. Meanwhile, the follower utilizes a bounded multi-objective controller and a danger recovery scheme to effectively balance tracking the leader against maintaining all constraints, without inter-robot communication.
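One ingredient of the bounded controllers mentioned above, keeping commands within hard velocity limits, can be sketched simply. The bounds and the uniform-scaling strategy here are assumptions for illustration, not the thesis's scheme, which integrates the bounds into the multi-objective controller itself.

```python
# Illustrative sketch (assumed limits, not the thesis scheme): saturate a
# raw (v, omega) command so both velocity bounds hold while preserving
# the commanded curvature (the v/omega ratio).

V_MAX, W_MAX = 0.5, 1.0   # assumed linear (m/s) and angular (rad/s) bounds

def bound_cmd(v, w):
    """Uniformly scale (v, w) so |v| <= V_MAX and |w| <= W_MAX."""
    scale = max(1.0, abs(v) / V_MAX, abs(w) / W_MAX)
    return v / scale, w / scale
```

Scaling both components by the same factor, rather than clamping each independently, keeps the robot on its intended path at a reduced speed, which matters when a follower must not deviate from the payload-safe trajectory.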
Finally, we expand our vision-based cooperative transportation algorithm to accommodate multiple robots, specifically more than three, by enhancing conventional formation control. This extended algorithm addresses the practical constraints above and is validated using a cable-suspended payload. It overcomes the limitations of distributed control in managing multiple constraints and significantly reduces complexity compared with optimization-based methods. Our transportation method consists of two key components: robot trajectory generation and trajectory tracking. Unlike most time-consuming trajectory generation methods, our approach achieves constant time complexity without relying on global maps. For trajectory tracking, our control-based method not only handles multiple constraints as readily as optimization-based methods, but also reduces their time complexity from polynomial to linear.
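To make the constant-time claim concrete, consider the simplest way a follower reference can be derived from a leader pose: a fixed formation offset applied by a rigid-body transform. This sketch is an assumed simplification, not the thesis's generation method, but it shows why per-waypoint cost can be independent of map size and horizon length.

```python
import math

# Illustrative sketch (assumed, not the thesis algorithm): each follower's
# reference pose is the leader pose composed with a fixed offset expressed
# in the leader's frame, so one waypoint costs O(1) per robot.

def follower_reference(leader_pose, offset):
    """leader_pose = (x, y, theta); offset = (dx, dy) in the leader frame."""
    x, y, th = leader_pose
    dx, dy = offset
    rx = x + dx * math.cos(th) - dy * math.sin(th)   # rotate offset into world frame
    ry = y + dx * math.sin(th) + dy * math.cos(th)
    return rx, ry, th
```

Usage: with the leader at (1, 1) heading along +y (theta = pi/2), a follower offset one meter behind the leader, i.e. offset (-1, 0), yields the reference (1, 0, pi/2).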
Overall, this thesis explores the vision-based formation tracking control problem and its practical application in cooperative object transportation. It tackles the practical challenges associated with vision sensors and real-world transportation, filling a gap in multi-robot research by bridging theory and practice. All proposed algorithms are distributed and thoroughly validated through numerical simulations and real-world robot experiments.