Flexible and resource-efficient multi-robot collaborative visual-inertial-range localization
| Main Authors | |
| --- | --- |
| Other Authors | |
| Format | Article |
| Language | English |
| Published | 2022 |
| Subjects | |
| Online Access | https://hdl.handle.net/10356/162341 |
Abstract: In multi-robot systems, two important research problems are relative localization between the robots and global localization of all robots in a common frame. Traditional methods rely on detecting inter- and intra-robot loop closures, which can be operationally restrictive since the robots must travel in loops. Ultra-wideband (UWB) sensors, which provide direct distance measurements and the robot ID, can replace loop closures in many applications. However, existing research on UWB-aided multi-robot state estimation often ignores odometry drift, which degrades the global position estimate in the long run. In this work, we present a UWB-aided multi-robot localization system that does not rely on loop closures (flexible) and requires only odometry data from neighbors (resource-efficient). We propose a two-stage approach: 1) over a long sliding window, the relative transformation between robots is refined from range and odometry data; 2) onboard visual-inertial-range data are tightly fused in a short-term sliding window to provide more accurate local and global estimates. Simulation and real-life experiments with two quadrotors show that the system as a whole outperforms previous approaches as well as its individual parts.
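
For intuition, a minimal sketch (not the authors' implementation) of the first stage could pose the relative transform between two robots' odometry frames as a small nonlinear least-squares problem over the ranges collected in one sliding window. Everything below is an assumption for illustration: the 4-DoF (x, y, z, yaw) parameterization, the Huber loss against UWB outliers, and all helper names are hypothetical.

```python
# Illustrative sketch, NOT the paper's method: refine the relative
# transform between two robots' odometry frames from UWB ranges and
# odometry positions over one sliding window.
import numpy as np
from scipy.optimize import least_squares

def yaw_rot(theta):
    """Rotation matrix about the z-axis by yaw angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def range_residuals(params, p_a, p_b, ranges):
    """Predicted minus measured UWB ranges after mapping robot B's
    odometry positions into robot A's frame with the candidate
    transform params = [x, y, z, yaw] (assumed parameterization)."""
    t, yaw = params[:3], params[3]
    p_b_in_a = (yaw_rot(yaw) @ p_b.T).T + t
    predicted = np.linalg.norm(p_a - p_b_in_a, axis=1)
    return predicted - ranges

def refine_relative_transform(p_a, p_b, ranges, init=None):
    """Refine the frame-to-frame transform over one window.
    p_a, p_b: (N, 3) odometry positions of robots A and B, each in
    its own frame, at the N range-measurement timestamps."""
    x0 = np.zeros(4) if init is None else init
    sol = least_squares(range_residuals, x0, args=(p_a, p_b, ranges),
                        loss="huber")  # robust to occasional UWB outliers
    return sol.x

# Toy usage: simulate a known offset between the frames and recover it.
rng = np.random.default_rng(0)
p_a = rng.uniform(-5.0, 5.0, size=(200, 3))
p_b = rng.uniform(-5.0, 5.0, size=(200, 3))
true = np.array([2.0, -1.0, 0.5, 0.3])            # ground-truth [x, y, z, yaw]
p_b_in_a = (yaw_rot(true[3]) @ p_b.T).T + true[:3]
ranges = np.linalg.norm(p_a - p_b_in_a, axis=1) \
         + 0.05 * rng.standard_normal(200)        # noisy UWB measurements
print(refine_relative_transform(p_a, p_b, ranges))  # approx. [2, -1, 0.5, 0.3]
```

A 4-DoF parameterization is a common choice when aligning visual-inertial odometry frames, since roll and pitch are typically observable from the IMU; the paper's actual formulation of both stages may differ.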