A self-supervised monocular depth estimation approach based on UAV aerial images
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2022
Subjects:
Online Access: https://hdl.handle.net/10356/162468
Institution: Nanyang Technological University
Summary: Unmanned Aerial Vehicles (UAVs) have gained increasing attention in recent years, and depth estimation is one of the essential tasks for their safe operation, especially for drones flying at low altitudes. Given the size and payload limitations of UAVs, deep-learning-based methods have replaced traditional sensors as the mainstream approach for predicting per-pixel depth.
Because supervised depth estimation methods require a massive amount of ground-truth depth as the supervisory signal, this article proposes an unsupervised framework for predicting the depth map from a sequence of monocular images. Our model resolves scale ambiguity by training the depth subnetwork jointly with the pose subnetwork. Moreover, we introduce a modified loss function that combines a weighted photometric loss with an edge-aware smoothness loss to optimize training. The evaluation results are compared against the same model without the weighted loss and against other unsupervised monocular depth estimation models (Monodepth and Monodepth2). Our model outperforms the others, indicating its potential to enhance the capability of UAVs to estimate distance to the surrounding environment.
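The abstract only names the two loss terms, so the sketch below is an illustrative NumPy reconstruction, not the paper's implementation: the photometric term is a plain weighted L1 difference between the target frame and the view-synthesized (warped) frame (the paper's exact weighting scheme, and any SSIM component as in Monodepth2, are not given in the abstract), and the edge-aware smoothness term down-weights the disparity-gradient penalty where the image itself has strong gradients. The weights `alpha` and `smooth_w` are assumed hyperparameters.

```python
import numpy as np

def edge_aware_smoothness(disp, img):
    """Edge-aware smoothness: penalize disparity gradients less where the
    image has strong gradients (likely object edges)."""
    disp = disp / (disp.mean() + 1e-7)           # mean-normalize disparity
    ddx = np.abs(np.diff(disp, axis=1))          # horizontal disparity gradient
    ddy = np.abs(np.diff(disp, axis=0))          # vertical disparity gradient
    idx = np.mean(np.abs(np.diff(img, axis=1)), axis=2)  # image gradient (x)
    idy = np.mean(np.abs(np.diff(img, axis=0)), axis=2)  # image gradient (y)
    # exp(-|image gradient|) relaxes the smoothness penalty at edges.
    return np.mean(ddx * np.exp(-idx)) + np.mean(ddy * np.exp(-idy))

def total_loss(target, warped, disp, alpha=0.85, smooth_w=1e-3):
    """Combined objective: weighted photometric term plus edge-aware
    smoothness. `warped` is the source frame synthesized into the target
    view using the predicted depth and pose."""
    photometric = alpha * np.mean(np.abs(target - warped))   # simplified L1 term
    return photometric + smooth_w * edge_aware_smoothness(disp, target)
```

A usage sketch: during training, `warped` would come from differentiably warping an adjacent video frame with the depth and pose subnetwork outputs; with a perfect reconstruction the photometric term vanishes and only the smoothness regularizer remains.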