IoT-based disaster detection using multi-output classification
| Main Author: | |
|---|---|
| Format: | Final Year Project / Dissertation / Thesis |
| Published: | 2022 |
| Subjects: | |
| Online Access: | http://eprints.utar.edu.my/5233/1/BI_1703758_Final_%2D_YI_JIE_WONG.pdf http://eprints.utar.edu.my/5233/ |
| Institution: | Universiti Tunku Abdul Rahman |

Summary: Deep learning (DL) can learn useful insights from disaster events and detect the number of victims and the type of disaster activity, enabling efficient and timely rescue operations. Monitoring such disasters over large areas, however, requires a plethora of Internet of Things (IoT) devices, which often have limited processing capacity. Furthermore, centralized training, which demands collecting the local datasets held by each IoT node, is impractical for resource-constrained IoT networks. To realize the full potential of IoT, this project proposes a holistic IoT-based disaster detection framework that optimizes performance at both the training and inference levels. The starting point is the design of a YOLO-based multi-task model that jointly performs disaster classification and victim detection, eliminating the need to run multiple individual DL models. Next, federated learning (FL), in combination with active learning (AL), is leveraged to enable collaborative training of a global model among IoT devices without sharing bandwidth-hungry raw data. Lastly, at the inference stage, the Open Visual Inference and Neural Network Optimization (OpenVINO) toolkit is utilized to optimize the trained model for real-time implementation. Experimental results show that the multi-task model achieves an F1 score of up to 0.7933 for disaster classification and an average precision (AP) of 0.6938 for victim detection. The OpenVINO-optimized model runs at 16.46 frames per second (FPS), more than double the speed of the original model before model optimization and compression.
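The summary describes a YOLO-based multi-task model that shares one backbone between disaster classification and victim detection. Below is a minimal conceptual sketch of that idea in PyTorch; the tiny stand-in backbone, the number of disaster classes, and the detection-head layout are illustrative assumptions, not the project's actual architecture.

```python
# Conceptual multi-task sketch: one shared backbone, two task heads.
import torch
import torch.nn as nn

class MultiTaskDisasterNet(nn.Module):
    def __init__(self, num_disaster_classes=4, num_anchors=3):
        super().__init__()
        # Tiny stand-in backbone; the project uses a YOLO backbone instead.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Task 1: image-level disaster classification.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_disaster_classes),
        )
        # Task 2: per-cell victim detection (x, y, w, h, objectness per anchor).
        self.det_head = nn.Conv2d(32, num_anchors * 5, 1)

    def forward(self, x):
        features = self.backbone(x)
        return self.cls_head(features), self.det_head(features)

# Toy forward pass on a single 416x416 frame.
model = MultiTaskDisasterNet()
cls_logits, det_map = model(torch.randn(1, 3, 416, 416))
print(cls_logits.shape, det_map.shape)  # torch.Size([1, 4]) torch.Size([1, 15, 104, 104])
```

Running both tasks off the same features is what lets a single model replace two separate DL models on a resource-constrained IoT node.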
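For the federated learning stage, the summary does not detail the aggregation rule, so the sketch below assumes standard FedAvg-style weighted averaging of client model weights by local dataset size; the active-learning component is not shown.

```python
# Minimal FedAvg-style aggregation sketch (assumed scheme, not the project's exact one).
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model weights, weighted by local dataset size.

    client_weights: list of per-client weight lists (one numpy array per layer)
    client_sizes:   list of local sample counts, used as aggregation weights
    """
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    global_weights = []
    for layer in range(num_layers):
        # Weighted sum of this layer's parameters across all clients.
        layer_avg = sum(
            (n / total) * w[layer] for w, n in zip(client_weights, client_sizes)
        )
        global_weights.append(layer_avg)
    return global_weights

# Toy usage: two "IoT nodes" with different amounts of local data.
client_a = [np.ones((2, 2)), np.zeros(2)]
client_b = [np.full((2, 2), 3.0), np.ones(2)]
print(fed_avg([client_a, client_b], client_sizes=[100, 300]))
```

Only the model weights travel to the aggregator, which is how FL avoids sharing the bandwidth-hungry raw data mentioned in the summary.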
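For the inference stage, the following is a hedged sketch of running the converted model with the OpenVINO Runtime Python API; the IR file name, input shape, and device are assumptions for illustration, and the exact toolkit version used in the project is not stated.

```python
# Sketch of OpenVINO inference on the optimized IR model, plus a rough FPS measurement.
import time
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("disaster_yolo.xml")            # assumed IR filename
compiled = core.compile_model(model, device_name="CPU")

# Dummy 1x3x416x416 input; a real IoT node would feed a preprocessed camera frame.
frame = np.random.rand(1, 3, 416, 416).astype(np.float32)
result = compiled([frame])[compiled.output(0)]
print("output shape:", result.shape)

# Rough throughput measurement, analogous to the FPS figure reported in the summary.
n_runs = 50
start = time.perf_counter()
for _ in range(n_runs):
    compiled([frame])
fps = n_runs / (time.perf_counter() - start)
print(f"approx. FPS: {fps:.2f}")
```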