A joint clustering and routing scheme using DQN for vertical routing in 5G-based flying ad-hoc networks

Bibliographic Details
Main Author: Muhammad, Fahad Khan
Format: Thesis
Published: 2021
Online Access: http://eprints.sunway.edu.my/2380/
Institution: Sunway University
Summary: Flying ad-hoc networks (FANETs), which are instances of 5G access networks, consist of unmanned aerial vehicles (UAVs), or flying nodes, with scarce resources and high mobility rates. Due to limited residual energy, frequent link disconnections and network partitions cannot be addressed by merely increasing transmission range. Consequently, network performance degrades, with lower throughput and higher end-to-end delay. The main motivation behind this work is to address frequent link disconnections and network partitions by selecting routes with higher residual energy and lower mobility rates across network planes, thereby enhancing network performance.

In this research, a deep Q-network (DQN)-based vertical routing scheme is proposed to select routes with higher residual energy and lower mobility rates across network planes (i.e., the macro-plane, pico-plane, and femto-plane) of the 5G access network, an approach that has not been investigated in the literature. The 5G access network has a central controller (CC) and distributed controllers (DCs) in the different network planes. The proposed hybrid scheme allows the CC and DCs to handle global and local information, respectively, and to exchange this information among themselves. The scheme is suitable for highly dynamic ad-hoc networks, and it can be applied in catastrophic and disaster areas to provide data communication between UAVs, in border monitoring and surveillance, and in target-based operations (e.g., object tracking). Vertical routing is performed over a clustered network, in which clusters are formed across different network planes to provide inter-plane and inter-cluster communication. This helps to offload data traffic across network planes and thereby extend network lifetime.

DQN is a deep reinforcement learning (DRL) approach that integrates deep learning and reinforcement learning, and it is suitable for solving high-dimensional and complex problems. Replay memory plays an important role in DQN by storing and reusing past experiences. However, redundant experiences and sequences of experiences in the replay memory make it challenging to manage experiences during training. An enhanced deep Q-network (E-DQN) is proposed to address these challenges, which are prevalent in the traditional DQN. E-DQN ensures that experiences in the replay memory are distinctive, and shuffles them to prevent actions from forming sequences that can cause uniformity. The E-DQN is applied to a routing scheme to select routes with higher residual energy and lower mobility rates across network cells (i.e., macrocells, picocells, and femtocells) in 5G networks. This helps to prevent premature convergence of the network's delayed reward and loss function.

Simulation results show that the delayed reward and loss function of E-DQN converge. A higher learning rate can increase the convergence rate and prevent both premature convergence and convergence to sub-optimal actions. Compared with the traditional reinforcement learning approach, DQN-based vertical routing increases network lifetime by up to 60%, reduces energy consumption by up to 20%, and reduces the rate of link breakages by up to 50%.
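To make the route-selection objective concrete, the following minimal Python sketch shows one way a per-link reward could combine residual energy and mobility rate so that energy-rich, slow-moving next hops score highest. The function name link_reward, the weight alpha, and the normalization bounds are illustrative assumptions, not values taken from the thesis.

def link_reward(residual_energy, mobility_rate,
                e_max=100.0, m_max=30.0, alpha=0.5):
    # Hypothetical reward favoring stable, energy-rich next hops.
    # e_max: maximum battery level; m_max: maximum node speed.
    energy_term = residual_energy / e_max          # normalized to [0, 1]
    stability_term = 1.0 - mobility_rate / m_max   # low mobility -> high reward
    return alpha * energy_term + (1.0 - alpha) * stability_term

# Example: the agent prefers the neighbor with the highest reward.
neighbors = {"uav_a": (80.0, 5.0), "uav_b": (40.0, 25.0)}
best = max(neighbors, key=lambda n: link_reward(*neighbors[n]))
# best == "uav_a": more residual energy and a lower mobility rate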
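Similarly, the E-DQN replay-memory behaviour described above (distinctive experiences plus shuffling) could be sketched as a small deduplicating buffer. The class name, capacity, and hashing approach are assumptions for illustration; random minibatch sampling already breaks temporal order, so the extra shuffle simply mirrors the description in the abstract.

import random
from collections import deque

class EDQNReplayBuffer:
    # Sketch of the E-DQN replay-memory ideas: reject redundant
    # experiences and shuffle sampled batches so actions do not form
    # uniform sequences. Names are illustrative, not from the thesis;
    # states are assumed hashable (e.g., tuples).
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.buffer = deque(maxlen=capacity)
        self.keys = set()  # hashes of stored experiences, for dedup

    def push(self, state, action, reward, next_state, done):
        experience = (state, action, reward, next_state, done)
        key = hash(experience)
        if key in self.keys:
            return  # skip redundant experiences (keep them distinctive)
        if len(self.buffer) == self.capacity:
            # the deque will evict the oldest entry on append
            self.keys.discard(hash(self.buffer[0]))
        self.buffer.append(experience)
        self.keys.add(key)

    def sample(self, batch_size):
        batch = random.sample(list(self.buffer),
                              min(batch_size, len(self.buffer)))
        random.shuffle(batch)  # break any residual ordering of actions
        return batch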