ADFPA – A deep reinforcement learning-based flow priority allocation scheme for throughput optimization in FANETs

Wei Jian Lau, Joanne Mun-Yee Lim, Chun Yong Chong, Nee Shen Ho, Thomas Wei Min Ooi

Research output: Contribution to journal › Article › Research › peer-review

1 Citation (Scopus)

Abstract

Flying ad hoc networks (FANETs) are easy to deploy and cost-efficient; however, they are limited by the static protocols used in 802.11 and CSMA-based networks when supporting high-bandwidth multi-UAV applications. This work proposes an Anticipatory Dynamic Flow Priority Allocation (ADFPA) scheme that optimizes the priority levels of a transmitting node's outgoing traffic flows to maximize total network throughput. Unlike other deep reinforcement learning (DRL)-based schemes in centralized networks, ADFPA is designed to be distributed, multi-agent, and proactive. It uses current and forecasted multi-context information to optimize the priority levels of traffic flows in a decentralized and dynamic FANET. Furthermore, a traffic flow sampling and padding algorithm is proposed so that a trained agent can be redeployed in different environments without retraining, addressing the practicality issue. Our evaluations show that ADFPA outperforms other state-of-the-art schemes by up to 37% and 59.4% in network throughput in the single- and multi-transmitting-node environments, respectively, while achieving the best fairness among all schemes. These improvements translate to better data transmission capabilities in a conventional FANET, and the proposed scheme can enable the use of a FANET architecture in more demanding applications without switching to centralized solutions.
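The abstract's traffic flow sampling and padding idea can be illustrated with a minimal sketch: the agent's observation is forced to a fixed shape by down-sampling when there are more flows than the budget and zero-padding when there are fewer, so a policy trained in one environment can be reused in another with a different number of flows. This is not the paper's algorithm; the function name, flow budget, and per-flow features below are illustrative assumptions.

```python
# Hypothetical sketch (not from the paper): build a fixed-size observation
# from a variable number of per-flow feature vectors via sampling/padding.
import numpy as np

def build_observation(flow_features, max_flows=8, rng=None):
    """flow_features: list of equal-length feature vectors, one per outgoing flow.

    Returns a (max_flows, feature_dim) array: if more flows exist than
    max_flows, a random subset is sampled; if fewer, zero rows pad the rest.
    """
    rng = rng or np.random.default_rng()
    flows = np.asarray(flow_features, dtype=np.float32)
    n, dim = flows.shape

    if n > max_flows:                       # down-sample to the fixed budget
        idx = rng.choice(n, size=max_flows, replace=False)
        return flows[idx]
    if n < max_flows:                       # pad with all-zero "dummy" flows
        pad = np.zeros((max_flows - n, dim), dtype=np.float32)
        return np.vstack([flows, pad])
    return flows

# Example: 3 flows, each described by assumed features
# (queue length, current rate, forecasted demand)
obs = build_observation([[5, 1.2, 0.8], [2, 0.4, 0.1], [9, 2.0, 1.5]], max_flows=4)
print(obs.shape)   # (4, 3)
```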

Original language: English
Article number: 100684
Number of pages: 14
Journal: Vehicular Communications
Volume: 44
DOIs
Publication status: Published - Dec 2023

Keywords

  • Anticipatory Networking
  • Reinforcement Learning
  • Unmanned Aerial Vehicle
