Publication

Deep Graph-based Reinforcement Learning for Joint Cruise Control and Task Offloading for Aerial Edge Internet-of-Things (EdgeIoT)

File: ART_CISTER-TR-220602_2022.pdf (683.97 KB, Adobe PDF)

Abstract

This paper puts forth an aerial edge Internet-of-Things (EdgeIoT) system, where an unmanned aerial vehicle (UAV) is employed as a mobile edge server to process mission-critical computation tasks of ground Internet-of-Things (IoT) devices. When the UAV schedules an IoT device to offload its computation task, the tasks buffered at the other, unselected devices could become outdated and have to be cancelled. We investigate a new joint optimization of UAV cruise control and task offloading allocation, which maximizes the number of tasks offloaded to the UAV, subject to the IoT devices' computation capacities and battery budgets, and the UAV's speed limit. Since the optimization has a large solution space while the instantaneous network states are unknown to the UAV, we propose a new deep graph-based reinforcement learning framework. An advantage actor-critic (A2C) structure is developed to train the real-time continuous actions of the UAV, namely its flight speed, heading, and the offloading schedule of the IoT devices. By exploring hidden representations resulting from the network feature correlation, our framework takes advantage of graph neural networks (GNN) to supervise the training of the UAV's actions in A2C. The proposed GNN-A2C framework is implemented with Google TensorFlow. The performance analysis shows that GNN-A2C achieves fast convergence and considerably reduces the task missing rate in aerial EdgeIoT.
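The A2C structure described above trains the UAV's continuous actions against an advantage signal, the gap between the observed discounted return and the critic's value estimate. A minimal sketch of that advantage computation follows; the reward definition and all numbers are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of the advantage computation at the core of an
# advantage actor-critic (A2C) update, as in the GNN-A2C framework.
# Rewards and value estimates below are hypothetical placeholders.

def n_step_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """Return n-step discounted returns and advantages for one rollout.

    rewards: per-step rewards r_t (e.g. 1 when a task is offloaded in time)
    values:  critic estimates V(s_t) for each visited state
    bootstrap_value: critic estimate V(s_T) for the state after the rollout
    """
    returns = [0.0] * len(rewards)
    g = bootstrap_value
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g        # discounted n-step return G_t
        returns[t] = g
    # Advantage A_t = G_t - V(s_t): how much better the taken action
    # (flight speed, heading, offloading schedule) was than expected.
    advantages = [g_t - v_t for g_t, v_t in zip(returns, values)]
    return returns, advantages

# Example rollout over three UAV decision steps.
returns, adv = n_step_advantages(rewards=[1.0, 0.0, 1.0],
                                 values=[0.5, 0.4, 0.6],
                                 bootstrap_value=0.0)
```

In a full implementation, the actor's policy gradient is weighted by these advantages while the critic regresses toward the returns; the paper's GNN additionally supervises the actor using graph-structured network features.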

Keywords

Unmanned aerial vehicle; Aerial EdgeIoT; Graph neural network; Deep reinforcement learning; Cruise control; Task offloading
