Publication

Joint Communication Scheduling and Velocity Control in Multi-UAV-Assisted Sensor Networks: A Deep Reinforcement Learning Approach

dc.contributor.author: Emami, Yousef
dc.contributor.author: Wei, Bo
dc.contributor.author: Li, Kai
dc.contributor.author: Ni, Wei
dc.contributor.author: Tovar, Eduardo
dc.date.accessioned: 2021-10-13T14:30:14Z
dc.date.embargo: 2031
dc.date.issued: 2021-09-28
dc.description.abstract: Recently, Unmanned Aerial Vehicle (UAV) swarms have been increasingly studied for collecting data from ground sensors in remote and hostile areas. A key challenge is the joint design of the velocities and data collection schedules of the UAVs, as inadequate velocities and schedules would lead to failed transmissions and buffer overflows at the sensors and, in turn, significant packet losses. In this paper, we jointly optimize the velocity controls and data collection schedules of multiple UAVs to minimize data losses, adapting to the battery levels, queue lengths, and channel conditions of the ground sensors, and to the trajectories of the UAVs. In the absence of up-to-date knowledge of the ground sensors' states, a Multi-UAV Deep Reinforcement Learning based Scheduling Algorithm (MADRL-SA) is proposed to allow the UAVs to asymptotically minimize the data loss of the system under the outdated knowledge of the network states available at individual UAVs. Numerical results demonstrate that, in the considered simulation setting, the proposed MADRL-SA reduces the packet loss by up to 54% and 46% compared to an existing single-UAV DRL solution and a non-learning greedy heuristic, respectively.
dc.description.version: info:eu-repo/semantics/publishedVersion
dc.identifier.doi: 10.1109/TVT.2021.3110801
dc.identifier.uri: http://hdl.handle.net/10400.22/18706
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.publisherversion: https://ieeexplore.ieee.org/document/9531342
dc.subject: Unmanned aerial vehicles
dc.subject: Communication scheduling
dc.subject: Velocity control
dc.subject: Multi-UAV Deep Reinforcement Learning
dc.subject: Deep Q-Network
dc.title: Joint Communication Scheduling and Velocity Control in Multi-UAV-Assisted Sensor Networks: A Deep Reinforcement Learning Approach
dc.title.alternative: 210903
dc.type: journal article
dspace.entity.type: Publication
oaire.citation.title: IEEE Transactions on Vehicular Technology
person.familyName: Tovar
person.givenName: Eduardo
person.identifier.ciencia-id: 6017-8881-11E8
person.identifier.orcid: 0000-0001-8979-3876
person.identifier.scopus-author-id: 7006312557
rcaap.rights: embargoedAccess
rcaap.type: article
relation.isAuthorOfPublication: 80b63d8a-2e6d-484e-af3c-55849d0cb65e
relation.isAuthorOfPublication.latestForDiscovery: 80b63d8a-2e6d-484e-af3c-55849d0cb65e
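
The abstract above describes a DQN-based scheduler (MADRL-SA) that jointly picks which sensor to serve and a UAV velocity so as to minimize buffer-overflow packet loss. As a loose illustrative sketch only — not the authors' algorithm — the following toy uses tabular Q-learning on a single-UAV abstraction to show that core idea of learning a joint (sensor, velocity) action against queue overflows. All dynamics, parameter values, and names here are assumptions made for illustration.

```python
import random

# Toy sketch (assumptions throughout, not MADRL-SA): a single UAV observes
# discretized sensor queue lengths and learns a joint action
# (which sensor to poll, which velocity level to fly at).
N_SENSORS = 2
QUEUE_CAP = 3          # packets arriving to a full buffer are lost
VELOCITIES = (0, 1)    # 0 = slow (more airtime over sensor), 1 = fast

def step(queues, sensor, velocity, rng):
    """One decision epoch: random packet arrivals, then serve one sensor."""
    loss = 0
    queues = list(queues)
    for i in range(N_SENSORS):
        if rng.random() < 0.5:             # Bernoulli arrivals (assumed rate)
            if queues[i] < QUEUE_CAP:
                queues[i] += 1
            else:
                loss += 1                  # buffer overflow -> packet loss
    served = 2 if velocity == 0 else 1     # slower flight collects more data
    queues[sensor] = max(0, queues[sensor] - served)
    return tuple(queues), -loss            # reward = negative packet loss

def train(episodes=2000, horizon=50, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the joint action space."""
    rng = random.Random(seed)
    actions = [(s, v) for s in range(N_SENSORS) for v in VELOCITIES]
    Q = {}
    for _ in range(episodes):
        state = (0,) * N_SENSORS
        for _ in range(horizon):
            vals = [Q.get((state, a), 0.0) for a in actions]
            if rng.random() < eps:
                a = actions[rng.randrange(len(actions))]
            else:
                a = actions[vals.index(max(vals))]
            nxt, r = step(state, *a, rng)
            best_next = max(Q.get((nxt, b), 0.0) for b in actions)
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (r + gamma * best_next - old)
            state = nxt
    return Q

Q = train()
```

The paper's setting replaces the table with a deep Q-network per UAV and conditions the decision on (possibly outdated) battery, queue, and channel states; the sketch keeps only the joint scheduling-plus-velocity action structure.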

Files

Original bundle
Name: ART_CISTER-TR-210903_2021.pdf
Size: 1.16 MB
Format: Adobe Portable Document Format