Publication
Joint Communication Scheduling and Velocity Control in Multi-UAV-Assisted Sensor Networks: A Deep Reinforcement Learning Approach
dc.contributor.author | Emami, Yousef | |
dc.contributor.author | Wei, Bo | |
dc.contributor.author | Li, Kai | |
dc.contributor.author | Ni, Wei | |
dc.contributor.author | Tovar, Eduardo | |
dc.date.accessioned | 2021-10-13T14:30:14Z | |
dc.date.embargo | 2031 | |
dc.date.issued | 2021-09-28 | |
dc.description.abstract | Recently, Unmanned Aerial Vehicle (UAV) swarms have been increasingly studied to collect data from ground sensors in remote and hostile areas. A key challenge is the joint design of the velocities and data collection schedules of the UAVs, as inadequate velocities and schedules would lead to failed transmissions and buffer overflows of the sensors and, in turn, significant packet losses. In this paper, we jointly optimize the velocity controls and data collection schedules of multiple UAVs to minimize data losses, adapting to the battery levels, queue lengths and channel conditions of the ground sensors, and the trajectories of the UAVs. In the absence of up-to-date knowledge of the ground sensors' states, a Multi-UAV Deep Reinforcement Learning based Scheduling Algorithm (MADRL-SA) is proposed to allow the UAVs to asymptotically minimize the data loss of the system under the outdated knowledge of the network states at individual UAVs. Numerical results demonstrate that the proposed MADRL-SA reduces the packet loss by up to 54% and 46% in the considered simulation setting, as compared to an existing single-UAV DRL solution and a non-learning greedy heuristic, respectively. | pt_PT |
dc.description.version | info:eu-repo/semantics/publishedVersion | pt_PT |
dc.identifier.doi | 10.1109/TVT.2021.3110801 | pt_PT |
dc.identifier.uri | http://hdl.handle.net/10400.22/18706 | |
dc.language.iso | eng | pt_PT |
dc.publisher | IEEE | pt_PT |
dc.relation.publisherversion | https://ieeexplore.ieee.org/document/9531342 | pt_PT |
dc.subject | Unmanned aerial vehicles | pt_PT |
dc.subject | Communication scheduling | pt_PT |
dc.subject | Velocity control | pt_PT |
dc.subject | Multi-UAV Deep Reinforcement Learning | pt_PT |
dc.subject | Deep Q-Network | pt_PT |
dc.title | Joint Communication Scheduling and Velocity Control in Multi-UAV-Assisted Sensor Networks: A Deep Reinforcement Learning Approach | pt_PT |
dc.title.alternative | 210903 | pt_PT |
dc.type | journal article | |
dspace.entity.type | Publication | |
oaire.citation.title | IEEE Transactions on Vehicular Technology | pt_PT |
person.familyName | Tovar | |
person.givenName | Eduardo | |
person.identifier.ciencia-id | 6017-8881-11E8 | |
person.identifier.orcid | 0000-0001-8979-3876 | |
person.identifier.scopus-author-id | 7006312557 | |
rcaap.rights | embargoedAccess | pt_PT |
rcaap.type | article | pt_PT |
relation.isAuthorOfPublication | 80b63d8a-2e6d-484e-af3c-55849d0cb65e | |
relation.isAuthorOfPublication.latestForDiscovery | 80b63d8a-2e6d-484e-af3c-55849d0cb65e |
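The abstract above describes DQN-based joint scheduling to avoid sensor buffer overflows. As a loose illustration of that buffer-overflow dynamic (not the paper's MADRL-SA), here is a toy sketch in which tabular Q-learning stands in for the Deep Q-Network, and a single UAV picks which sensor to serve each step; all names and parameters (N, BUFFER, ARRIVALS, learning rates) are invented for the example:

```python
import random
from collections import defaultdict

# Toy model: one UAV schedules one of N ground sensors per step.
# Unscheduled sensors accumulate packets; queues above BUFFER overflow
# and the excess counts as packet loss. Tabular Q-learning stands in
# for the paper's DQN; every constant here is illustrative.
N, BUFFER, EPISODES, STEPS = 3, 4, 2000, 30
ARRIVALS = [1, 1, 2]          # hypothetical per-step packet arrivals

def step(queues, action):
    """Serve sensor `action`, grow the others, return (next state, loss)."""
    loss = 0
    nxt = list(queues)
    nxt[action] = 0                         # queue collected by the UAV
    for i in range(N):
        if i != action:
            nxt[i] += ARRIVALS[i]
            if nxt[i] > BUFFER:             # buffer overflow -> packet loss
                loss += nxt[i] - BUFFER
                nxt[i] = BUFFER
    return tuple(nxt), loss

def run(policy, train=False, Q=None, eps=0.1, alpha=0.5, gamma=0.9):
    """Roll out episodes; optionally apply the Q-learning update."""
    total_loss = 0
    for _ in range(EPISODES if train else 200):
        s = (0,) * N
        for _ in range(STEPS):
            a = policy(Q, s, eps if train else 0.0)
            s2, loss = step(s, a)
            total_loss += loss
            if train:                        # reward is negative packet loss
                best = max(Q[(s2, b)] for b in range(N))
                Q[(s, a)] += alpha * (-loss + gamma * best - Q[(s, a)])
            s = s2
    return total_loss

def greedy(Q, s, eps):
    """Epsilon-greedy action selection over the learned Q-values."""
    if random.random() < eps:
        return random.randrange(N)
    return max(range(N), key=lambda a: Q[(s, a)])

random.seed(0)
Q = defaultdict(float)
run(greedy, train=True, Q=Q)                 # learn a schedule
learned = run(greedy, Q=Q)                   # evaluate the greedy policy
rand = run(lambda Q, s, e: random.randrange(N))  # random-scheduling baseline
print("learned loss:", learned, "random loss:", rand)
```

The real algorithm additionally controls UAV velocities, handles multiple cooperating UAVs, and learns from outdated state knowledge, none of which this single-agent toy attempts.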
Files
Original bundle
- Name: ART_CISTER-TR-210903_2021.pdf
- Size: 1.16 MB
- Format: Adobe Portable Document Format