Publication

LSTM-characterized Deep Reinforcement Learning for Continuous Flight Control and Resource Allocation in UAV-assisted Sensor Network

dc.contributor.author: Li, Kai
dc.contributor.author: Ni, Wei
dc.contributor.author: Dressler, Falko
dc.date.accessioned: 2021-09-10T12:32:29Z
dc.date.embargo: 2100
dc.date.issued: 2021-08-05
dc.description.abstract: Unmanned aerial vehicles (UAVs) can be employed to collect sensory data in remote wireless sensor networks (WSN). Due to the UAV's maneuvering, scheduling a sensor device to transmit data can overflow the data buffers of the unscheduled ground devices. Moreover, lossy airborne channels can result in packet reception errors at the scheduled sensor. This paper proposes a new deep reinforcement learning based flight resource allocation framework (DeFRA) to minimize the overall data packet loss in a continuous action space. DeFRA is based on Deep Deterministic Policy Gradient (DDPG), optimally controls the instantaneous headings and speeds of the UAV, and selects the ground device for data collection. Furthermore, a state characterization layer, leveraging long short-term memory (LSTM), is developed to predict network dynamics resulting from time-varying airborne channels and energy arrivals at the ground devices. To validate the effectiveness of DeFRA, experimental data collected from a real-world UAV testbed and an energy harvesting WSN are utilized to train the actions of the UAV. Numerical results demonstrate that the proposed DeFRA achieves fast convergence while reducing the packet loss by over 15%, as compared to existing deep reinforcement learning solutions. (pt_PT)
dc.description.sponsorship: This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDP/UIDB/04234/2020); and also by national funds through FCT, under the CMU Portugal partnership, within project CMU/TIC/0022/2019 (CRUAV). (pt_PT)
dc.description.version: info:eu-repo/semantics/publishedVersion (pt_PT)
dc.identifier.doi: 10.1109/JIOT.2021.3102831 (pt_PT)
dc.identifier.uri: http://hdl.handle.net/10400.22/18346
dc.language.iso: eng (pt_PT)
dc.publisher: IEEE (pt_PT)
dc.relation: UIDP/UIDB/04234/2020 (pt_PT)
dc.relation.publisherversion: https://ieeexplore.ieee.org/document/9507550 (pt_PT)
dc.subject: Unmanned aerial vehicles (pt_PT)
dc.subject: Flight trajectory (pt_PT)
dc.subject: Resource allocation (pt_PT)
dc.subject: Deep deterministic policy gradient (pt_PT)
dc.subject: Long short-term memory (pt_PT)
dc.subject: Experimental datasets (pt_PT)
dc.title: LSTM-characterized Deep Reinforcement Learning for Continuous Flight Control and Resource Allocation in UAV-assisted Sensor Network (pt_PT)
dc.title.alternative: 210802 (pt_PT)
dc.type: journal article
dspace.entity.type: Publication
oaire.awardURI: info:eu-repo/grantAgreement/FCT/3599-PPCDT/156761/PT
oaire.citation.endPage: 11 (pt_PT)
oaire.citation.startPage: 1 (pt_PT)
oaire.citation.title: IEEE Internet of Things Journal (pt_PT)
oaire.fundingStream: 3599-PPCDT
person.familyName: Li
person.givenName: Kai
person.identifier.ciencia-id: EE10-B822-16ED
person.identifier.orcid: 0000-0002-0517-2392
project.funder.identifier: http://doi.org/10.13039/501100001871
project.funder.name: Fundação para a Ciência e a Tecnologia
rcaap.rights: closedAccess (pt_PT)
rcaap.type: article (pt_PT)
relation.isAuthorOfPublication: 21f3fb85-19c2-4c89-afcd-3acb27cedc5e
relation.isAuthorOfPublication.latestForDiscovery: 21f3fb85-19c2-4c89-afcd-3acb27cedc5e
relation.isProjectOfPublication: 35de90fc-8621-4acb-a2b4-0ced71747cd3
relation.isProjectOfPublication.latestForDiscovery: 35de90fc-8621-4acb-a2b4-0ced71747cd3
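
Illustration: the abstract above describes an actor that couples an LSTM state characterization layer with a DDPG policy producing continuous UAV heading/speed plus a ground-device selection. The PyTorch sketch below is a rough illustration of such an actor only; the class name, dimensions, and two-head layout are assumptions made for this example, not the authors' published DeFRA architecture.

# Minimal sketch (PyTorch), assuming illustrative state/action dimensions;
# not the authors' published DeFRA architecture.
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    def __init__(self, state_dim=16, hidden_dim=64, num_devices=10):
        super().__init__()
        # State characterization layer: an LSTM summarizing the recent history
        # of airborne channel states and energy arrivals at the ground devices.
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        # Continuous control head: instantaneous heading and speed of the UAV.
        self.control_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2), nn.Tanh())  # (heading, speed) scaled to [-1, 1]
        # Scheduling head: a score per ground device for data collection.
        self.select_head = nn.Linear(hidden_dim, num_devices)

    def forward(self, state_seq):
        # state_seq: (batch, time, state_dim) observation history
        _, (h_n, _) = self.lstm(state_seq)
        features = h_n[-1]                       # last LSTM hidden state
        control = self.control_head(features)    # continuous flight control
        device_scores = self.select_head(features)
        return control, device_scores

# Example: an 8-step observation history for a batch of 4 states
actor = LSTMActor()
control, device_scores = actor(torch.randn(4, 8, 16))
scheduled = device_scores.argmax(dim=-1)  # greedy ground-device selection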
