Search Results
Now showing 1 - 2 of 2
- Onboard Deep Deterministic Policy Gradients for Online Flight Resource Allocation of UAVs
  Publication. Li, Kai; Emami, Yousef; Ni, Wei; Tovar, Eduardo; Han, Zhu
  In Unmanned Aerial Vehicle (UAV)-enabled data collection, scheduling the data transmissions of the ground nodes while controlling the flight of the UAV, e.g., its heading and velocity, is critical to reducing the data packet loss resulting from buffer overflows and channel fading. In this letter, a new online flight resource allocation scheme based on deep deterministic policy gradients (DDPG-FRAS) is studied to jointly optimize the flight control of the UAV and the data collection scheduling along the trajectory in real time, thereby asymptotically minimizing the packet loss of the ground sensor networks. Numerical results confirm that the proposed DDPG-FRAS gradually converges, and that enlarging the buffer size can reduce the packet loss by 47.9%. (A minimal, hypothetical DDPG sketch of this kind of joint control/scheduling agent is given after the result list.)
- Data-driven Flight Control of Internet-of-Drones for Sensor Data Aggregation using Multi-agent Deep Reinforcement Learning
  Publication. Li, Kai; Ni, Wei; Emami, Yousef; Dressler, Falko
  Energy-harvesting-powered sensors are increasingly deployed beyond the reach of terrestrial gateways, where there is often no persistent power supply. Using the Internet of Drones (IoD) for data aggregation in such environments is a promising paradigm for enhancing network scalability and connectivity. The flexibility of the IoD and the favorable line-of-sight connections between the drones and the ground nodes are exploited to improve data reception at the drones. In this article, we discuss the challenges of online flight control of the IoD, where data-driven neural networks can be tailored to design the trajectories and patrol speeds of the drones and their communication schedules, preventing buffer overflows at the ground nodes. In a small-scale IoD, multi-agent deep reinforcement learning can be developed with long short-term memory to train the continuous flight control of the IoD and the data aggregation scheduling, where a joint action is generated for the IoD by sharing the flight control decisions among the drones. In a large-scale IoD, sharing the flight control decisions in real time can result in communication overheads and interference. In this case, deep reinforcement learning can be trained with second-hand visiting experiences, where the drones learn each other's actions from the historical scheduling records maintained at the ground nodes. (A minimal, hypothetical per-drone LSTM policy sketch for the small-scale case is given after the result list.)
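The first abstract describes a DDPG-based agent that jointly outputs UAV flight control and data-collection scheduling. The sketch below is not the paper's DDPG-FRAS implementation; it is only a generic DDPG actor-critic outline, assuming a hypothetical state of ground-node buffer levels plus UAV position and a hypothetical action vector of heading, velocity, and per-node scheduling scores.

```python
# Minimal DDPG-style actor-critic sketch (illustrative only, not DDPG-FRAS).
# Assumed (hypothetical) state: ground-node buffer levels + UAV (x, y) position.
# Assumed (hypothetical) action: [heading, velocity, scheduling score per node].

import torch
import torch.nn as nn

N_NODES = 10                     # hypothetical number of ground sensor nodes
STATE_DIM = N_NODES + 2          # buffer levels + UAV position
ACTION_DIM = 2 + N_NODES         # heading, velocity, per-node scheduling scores

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh(),  # continuous actions in [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),   # Q(s, a)
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99):
    """One DDPG update on a replay batch (s, a, r, s_next), each of shape (batch, dim)."""
    s, a, r, s_next = batch
    with torch.no_grad():
        q_next = target_critic(s_next, target_actor(s_next))
        q_target = r + gamma * q_next
    # Critic: regress Q(s, a) toward the bootstrapped target
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()
    # Actor: deterministic policy gradient, i.e., maximize Q(s, actor(s))
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```

In such a setup, the first two action components would be rescaled to the feasible heading and velocity ranges, and the remaining scores would pick which ground node transmits; the specific reward shaping toward packet loss is defined in the paper, not here.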
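The second abstract sketches, for small-scale IoD, a multi-agent policy with long short-term memory where each drone conditions on its own history and on the flight-control decisions shared by the other drones. The code below is only an assumed illustration of that structure, not the authors' model; the observation size, action size, and drone count are hypothetical placeholders.

```python
# Minimal per-drone LSTM policy sketch for the small-scale IoD case (illustrative only).
# Assumed (hypothetical) inputs: the drone's own observation history and the latest
# flight-control decisions shared by the other drones.
# Output: this drone's continuous flight-control / scheduling action.

import torch
import torch.nn as nn

OBS_DIM = 16     # hypothetical per-drone observation size
ACT_DIM = 3      # hypothetical action: heading, patrol speed, scheduling choice
N_DRONES = 4     # hypothetical IoD size

class DronePolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # LSTM summarizes the drone's own observation history
        self.lstm = nn.LSTM(OBS_DIM, 64, batch_first=True)
        # Head fuses the LSTM summary with the peers' shared flight-control decisions
        self.head = nn.Sequential(
            nn.Linear(64 + (N_DRONES - 1) * ACT_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM), nn.Tanh(),
        )

    def forward(self, obs_seq, peer_actions):
        # obs_seq: (batch, time, OBS_DIM); peer_actions: (batch, (N_DRONES - 1) * ACT_DIM)
        _, (h, _) = self.lstm(obs_seq)
        return self.head(torch.cat([h[-1], peer_actions], dim=-1))

# Concatenating each drone's output yields the joint IoD action; in the large-scale
# case the abstract instead trains from historical scheduling records at the ground
# nodes, so no real-time action sharing among drones is assumed.
policies = [DronePolicy() for _ in range(N_DRONES)]
```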