Search Results
Now showing 1 - 10 of 28
- Buffer-Aware Scheduling for UAV Relay Networks with Energy Fairness. Emami, Yousef; Li, Kai; Tovar, Eduardo. For assisting data communications in human-unfriendly environments, Unmanned Aerial Vehicles (UAVs) are employed to relay data for ground sensors thanks to UAVs' flexible deployment, high mobility, and line-of-sight communications. In UAV relay networks, energy-efficient data relay is critical due to the limited battery of the ground sensing devices. In this paper, we propose a buffer-aware transmission scheduling optimization to minimize the energy consumption of the ground devices under constraints of buffer overflows and energy cost fairness on the ground devices. Moreover, we show that the problem is NP-complete and propose a heuristic algorithm to approximate the optimal scheduling solution in polynomial time. The performance of the proposed algorithm is evaluated in terms of network sizes, packet arrival rates, and fairness of the energy consumption. Numerical results confirm that the proposed scheduling algorithm reduces the energy consumption of the ground devices in a fair fashion, while the buffer overflow constraint holds.
- An Experimental Localization Testbed based on UWB Channel Impulse Response Measurements. Li, Kai; Ni, Wei; Zhang, Pei. In this paper, we demonstrate a new ultra-wideband (UWB) localization testbed, which tracks a UWB tag and estimates locations of obstacles based on channel impulse response measurements. Anchor nodes that are developed with off-the-shelf Decawave DW1000 UWB transceivers are deployed to cover the area of interest. The testbed is implemented and preliminary experiments are carried out to estimate the location of the object by analyzing the channel impulse response strength of the UWB tag.
- Deep Reinforcement Learning for Persistent Cruise Control in UAV-aided Data Collection. Kurunathan, John Harrison; Li, Kai; Ni, Wei; Tovar, Eduardo; Dressler, Falko. Autonomous UAV cruising is gaining attention due to its flexible deployment in remote sensing, surveillance, and reconnaissance. A critical challenge in data collection with the autonomous UAV is buffer overflows at the ground sensors and packet loss due to lossy airborne channels. Trajectory planning of the UAV is vital to alleviate buffer overflows as well as channel fading. In this work, we propose a Deep Deterministic Policy Gradient based Cruise Control (DDPG-CC) to reduce the overall packet loss through online training of the headings and cruise velocity of the UAV, as well as the selection of the ground sensors for data collection. Preliminary performance evaluation demonstrates that DDPG-CC reduces the packet loss rate by under 5% when sufficient training is provided to the UAV.
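The DDPG-CC entry above trains continuous cruise actions (heading, velocity) together with a ground-sensor selection schedule. As a minimal sketch of that state/action interface, the toy environment below uses invented dynamics; the class name, arrival rates, and the distance-based channel model are illustrative assumptions, not the paper's model:

```python
import math
import random

class UAVCruiseEnv:
    """Toy cruise-control environment: each step the UAV picks a heading,
    a velocity, and one ground sensor. Packet loss accrues when a sensor's
    buffer overflows; collection succeeds with probability decaying in the
    UAV-to-sensor distance (a crude stand-in for channel fading)."""

    def __init__(self, n_sensors=4, buffer_cap=10, seed=0):
        self.rng = random.Random(seed)
        self.n_sensors = n_sensors
        self.buffer_cap = buffer_cap
        self.sensors = [(self.rng.uniform(0, 100), self.rng.uniform(0, 100))
                        for _ in range(n_sensors)]
        self.buffers = [0] * n_sensors
        self.pos = (50.0, 50.0)
        self.lost = 0          # cumulative packets lost to overflow

    def step(self, heading, velocity, sensor):
        # Continuous action: heading (radians) and cruise velocity.
        x, y = self.pos
        self.pos = (x + velocity * math.cos(heading),
                    y + velocity * math.sin(heading))
        # Packets arrive at every sensor; overflowing packets are lost.
        for i in range(self.n_sensors):
            self.buffers[i] += self.rng.randint(0, 2)
            if self.buffers[i] > self.buffer_cap:
                self.lost += self.buffers[i] - self.buffer_cap
                self.buffers[i] = self.buffer_cap
        # Discrete action: collect from the scheduled sensor.
        sx, sy = self.sensors[sensor]
        dist = math.hypot(self.pos[0] - sx, self.pos[1] - sy)
        if self.rng.random() < math.exp(-dist / 50.0):
            self.buffers[sensor] = 0
        return self.lost

env = UAVCruiseEnv()
for _ in range(100):
    action = (env.rng.uniform(0, 2 * math.pi),   # heading
              env.rng.uniform(0, 5),             # velocity
              env.rng.randrange(env.n_sensors))  # sensor to poll
    loss = env.step(*action)
print("packets lost under a random policy:", loss)
```

A learned policy (DDPG or otherwise) would replace the random action tuple while keeping the same interface.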
- Employing Intelligent Aerial Data Aggregators for Internet of Things: Challenges and Solutions. Li, Kai; Ni, W.; Noor, Alam; Guizani, Mohsen. Internet-of-Things (IoT) devices equipped with temperature and humidity sensors, and cameras are increasingly deployed to monitor remote and human-unfriendly areas, e.g., farmlands, forests, rural highways or electricity infrastructures. Aerial data aggregators, e.g., autonomous drones, provide a promising solution for collecting sensory data of the IoT devices in human-unfriendly environments, enhancing network scalability and connectivity. The flexibility of a drone and the favourable line-of-sight connection between the drone and IoT devices can be exploited to improve data reception at the drone. This article first discusses challenges of drone-assisted data aggregation in IoT networks, such as incomplete network knowledge at the drone, limited buffers of the IoT devices, and lossy wireless channels. Next, we investigate the feasibility of onboard deep reinforcement learning-based solutions to allow a drone to learn its cruise control and data collection schedule online. For deep reinforcement learning in a continuous operation domain, deep deterministic policy gradient (DDPG) is suitable to deliver effective joint cruise control and communication decisions, using its outdated knowledge of the IoT devices and network states. A case study shows that the DDPG-based framework can take advantage of the continuous actions to substantially outperform existing non-learning-based alternatives.
- Proactive Eavesdropping via Jamming for Trajectory Tracking of UAVs. Li, Kai; Kanhere, Salil S.; Ni, Wei; Tovar, Eduardo; Guizani, Mohsen. This paper considers that a legitimate UAV tracks suspicious UAVs' flight for preventing intended crimes and terror attacks. To enhance tracking accuracy, the legitimate UAV proactively eavesdrops on the suspicious UAVs' communication by sending jamming signals. A tracking algorithm is developed for the legitimate UAV to track the suspicious flight by comprehensively utilizing eavesdropped packets, the angle-of-arrival, and the received signal strength of the suspicious transmitter's signal. A new co-simulation framework is implemented to combine the complementary features of an optimization toolbox with channel modeling (in Matlab) and discrete event-driven mobility tracking (in NS3). Moreover, numerical results validate the proposed algorithms in terms of tracking accuracy of the suspicious UAVs' trajectory.
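The received-signal-strength cue used by the tracking algorithm above is commonly inverted through a log-distance path-loss model to obtain a range estimate. The helper below is a generic sketch of that inversion; the reference RSS, reference distance, and path-loss exponent are assumed values, not parameters from the paper:

```python
# Log-distance path-loss model: rss = rss_ref - 10 * n * log10(d / d_ref).
# Solving for d gives a range estimate from a single RSS sample.
def distance_from_rss(rss_dbm, rss_ref=-40.0, path_loss_exp=2.0, d_ref=1.0):
    """rss_ref: RSS (dBm) measured at the reference distance d_ref (m);
    path_loss_exp: environment-dependent exponent (~2 in free space)."""
    return d_ref * 10 ** ((rss_ref - rss_dbm) / (10 * path_loss_exp))

d = distance_from_rss(-60.0)   # 20 dB below the 1 m reference
print(round(d, 2))             # → 10.0
```

In a real tracker such a range estimate would be fused with the angle-of-arrival and decoded packets, e.g., in a Kalman or particle filter.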
- Online Velocity Control and Data Capture of Drones for the Internet of Things: An Onboard Deep Reinforcement Learning Approach. Li, Kai; Ni, Wei; Tovar, Eduardo; Jamalipour, Abbas. Applications of unmanned aerial vehicles (UAVs) for data collection are a promising means to extend Internet of Things (IoT) networks to remote and hostile areas and to locations where there is no access to power supplies. The adequate design of UAV velocity control and communication decision making is critical to minimize the data packet losses at ground IoT nodes that result from overflowing buffers and transmission failures. However, online velocity control and communication decision making are challenging in UAV-enabled IoT networks, due to a UAV's lack of up-to-date knowledge about the state of the nodes, e.g., the battery energy, buffer length, and channel conditions.
- Deep Graph-based Reinforcement Learning for Joint Cruise Control and Task Offloading for Aerial Edge Internet-of-Things (EdgeIoT). Li, Kai; Ni, Wei; Yuan, Xin; Noor, Alam; Jamalipour, Abbas. This paper puts forth an aerial edge Internet-of-Things (EdgeIoT) system, where an unmanned aerial vehicle (UAV) is employed as a mobile edge server to process mission-critical computation tasks of ground Internet-of-Things (IoT) devices. When the UAV schedules an IoT device to offload its computation task, the tasks buffered at the other, unselected devices could become outdated and have to be cancelled. We investigate a new joint optimization of UAV cruise control and task offloading allocation, which maximizes the number of tasks offloaded to the UAV, subject to the IoT devices' computation capacity and battery budget, and the UAV's speed limit. Since the optimization contains a large solution space while the instantaneous network states are unknown to the UAV, we propose a new deep graph-based reinforcement learning framework. An advantage actor-critic (A2C) structure is developed to train the real-time continuous actions of the UAV in terms of flight speed, heading, and the offloading schedule of the IoT devices. By exploring hidden representations resulting from the network feature correlation, our framework takes advantage of graph neural networks (GNN) to supervise the training of the UAV's actions in A2C. The proposed GNN-A2C framework is implemented with Google TensorFlow. The performance analysis shows that GNN-A2C achieves fast convergence and considerably reduces the task missing rate in aerial EdgeIoT.
- Reinforcement Learning for Scheduling Wireless Powered Sensor Communications. Li, Kai; Ni, Wei; Abolhasan, Mehran; Tovar, Eduardo. In a wireless powered sensor network, a base station transfers power to sensors by using wireless power transfer (WPT). Inadequately scheduling WPT and data transmission causes fast battery drainage and data queue overflow at sensors that could otherwise have achieved high data reception. In this paper, scheduling WPT and data transmission is formulated as a Markov decision process (MDP) by jointly considering the sensors' energy consumption and data queues. In practical scenarios, prior knowledge about battery levels and data queue lengths in the MDP is not available at the base station. We study reinforcement learning at the sensors to find a transmission scheduling strategy that minimizes data packet loss. An optimal scheduling strategy with full-state information is also investigated, assuming that the complete battery level and data queue information are known by the base station. This presents the lower bound of the data packet loss in wireless powered sensor networks. Numerical results demonstrate that the proposed reinforcement learning scheduling algorithm significantly reduces the network packet loss rate by 60% and increases network goodput by 67%, compared to existing non-MDP greedy approaches. Moreover, compared to the optimal solution, the performance loss due to the lack of sensors' full-state information is less than 4.6%.
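The MDP in the abstract above jointly tracks battery level and queue length. A tabular Q-learning sketch on a toy two-action version (transfer power vs. schedule a transmission) illustrates that formulation; the state sizes, rewards, and learning parameters here are illustrative assumptions, not the paper's:

```python
import random

# Toy MDP: each slot the base station either transfers power (action 0)
# or schedules a data transmission (action 1). State = (battery, queue),
# both discretised; rewards penalise wasted slots and imminent overflow.
BATTERY_MAX, QUEUE_MAX = 3, 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = {}                      # sparse Q-table: (state, action) -> value
rng = random.Random(1)

def q(s, a):
    return Q.get((s, a), 0.0)

def step(state, action):
    battery, queue = state
    queue = min(QUEUE_MAX, queue + rng.randint(0, 1))  # packet arrivals
    if action == 0:                       # wireless power transfer
        battery = min(BATTERY_MAX, battery + 1)
        reward = 0.0
    elif battery > 0 and queue > 0:       # successful data transmission
        battery, queue, reward = battery - 1, queue - 1, 1.0
    else:                                 # wasted transmission slot
        reward = -1.0
    if queue == QUEUE_MAX:                # queue about to overflow
        reward -= 1.0
    return (battery, queue), reward

state = (BATTERY_MAX, 0)
for _ in range(5000):
    # Epsilon-greedy action selection over the two actions.
    a = rng.randrange(2) if rng.random() < EPS \
        else max((0, 1), key=lambda x: q(state, x))
    nxt, r = step(state, a)
    Q[(state, a)] = q(state, a) + ALPHA * (
        r + GAMMA * max(q(nxt, 0), q(nxt, 1)) - q(state, a))
    state = nxt

# Learned values for transmitting vs. powering with a full battery:
print(q((BATTERY_MAX, 1), 1), q((BATTERY_MAX, 1), 0))
```

The paper's setting is harder because the base station lacks the sensors' battery and queue state; this sketch assumes full observability purely to keep the example short.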
- Deep Q-Networks for Aerial Data Collection in Multi-UAV-Assisted Wireless Sensor Networks. Emami, Yousef; Wei, Bo; Li, Kai; Ni, Wei; Tovar, Eduardo. Unmanned Aerial Vehicles (UAVs) can collaborate to collect and relay data for ground sensors in remote and hostile areas. In multi-UAV-assisted wireless sensor networks (MA-WSN), the UAVs' movements affect channel conditions and can cause data transmissions to fail; together with newly arriving data, this gives rise to buffer overflows at the ground sensors. Thus, scheduling data transmission is of utmost importance in MA-WSN to reduce data packet losses resulting from buffer overflows and channel fading. In this paper, we investigate the optimal ground sensor selection at the UAVs to minimize data packet losses. The optimization problem is formulated as a multi-agent Markov decision process, where network states consist of battery levels and data buffer lengths of the ground sensors, channel conditions, and waypoints of the UAVs along their trajectories. In practice, an MA-WSN contains a large number of network states, while up-to-date knowledge of the network states and other UAVs' sensor selection decisions is not available at each agent. We propose a Multi-UAV Deep Reinforcement Learning based Scheduling Algorithm (MUAIS) to minimize the data packet loss, where the UAVs learn the underlying patterns of the data and energy arrivals at all the ground sensors. Numerical results show that the proposed MUAIS achieves at least 46% and 35% lower packet loss than an optimal single-UAV solution and an existing non-learning greedy algorithm, respectively.
- Fair Scheduling for Data Collection in Mobile Sensor Networks with Energy Harvesting. Li, Kai; Yuen, Chau; Kusy, Branislav; Jurdak, Raja; Ignjatovic, Aleksandar; Kanhere, Salil. We consider the problem of data collection from a network of energy harvesting sensors, applied to tracking mobile assets in rural environments. Our application constraints favor a fair and energy-aware solution, with heavily duty-cycled sensor nodes communicating with powered base stations. We study a novel scheduling optimization problem for energy harvesting mobile sensor networks that maximizes the amount of collected data under the constraints of radio link quality and energy harvesting efficiency, while ensuring fair data reception. We show that the problem is NP-complete and propose a heuristic algorithm to approximate the optimal scheduling solution in polynomial time. Moreover, our algorithm is flexible in handling progressive energy harvesting events, such as with solar panels, or opportunistic and bursty events, such as with wireless power transfer (WPT). We use empirical link quality data, solar energy, and WPT efficiency to evaluate the proposed algorithm in extensive simulations and compare its performance to the state-of-the-art. We show that our algorithm achieves high data reception rates under different fairness and node lifetime constraints.
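Because the scheduling problem above is NP-complete, a polynomial-time heuristic is used in its place. The sketch below is a generic deficit-weighted greedy scheduler in the same spirit: it is not the paper's algorithm, and the link qualities, energy budgets, and fairness weighting are illustrative assumptions:

```python
# Greedy fair scheduling sketch: each slot, among nodes with enough
# harvested energy, serve the one maximising link quality weighted by
# its deficit from the best-served node (the fairness boost).
def schedule(slots, link_quality, energy, cost_per_slot=1.0):
    n = len(link_quality)
    received = [0.0] * n      # data received from each node so far
    plan = []
    for _ in range(slots):
        feasible = [i for i in range(n) if energy[i] >= cost_per_slot]
        if not feasible:
            plan.append(None)  # no node can afford to transmit
            continue
        pick = max(feasible, key=lambda i: link_quality[i]
                   * (1.0 + max(received) - received[i]))
        energy[pick] -= cost_per_slot
        received[pick] += link_quality[pick]
        plan.append(pick)
    return plan, received

plan, received = schedule(slots=6,
                          link_quality=[0.9, 0.6, 0.3],
                          energy=[2.0, 2.0, 2.0])
print(plan, received)   # → [0, 1, 0, 1, 2, 2] [1.8, 1.2, 0.6]
```

Note how the energy budget forces the weak-link node into the last slots, while the deficit weighting stops the best link from monopolising the schedule.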