Browsing by Author "Ni, Wei"
Now showing 1 - 10 of 34
- An Experimental Localization Testbed based on UWB Channel Impulse Response Measurements. Publication. Li, Kai; Ni, Wei; Zhang, Pei. In this paper, we demonstrate a new ultra-wideband (UWB) localization testbed, which tracks a UWB tag and estimates the locations of obstacles based on channel impulse response measurements. Anchor nodes built with off-the-shelf Decawave DW1000 UWB transceivers are deployed to cover the area of interest. The testbed is implemented and preliminary experiments are carried out to estimate the location of the object by analyzing the channel impulse response strength of the UWB tag.
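As a rough illustration of how anchor-based range measurements can be turned into a position estimate, the sketch below solves a linearized least-squares localization problem; the anchor layout, range values, and use of NumPy are assumptions for illustration, not the testbed's actual processing chain.

```python
# Hypothetical example: 2-D tag localization from four UWB anchor ranges
# via linear least squares. Anchor positions and ranges are made up.
import numpy as np

anchors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])  # anchor positions (m)
ranges = np.array([3.2, 3.9, 3.6, 4.4])                               # first-path ranges (m)

# Linearize ||p - a_i||^2 = r_i^2 by subtracting the last anchor's equation.
ref, r_ref = anchors[-1], ranges[-1]
A = 2.0 * (ref - anchors[:-1])
b = ranges[:-1] ** 2 - r_ref ** 2 + ref @ ref - np.sum(anchors[:-1] ** 2, axis=1)
position, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated tag position (m):", position)
```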
- An Experimental Study of Two-way Ranging Optimization in UWB-based Simultaneous Localization and Wall-Mapping Systems. Publication. Li, Kai; Ni, Wei; Wei, Bo; Guizani, Mohsen. In this paper, we propose a new ultra-wideband (UWB)-based simultaneous localization and wall-mapping (SLAM) system, which adopts two-way ranging optimization on UWB anchor and tag nodes to track the target's real-time movement in an unknown area. The proposed UWB-based SLAM system captures the time difference of arrival (TDoA) of the anchor nodes' signals over a line-of-sight propagation path and reflected paths. The real-time location of the UWB tag is estimated according to the real-time TDoA measurements. To mitigate the estimation error resulting from background noise in the two-way ranging, a least squares method is implemented for the localization of a static target, while a Kalman filter is applied for the localization of a mobile target. An experimental testbed is built based on off-the-shelf UWB hardware. Experiments validate that a reflector, e.g., a wall, and the UWB tag can be located according to the two-way ranging measurements. The localization accuracy of the proposed SLAM system is also evaluated, where the difference between the estimated location and the ground-truth trajectory is less than 15 cm.
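To make the static-versus-mobile distinction above concrete, here is a minimal constant-velocity Kalman filter that smooths noisy 2-D position fixes obtained from ranging; the motion model, noise covariances, and measurement values are illustrative assumptions rather than the paper's tuned parameters.

```python
# Minimal constant-velocity Kalman filter over ranging-based position fixes.
import numpy as np

dt = 0.1                                      # update interval (s), assumed
F = np.block([[np.eye(2), dt * np.eye(2)],    # constant-velocity model over [x, y, vx, vy]
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])  # only position is observed
Q = 1e-3 * np.eye(4)                          # process noise (assumed)
R = 0.05 * np.eye(2)                          # ranging noise (assumed)

x, P = np.zeros(4), np.eye(4)                 # initial state and covariance

def kalman_step(x, P, z):
    # Predict with the motion model, then correct with the ranging-based fix z.
    x, P = F @ x, F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 1.1]), np.array([1.2, 1.3]), np.array([1.4, 1.4])]:
    x, P = kalman_step(x, P, z)
print("filtered position (m):", x[:2])
```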
- Continuous Maneuver Control and Data Capture Scheduling of Autonomous Drone in Wireless Sensor Networks. Publication. Li, Kai; Ni, Wei; Dressler, Falko. Thanks to flexible deployment and excellent maneuverability, autonomous drones are regarded as an effective means to enable aerial data capture in large-scale wireless sensor networks with limited to no cellular infrastructure, e.g., smart farming in a remote area. A key challenge in drone-assisted sensor networks is that the autonomous drone's maneuvering can give rise to buffer overflows at the ground sensors and unsuccessful data collection due to lossy airborne channels. In this paper, we propose a new Deep Deterministic Policy Gradient based Maneuver Control (DDPG-MC) scheme which minimizes the overall data packet loss by training online, in a continuous action space, the drone's instantaneous headings and patrol velocities and the selection of the ground sensors for data collection. Moreover, the maneuver control of the drone and the communication schedule are formulated as an absorbing Markov chain, where the network states consist of battery energy levels, data queue backlogs, timestamps of the data collection, and channel conditions between the ground sensors and the drone. An experience replay memory is utilized onboard the drone to store the training experiences of the maneuver control and communication schedule at each time step.
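The experience replay memory mentioned above can be pictured as a bounded buffer of state-action transitions sampled at random during training; the sketch below is a generic version, and the state and action layouts in the example (battery, queue, channel; heading, speed, sensor index) are simplified assumptions, not the paper's exact formulation.

```python
# Generic experience replay buffer; the pushed transition is a hypothetical
# example of what the drone might record at one time step.
import random
from collections import deque, namedtuple

Transition = namedtuple("Transition", "state action reward next_state done")

class ReplayMemory:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, *args):
        self.buffer.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

memory = ReplayMemory()
# state = (battery level, queue backlog, channel gain); action = (heading rad, speed m/s, sensor id)
memory.push((0.8, 3, 0.5), (1.57, 5.0, 2), -1.0, (0.75, 2, 0.6), False)
if len(memory) >= 1:
    batch = memory.sample(1)
```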
- Cooperative Secret Key Generation for Platoon-Based Vehicular Communications. Publication. Li, Kai; Lu, Lingyun; Ni, Wei; Tovar, Eduardo; Guizani, Mohsen. In a vehicular platoon, the lead vehicle that is responsible for managing the platoon's moving directions and velocity periodically disseminates messages to the following automated vehicles in a multi-hop vehicular network. However, due to the broadcast nature of wireless channels, vehicle-to-vehicle (V2V) communications are vulnerable to eavesdropping and message modification. Generating secret keys by extracting the shared randomness in a wireless fading channel is a promising way to secure V2V communications. We study a security scheme for platoon-based V2V communications, where the platooning vehicles generate a shared secret key based on the quantized fading channel randomness. To improve the consistency of the generated keys, the probability of secret key agreement is formulated, and a novel secret key agreement algorithm is proposed to recursively optimize the channel quantization intervals, maximizing the key agreement probability. Numerical evaluations demonstrate the key agreement probability achieved by our security protocol under different platoon sizes, channel qualities, and numbers of quantization intervals. Furthermore, by applying our security protocol, the probability that the encrypted data is cracked by an eavesdropper is shown to be less than 5%.
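The quantization step at the heart of this scheme can be sketched as follows: two vehicles observe (nearly) reciprocal channel gains, map each sample to a quantization interval, and compare the resulting key symbols. The channel model, noise level, and quantile-based intervals below are assumptions for illustration, not the recursively optimized intervals from the paper.

```python
# Illustrative key extraction: both vehicles quantize noisy observations of the
# same fading process; intervals are sample quantiles, not the optimized ones.
import numpy as np

rng = np.random.default_rng(0)
shared_fading = rng.rayleigh(scale=1.0, size=128)            # reciprocal channel samples
obs_a = shared_fading + 0.05 * rng.standard_normal(128)      # vehicle A's estimate
obs_b = shared_fading + 0.05 * rng.standard_normal(128)      # vehicle B's estimate

intervals = np.quantile(obs_a, [0.25, 0.5, 0.75])            # 4 intervals -> 2 bits per sample
key_a = np.digitize(obs_a, intervals)
key_b = np.digitize(obs_b, intervals)

agreement = np.mean(key_a == key_b)                          # empirical key agreement probability
print("key agreement probability:", round(float(agreement), 2))
```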
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach. Publication. Li, Kai; Zheng, Jingjing; Yuan, Xin; Ni, Wei; Akan, Ozgur B.; Poor, H. Vincent. This paper proposes a novel, data-agnostic, model poisoning attack on Federated Learning (FL), by designing a new adversarial graph autoencoder (GAE)-based framework. The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability. By listening to the benign local models and the global model, the attacker extracts the graph structural correlations among the benign local models and the training data features substantiating the models. The attacker then adversarially regenerates the graph structural correlations while maximizing the FL training loss, and subsequently generates malicious local models using the adversarial graph structure and the training data features of the benign ones. A new algorithm is designed to iteratively train the malicious local models using GAE and sub-gradient descent. The convergence of FL under attack is rigorously proved, with a considerably large optimality gap. Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it. The attack can give rise to an infection across all benign devices, making it a serious threat to FL.
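As a heavily simplified caricature of the trade-off this attack exploits (raise the training loss while remaining statistically close to benign updates), the toy loop below ascends a surrogate loss under a closeness penalty; it replaces the graph autoencoder with a plain centroid constraint and is in no way the paper's algorithm.

```python
# Toy stand-in only: gradient-ascend a surrogate loss while penalizing distance
# to the benign models' centroid, mimicking "effective yet hard to detect".
import numpy as np

rng = np.random.default_rng(1)
benign_models = rng.standard_normal((5, 10))   # 5 benign local model vectors (toy)
centroid = benign_models.mean(axis=0)

def surrogate_loss_grad(w):
    # Placeholder for a sub-gradient of the FL training loss (unknown here).
    return np.ones_like(w)

malicious = centroid.copy()
lr, mu = 0.05, 0.5                             # ascent step and closeness weight (assumed)
for _ in range(100):
    malicious += lr * surrogate_loss_grad(malicious)      # push the training loss up
    malicious -= lr * mu * (malicious - centroid)         # stay close to benign updates

print("distance to benign centroid:", np.linalg.norm(malicious - centroid))
```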
- Data-driven Deep Reinforcement Learning for Online Flight Resource Allocation in UAV-aided Wireless Powered Sensor Networks. Publication. Li, Kai; Ni, Wei; Kurunathan, Harrison; Dressler, Falko. In wireless powered sensor networks (WPSN), data of ground sensors can be collected or relayed by an unmanned aerial vehicle (UAV) while the battery of the ground sensor can be charged via wireless power transfer. A key challenge of resource allocation in UAV-aided WPSN is to prevent battery drainage and buffer overflow of the ground sensors in the presence of highly dynamic lossy airborne channels which can result in packet reception errors. Moreover, the state and action spaces of the resource allocation problem are large and can hardly be explored online. To address these challenges, a new data-driven deep reinforcement learning framework, DDRL-RA, is proposed to train flight resource allocation online so that the data packet loss is minimized. Due to time-varying airborne channels, DDRL-RA first leverages long short-term memory (LSTM) with pre-collected offline datasets for channel randomness predictions. Then, Deep Deterministic Policy Gradient (DDPG) is studied to control the flight trajectory of the UAV and to schedule the ground sensors to transmit data and harvest energy. To evaluate the performance of DDRL-RA, a UAV-ground sensor testbed is built, where real-world datasets of channel gains are collected. DDRL-RA is implemented in TensorFlow, and numerical results show that DDRL-RA achieves 19% lower packet loss than other learning-based frameworks.
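The offline-then-online split described above starts from a sequence model for the channel; the sketch below trains a small LSTM on a stand-in channel-gain trace to predict the next gain from a sliding window, using TensorFlow/Keras. The window length, network size, and synthetic data are assumptions, not the framework's configuration.

```python
# Offline channel prediction sketch: an LSTM predicts the next channel gain
# from a sliding window of past gains (synthetic trace, assumed hyper-parameters).
import numpy as np
import tensorflow as tf

window = 16
gains = np.abs(np.random.randn(1000)).astype("float32")      # stand-in channel-gain trace
X = np.stack([gains[i:i + window] for i in range(len(gains) - window)])[..., None]
y = gains[window:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

next_gain = model.predict(X[-1:], verbose=0)                  # prediction for the scheduler to use
print("predicted next gain:", float(next_gain[0, 0]))
```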
- Data-driven Flight Control of Internet-of-Drones for Sensor Data Aggregation using Multi-agent Deep Reinforcement Learning. Publication. Li, Kai; Ni, Wei; Emami, Yousef; Dressler, Falko. Energy-harvesting-powered sensors are increasingly deployed beyond the reach of terrestrial gateways, where there is often no persistent power supply. Making use of the internet of drones (IoD) for data aggregation in such environments is a promising paradigm to enhance network scalability and connectivity. The flexibility of IoD and favorable line-of-sight connections between the drones and ground nodes are exploited to improve data reception at the drones. In this article, we discuss the challenges of online flight control of IoD, where data-driven neural networks can be tailored to design the trajectories and patrol speeds of the drones and their communication schedules, preventing buffer overflows at the ground nodes. In a small-scale IoD, multi-agent deep reinforcement learning can be developed with long short-term memory to train the continuous flight control of IoD and data aggregation scheduling, where a joint action is generated for the IoD by sharing the flight control decisions among the drones. In a large-scale IoD, sharing the flight control decisions in real time can result in communication overheads and interference. In this case, deep reinforcement learning can be trained with second-hand visiting experiences, where the drones learn the actions of each other based on historical scheduling records maintained at the ground nodes.
- Deep Graph-based Reinforcement Learning for Joint Cruise Control and Task Offloading for Aerial Edge Internet-of-Things (EdgeIoT). Publication. Li, Kai; Ni, Wei; Yuan, Xin; Noor, Alam; Jamalipour, Abbas. This paper puts forth an aerial edge Internet-of-Things (EdgeIoT) system, where an unmanned aerial vehicle (UAV) is employed as a mobile edge server to process mission-critical computation tasks of ground Internet-of-Things (IoT) devices. When the UAV schedules an IoT device to offload its computation task, the tasks buffered at the other, unselected devices could become outdated and have to be cancelled. We investigate a new joint optimization of UAV cruise control and task offloading allocation, which maximizes the tasks offloaded to the UAV, subject to the IoT device's computation capacity and battery budget, and the UAV's speed limit. Since the optimization contains a large solution space while the instantaneous network states are unknown to the UAV, we propose a new deep graph-based reinforcement learning framework. An advantage actor-critic (A2C) structure is developed to train the real-time continuous actions of the UAV in terms of the flight speed, heading, and the offloading schedule of the IoT device. By exploring hidden representations resulting from the network feature correlation, our framework takes advantage of graph neural networks (GNN) to supervise the training of the UAV's actions in A2C. The proposed GNN-A2C framework is implemented with Google TensorFlow. The performance analysis shows that GNN-A2C achieves fast convergence and considerably reduces the task missing rate in aerial EdgeIoT.
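To visualize the "hidden representations from network feature correlation" idea, the sketch below builds a correlation-thresholded graph over device state features and runs one GCN-style propagation step; the feature dimensions, threshold, and random weights are assumptions, and this is not the GNN-A2C architecture itself.

```python
# One GCN-style propagation step over a correlation-based device graph
# (random features/weights; the 0.3 threshold is an arbitrary assumption).
import numpy as np

rng = np.random.default_rng(2)
features = rng.standard_normal((6, 4))          # 6 IoT devices x 4 state features
corr = np.corrcoef(features)                    # feature correlation between devices
adj = (np.abs(corr) > 0.3).astype(float)        # thresholded correlations as graph edges
deg_inv = np.diag(1.0 / adj.sum(axis=1))        # row-normalization
W = rng.standard_normal((4, 8))                 # stand-in learnable weights
hidden = np.tanh(deg_inv @ adj @ features @ W)  # hidden device representations
print(hidden.shape)                             # (6, 8)
```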
- Deep Q-Learning based Resource Management in UAV-assisted Wireless Powered IoT Networks. Publication. Li, Kai; Ni, Wei; Tovar, Eduardo; Jamalipour, Abbas. In Unmanned Aerial Vehicle (UAV)-assisted Wireless Powered Internet of Things (IoT), the UAV is employed to charge the IoT nodes remotely via Wireless Power Transfer (WPT) and collect their data. A key challenge of resource management for WPT and data collection is preventing battery drainage and buffer overflow of the ground IoT nodes in the presence of highly dynamic airborne channels. In this paper, we consider the resource management problem in practical scenarios, where the UAV has no a priori information on the battery levels and data queue lengths of the nodes. We formulate the resource management of UAV-assisted WPT and data collection as a Markov Decision Process (MDP), where the states consist of the battery levels and data queue lengths of the IoT nodes, channel qualities, and positions of the UAV. A deep Q-learning based resource management scheme is proposed to minimize the overall data packet loss of the IoT nodes, by optimally deciding the IoT node for data collection and power transfer, and the associated modulation scheme of the IoT node.
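A minimal sketch of the deep Q-learning machinery this entry refers to: a small Q-network over an assumed state vector (battery, queue, channel, UAV position) and one temporal-difference update on a toy batch. The state dimension, action count, and network sizes are placeholders, not the paper's design.

```python
# One temporal-difference update of a small Q-network on a toy batch.
import numpy as np
import tensorflow as tf

state_dim = 7    # e.g. battery, queue length, channel quality, UAV position/time (assumed)
n_actions = 12   # e.g. 4 IoT nodes x 3 modulation schemes (assumed)
gamma = 0.95

q_net = tf.keras.Sequential([
    tf.keras.Input(shape=(state_dim,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_actions),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

states = np.random.rand(32, state_dim).astype("float32")
actions = np.random.randint(n_actions, size=32)
rewards = -np.random.rand(32).astype("float32")               # negative packet loss as reward
next_states = np.random.rand(32, state_dim).astype("float32")

with tf.GradientTape() as tape:
    q_sa = tf.gather(q_net(states), actions, axis=1, batch_dims=1)   # Q(s, a) of taken actions
    target = rewards + gamma * tf.reduce_max(q_net(next_states), axis=1)
    loss = tf.reduce_mean(tf.square(tf.stop_gradient(target) - q_sa))
grads = tape.gradient(loss, q_net.trainable_variables)
optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
print("TD loss:", float(loss))
```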
- Deep Q-Networks for Aerial Data Collection in Multi-UAV-Assisted Wireless Sensor Networks. Publication. Emami, Yousef; Wei, Bo; Li, Kai; Ni, Wei; Tovar, Eduardo. Unmanned Aerial Vehicles (UAVs) can collaborate to collect and relay data for ground sensors in remote and hostile areas. In multi-UAV-assisted wireless sensor networks (MA-WSN), the UAVs' movements impact the channel conditions and can cause data transmissions to fail; together with newly arriving data, this gives rise to buffer overflows at the ground sensors. Thus, scheduling data transmissions is of utmost importance in MA-WSN to reduce data packet losses resulting from buffer overflows and channel fading. In this paper, we investigate the optimal ground sensor selection at the UAVs to minimize data packet losses. The optimization problem is formulated as a multi-agent Markov decision process, where the network states consist of the battery levels and data buffer lengths of the ground sensors, channel conditions, and waypoints of the UAVs along their trajectories. In practice, an MA-WSN contains a large number of network states, while up-to-date knowledge of the network states and other UAVs' sensor selection decisions is not available at each agent. We propose a Multi-UAV Deep Reinforcement Learning based Scheduling Algorithm (MUAIS) to minimize the data packet loss, where the UAVs learn the underlying patterns of the data and energy arrivals at all the ground sensors. Numerical results show that the proposed MUAIS achieves at least 46% and 35% lower packet loss than an optimal single-UAV solution and an existing non-learning greedy algorithm, respectively.
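As a toy picture of the multi-agent setting (each UAV selects a sensor from its own observation, without seeing the others' choices), the sketch below runs two independent tabular learners against a stand-in environment that penalizes both packet loss and two UAVs polling the same sensor; MUAIS uses deep Q-networks and a real network model, so everything here is a simplification.

```python
# Independent per-UAV learners selecting ground sensors (simplified stand-in).
import numpy as np

rng = np.random.default_rng(4)
n_uavs, n_sensors, n_states = 2, 6, 32
Q = np.zeros((n_uavs, n_states, n_sensors))      # one value table per UAV (DQNs in the paper)
alpha, gamma, eps = 0.1, 0.9, 0.1

def environment(actions):
    # Stand-in dynamics: reward is a negative packet-loss proxy, with an extra
    # penalty if two UAVs poll the same sensor in the same slot.
    collision = len(set(actions.tolist())) < n_uavs
    rewards = -rng.random(n_uavs) - (1.0 if collision else 0.0)
    next_states = rng.integers(n_states, size=n_uavs)
    return next_states, rewards

states = np.zeros(n_uavs, dtype=int)
for _ in range(500):
    actions = np.array([
        rng.integers(n_sensors) if rng.random() < eps else int(Q[u, states[u]].argmax())
        for u in range(n_uavs)
    ])
    next_states, rewards = environment(actions)
    for u in range(n_uavs):
        td_target = rewards[u] + gamma * Q[u, next_states[u]].max()
        Q[u, states[u], actions[u]] += alpha * (td_target - Q[u, states[u], actions[u]])
    states = next_states
```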