Browsing by Author "Akan, Ozgur B."
Now showing 1 - 2 of 2
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach
  Publication. Li, Kai; Zheng, Jingjing; Yuan, Xin; Ni, Wei; Akan, Ozgur B.; Poor, H. Vincent
  This paper proposes a novel, data-agnostic model poisoning attack on Federated Learning (FL), built on a new adversarial graph autoencoder (GAE)-based framework. The attack requires no knowledge of the FL training data and is both effective and hard to detect. By listening to the benign local models and the global model, the attacker extracts the graph structural correlations among the benign local models and the training data features underpinning those models. The attacker then adversarially regenerates the graph structural correlations while maximizing the FL training loss, and subsequently generates malicious local models from the adversarial graph structure and the training data features of the benign models. A new algorithm is designed to iteratively train the malicious local models using the GAE and sub-gradient descent. The convergence of FL under attack is rigorously proved, with a considerably large optimality gap. Experiments show that FL accuracy drops gradually under the proposed attack and that existing defense mechanisms fail to detect it. The attack can propagate across all benign devices, making it a serious threat to FL. (A minimal sketch of the attack loop appears after this listing.)
- Poisoning federated learning with graph neural networks in Internet of Drones
  Publication. Li, Kai; Noor, Alam; Ni, Wei; Tovar, Eduardo; Fu, Xiaoming; Akan, Ozgur B.
  Internet of Drones (IoD) is an innovative technology that integrates mobile computing capabilities with drones, enabling them to process data at or near the location where it is collected. Federated learning can significantly enhance the efficiency and effectiveness of data processing and decision-making in IoD. Since federated learning relies on aggregating updates from multiple drones, a malicious drone can generate poisoned local model updates that carry erroneous information, leading to incorrect decisions or even dangerous situations. In this paper, a new data-independent model poisoning attack that does not rely on training data at the drones is developed to degrade federated learning accuracy. The proposed attack leverages an adversarial graph neural network (A-GNN) to generate poisoning local model updates from the benign local models it overhears. In particular, the A-GNN discerns the graph structural correlations between the benign local models and the features of the training data that underpin these models. The graph structural correlations are reconstructively manipulated at the malicious drone to craft poisoning local model updates such that the training loss of federated learning is maximized. (A sketch of where such a poisoned update enters aggregation also follows the listing.)
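The first abstract describes two moving parts: a graph autoencoder fitted to the structural correlations among overheard benign local models, and an adversarial step that crafts a malicious model to blend into that structure while degrading the aggregate. The PyTorch sketch below is a minimal illustration of that loop only; the toy GAE, the cosine-similarity graph, the model sizes, and the surrogate "push the aggregate away" loss are all assumptions for demonstration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def correlation_graph(models: torch.Tensor) -> torch.Tensor:
    """Adjacency from pairwise cosine similarity of flattened model parameters."""
    normed = F.normalize(models, dim=1)
    return normed @ normed.t()

class ToyGAE(nn.Module):
    """Tiny graph autoencoder: encode node (model) features, decode adjacency."""
    def __init__(self, in_dim: int, hid_dim: int = 32):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        z = torch.tanh(adj @ self.enc(x))    # one propagation step over the graph
        return torch.sigmoid(z @ z.t()), z   # inner-product decoder regenerates structure

# Toy stand-ins: 5 overheard benign local models, 100 parameters each (assumed sizes).
benign = torch.randn(5, 100)
adj = correlation_graph(benign)

gae = ToyGAE(in_dim=100)
gae_opt = torch.optim.SGD(gae.parameters(), lr=0.05)

malicious = benign.mean(0).clone().requires_grad_(True)  # attacker's crafted model
mal_opt = torch.optim.SGD([malicious], lr=0.05)

for _ in range(50):
    # 1) Fit the GAE to the structural correlations among the benign models.
    recon, _ = gae(benign, adj)
    gae_loss = F.binary_cross_entropy(recon, (adj > 0.5).float())
    gae_opt.zero_grad()
    gae_loss.backward()
    gae_opt.step()

    # 2) Adversarially regenerate the structure with the malicious model included:
    #    keep its correlations close to the typical benign pattern (undetectability)
    #    while pushing the would-be aggregate away from the benign aggregate -- a
    #    surrogate for maximizing the FL training loss (an assumption of this sketch).
    all_models = torch.cat([benign, malicious.unsqueeze(0)], dim=0)
    recon_all, _ = gae(all_models, correlation_graph(all_models))
    blend_in = F.mse_loss(recon_all[5, :5], recon_all[:5, :5].mean(0).detach())
    push_away = -F.mse_loss(all_models.mean(0), benign.mean(0))
    mal_opt.zero_grad()
    (blend_in + push_away).backward()
    mal_opt.step()
```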
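For the IoD paper, the step a reader can ground quickly is where the poisoned update enters aggregation. The sketch below assumes a plain unweighted FedAvg rule and uses a hand-crafted update as a stand-in for the A-GNN's output; the crafting rule (opposing the benign consensus while matching benign update norms) is an illustrative assumption, not the paper's procedure.

```python
from typing import List
import torch

def fedavg(updates: List[torch.Tensor]) -> torch.Tensor:
    """Unweighted FedAvg over flattened local model updates."""
    return torch.stack(updates).mean(dim=0)

# Overheard benign updates from four drones (toy 10-parameter models; assumed sizes).
benign_updates = [torch.randn(10) for _ in range(4)]
benign_mean = torch.stack(benign_updates).mean(0)

# One malicious drone submits a crafted update: pointed against the benign
# consensus, but scaled to the largest benign norm so a simple norm-based
# filter would not flag it.
direction = -benign_mean / (benign_mean.norm() + 1e-12)
scale = torch.stack([u.norm() for u in benign_updates]).max()
poisoned = direction * scale

print("benign-only aggregate:", fedavg(benign_updates))
print("poisoned aggregate:   ", fedavg(benign_updates + [poisoned]))
```

Even this single matched-norm update shifts the aggregate away from the benign mean, which is the lever both papers exploit; the A-GNN's role is to shape that update so it also remains structurally consistent with the benign models.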