Publication

Leverage variational graph representation for model poisoning on federated learning

File

Name: CISTER-TR-240507.pdf
Size: 1.44 MB
Format: Adobe PDF

Abstract

This article puts forth a new training-data-untethered model poisoning (MP) attack on federated learning (FL). The new MP attack extends an adversarial variational graph autoencoder (VGAE) to create malicious local models based solely on the benign local models overheard, without any access to the training data of FL. This advancement yields the VGAE-MP attack, which is not only efficacious but also remains elusive to detection. The VGAE-MP attack extracts graph structural correlations among the benign local models and the training data features, adversarially regenerates the graph structure, and generates malicious local models from the adversarial graph structure and the benign models' features. Moreover, a new attacking algorithm is presented that trains the malicious local models using the VGAE and sub-gradient descent, while enabling an optimal selection of the benign local models for training the VGAE. Experiments demonstrate a gradual drop in FL accuracy under the proposed VGAE-MP attack and the ineffectiveness of existing defense mechanisms in detecting the attack, posing a severe threat to FL.
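The abstract describes a three-step pipeline: build a graph over the overheard benign local models, adversarially perturb that graph structure, and combine the benign models' features through the perturbed graph to produce a malicious local model. The following NumPy sketch illustrates only that high-level flow under loudly stated assumptions: the cosine-similarity graph, the sign-flip perturbation, and all function names are illustrative stand-ins, not the paper's method, which uses a trained VGAE encoder/decoder and sub-gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 overheard benign local models, each flattened
# to a 10-dimensional weight vector (real FL models are far larger).
benign = rng.normal(size=(5, 10))

def cosine_adjacency(W):
    """Illustrative graph structure among benign models:
    pairwise cosine similarity, with self-loops removed."""
    unit = W / np.linalg.norm(W, axis=1, keepdims=True)
    A = unit @ unit.T
    np.fill_diagonal(A, 0.0)
    return A

def adversarial_regenerate(A, eps=0.5):
    """Stand-in for the adversarial VGAE decoder step: perturb the
    graph structure with a symmetric sign-flip noise, then clip
    entries back into [0, 1]."""
    noise = eps * np.sign(rng.normal(size=A.shape))
    return np.clip(A + (noise + noise.T) / 2.0, 0.0, 1.0)

def malicious_model(W, A_adv):
    """Generate a malicious local model as a combination of the benign
    models' features, weighted by the adversarial graph structure."""
    weights = A_adv.sum(axis=1)
    weights = weights / weights.sum()
    return weights @ W

A_adv = adversarial_regenerate(cosine_adjacency(benign))
m = malicious_model(benign, A_adv)
print(m.shape)  # (10,) -- same dimensionality as a benign local model
```

Because `m` has the same shape and a similar scale to the benign updates, a naive aggregator cannot reject it on dimensionality alone, which is the stealth property the abstract emphasizes.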

Keywords

Federated learning; variational graph autoencoders; data-untethered model poisoning

Citation

Li, K., Yuan, X., Zheng, J., Ni, W., Dressler, F., & Jamalipour, A. (2024). Leverage Variational Graph Representation for Model Poisoning on Federated Learning. IEEE Transactions on Neural Networks and Learning Systems, PP. doi: 10.1109/TNNLS.2024.3394252.

Publisher

IEEE
