Browsing by Author "SILVA, MIGUEL DIOGO GAMEIRO"
- Robust machine learning against adversarial attacks
  Publication. SILVA, MIGUEL DIOGO GAMEIRO; Maia, Eva Catarina Gomes

  The growing interest in the application of Artificial Intelligence (AI) across various domains is evident, and the general public is becoming increasingly accustomed to these systems. However, this interest is accompanied by significant challenges that threaten the security of AI-based systems. Rather than relying on AI as the sole decision-maker in critical scenarios, it should serve merely as a supportive tool, with its vulnerabilities carefully considered. One major concern is adversarial attacks, in which bad actors introduce small, imperceptible perturbations to input data, causing AI models to misclassify. Resilience to this type of attack is called robustness, and it is essential for secure deployment, to avoid potentially harmful consequences such as bypassing AI-based security systems.

  This dissertation presents Adversarial to Understand Robustness and Offensive Resilience Analysis (AURORA), a tool for testing the robustness of Machine Learning (ML) models against adversarial threats, focusing on adversarial attacks. The results of the attacks are evaluated using state-of-the-art metrics identified through a systematic review. As most work on this topic revolves around images, the tool focuses on a less explored field: tabular data. AURORA also introduces a new adjustment to these metrics based on the distance between original and perturbed samples. Since effective adversarial examples should closely resemble the original input, those that differ significantly are considered less meaningful and carry less weight in the evaluation. Attacks that generate completely unrelated samples are therefore penalised, reducing their success rate. This adjustment also accounts for the validity of perturbed samples, since invalid data should not influence evaluation metrics as much as valid data.

  Overall, the case study conducted with AURORA shows that, because most existing attack methods are designed for image data, they do not create valid or realistic adversarial samples for tabular data. When the perturbed data is adjusted to produce valid samples, the attack success rate decreases. This highlights the need to test models against methods appropriate to the data in use. By developing AURORA, this dissertation contributes to the advancement of adversarial robustness research in ML, particularly in the context of tabular data. AURORA provides a simple and effective framework for evaluating robustness while accounting for the constraints and considerations specific to tabular data. It offers two robustness-score perspectives: one suited to general use, and another for high-stakes, real-world scenarios where only the best-performing adversarial attacks are considered in the evaluation. A key takeaway of this dissertation is the need for continued efforts to improve the robustness and trustworthiness of ML models, and to raise awareness of their inherent vulnerabilities and the risks associated with their use.
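  The distance-based adjustment described in the abstract can be illustrated with a minimal sketch. The abstract does not specify AURORA's actual formula, so the function name, the exponential weighting, and the `scale` parameter below are assumptions for illustration; the only grounded behaviour is that successful attacks are discounted when the perturbed sample strays far from the original, and that invalid samples influence the score less than valid ones.

  ```python
  import numpy as np

  def penalised_success_rate(x_orig, x_adv, fooled, is_valid, scale=1.0):
      """Distance-weighted attack success rate (illustrative sketch only;
      not AURORA's published formula).

      x_orig, x_adv : (n, d) arrays of original and perturbed tabular samples
      fooled        : (n,) boolean array, True where the model misclassified x_adv
      is_valid      : (n,) boolean array, True where x_adv satisfies the
                      tabular-data constraints (types, ranges, etc.)
      scale         : hypothetical decay parameter controlling how quickly
                      distant perturbations are discounted
      """
      # L2 distance between each original sample and its adversarial counterpart.
      dist = np.linalg.norm(x_adv - x_orig, axis=1)
      # Weight each attack by its closeness to the original: near-identical
      # samples count fully, completely unrelated samples count little.
      weight = np.exp(-dist / scale)
      # Invalid perturbed samples should not influence the metric as much;
      # here, as one possible choice, they are discounted entirely.
      weight = np.where(is_valid, weight, 0.0)
      return float(np.sum(fooled * weight) / len(fooled))

  # Example: the second sample fools the model but is far from its
  # original, so it contributes little to the penalised score.
  x_orig = np.array([[1.0, 2.0], [0.5, 1.0], [3.0, 0.0]])
  x_adv = np.array([[1.1, 2.0], [5.0, 9.0], [3.0, 0.1]])
  fooled = np.array([True, True, False])
  is_valid = np.array([True, True, True])
  print(penalised_success_rate(x_orig, x_adv, fooled, is_valid))
  ```

  Under a weighting of this kind, an attack's raw success rate is an upper bound on its penalised score, which matches the abstract's observation that penalising unrelated or invalid samples reduces the measured success rate.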
