ISEP – GECAD – Artigos
Browsing ISEP – GECAD – Artigos by Subject "Adversarial attacks"
- Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection
  Publication. Vitorino, João; Oliveira, Nuno; Praça, Isabel
  Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the Adaptative Perturbation Pattern Method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer Perceptron (MLP) and Random Forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks.
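  The core idea in this abstract — perturbations constrained by class-specific patterns so that tabular examples stay valid — can be illustrated with a short Python sketch. This is not the published A2PM implementation: the helper names (fit_class_intervals, perturb, untargeted_evasion), the interval-clipping constraint, and the magnitude parameter are all assumptions made for illustration only.

  ```python
  import numpy as np

  def fit_class_intervals(X, y):
      # Hypothetical helper: learn per-class [min, max] feature bounds from
      # training data; perturbations are later clipped to these bounds so
      # each adversarial example stays plausible for its class.
      return {c: (X[y == c].min(axis=0), X[y == c].max(axis=0))
              for c in np.unique(y)}

  def perturb(x, cls, intervals, magnitude=0.05, rng=None):
      # Random step scaled to each feature's class-specific range, then
      # clipped back into that range (a rough proxy for the "valid and
      # coherent" requirement on tabular data).
      rng = rng if rng is not None else np.random.default_rng()
      lo, hi = intervals[cls]
      step = rng.uniform(-magnitude, magnitude, size=x.shape) * (hi - lo)
      return np.clip(x + step, lo, hi)

  def untargeted_evasion(model, x, cls, intervals, max_tries=50):
      # Gray-box untargeted attack: only the model's predictions are
      # queried; perturbations accumulate until the predicted class flips.
      for _ in range(max_tries):
          x = perturb(x, cls, intervals)
          if model.predict(x.reshape(1, -1))[0] != cls:
              return x  # evasive example found
      return None  # attack failed within the query budget
  ```

  The clipping step is what separates this sketch from unconstrained attacks: a perturbed traffic flow never leaves the feature ranges observed for its class.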
- Constrained Adversarial Learning and its applicability to Automated Software Testing: a systematic review
  Publication. Vitorino, João; Dias, Tiago; Fonseca, Tiago; Maia, Eva; Praça, Isabel
  Every novel technology adds hidden vulnerabilities ready to be exploited by a growing number of cyber-attacks. Automated software testing is a promising solution: it can quickly analyze thousands of lines of code by generating and slightly modifying function-specific testing data to expose a multitude of vulnerabilities and attack vectors. This process bears similarities to the constrained adversarial examples generated by adversarial learning methods, so integrating these methods into automated testing tools could bring significant benefits. This systematic review therefore surveys the current state of the art of constrained data generation methods applied to adversarial learning and software testing, aiming to guide researchers and developers in enhancing testing tools with adversarial learning methods and improving the resilience and robustness of their digital systems. The constrained data generation applications found for adversarial machine learning were systematized, and the advantages and limitations of approaches specific to software testing were thoroughly analyzed, identifying research gaps and opportunities to improve testing tools with adversarial attack methods.
- Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification
  Publication. Vitorino, João; Praça, Isabel; Maia, Eva
  The internet of things (IoT) faces tremendous security challenges. Machine learning models can be used to tackle the growing number of cyber-attack variations targeting IoT systems, but the increasing threat posed by adversarial attacks restates the need for reliable defense strategies. This work describes the types of constraints required for a realistic adversarial cyber-attack example and proposes a methodology for a trustworthy adversarial robustness analysis with a realistic adversarial evasion attack vector. The proposed methodology was used to evaluate three supervised algorithms, random forest (RF), extreme gradient boosting (XGB), and light gradient boosting machine (LGBM), and one unsupervised algorithm, isolation forest (IFOR). Constrained adversarial examples were generated with the adaptative perturbation pattern method (A2PM), and evasion attacks were performed against models created with regular and adversarial training. Even though RF was the least affected in binary classification, XGB consistently achieved the highest accuracy in multi-class classification. The obtained results evidence the inherent susceptibility of tree-based algorithms and ensembles to adversarial evasion attacks and demonstrate the benefits of adversarial training and a security-by-design approach for a more robust IoT network intrusion detection and cyber-attack classification.
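  The evaluation protocol this abstract describes — models built with regular and adversarial training, then scored on clean and perturbed test data — follows a common pattern that can be sketched as below. The sketch reuses the hypothetical fit_class_intervals and perturb helpers from the earlier block, substitutes a synthetic dataset for the IoT network flows, and uses single-step constrained noise in place of A2PM-generated examples, so it approximates the workflow rather than reproducing the paper's setup.

  ```python
  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split

  # Synthetic stand-in for a network-flow dataset such as IoT-23.
  X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
  X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                            random_state=0)

  regular = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

  # Adversarial training: augment the training set with perturbed copies
  # (simple constrained noise here, where the paper uses A2PM examples).
  intervals = fit_class_intervals(X_tr, y_tr)
  X_adv = np.vstack([perturb(x, c, intervals) for x, c in zip(X_tr, y_tr)])
  robust = RandomForestClassifier(random_state=0).fit(
      np.vstack([X_tr, X_adv]), np.concatenate([y_tr, y_tr]))

  # Score both models on clean and on adversarially perturbed test data.
  X_te_adv = np.vstack([perturb(x, c, intervals) for x, c in zip(X_te, y_te)])
  for name, m in (("regular", regular), ("adversarial", robust)):
      print(f"{name}: clean={m.score(X_te, y_te):.3f} "
            f"adversarial={m.score(X_te_adv, y_te):.3f}")
  ```

  If the pattern reported in the abstract holds, the adversarially trained model should lose little clean accuracy while degrading noticeably less on the perturbed test set.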