ISEP - DM - Engenharia de Sistemas Computacionais Críticos

Recent Submissions

  • Towards a verification and validation framework based on microservices architectures
    Publication . Hernández, Lidia Gloria Wilson; Pinho, Luís Miguel Rosário da Silva
    In recent years, microservices architectures have become a popular approach in software development to address the challenges of scalability, maintainability, and agility of complex systems. This thesis focuses on the intersection of microservices architectures with formal verification techniques, especially in the context of safety-critical applications such as Assisted and/or Automated Driving solutions. The lack of research in this area calls for a comprehensive investigation of how microservices architectures and formal verification techniques can interact effectively. The primary challenge is to seamlessly integrate microservices architectures with formal verification processes to ensure the safety properties typical of Cyber-Physical Systems, with a particular focus on autonomous driving applications. The importance of this work lies in the development of a robust framework that reconciles the decentralised nature of microservices with the requirements of formal verification. Research objectives include analysing the benefits and limitations of microservices, identifying suitable formal verification tools and microservices frameworks, designing microservices APIs, and conducting experiments for validation. Successful implementations and a comprehensive overview of formal verification methods in safety-critical systems form the basis for this research. As a proof of concept, this thesis presents the implementation of VVFramework, a microservices-based hybrid verification and validation framework that integrates NuSMV, Z3, and an automatic Python-SMV translation pipeline, and demonstrates its applicability in small case studies in safety-critical domains.
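    To illustrate the kind of check such a framework automates, the following is a minimal sketch using the Z3 Python bindings; the signals, the controller rule, and the property are illustrative assumptions and are not taken from VVFramework.
        # Minimal illustrative sketch (not VVFramework code): checking a simple,
        # hypothetical safety rule of a driving function with the Z3 SMT solver.
        from z3 import Real, Bool, Solver, Implies, And, sat

        distance = Real("distance_to_obstacle")   # metres (hypothetical signal)
        braking = Real("braking_distance")        # metres (hypothetical signal)
        accelerate = Bool("accelerate_cmd")

        # Assumed controller rule: accelerate only if the obstacle is beyond
        # the braking distance.
        controller = Implies(accelerate, distance > braking)

        # Violation we search for: accelerating while inside the braking distance.
        violation = And(accelerate, distance <= braking)

        solver = Solver()
        solver.add(controller, violation)
        # unsat means no state satisfies both the rule and the violation,
        # i.e. the safety property holds under the assumed rule.
        print("counterexample found" if solver.check() == sat else "property holds")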
  • An approach to requirements specification and verification: A case study on Over-the-Air updates
    Publication . Carvalho, Anaisa Cristina Pereira; Pereira, David Miguel Ramalho; Pinho, Luís Miguel Rosário da Silva
    Modern vehicles rely on increasingly complex embedded software to deliver advanced features in performance, safety, and connectivity. In automotive software development, a precise requirements specification is essential to manage complexity and ensure reliability throughout the life cycle. Although natural language has been the standard for documenting requirements, it often introduces ambiguity and misinterpretation. This thesis investigates the advantages of adopting formal requirements engineering over natural language approaches within the automotive field, using the Formal Requirements Elicitation Tool (FRET) and taking as a use case a representative set of requirements from an Over-the-Air (OTA) updates project derived from the automotive industry. This analysis highlights how FRET improves clarity when eliciting, managing, analysing, and verifying requirements, thus serving as a strong asset for reducing errors in later development phases, which is especially important in critical projects. This work also provides evidence of the tangible benefits of formal techniques for the specification, analysis, and verification of vehicle software requirements, but it also highlights that tools such as FRET still require training in formal logic and, given the niche nature of the tool, regular interaction with its developers, so that users become aware of features that are fundamental for effective use of the tool but remain undocumented in the literature.
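    As an illustration of what such a formalization looks like, the requirement and its LTL reading below are invented for this listing and are not taken from the OTA case study:
        % Hypothetical requirement: "While an over-the-air update is in progress,
        % the vehicle shall not allow the engine to be started."
        \[ \mathbf{G}\,\bigl(\mathit{update\_in\_progress} \rightarrow \neg\, \mathit{engine\_start\_allowed}\bigr) \]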
  • Space is so monotonic: Introducing dynamic schedulers to satellite software
    Publication . Paschoaletto, Alexander Pinheiro; Sousa, Paulo Manuel Baltarejo de
    The space industry has seen a shift in trends over recent decades from several perspectives: the increase in competition through the introduction of more participants (both state-related and private), the expansion of mission scopes from simple research and defense to areas such as internet service and even tourism, and the growing computational demand to handle these new missions. In this scenario, satellites are a fundamental element in most use cases. Like any spacecraft, they need to be developed to withstand the harsh physical conditions of space, which implies techniques such as radiation hardening of some components to operate reliably, and they are deployed in environments where maintenance is complicated. They are also expected to work autonomously for years, even decades. Given these and other challenges, satellites traditionally have a long and expensive development phase and are slow-paced when it comes to incorporating recent technologies. For the on-board computers that go within them, single-core processors of outdated architectures, fixed-priority schedulers, and low CPU workloads are dominant. This paradigm works for now, but may not in the years to come as trends such as Artificial Intelligence (AI) and real-time video streaming may also reach the space domain. The latest iterations of space-oriented software do tackle the issue of development costs by introducing greater code reusability across missions, but little seems to be done regarding the software performance itself. In this context, this thesis aims to bring modern software paradigms into play by introducing support for three widely known schedulers - Rate Monotonic (RM), Earliest Deadline First (EDF), and Constant Bandwidth Server (CBS) - into KARVEL, a space-oriented software originally developed by Critical Software. We evaluate their performance, advantages, and shortcomings on both synthetic workloads (by emulating work as busy-wait routines) and real-world workloads (by deploying the software on a robot), and demonstrate that dynamic algorithms such as EDF are indeed capable of outperforming RM even in overloaded scenarios.
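    As a minimal sketch of how the fixed-priority and dynamic policies differ at a single scheduling instant (illustrative only, not KARVEL code; the task parameters are invented):
        # RM picks the ready job of the task with the shortest period;
        # EDF picks the ready job with the earliest absolute deadline.
        from dataclasses import dataclass

        @dataclass
        class Job:
            name: str
            period: float             # task period, drives RM priority
            absolute_deadline: float  # drives EDF priority

        ready = [
            Job("telemetry", period=100.0, absolute_deadline=180.0),
            Job("attitude",  period=50.0,  absolute_deadline=190.0),
            Job("payload",   period=200.0, absolute_deadline=150.0),
        ]

        rm_choice  = min(ready, key=lambda j: j.period)             # Rate Monotonic
        edf_choice = min(ready, key=lambda j: j.absolute_deadline)  # Earliest Deadline First
        print(f"RM runs {rm_choice.name}, EDF runs {edf_choice.name}")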
  • Application of model checking in the formal verification of requirements for a speed control system in railway
    Publication . Coelho, Cristiano Manuel Garcês; Pinho, Luís Miguel Rosário da Silva
    Railway speed control systems are essential for the safety and efficiency of railway transportation systems, as they ensure compliance with regulations and prevent accidents. As the demand for rail transport grows, rigorous verification of these systems becomes increasingly critical. In such projects, errors are sometimes only detected during the testing phase, leading to costs that could have been avoided if these errors had been identified in the early stages of the project. This thesis focuses on the formal verification of speed control systems, specifically using model checking, to ensure that these systems meet safety requirements as well as performance standards. As the basis for the work presented in this thesis, we adopt the EBICAB 700 architecture, for which a small, yet representative, set of safety, operational, and regulatory requirements is rigorously specified using well-known temporal logic languages and verified against the system models using the model checking tools NuSMV and UPPAAL. This work aims to contribute to the effort of demonstrating the feasibility and benefits of applying formal verification to railway systems, not only in terms of safety assurance but also as a strategy for reducing late-stage development risks and increased costs. It highlights the value of incorporating formal methods into the requirements and design phases of critical system engineering.
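    Illustrative examples of the kind of temporal-logic properties involved (invented for this listing, not the EBICAB 700 requirements):
        % LTL: whenever the measured speed exceeds the permitted speed,
        % the emergency brake is eventually applied.
        \[ \mathbf{G}\,\bigl(\mathit{speed} > \mathit{limit} \rightarrow \mathbf{F}\, \mathit{emergency\_brake}\bigr) \]
        % CTL: from every reachable overspeed state, all paths eventually
        % reach a state where the emergency brake is applied.
        \[ \mathbf{AG}\,\bigl(\mathit{speed} > \mathit{limit} \rightarrow \mathbf{AF}\, \mathit{emergency\_brake}\bigr) \]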
  • Performance monitoring of real-time applications in RISC-V platforms
    Publication . Soares, Nuno Filipe Pessoa; Carvalho, Tiago Diogo Ribeiro de
    Real-time systems are an important field of research, especially considering that these systems interact with the real world and their tasks have a direct impact on human lives and society. It is imperative that the traceability and performance of these systems can be ensured. RISC-V is an open-source instruction set architecture gaining interest from both industry and academia, especially in the context of real-time systems development, due to its versatility and customization capabilities. The performance analysis of RISC-V systems emerges as a critical aspect, with the need to evaluate metrics such as system efficiency and scalability in order to deal with increasing workloads and task complexity. There is a lack of support for performance analysis, especially in real-time operating systems for RISC-V architectures, which usually do not fully support the latest performance-monitoring specifications or are tied to a specific operating system. This thesis provides a comprehensive study of performance analysis approaches on RISC-V systems, exploring tools and solutions for performance monitoring of applications that provide access to the system performance counters. As a solution, this work proposes an API capable of retrieving performance metrics from the hardware performance counters, accessible via code instrumentation of a target application. The proposed API works as a proof of concept for the development of a more sophisticated tool capable of facilitating the retrieval and configuration of performance metrics from a RISC-V system. To assess the API's performance, some introductory tests were performed to show that the API is capable of interacting with and managing the hardware performance counters present on the system. Additionally, the overhead generated by the API was briefly analysed, showing that the API has limited impact on the overall system's performance when retrieving metrics from the hardware performance counters.
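    The snippet below sketches only the instrumentation usage pattern described above; the counter-reading function is a Python stand-in (time.perf_counter_ns) for the RISC-V hardware counters, and all names are hypothetical rather than the API proposed in the thesis.
        import time

        def read_counter() -> int:
            # Stand-in for reading a hardware performance counter (e.g. the
            # RISC-V 'cycle' CSR); real access goes through the platform API.
            return time.perf_counter_ns()

        def workload() -> int:
            return sum(i * i for i in range(100_000))

        start = read_counter()
        workload()
        delta = read_counter() - start
        print(f"counter delta over the instrumented region: {delta}")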
  • Real-time quality index estimation for redundant sampled values streams in digital substations
    Publication . Oriente, Everton Matheus; Barros, António Manuel de Sousa
    Protection devices are essential elements in the electric power grid, as they safeguard equipment and ensure the stability and reliability of the network by detecting faults and preventing damage through timely isolation of affected areas. With technological advancements, electromechanical protection relays have evolved into Intelligent Electronic Devices (IEDs), essentially protection relays with built-in communication capabilities. These modern relays not only protect equipment but also exchange critical information with other devices. However, the variety of events and data exchanged between power grid components is substantial. In this respect, communication protocols present both advantages and challenges. On one hand, they enable the signalling of diverse events and the transmission of multiple pieces of information over a single medium. On the other hand, a protocol is only effective if all components within the system comply with it. The introduction of the IEC 61850 standard has established unified communication across the energy sector, promoting interoperability between manufacturers and offering significant advantages over other standards. Real-time data exchange is facilitated by specific protocols within the IEC 61850 framework, which specifically address the communication requirements of IEDs, ensuring enhanced data quality and availability. As IEDs play a crucial role in substations, with many of their functions being both critical and time-sensitive, any developments incorporated into these devices can significantly impact the entire system. An IED analyses the stream of Sampled Values (SVs) sent by a Merging Unit to detect faults and implement protective measures. The Merging Unit sends SVs over two independent channels: the primary and the secondary (redundant) channels. When the primary channel fails, the secondary channel takes over, ensuring that the information the primary channel could not transmit is conveyed. At present, the decision-making process for switching channels is based solely on the occurrence of a total channel failure. This thesis aims to enhance IED functionality by adding intelligence to this channel selection method. It proposes ways to evaluate, quantify, and qualify the received information to determine the most reliable signal source, whether from Channel 1 (primary) or Channel 2 (secondary). This approach ensures that protection relays receive better information, enabling protection algorithms to act correctly and promptly within their operating parameters.
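    A minimal sketch of such a selection policy is shown below; the metrics and weights are illustrative assumptions, not the quality index proposed in the thesis.
        # Score two redundant Sampled Values channels over an observation window
        # and select the more reliable source.
        from dataclasses import dataclass

        @dataclass
        class ChannelStats:
            name: str
            lost_frames: int    # frames missing in the window
            late_frames: int    # frames arriving after their expected slot
            total_frames: int   # frames expected in the window

        def quality_index(s: ChannelStats) -> float:
            # Illustrative index in [0, 1]: losses penalised more than lateness.
            loss_rate = s.lost_frames / s.total_frames
            late_rate = s.late_frames / s.total_frames
            return max(0.0, 1.0 - 0.7 * loss_rate - 0.3 * late_rate)

        primary   = ChannelStats("channel 1", lost_frames=3, late_frames=10, total_frames=4000)
        secondary = ChannelStats("channel 2", lost_frames=0, late_frames=25, total_frames=4000)

        selected = max((primary, secondary), key=quality_index)
        print(f"selected source: {selected.name} (QI = {quality_index(selected):.4f})")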
  • Formal analysis of an authority transfer protocol for UAVs
    Publication . Rodrigues, Ricardo Alexandre Almeida; Pereira, David Miguel Ramalho
    The use of Unmanned Aerial Vehicles (UAVs) in a variety of applications has resulted in a growing demand for Beyond Visual Line of Sight (BVLOS) operations to cover long distances. This thesis focuses on an initial analysis of a UAV communication protocol designed for transferring control authority between ground controls within a common range, addressing challenges and evaluating its efficacy across different operational phases. The research encompasses start-up, mission execution, and handover phases, assessing the protocol’s interactions with ground-based systems for reliability, efficiency, and adaptability. A comprehensive overview of model checking is provided, emphasizing transition systems, formalisms, and tools for system correctness verification. The research employs model checking to formally specify and model the UAV communication protocol, establishing evaluation criteria through the use of LTL and CTL properties. The derived requirements serve as a foundational framework for future iterations, ensuring the protocol’s robustness and addressing security strategies in the event of connection loss. The holistic approach contributes to a comprehensive understanding of the protocol’s functionality, aiming to enhance the security authorization handover procedure and promote public and regulatory acceptance of BVLOS operations with UAVs.
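    Illustrative examples of the kind of properties used to evaluate the handover phase (invented for this listing, not the protocol's actual requirements):
        % LTL: a requested handover is eventually either completed or aborted.
        \[ \mathbf{G}\,\bigl(\mathit{handover\_requested} \rightarrow \mathbf{F}\,(\mathit{handover\_done} \vee \mathit{handover\_aborted})\bigr) \]
        % CTL: in every reachable state, at most one ground control holds authority.
        \[ \mathbf{AG}\,\neg(\mathit{gc1\_has\_authority} \wedge \mathit{gc2\_has\_authority}) \]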
  • Intelligent edge orchestration of real-time applications
    Publication . Calai, Rafael; Barros, António Manuel de Sousa
    With the rapid development of embedded systems, new communication technologies, and distributed systems, the complexity of critical system development has increased. In the past, these systems typically only performed the function of controlling a specific process, without the need for communication or for being components of a complex system centralizing data processing for decision-making. Nowadays, these critical systems have more tasks and are much more integrated, aiming to provide new functionalities to users, reduce costs, and enhance the security and availability of essential services. This scenario has brought new challenges to the development of such systems, including increased computational needs, new communication methods, ensuring cybersecurity, and meeting real-time requirements in distributed systems. The central theme of this thesis is the challenge of ensuring real-time requirements in distributed systems. To address this issue, we propose a scheduling algorithm for Docker Swarm, built on top of RT-Linux. This algorithm is designed to ensure that each process meets its temporal constraints by prioritising allocation according to the processes' respective priorities and the resources available.
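    A minimal sketch of such a priority- and resource-aware placement decision follows; the policy, names, and numbers are illustrative assumptions, not the scheduler proposed in the thesis.
        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            capacity: float   # schedulable CPU utilisation available on the node
            used: float = 0.0

        @dataclass
        class Service:
            name: str
            priority: int     # lower value = higher priority
            utilisation: float

        def place(services: list[Service], nodes: list[Node]) -> dict[str, str]:
            placement = {}
            # Highest-priority services are placed first, first-fit on the nodes
            # that still have enough spare utilisation.
            for svc in sorted(services, key=lambda s: s.priority):
                for node in nodes:
                    if node.used + svc.utilisation <= node.capacity:
                        node.used += svc.utilisation
                        placement[svc.name] = node.name
                        break
                else:
                    placement[svc.name] = "rejected: no node can meet the demand"
            return placement

        nodes = [Node("edge-1", capacity=0.8), Node("edge-2", capacity=0.8)]
        services = [Service("control-loop", 0, 0.5), Service("vision", 1, 0.6), Service("logging", 2, 0.4)]
        print(place(services, nodes))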
  • Integrated monitoring for cyber-security in residential scenarios
    Publication . Carneiro, Mário Miguel Silva de Sá; Ferreira, Luís Miguel Moreira Lino
    The increasing number of Internet of Things (IoT) devices and the growing adoption of smart homes have led to an increase in cybersecurity risk. The purpose of this dissertation is to examine these limitations and propose innovative solutions for anomaly detection using machine learning (ML) methods. The objectives and motivations for this work, which focuses on improving home network security, are explained in Chapter 1, which also provides a review of smart homes and their connections with cybersecurity issues. In Chapter 2, "State of the Art," the rise of IoT in everyday use and related safety concerns are addressed. In addition, it covers basic concepts such as machine learning strategies and the way these interact with intrusion detection systems (IDS), and argues that integrating ML with cybersecurity in IoT systems is important to mitigate increasing threats. The machine learning techniques selected for this project are presented in Chapter 3, with a focus on creating a reliable anomaly detection pipeline. Comprehensive data pre-processing, including cleaning, merging, normalization, and analysis, ensures the data is suitable for model training. In Chapter 4, the training of individual and ensemble models is addressed, along with an analysis of performance metrics in scenarios using binary and multi-class classification. Z-score normalization is one strategy frequently used to handle unbalanced datasets; it has been demonstrated to perform better than Min-Max, especially when applied to the UNSW-NB15 dataset. The implementation of APIs using Streamlit for real-time visualisation and FastAPI for backend integration with ML models is discussed in Chapter 5. This combination enables the ability to anticipate cyberattacks and visually represent anomalies in an effective way. The testbed built to automate cyberattacks and extract important features for model training is discussed in detail in Chapter 6. The results of the evaluations, which compare the performance of the individual models and the ensemble, are presented in Chapter 7. The ensemble performed better than expected, especially when it came to identifying anomalies in multi-class environments, achieving high accuracy and a significant reduction in false positives and negatives. Chapter 8 ends with an overview of the project's conclusions and contributions. The main findings emphasize the significance of selecting normalization strategies and the advantages of using ensemble models to improve attack detection.
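    A minimal sketch of the normalization-plus-ensemble idea follows; it uses synthetic data and a small Scikit-Learn voting ensemble, not the UNSW-NB15 pipeline or the models evaluated in the dissertation.
        import numpy as np
        from sklearn.preprocessing import StandardScaler          # Z-score normalization
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 10))
        y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)            # stand-in "attack" label

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        scaler = StandardScaler().fit(X_train)                     # fit on training data only
        X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

        ensemble = VotingClassifier(
            estimators=[("rf", RandomForestClassifier(random_state=0)),
                        ("lr", LogisticRegression(max_iter=1000))],
            voting="soft",
        )
        ensemble.fit(X_train, y_train)
        print(classification_report(y_test, ensemble.predict(X_test)))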
  • Deployment of ML Mechanisms for Cybersecurity in Resource-Constrained Embedded Systems
    Publication . Vicente, Pedro Miguel Casal; Santos, Pedro Miguel Salgueiro dos
    The increase of low-security devices on the Internet is being exploited by hackers to compromise data or to use them as external agents to perform further attacks. As such, it is of crucial importance that networks possess a system that correctly assesses the nature of incoming and outgoing packets to protect the local network and the overall Internet-connected systems. To achieve this, Machine Learning (ML) is being broadly used due to its early success. Nevertheless, these mechanisms are best inserted at the entry point of local networks, an embedded system which has limited resources to train machine learning models and/or to perform inference tasks. Since cybersecurity is a real-time problem, the embedded systems should perform these activities within a very restricted time interval. The time required to classify the packets depends on the overall system load, the complexity of the machine learning models, and the desired accuracy. This thesis aims to assess the current support for ML in embedded systems, either through the interoperability of models or through their development in low-level languages, and the relationship between the time required by different embedded systems, the different tools, and the models. This thesis explored one transpilation tool, m2cgen, two interoperability formats, PMML and Open Neural Network Exchange (ONNX), and one runtime environment, ONNX Runtime, to deploy an already trained model on a device with limited resources. Results demonstrate that ONNX Runtime was the only machine learning tool with a perfect match with the original models' sample classifications. An analysis of the time required to execute this task revealed that ONNX Runtime is faster than Scikit-Learn with the Isolation Forest (ISO), One-Class Support Vector Machine (OCSVM), and Stochastic Gradient Descent One-Class Support Vector Machine (SGDOCSVM) models, and slower with the Local Outlier Factor (LOF) model.
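    A minimal sketch of this deployment path follows; it assumes the installed skl2onnx version provides an IsolationForest converter, and it is not the code used in the thesis.
        import numpy as np
        from sklearn.ensemble import IsolationForest
        from skl2onnx import to_onnx
        import onnxruntime as ort

        X_train = np.random.default_rng(0).normal(size=(500, 8)).astype(np.float32)
        model = IsolationForest(random_state=0).fit(X_train)

        # Export the trained model to ONNX, inferring the input signature from a sample.
        onnx_model = to_onnx(model, X_train[:1])
        session = ort.InferenceSession(onnx_model.SerializeToString(),
                                       providers=["CPUExecutionProvider"])

        X_new = np.random.default_rng(1).normal(size=(10, 8)).astype(np.float32)
        input_name = session.get_inputs()[0].name
        onnx_labels = session.run(None, {input_name: X_new})[0]   # 1 = inlier, -1 = outlier
        print("matches Scikit-Learn:", np.array_equal(onnx_labels.ravel(), model.predict(X_new)))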