Browse by Publication Date, starting at "2025-10-27"
Showing 1 - 10 of 11
- Transferências Assíncronas de Dados Não-Finitos através de Data Spaces (Publication). CORREIA, AFONSO MIGUEL FERNANDES; Coelho, Jorge Manuel Neves. Data has become an increasingly vital asset for organizations, as the volume of information being generated reaches an all-time high. This growth affects not only individual enterprises but also entire supply chains, with data sharing becoming key to organizational success. Yet data exchange remains a complex endeavor, as businesses must comply with strict security, sovereignty, and privacy requirements. Data Spaces were introduced to address these challenges, fostering ecosystems of trust where organizations can share information while preserving data sovereignty. At the core of this concept lies the Connector, a key component that acts as the gateway for data flowing between Data Space participants. Connectors support both synchronous and asynchronous transfers, handling either finite or non-finite data. A prominent initiative in this field is the Eclipse Dataspace Components (EDC) project, an open-source framework for building Data Space components. Although this project aims to establish data sharing environments where any type of transfer is possible, it lacked support for asynchronous non-finite data transfers. The goal of this dissertation is to develop a new functionality for the EDC project that enables this type of data transfer, addressing this gap. As part of this work, interviews were conducted with members of an organization participating in the Catena-X Data Space. These revealed that asynchronous non-finite data transfers were already taking place through workarounds that were either complex, costly, or incompatible with the core principles of Data Spaces. The interviewees also shared their expectations of how these transfers should occur instead, allowing for the definition of technical requirements.
Building on the insights gathered from the interviews, a design for handling asynchronous non-finite data transfers was proposed to the EDC project. This proposal was discussed within the community and adjusted until a consensus was reached. As a result, the final approach introduces three contributions to the project: a method to identify non-finite data, a service to perform asynchronous non-finite data transfers, and a mechanism to trigger transfers on demand. Following this design, a solution was developed to fulfill the defined requirements and tackle the identified problem. Its outcomes were presented to the participants of the interview process, receiving positive feedback. On a technical level, the EDC maintainers approved the contributions, including the feature in the upcoming release. In the end, the defined objectives were achieved. Besides contributing the developed feature, this dissertation also provides a detailed overview of Data Spaces and the initiatives in this field. In conclusion, continued development of projects such as the EDC is essential to realize the benefits of Data Spaces, creating value for organizations seeking to thrive in this data-driven ecosystem.
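The three contributions described above (identifying non-finite data, streaming it asynchronously, and triggering transfers on demand) can be sketched with Python's asyncio. Every name in this sketch is a hypothetical stand-in for illustration, not part of the EDC codebase:

```python
import asyncio

async def non_finite_source():
    """Open-ended stream: yields readings with no natural end (non-finite data)."""
    seq = 0
    while True:
        yield {"seq": seq, "value": seq * 0.5}
        seq += 1
        await asyncio.sleep(0)  # hand control back to the event loop

async def transfer(trigger: asyncio.Event, received: list, limit: int):
    """Consumer: starts only when triggered, stops on an external condition."""
    await trigger.wait()  # on-demand start of the transfer
    async for msg in non_finite_source():
        received.append(msg)
        if len(received) >= limit:  # the consumer, not the source, decides to stop
            break

async def main():
    trigger, received = asyncio.Event(), []
    task = asyncio.create_task(transfer(trigger, received, limit=3))
    trigger.set()  # the "trigger transfers on demand" mechanism
    await task
    return received

print(asyncio.run(main()))
```

The defining property of a non-finite transfer shows up in the consumer: because the source never terminates on its own, the stop condition must live on the receiving side or in an external trigger.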
- Otimização Inteligente na Gestão de Armazéns: Aplicação de Machine Learning para Previsão de Quantidades e Alocação Eficiente (Publication). FRANCO, ÁLVARO JOSÉ FERNANDES; Faria, Luiz Felipe Rocha de. This thesis proposes an integrated artificial intelligence framework to optimize warehouse operations through two complementary tasks: forecasting daily product outflows and solving the 3D Bin Packing Problem for space-efficient storage allocation. In the first stage, machine learning techniques are applied to historical warehouse movement data to predict the total quantity of products expected to be transferred each day. In the second stage, the predicted volumes are packed into constrained three-dimensional storage bins using reinforcement learning-based methods. A custom OpenAI Gym environment is developed to simulate realistic packing conditions, including box rotation, collision detection, stacking constraints, and compactness rewards. The agent learns packing strategies through interaction with the environment and is evaluated against traditional heuristic baselines. The main contributions of this work include the development of a reinforcement learning-based environment, carefully designed reward functions that encourage efficient packing behavior, and the integration of product forecasting with spatial decision-making. Together, these elements form a complete pipeline that turns historical warehouse data into smart, automated decisions for daily storage planning.
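As a point of reference for the "traditional heuristic baselines" mentioned above, a minimal greedy first-fit packer might look like the sketch below. The box dimensions, bin size, and corner-point candidate set are assumptions for illustration, and rotation is deliberately omitted:

```python
def fits(pos, box, placed, bin_dims):
    """True if an axis-aligned box at pos stays in the bin and overlaps nothing."""
    x, y, z = pos
    w, d, h = box
    W, D, H = bin_dims
    if x + w > W or y + d > D or z + h > H:
        return False
    for (px, py, pz), (pw, pd, ph) in placed:
        # reject if the two cuboids overlap on all three axes
        if x < px + pw and px < x + w and \
           y < py + pd and py < y + d and \
           z < pz + ph and pz < z + h:
            return False
    return True

def greedy_pack(boxes, bin_dims):
    """Largest-first greedy: place each box at the first corner point that fits."""
    placed = []
    for box in sorted(boxes, key=lambda b: b[0] * b[1] * b[2], reverse=True):
        # candidate anchors: the origin plus the corners of already-placed boxes
        candidates = [(0, 0, 0)]
        for (px, py, pz), (pw, pd, ph) in placed:
            candidates += [(px + pw, py, pz), (px, py + pd, pz), (px, py, pz + ph)]
        for pos in sorted(candidates):
            if fits(pos, box, placed, bin_dims):
                placed.append((pos, box))
                break
    return placed

packed = greedy_pack([(2, 2, 2), (2, 2, 2), (1, 1, 1)], bin_dims=(4, 4, 4))
print(len(packed))  # all three boxes fit in the 4x4x4 bin
```

An RL agent is typically scored against exactly this kind of heuristic on metrics such as the number of boxes placed and the fill ratio of the bin.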
- De Corpo e Alma: o cinema em movimento e a subjetividade dos elementos narrativos (Publication). Machado, Beatriz Pereira; Pinheiro, José Alberto. The Fátima phenomenon must be understood as more than a popular religious manifestation. It was a fundamental element in the context of war, political conflict, and cultural transformation in Portugal. The apparitions of Our Lady of Fátima to the three little shepherds in 1917 divided opinion between believers and sceptics, but quickly became a national and international symbol of contemporary Catholic life and a space of spiritual and social convergence. The documentary seeks to convey the power of this phenomenon and to follow the intimate and symbolic journey of the pilgrim, one who believes in something greater than ourselves, one who seeks answers. At times these answers come in the form of sensations and experiences that only those who walk seem to understand. "De Corpo e Alma" sets out to reflect, through the experience of a group of pilgrims from Santo Tirso, on the universal meaning of faith and on how it motivates them to complete the challenge they set for themselves. Their journey mirrors that of thousands of pilgrims who undergo the same physical effort and process of inner transformation. It is on this passage from the individual to the collective, from the local to the universal, that the documentary seeks to focus. The process included a research phase, notably on the history of Fátima and its miracles, as well as interviews with pilgrims, members of the organization, and a psychologist. The documentary concludes that the pilgrimage is much more than the arrival: it is a path that transforms at the intellectual and emotional level. Each step represents the strength we place in something that is not tangible, not material, not even visible, and that can be just as powerful, or more.
- Visão robótica na análise automatizada do pé diabético (Publication). BASTOS, ANTÓNIO PEDRO ALMEIDA; Coelho, Luís Filipe Martins Pinto. Diabetes mellitus (DM), commonly known as diabetes, is a highly prevalent chronic disease and one of the leading causes of mortality and morbidity. Left untreated, it can lead to serious complications such as the diabetic foot, which makes early diagnosis essential. In clinical practice, diagnostic methods exist, such as diabetic foot sensitivity tests with the Semmes-Weinstein (SW) monofilament, whose repetitive and simple nature makes their automation advantageous. In this dissertation, a diabetic foot analysis system was developed as a robot end-effector, enabling a safe approach to the patient and supporting sensor-based data collection. Initially, for the vision system that estimates the location of the points under evaluation, the UNet and UNet 3+ convolutional neural networks were used to segment images of the plantar region. Subsequently, an instrumented end-effector was developed as a system for analysing parameters such as force and temperature, using a force transducer and an MLX90614 thermometer, with data transmitted to a PC. Tests were carried out to validate the system's operation and to study the influence of taring before taking measurements. Finally, a Python program was developed to control the UR3e robot, automating the sensitivity test with the end-effector attached. The UNet architecture showed better segmentation performance, with higher F1-Score and Jaccard index and a better balance between precision and recall, whereas UNet 3+ exhibited instability and a strong dependence on the dataset. In the transducer tests, taring proved essential to eliminate the initial offset and ensure more reliable and reproducible measurements.
The robotic system developed demonstrated the feasibility of automating the Semmes-Weinstein monofilament test with the UR3e robot, although limitations remain to be addressed in future work. The project demonstrated the potential of collaborative robot applications in the biomedical context, automating processes and collecting data of interest through analysis systems such as end-effectors. It thus constitutes a starting point for future work that is more efficient and adaptable to constantly evolving clinical contexts.
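The two segmentation metrics compared above (F1-Score and the Jaccard index) have simple closed forms on binary masks. The sketch below is a generic illustration, not the dissertation's evaluation code:

```python
def confusion(pred, truth):
    """True positives, false positives, false negatives for flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(not p and t for p, t in zip(pred, truth))
    return tp, fp, fn

def f1_score(pred, truth):
    tp, fp, fn = confusion(pred, truth)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def jaccard(pred, truth):
    tp, fp, fn = confusion(pred, truth)
    return tp / (tp + fp + fn) if tp else 0.0

pred  = [1, 1, 0, 1, 0, 0]  # predicted mask (flattened)
truth = [1, 0, 0, 1, 1, 0]  # ground-truth mask (flattened)
print(f1_score(pred, truth))  # 2*2 / (2*2 + 1 + 1) = 2/3
print(jaccard(pred, truth))   # 2 / (2 + 1 + 1) = 0.5
```

Note that the two metrics are monotonically related (F1 = 2J / (1 + J)), so a model that ranks higher on one ranks higher on the other; reporting both mainly changes the scale of the differences.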
- Framework de Segurança para Kubernetes (Publication). BARBOSA, DIOGO DA COSTA; Nogueira, Luis Miguel Pinho. This document follows the entire process of developing the Master's Thesis written for the completion of the Master's Degree in Computer Engineering at the Higher Institute of Engineering of Porto. Considering that the Master's degree specializes in Cybersecurity and Systems Administration, and that one of the most significant emerging technologies in recent years is Kubernetes, which is closely linked to the current systems administration scenario, it made perfect sense to develop something in the area of Kubernetes security. In recent years, more and more companies have adopted other styles of system development, as conventional monolithic systems have begun to show limitations. Thus, new architectures for computer systems have emerged, one of the most prominent in the market in recent years being Microservices. This architecture divides systems into small services, each of which operates without depending on any other service. Adopting it was quite complicated, however, until solutions appeared that could work according to the principles of Microservices. The solution for better implementing Microservices lies in containers. These provide a higher level of abstraction over the system where they run, making it possible to simulate different types of systems on the same machine. Containers complement Microservices, as each of the independent services developed is executed in a different container. This achieves another level of independence, as technological dependencies practically cease to exist, since each container runs in its own isolated environment.
However, when large companies began their journey to migrate to Microservices, they noticed that the larger the system, the greater the number of containers required; and since each container runs independently, with different containers often running the same service (different instances), managing these containers was becoming quite difficult. As a result, container management tools began to be developed, with some companies building their own internal solutions. The best example is Google, which currently has two internal container management tools and used them as a basis to create the largest open-source container management tool. Kubernetes was launched by Google in 2014 and immediately received strong support from the entire community, which readily contributed its knowledge to the improvements that would come over the next eleven years. Despite the keen interest of a large community and many companies, it is still considered a recent technology. Kubernetes brought a new vision of containers: in reality it is not containers that are executed, but Pods, which can be considered improved containers. A basic Kubernetes deployment groups several Nodes, which can be virtual machines, physical machines, or even cloud instances. One of the Nodes will be the main one, hosting the Control Plane, which is considered the brain of a Kubernetes cluster. The Control Plane consists of several components with different tasks; all communication between Pods, and even with the outside world, passes through the API Server. Another component monitors all running Pods and compares them with the desired state for the cluster, which in turn is stored in yet another Control Plane component. Although Kubernetes seems like an excellent solution for migrating to microservices, companies are beginning to fear security flaws that may exist.
But even more worrying for these same companies is knowing whether they are maintaining the same level of security maturity in their systems. This problem is the main focus of this work, since the intended result is a security checklist to be applied to clusters in order to determine a cluster's security maturity level and how it can be improved. Throughout the document, an in-depth analysis of Kubernetes will be carried out in order to understand the critical points of a Kubernetes cluster that must be examined to ensure its security. With this analysis completed, a checklist will be drawn up, providing information on how to perform each verification and, where necessary, an explanation of how to improve these points.
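A checklist of the kind proposed above could be evaluated mechanically against a cluster description. The checks, field names, and maturity thresholds below are purely illustrative assumptions, not the checklist this work defines:

```python
# Each entry maps a check name to a predicate over a cluster description.
# All check names, fields, and thresholds are hypothetical examples.
CHECKS = {
    "api_server_anonymous_auth_disabled":
        lambda c: not c.get("anonymous_auth", True),
    "rbac_enabled":
        lambda c: c.get("authorization_mode") == "RBAC",
    "etcd_encryption_at_rest":
        lambda c: c.get("etcd_encrypted", False),
    "network_policies_defined":
        lambda c: c.get("network_policies", 0) > 0,
}

def maturity(cluster: dict):
    """Return a coarse maturity level and the list of failed checks."""
    failed = [name for name, check in CHECKS.items() if not check(cluster)]
    ratio = 1 - len(failed) / len(CHECKS)
    level = "high" if ratio >= 0.75 else "medium" if ratio >= 0.5 else "low"
    return level, failed

cluster = {
    "anonymous_auth": False,
    "authorization_mode": "RBAC",
    "etcd_encrypted": False,
    "network_policies": 3,
}
level, failed = maturity(cluster)
print(level, failed)  # the failed list tells the operator what to improve next
```

The value of such a structure is that the same checklist yields both a comparable maturity score and a concrete remediation list, which matches the document's two stated goals.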
- Desenvolvimento e Avaliação de um Sistema para Deteção de Peixes Juvenis e Plâncton em Ecossistemas Aquáticos (Publication). BARBOSA, DIOGO SAMUEL SEIXAS; Silva, Eduardo Alexandre Pereira da; Martins, Alfredo Manuel Oliveira. This thesis aims at the development and evaluation of two systems, including a system for the efficient detection of plankton and juvenile fish species in aquatic ecosystems. The proposed technology seeks to improve the precision and effectiveness of existing detection techniques, provide a new product design, and offer operating modes such as mooring as well as an autonomous mode. The proposed system incorporates advances in optics, detection methods, and image processing to deliver a real-time, high-resolution solution. The thesis addresses technical aspects related to camera selection, including the choice of sensors, suitable lighting systems, and advanced algorithms for the automated identification and classification of the organisms mentioned. In addition, the system will be validated in the laboratory and tested in real field conditions. The expected results of this research will provide a more comprehensive and detailed understanding of the distribution and dynamics of plankton and juvenile fish, among other species.
- Automatização da Entrega do Produto ao Cliente (Publication). PISCO, PETRA LOPES; Sousa, Paulo Alexandre Gandra de. This dissertation investigates the manual deployment process of PlexHub, a modular, microservices-oriented system developed by PlexIT for the wine sector. The current, entirely manual model involves repetitive tasks and customized configurations, such as per-client configuration changes, database migrations, and direct installation on virtual machines, resulting in inefficiency, slowness, and high vulnerability to errors. A solution based on DevOps practices is therefore proposed, focused on automation through CI/CD pipelines and using widely adopted tools such as GitHub Actions, Docker, and Docker Compose. The goal is to increase reliability, reduce installation time, and ensure greater consistency across environments. To validate the proposed approach, a test environment was developed that simulates the real conditions of the clients' virtual machines. The results demonstrate a significant improvement in deployment time. This study aims not only to optimize PlexIT's internal processes but also to offer a valuable contribution to the research and application of DevOps practices in business contexts with limited resources.
- Aprendizagem por reforço robusta baseada em visão para navegação de UAVs em parques fotovoltaicos (Publication). CAMPANHÃ, JOÃO FERREIRA; Malheiro, Maria Benedita Campos Neves; Pinto, Andry Maykol. This dissertation proposes and validates a robust Reinforcement Learning (RL) method for visual navigation of Unmanned Aerial Vehicles (UAVs) tasked to inspect floating photovoltaic panel arrays in the Alqueva reservoir, using simulation-based development and testing. Panel inspection requires low-altitude flights, and the dynamic nature of the floating environment renders waypoint-based planning ineffective, requiring the method to operate under varied conditions and resist visual disturbances. To address these challenges, the study compares two feature extraction architectures: a vision-based model and a multimodal data model that combines visual data with numerical inputs, including actions and velocities. The Soft Actor-Critic (SAC) policy was selected to process the latent state produced by the feature extractors. Following training with domain randomization, results showed that the multimodal model combining visual and action inputs outperforms the other variants in accuracy, control, and task completion. However, its robustness to visual perturbations remained somewhat limited. To address this shortcoming, the domain randomization was refined, the model retrained with appropriate regularization, and the hyperparameters tuned, significantly improving robustness at the cost of a slight reduction in overall performance. This work contributes a modular simulation pipeline for training and validation, a comparative analysis between models exploring unimodal and multimodal data, and practical insights into the accuracy-robustness trade-off in RL. Domain randomization and data multimodality were fundamental to improving model performance and generalization.
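Domain randomization, which the abstract identifies as fundamental, amounts to sampling perturbed environment parameters at the start of each training episode so the policy cannot overfit one rendering of the scene. The parameter names and ranges below are invented for illustration and do not come from the dissertation:

```python
import random

def randomize_episode(rng: random.Random) -> dict:
    """Sample one episode's environment perturbations (hypothetical parameters)."""
    return {
        "sun_intensity": rng.uniform(0.4, 1.6),     # lighting variation
        "water_hue_shift": rng.uniform(-0.1, 0.1),  # colour perturbation
        "camera_noise_std": rng.uniform(0.0, 0.05), # simulated sensor noise
        "panel_drift_mps": rng.uniform(0.0, 0.3),   # floating panels move
    }

rng = random.Random(42)  # seeded for reproducible training runs
episodes = [randomize_episode(rng) for _ in range(100)]

# every sampled parameter stays inside its declared range
assert all(0.4 <= e["sun_intensity"] <= 1.6 for e in episodes)
print(len(episodes))
```

Widening these ranges is the "refined domain randomization" lever the abstract describes: broader ranges improve robustness to visual disturbances, at the cost of making the training task harder, which is one source of the reported accuracy-robustness trade-off.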
- Migrar aplicações REST para gRPC (Publication). AMARAL, HUGO MIGUEL MENDES; Azevedo, Isabel de Fátima Silva. The communication between services in distributed, microservice-based architectures has traditionally been supported by REST, due to its simplicity, widespread adoption, and easy integration with multiple platforms. However, as systems grow in scale and complexity, relevant limitations of REST emerge: its contracts are hard to enforce, it is built on top of HTTP/1.x and uses human-readable text formats, which is inefficient for service-to-service interactions, and it lacks well-defined, strongly typed service definitions. These limitations become even clearer in systems that require high performance, low latency, and strong consistency in service communication. Google's gRPC presents itself as an alternative capable of filling those gaps. In particular, gRPC outperforms the traditional REST paradigm, particularly in inter-service communication within microservices architectures, in key metrics such as throughput, response time, bandwidth efficiency, and bi-directional streaming. Moreover, as gRPC uses Protocol Buffers to define services, gRPC service contracts clearly define the types that will be used for interaction between applications. This helps overcome the runtime and interoperability errors that typically arise when applications are built by multiple teams with different technologies. However, adopting gRPC in already developed REST systems is not a straightforward process. It involves technical and organizational challenges, ranging from ensuring the compatibility of clients and servers during the transition to the lack of established good practices. In addition, information about how to effectively migrate REST applications to gRPC is extremely limited, as there is a significant gap in the academic literature regarding a methodical and strategic way to perform such a migration.
The work's objective, at a high level, is to explore, design, and compare approaches for migrating REST projects to gRPC, implement one or more of the developed strategies, and provide an extensive analysis of the results. That is achieved by delving particularly into gRPC, but also studying other frameworks, dissecting their differences, and identifying their benefits and downsides and what led to their adoption. In the end, the final goal is to provide helpful insights and guidelines for engineering teams considering the modernization of their communication protocols, contributing to the broader discussion about the transformation and progression of distributed systems and API architectures. To that end, a systematic literature review was conducted and then complemented by an analysis of real cases in industry, in order to identify methodologies, challenges, tools, and good practices relevant to the migration process. The systematic review revealed a lack of scientific sources directly focused on migration from REST to gRPC, with contributions from technical blogs of companies such as WePay, Google Cloud, and LinkedIn proving very helpful. Those cases showed that a migration can be successful with a proper plan, adoption of gradual strategies, automation tools, and thorough testing. The adopted methodology is based on the practical application of the obtained knowledge by migrating an open-source project that represents modern microservice-based architectures. The process began with a detailed analysis of each service, endpoint mapping, and understanding the interaction between components. Then, the gRPC contracts were defined following strict structure and naming conventions and centralized in a separate repository for that purpose, in order to make management, versioning, and integration easier. The automatic generation of code for servers, clients, and the gateway was done using the Protocol Buffer Compiler and the necessary plugins for Java and Go.
During the migration, special attention was given to the coexistence of REST and gRPC, resorting to the gRPC Gateway to ensure that REST clients kept working without any changes. This approach allowed a gradual and smooth transition while minimizing the risks. The validation of the migration's success was performed with automatic and manual tests, especially to ensure functional equivalence after the process was complete. The experience also demonstrated the importance of automation mechanisms in CI pipelines for the generation, publishing, and validation of stubs. Among the challenges faced, the lack of a formal API specification for the REST APIs, which required a manual analysis of the code, and the low test coverage stand out. Nevertheless, the acquired experience allowed the definition of a clear set of guidelines to support engineering teams planning a transition. The results showed that the migration of REST systems to gRPC is viable and beneficial, as long as it is thoroughly planned and supported by a good level of automation and sound engineering practices. The adoption of bridge tools and contract centralization minimizes the risks and ensures the operational continuity of the systems during the migration. The experience also highlights the need for as much documentation, testing, and automation as possible.
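One of the claims above, that binary, schema-driven encodings are more bandwidth-efficient than human-readable text, can be illustrated without gRPC itself, using Python's json and struct modules as stand-ins for the two wire formats. The field layout is an assumption made for the example:

```python
import json
import struct

order = {"order_id": 123456, "quantity": 42, "unit_price": 19.99}

# Text encoding: field names travel with every single message.
as_json = json.dumps(order).encode("utf-8")

# Binary encoding: the schema (field order and types) is agreed upon out of
# band, as with a .proto contract, so only the values are sent:
# uint32 + uint16 + float64 = 14 bytes, little-endian, no padding.
as_binary = struct.pack("<IHd",
                        order["order_id"],
                        order["quantity"],
                        order["unit_price"])

print(len(as_json), len(as_binary))  # the binary payload is several times smaller
```

This is the same trade-off Protocol Buffers make: compactness and strong typing in exchange for requiring both sides to share the contract, which is why the migration above centralizes contracts in a dedicated repository.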
- Relatórios automatizados de ocorrências públicas por meio de reconhecimento de voz (Publication). ALBERGARIA, JOÃO TOMÁS PEREIRA SOARES DE; Sousa, Paulo Alexandre Gandra de. This dissertation explores the potential of Voice over Internet Protocol (VoIP) technology integrated with process automation mechanisms, through the development of a mock application that simulates incident reporting and management scenarios. The objective is not to modernize communication systems in real use, but to demonstrate, in a controlled environment, how the convergence of VoIP, automatic speech recognition (ASR), text-to-speech (TTS), and language models can create communication flows that are faster, more accurate, and less dependent on human intervention. The proposed architecture is based on Asterisk, used as the main PBX and integrated with UniMRCP for connection to ASR and TTS services, while GPT-4 provides advanced text processing. The mock application simulates VoIP calls and web interactions, providing automatic incident triage and the generation of structured records. The results show that integrating these technologies reduces delays, eliminates manual inconsistencies, and provides organized data for real-time analysis. Although not intended to replace critical systems, the prototype confirms the value of VoIP as a driver of automation and innovation in digital communications.
