Title: LIME: Optimising the creation of explanations
Authors: Carneiro, Davide Rua; Pereira, João Tiago Moreira
Date: 2024-12-17
Year: 2024
Type: Master thesis
Language: English
URI: http://hdl.handle.net/10400.22/26889
TID: 203759923
Keywords: LIME; Optimisation; Machine Learning; Computer Vision; Explainability; Defect Detection; Manufacturing

Abstract: Explainable Artificial Intelligence (XAI) techniques are increasingly necessary for ensuring trust in and acceptance of complex machine learning models across various fields. One widely used XAI method, Local Interpretable Model-agnostic Explanations (LIME), is particularly popular for image-based explanations, but faces challenges in terms of speed, accuracy, and applicability across contexts. An improvement to LIME is proposed to optimise its performance, yielding faster training times and better prediction accuracy, with a focus on finding an alternative machine learning algorithm that outperforms the surrogate currently used by LIME. Additionally, this project defines and explores metrics derived from LIME explanations that help evaluate the quality of image classification models, even in concept-drift scenarios where labelled data may be scarce. These metrics are validated against human feedback, identifying four key metrics that could prove useful for automated systems assessing model outputs. Furthermore, in domains such as manufacturing, LIME explanations must be adapted to context-specific challenges. In the case of defect detection in the textile industry, the perturbation generation process used by LIME can mislead the underlying model and produce poor explanations. A methodology is proposed to mitigate this issue, supporting more accurate and contextually relevant explanations that can enhance decision-making and human-centric approaches in industrial scenarios.
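
As an illustration of the perturbation-based process the abstract refers to, the following is a minimal sketch of producing a LIME image explanation with the lime Python package; the predict_fn stand-in and the random input image are hypothetical placeholders, not components of this thesis.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Hypothetical stand-in classifier: any callable mapping a batch of
# RGB images (N, H, W, 3) to class probabilities (N, num_classes);
# a trained CNN would take its place in practice.
def predict_fn(images):
    rng = np.random.default_rng(0)
    return rng.dirichlet(np.ones(2), size=len(images))

image = np.random.rand(64, 64, 3)  # placeholder input image

explainer = lime_image.LimeImageExplainer()
# LIME segments the image into superpixels, generates perturbed copies
# by hiding random subsets of them, queries the model on each copy, and
# fits a weighted local surrogate model; this perturbation step is the
# one the abstract identifies as problematic for textile defect images.
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, hide_color=0, num_samples=1000
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img, mask)  # outline the explanation regions

Both the speed concern (num_samples model queries per explanation) and the surrogate-model choice mentioned in the abstract are visible in this flow: the explainer's cost is dominated by querying the classifier on the perturbed samples, and the quality of the result depends on the local model fitted to them.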