| Name | Description | Size | Format |
|---|---|---|---|
| DM_JoãoPereira_MEI_2024 | | 19.61 MB | Adobe PDF |
Abstract
Explainable Artificial Intelligence (XAI) techniques are increasingly necessary for ensuring
trust and acceptance of complex machine learning models across various fields. One
widely used XAI method, Local Interpretable Model-agnostic Explanations (LIME), is
especially common for image-based explanations, but it faces challenges in speed,
accuracy, and applicability across different contexts.
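For orientation, the sketch below shows how LIME is typically invoked on an image classifier with the reference `lime` Python package. The random `image` and `predict_fn` stand-ins are assumptions added so the example runs end to end; in practice `predict_fn` would wrap the real model's batch prediction.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Stand-ins so the sketch is self-contained: a random image and a fake
# 3-class classifier returning normalised probabilities.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)

def predict_fn(images: np.ndarray) -> np.ndarray:
    p = rng.random((len(images), 3))
    return p / p.sum(axis=1, keepdims=True)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,             # H x W x 3 array to explain
    predict_fn,        # maps a batch of images to class probabilities
    top_labels=2,      # explain the most likely classes
    hide_color=0,      # masked superpixels are filled with this value
    num_samples=1000,  # perturbed samples used to fit the local surrogate
)

# Superpixels supporting the top predicted class, with boundaries drawn.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img / 255.0, mask)
```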
An improvement to LIME is proposed to optimize its performance, delivering faster training
times and better prediction accuracy, with a focus on finding an alternative machine
learning algorithm that outperforms the surrogate model LIME currently uses. Additionally, this
project defines and explores metrics derived from LIME explanations that can help evaluate
the quality of image classification models, even in concept-drift scenarios where labeled
data may be scarce. These metrics are validated against human feedback, and four
key metrics are identified that could prove useful for automated systems to assess model outputs.
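The reference lime implementation fits a Ridge regression surrogate by default, and `explain_instance` accepts a `model_regressor` argument, so an alternative surrogate can be dropped in. The sketch below (continuing from the previous one) uses `SGDRegressor` purely as an illustration, not as the algorithm selected in the thesis, and the concentration score at the end is a hypothetical quality signal, not one of the four metrics identified.

```python
from sklearn.linear_model import SGDRegressor

# Any regressor exposing fit/predict/score (with sample_weight support)
# and a coef_ attribute can replace LIME's default Ridge surrogate.
surrogate = SGDRegressor(max_iter=1000, tol=1e-3)

explanation = explainer.explain_instance(
    image,
    predict_fn,
    top_labels=1,
    num_samples=500,
    model_regressor=surrogate,  # swap in the alternative surrogate
)

# Hypothetical quality signal: how concentrated the explanation weight
# is on the top 5 superpixels for the predicted class.
weights = [abs(w) for _, w in explanation.local_exp[explanation.top_labels[0]]]
concentration = sum(sorted(weights, reverse=True)[:5]) / sum(weights)
```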
Furthermore, in domains like manufacturing, LIME explanations must be adapted to
context-specific challenges. In the case of defect detection in the textile industry, the perturbation
generation process used by LIME can mislead the underlying model, producing
poor explanations. A methodology is proposed to mitigate this issue, supporting more
accurate and contextually relevant explanations that can enhance decision-making and
human-centric approaches in industrial scenarios.
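One plausible way to implement such a context-aware perturbation step, sketched below, is to fill "switched off" superpixels with defect-free fabric texture rather than a flat colour, so perturbed samples stay closer to the data the model was trained on. This is an illustration of the idea, not necessarily the exact methodology proposed in the thesis; `background` (a defect-free image of the same fabric, same shape as `image`) is an assumption.

```python
import numpy as np
from skimage.segmentation import slic

def texture_perturb(image, background, n_segments=50, p_off=0.5, seed=0):
    """Perturb an image LIME-style, but copy defect-free fabric texture
    into 'switched off' superpixels instead of a flat grey or black fill."""
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=n_segments, start_label=0)
    off = rng.random(segments.max() + 1) < p_off  # superpixels to replace
    perturbed = image.copy()
    for sp in np.flatnonzero(off):
        mask = segments == sp
        perturbed[mask] = background[mask]  # texture fill, not a constant
    return perturbed, off
```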
Keywords
LIME; Optimization; Machine Learning; Computer Vision; Explainability; Defect Detection; Manufacturing
