Details
| Original language | English |
|---|---|
| Title of host publication | CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management |
| Pages | 2482-2492 |
| Number of pages | 11 |
| ISBN (electronic) | 9798400720406 |
| Publication status | Published - 10 Nov 2025 |
| Event | 34th ACM International Conference on Information and Knowledge Management, CIKM 2025, Seoul, South Korea; 10 Nov 2025 → 14 Nov 2025 |
Publication series
| Name | CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management |
|---|---|
Abstract
Explainable Artificial Intelligence (XAI) methods, such as Local Interpretable Model-Agnostic Explanations (LIME), have advanced the interpretability of black-box machine learning models by approximating their behavior locally using interpretable surrogate models. However, LIME's inherent randomness in perturbation and sampling can lead to locality and instability issues, especially in scenarios with limited training data. In such cases, data scarcity can result in the generation of unrealistic variations and samples that deviate from the true data manifold. Consequently, the surrogate model may fail to accurately approximate the complex decision boundary of the original model. To address these challenges, we propose a novel Instance-based Transfer Learning LIME framework (ITL-LIME) that enhances explanation fidelity and stability in data-constrained environments. ITL-LIME introduces instance transfer learning into the LIME framework by leveraging relevant real instances from a related source domain to aid the explanation process in the target domain. Specifically, we employ clustering to partition the source domain into clusters with representative prototypes. Instead of generating random perturbations, our method retrieves pertinent real source instances from the source cluster whose prototype is most similar to the target instance. These are then combined with the target instance's neighboring real instances. To define a compact locality, we further construct a contrastive learning-based encoder as a weighting mechanism to assign weights to the instances from the combined set based on their proximity to the target instance. Finally, these weighted source and target instances are used to train the surrogate model for explanation purposes. Experimental evaluation with real-world datasets demonstrates that ITL-LIME greatly improves the stability and fidelity of LIME explanations in scenarios with limited data. Our code is available at https://github.com/rehanrazaa/ITL-LIME.
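The abstract outlines a concrete pipeline: cluster the source domain into prototype-led clusters, retrieve the real instances from the cluster whose prototype is nearest the target instance, pool them with the target instance's real neighbours, weight the pool by proximity in a learned embedding, and fit a weighted surrogate. The sketch below shows one way those steps might compose. It is not the authors' implementation (see the linked repository for that): the choice of k-means, the exponential proximity kernel, and the `encoder` and `black_box` callables are illustrative assumptions, with `encoder` standing in for the paper's contrastive-learning-based encoder.

```python
# Hypothetical sketch of the ITL-LIME pipeline described in the abstract.
# All names and hyperparameters are illustrative; the authors' actual code
# is at https://github.com/rehanrazaa/ITL-LIME.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.neighbors import NearestNeighbors

def itl_lime_explain(x, X_source, X_target, black_box, encoder,
                     n_clusters=10, n_neighbors=20, kernel_width=1.0):
    """Explain black_box(x) with a local surrogate fitted on real instances.

    encoder: callable mapping an array of instances to an embedding space;
    the paper trains it with contrastive learning, here it is any
    stand-in (assumption). black_box: callable returning the score of the
    class being explained for each row (assumption).
    """
    # 1) Partition the source domain into clusters with prototypes
    #    (k-means centroids serve as the representative prototypes here).
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X_source)
    prototypes = km.cluster_centers_

    # 2) Instead of random perturbation, retrieve the real source instances
    #    from the cluster whose prototype is closest to the target instance.
    nearest = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    X_src = X_source[km.labels_ == nearest]

    # 3) Combine them with the target instance's neighbouring real instances.
    nn = NearestNeighbors(n_neighbors=min(n_neighbors, len(X_target))).fit(X_target)
    X_tgt = X_target[nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]]
    X_local = np.vstack([X_src, X_tgt])

    # 4) Weight each pooled instance by its proximity to x in the encoder's
    #    embedding space (an exponential kernel, as in vanilla LIME).
    d = np.linalg.norm(encoder(X_local) - encoder(x.reshape(1, -1)), axis=1)
    weights = np.exp(-(d ** 2) / kernel_width ** 2)

    # 5) Fit the interpretable surrogate on the black-box's predictions.
    surrogate = Ridge(alpha=1.0).fit(X_local, black_box(X_local),
                                     sample_weight=weights)
    return surrogate.coef_  # local feature attributions for x
```

As in vanilla LIME, the surrogate's coefficients act as local feature attributions; what changes is that the surrogate is trained on weighted real instances drawn from the source cluster and the target neighbourhood rather than on random perturbations.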
ASJC Scopus subject areas
- Decision Sciences (all)
- Information Systems and Management
- Computer Science (all)
- Computer Science Applications
- Information Systems
Cite this
Raza, R., Wang, G., Wong, K. W., Laga, H., & Fisichella, M. (2025). ITL-LIME. In CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management (pp. 2482-2492). (CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management). https://doi.org/10.1145/3746252.3761183
Publication: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - ITL-LIME
T2 - 34th ACM International Conference on Information and Knowledge Management, CIKM 2025
AU - Raza, Rehan
AU - Wang, Guanjin
AU - Wong, Kok Wai
AU - Laga, Hamid
AU - Fisichella, Marco
N1 - Publisher Copyright: © 2025 Copyright held by the owner/author(s).
PY - 2025/11/10
Y1 - 2025/11/10
N2 - Explainable Artificial Intelligence (XAI) methods, such as Local Interpretable Model-Agnostic Explanations (LIME), have advanced the interpretability of black-box machine learning models by approximating their behavior locally using interpretable surrogate models. However, LIME's inherent randomness in perturbation and sampling can lead to locality and instability issues, especially in scenarios with limited training data. In such cases, data scarcity can result in the generation of unrealistic variations and samples that deviate from the true data manifold. Consequently, the surrogate model may fail to accurately approximate the complex decision boundary of the original model. To address these challenges, we propose a novel Instance-based Transfer Learning LIME framework (ITL-LIME) that enhances explanation fidelity and stability in data-constrained environments. ITL-LIME introduces instance transfer learning into the LIME framework by leveraging relevant real instances from a related source domain to aid the explanation process in the target domain. Specifically, we employ clustering to partition the source domain into clusters with representative prototypes. Instead of generating random perturbations, our method retrieves pertinent real source instances from the source cluster whose prototype is most similar to the target instance. These are then combined with the target instance's neighboring real instances. To define a compact locality, we further construct a contrastive learning-based encoder as a weighting mechanism to assign weights to the instances from the combined set based on their proximity to the target instance. Finally, these weighted source and target instances are used to train the surrogate model for explanation purposes. Experimental evaluation with real-world datasets demonstrates that ITL-LIME greatly improves the stability and fidelity of LIME explanations in scenarios with limited data. Our code is available at https://github.com/rehanrazaa/ITL-LIME.
AB - Explainable Artificial Intelligence (XAI) methods, such as Local Interpretable Model-Agnostic Explanations (LIME), have advanced the interpretability of black-box machine learning models by approximating their behavior locally using interpretable surrogate models. However, LIME's inherent randomness in perturbation and sampling can lead to locality and instability issues, especially in scenarios with limited training data. In such cases, data scarcity can result in the generation of unrealistic variations and samples that deviate from the true data manifold. Consequently, the surrogate model may fail to accurately approximate the complex decision boundary of the original model. To address these challenges, we propose a novel Instance-based Transfer Learning LIME framework (ITL-LIME) that enhances explanation fidelity and stability in data-constrained environments. ITL-LIME introduces instance transfer learning into the LIME framework by leveraging relevant real instances from a related source domain to aid the explanation process in the target domain. Specifically, we employ clustering to partition the source domain into clusters with representative prototypes. Instead of generating random perturbations, our method retrieves pertinent real source instances from the source cluster whose prototype is most similar to the target instance. These are then combined with the target instance's neighboring real instances. To define a compact locality, we further construct a contrastive learning-based encoder as a weighting mechanism to assign weights to the instances from the combined set based on their proximity to the target instance. Finally, these weighted source and target instances are used to train the surrogate model for explanation purposes. Experimental evaluation with real-world datasets demonstrates that ITL-LIME greatly improves the stability and fidelity of LIME explanations in scenarios with limited data. Our code is available at https://github.com/rehanrazaa/ITL-LIME.
KW - contrastive learning
KW - explainable ai
KW - instance transfer learning
KW - lime
KW - model agnostic explanation
UR - http://www.scopus.com/inward/record.url?scp=105023189185&partnerID=8YFLogxK
U2 - 10.1145/3746252.3761183
DO - 10.1145/3746252.3761183
M3 - Conference contribution
AN - SCOPUS:105023189185
T3 - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
SP - 2482
EP - 2492
BT - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
Y2 - 10 November 2025 through 14 November 2025
ER -