ITL-LIME: Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authorship

Organisational units

External organisations

  • Murdoch University

Details

Original language: English
Title of host publication: CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management
Pages: 2482-2492
Number of pages: 11
ISBN (electronic): 9798400720406
Publication status: Published - 10 Nov 2025
Event: 34th ACM International Conference on Information and Knowledge Management, CIKM 2025 - Seoul, South Korea
Duration: 10 Nov 2025 - 14 Nov 2025

Publication series

Name: CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management

Abstract

Explainable Artificial Intelligence (XAI) methods, such as Local Interpretable Model-Agnostic Explanations (LIME), have advanced the interpretability of black-box machine learning models by approximating their behavior locally using interpretable surrogate models. However, LIME's inherent randomness in perturbation and sampling can lead to locality and instability issues, especially in scenarios with limited training data. In such cases, data scarcity can result in the generation of unrealistic variations and samples that deviate from the true data manifold. Consequently, the surrogate model may fail to accurately approximate the complex decision boundary of the original model. To address these challenges, we propose a novel Instance-based Transfer Learning LIME framework (ITL-LIME) that enhances explanation fidelity and stability in data-constrained environments. ITL-LIME introduces instance transfer learning into the LIME framework by leveraging relevant real instances from a related source domain to aid the explanation process in the target domain. Specifically, we employ clustering to partition the source domain into clusters with representative prototypes. Instead of generating random perturbations, our method retrieves pertinent real source instances from the source cluster whose prototype is most similar to the target instance. These are then combined with the target instance's neighboring real instances. To define a compact locality, we further construct a contrastive learning-based encoder as a weighting mechanism to assign weights to the instances from the combined set based on their proximity to the target instance. Finally, these weighted source and target instances are used to train the surrogate model for explanation purposes. Experimental evaluation with real-world datasets demonstrates that ITL-LIME greatly improves the stability and fidelity of LIME explanations in scenarios with limited data. 
Our code is available at https://github.com/rehanrazaa/ITL-LIME.
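The pipeline the abstract describes (prototype-based retrieval of real source instances, combination with the target instance's neighbors, proximity weighting, and a weighted surrogate fit) can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation (see the linked GitHub repository): the `encoder` argument is a stand-in for the paper's contrastive-learning-based encoder, and all function and parameter names here are hypothetical.

```python
# Rough sketch of the ITL-LIME explanation loop described in the abstract.
# Not the authors' code; the default `encoder` (identity) stands in for the
# contrastive-learning-based encoder trained in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge


def itl_lime_explain(x, predict_fn, X_target, X_source,
                     n_clusters=5, k_neighbors=10, kernel_width=1.0,
                     encoder=lambda Z: Z):
    # 1) Partition the source domain into clusters with representative
    #    prototypes (here: k-means centroids).
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_source)
    # 2) Retrieve real source instances from the cluster whose prototype
    #    is most similar to the target instance x.
    best = np.argmin(np.linalg.norm(km.cluster_centers_ - x, axis=1))
    source_pool = X_source[km.labels_ == best]
    # 3) Combine them with the target instance's nearest real neighbors.
    d_t = np.linalg.norm(X_target - x, axis=1)
    neighbors = X_target[np.argsort(d_t)[:k_neighbors]]
    Z = np.vstack([source_pool, neighbors])
    # 4) Weight each instance by its proximity to x in the encoder's
    #    embedding space (the paper learns this space contrastively).
    emb = encoder(np.vstack([x[None, :], Z]))
    d = np.linalg.norm(emb[1:] - emb[0], axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # 5) Fit a weighted interpretable surrogate on real instances only;
    #    its coefficients serve as the local explanation.
    surrogate = Ridge(alpha=1.0).fit(Z, predict_fn(Z), sample_weight=w)
    return surrogate.coef_


# Toy usage: explain a linear black box around one target instance.
rng = np.random.default_rng(0)
X_source = rng.normal(size=(200, 3))   # related source domain (data-rich)
X_target = rng.normal(size=(20, 3))    # low-resource target domain
black_box = lambda Z: Z @ np.array([2.0, -1.0, 0.5])
coefs = itl_lime_explain(X_target[0], black_box, X_target, X_source)
```

Because only real instances are used, the surrogate never sees samples off the data manifold, which is the mechanism the abstract credits for the improved fidelity and stability.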

ASJC Scopus subject areas

Cite

ITL-LIME: Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings. / Raza, Rehan; Wang, Guanjin; Wong, Kok Wai et al.
CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management. 2025. pp. 2482-2492 (CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management).


Raza, R, Wang, G, Wong, KW, Laga, H & Fisichella, M 2025, ITL-LIME: Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings. in CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management. CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management, pp. 2482-2492, 34th ACM International Conference on Information and Knowledge Management, CIKM 2025, Seoul, South Korea, 10 Nov 2025. https://doi.org/10.1145/3746252.3761183
Raza, R., Wang, G., Wong, K. W., Laga, H., & Fisichella, M. (2025). ITL-LIME: Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings. In CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management (pp. 2482-2492). (CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management). https://doi.org/10.1145/3746252.3761183
Raza R, Wang G, Wong KW, Laga H, Fisichella M. ITL-LIME: Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings. In: CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management. 2025. pp. 2482-2492. (CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management). doi: 10.1145/3746252.3761183
Raza, Rehan ; Wang, Guanjin ; Wong, Kok Wai et al. / ITL-LIME : Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings. CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management. 2025. pp. 2482-2492 (CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management).
Download (BibTeX)
@inproceedings{1cc469a04add4038a4724f7f622b1436,
title = "ITL-LIME: Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings",
abstract = "Explainable Artificial Intelligence (XAI) methods, such as Local Interpretable Model-Agnostic Explanations (LIME), have advanced the interpretability of black-box machine learning models by approximating their behavior locally using interpretable surrogate models. However, LIME's inherent randomness in perturbation and sampling can lead to locality and instability issues, especially in scenarios with limited training data. In such cases, data scarcity can result in the generation of unrealistic variations and samples that deviate from the true data manifold. Consequently, the surrogate model may fail to accurately approximate the complex decision boundary of the original model. To address these challenges, we propose a novel Instance-based Transfer Learning LIME framework (ITL-LIME) that enhances explanation fidelity and stability in data-constrained environments. ITL-LIME introduces instance transfer learning into the LIME framework by leveraging relevant real instances from a related source domain to aid the explanation process in the target domain. Specifically, we employ clustering to partition the source domain into clusters with representative prototypes. Instead of generating random perturbations, our method retrieves pertinent real source instances from the source cluster whose prototype is most similar to the target instance. These are then combined with the target instance's neighboring real instances. To define a compact locality, we further construct a contrastive learning-based encoder as a weighting mechanism to assign weights to the instances from the combined set based on their proximity to the target instance. Finally, these weighted source and target instances are used to train the surrogate model for explanation purposes. Experimental evaluation with real-world datasets demonstrates that ITL-LIME greatly improves the stability and fidelity of LIME explanations in scenarios with limited data. 
Our code is available at https://github.com/rehanrazaa/ITL-LIME.",
keywords = "contrastive learning, explainable ai, instance transfer learning, lime, model agnostic explanation",
author = "Rehan Raza and Guanjin Wang and Wong, {Kok Wai} and Hamid Laga and Marco Fisichella",
note = "Publisher Copyright: {\textcopyright} 2025 Copyright held by the owner/author(s).; 34th ACM International Conference on Information and Knowledge Management, CIKM 2025, CIKM 2025 ; Conference date: 10-11-2025 Through 14-11-2025",
year = "2025",
month = nov,
day = "10",
doi = "10.1145/3746252.3761183",
language = "English",
series = "CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management",
pages = "2482--2492",
booktitle = "CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management",

}

Download (RIS)

TY - GEN

T1 - ITL-LIME

T2 - 34th ACM International Conference on Information and Knowledge Management, CIKM 2025

AU - Raza, Rehan

AU - Wang, Guanjin

AU - Wong, Kok Wai

AU - Laga, Hamid

AU - Fisichella, Marco

N1 - Publisher Copyright: © 2025 Copyright held by the owner/author(s).

PY - 2025/11/10

Y1 - 2025/11/10

N2 - Explainable Artificial Intelligence (XAI) methods, such as Local Interpretable Model-Agnostic Explanations (LIME), have advanced the interpretability of black-box machine learning models by approximating their behavior locally using interpretable surrogate models. However, LIME's inherent randomness in perturbation and sampling can lead to locality and instability issues, especially in scenarios with limited training data. In such cases, data scarcity can result in the generation of unrealistic variations and samples that deviate from the true data manifold. Consequently, the surrogate model may fail to accurately approximate the complex decision boundary of the original model. To address these challenges, we propose a novel Instance-based Transfer Learning LIME framework (ITL-LIME) that enhances explanation fidelity and stability in data-constrained environments. ITL-LIME introduces instance transfer learning into the LIME framework by leveraging relevant real instances from a related source domain to aid the explanation process in the target domain. Specifically, we employ clustering to partition the source domain into clusters with representative prototypes. Instead of generating random perturbations, our method retrieves pertinent real source instances from the source cluster whose prototype is most similar to the target instance. These are then combined with the target instance's neighboring real instances. To define a compact locality, we further construct a contrastive learning-based encoder as a weighting mechanism to assign weights to the instances from the combined set based on their proximity to the target instance. Finally, these weighted source and target instances are used to train the surrogate model for explanation purposes. Experimental evaluation with real-world datasets demonstrates that ITL-LIME greatly improves the stability and fidelity of LIME explanations in scenarios with limited data. 
Our code is available at https://github.com/rehanrazaa/ITL-LIME.

AB - Explainable Artificial Intelligence (XAI) methods, such as Local Interpretable Model-Agnostic Explanations (LIME), have advanced the interpretability of black-box machine learning models by approximating their behavior locally using interpretable surrogate models. However, LIME's inherent randomness in perturbation and sampling can lead to locality and instability issues, especially in scenarios with limited training data. In such cases, data scarcity can result in the generation of unrealistic variations and samples that deviate from the true data manifold. Consequently, the surrogate model may fail to accurately approximate the complex decision boundary of the original model. To address these challenges, we propose a novel Instance-based Transfer Learning LIME framework (ITL-LIME) that enhances explanation fidelity and stability in data-constrained environments. ITL-LIME introduces instance transfer learning into the LIME framework by leveraging relevant real instances from a related source domain to aid the explanation process in the target domain. Specifically, we employ clustering to partition the source domain into clusters with representative prototypes. Instead of generating random perturbations, our method retrieves pertinent real source instances from the source cluster whose prototype is most similar to the target instance. These are then combined with the target instance's neighboring real instances. To define a compact locality, we further construct a contrastive learning-based encoder as a weighting mechanism to assign weights to the instances from the combined set based on their proximity to the target instance. Finally, these weighted source and target instances are used to train the surrogate model for explanation purposes. Experimental evaluation with real-world datasets demonstrates that ITL-LIME greatly improves the stability and fidelity of LIME explanations in scenarios with limited data. 
Our code is available at https://github.com/rehanrazaa/ITL-LIME.

KW - contrastive learning

KW - explainable ai

KW - instance transfer learning

KW - lime

KW - model agnostic explanation

UR - http://www.scopus.com/inward/record.url?scp=105023189185&partnerID=8YFLogxK

U2 - 10.1145/3746252.3761183

DO - 10.1145/3746252.3761183

M3 - Conference contribution

AN - SCOPUS:105023189185

T3 - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management

SP - 2482

EP - 2492

BT - CIKM 2025 - Proceedings of the 34th ACM International Conference on Information and Knowledge Management

Y2 - 10 November 2025 through 14 November 2025

ER -
