Causal Probing for Dual Encoders

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authorship

Organisational units

External organisations

  • Delft University of Technology

Details

Original language: English
Title of host publication: CIKM 2024
Subtitle: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management
Pages: 2292-2303
Number of pages: 12
ISBN (electronic): 9798400704369
Publication status: Published - 21 Oct 2024
Event: 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 - Boise, United States
Duration: 21 Oct 2024 - 25 Oct 2024

Abstract

Dual encoders are highly effective and widely deployed in the retrieval phase for passage and document ranking, question answering, or retrieval-augmented generation (RAG) setups. Most dual-encoder models use transformer models like BERT to map input queries and output targets to a common vector space encoding the semantic similarity. Despite their prevalence and impressive performance, little is known about the inner workings of dense encoders for retrieval. We investigate neural retrievers using the probing paradigm to identify well-understood IR properties that causally result in ranking performance. Unlike existing works that have probed cross-encoders to show query-document interactions, we provide a principled approach to probe dual-encoders. Importantly, we employ causal probing to avoid correlation effects that might be artefacts of vanilla probing. We conduct extensive experiments on one such dual encoder (TCT-ColBERT) to check for the existence and relevance of six properties: term importance, lexical matching (BM25), semantic matching, question classification, and the two linguistic properties of named entity recognition and coreference resolution. Our layer-wise analysis shows important differences between re-rankers and dual encoders, establishing which tasks are not only understood by the model but also used for inference.
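The probing paradigm mentioned in the abstract trains a small classifier on frozen layer activations to test whether a property (e.g. term importance or named entities) is linearly decodable. The following is a minimal, hypothetical sketch of that idea using synthetic activations in place of real TCT-ColBERT layer outputs; it is not the authors' implementation, and the data, dimensions, and training loop are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen encoder activations: 200 inputs with
# 32-dimensional representations, where a binary property (the probing
# target) is encoded along one direction. Hypothetical data only.
n, d = 200, 32
labels = rng.integers(0, 2, size=n)
acts = rng.normal(size=(n, d))
acts[:, 0] += 3.0 * labels  # the property is linearly decodable

def train_linear_probe(X, y, lr=0.1, steps=500):
    """Logistic-regression probe trained by full-batch gradient descent
    on frozen representations X with binary labels y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_linear_probe(acts, labels)
accuracy = np.mean(((acts @ w + b) > 0) == labels)
```

High probe accuracy alone only shows the property is *represented*; the causal-probing step the paper describes additionally intervenes on the representations to test whether the model actually *uses* the property for ranking, which vanilla probing like the sketch above cannot establish.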

ASJC Scopus subject areas

Cite

Causal Probing for Dual Encoders. / Wallat, Jonas; Hinrichs, Hauke; Anand, Avishek.
CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024. pp. 2292-2303.


Wallat, J, Hinrichs, H & Anand, A 2024, Causal Probing for Dual Encoders. in CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. pp. 2292-2303, 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024, Boise, United States, 21 Oct 2024. https://doi.org/10.1145/3627673.3679556
Wallat, J., Hinrichs, H., & Anand, A. (2024). Causal Probing for Dual Encoders. In CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (pp. 2292-2303). https://doi.org/10.1145/3627673.3679556
Wallat J, Hinrichs H, Anand A. Causal Probing for Dual Encoders. In CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024. pp. 2292-2303. doi: 10.1145/3627673.3679556
Wallat, Jonas; Hinrichs, Hauke; Anand, Avishek. / Causal Probing for Dual Encoders. CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024. pp. 2292-2303.
BibTeX
@inproceedings{03acd2ad182f4287acad9d6c28897a58,
title = "Causal Probing for Dual Encoders",
abstract = "Dual encoders are highly effective and widely deployed in the retrieval phase for passage and document ranking, question answering, or retrieval-augmented generation (RAG) setups. Most dual-encoder models use transformer models like BERT to map input queries and output targets to a common vector space encoding the semantic similarity. Despite their prevalence and impressive performance, little is known about the inner workings of dense encoders for retrieval. We investigate neural retrievers using the probing paradigm to identify well-understood IR properties that causally result in ranking performance. Unlike existing works that have probed cross-encoders to show query-document interactions, we provide a principled approach to probe dual-encoders. Importantly, we employ causal probing to avoid correlation effects that might be artefacts of vanilla probing. We conduct extensive experiments on one such dual encoder (TCT-ColBERT) to check for the existence and relevance of six properties: term importance, lexical matching (BM25), semantic matching, question classification, and the two linguistic properties of named entity recognition and coreference resolution. Our layer-wise analysis shows important differences between re-rankers and dual encoders, establishing which tasks are not only understood by the model but also used for inference.",
keywords = "information retrieval, interpretability, language models, probing",
author = "Jonas Wallat and Hauke Hinrichs and Avishek Anand",
note = "Publisher Copyright: {\textcopyright} 2024 ACM.; 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 ; Conference date: 21-10-2024 Through 25-10-2024",
year = "2024",
month = oct,
day = "21",
doi = "10.1145/3627673.3679556",
language = "English",
pages = "2292--2303",
booktitle = "CIKM 2024",

}

RIS

TY - GEN

T1 - Causal Probing for Dual Encoders

AU - Wallat, Jonas

AU - Hinrichs, Hauke

AU - Anand, Avishek

N1 - Publisher Copyright: © 2024 ACM.

PY - 2024/10/21

Y1 - 2024/10/21

N2 - Dual encoders are highly effective and widely deployed in the retrieval phase for passage and document ranking, question answering, or retrieval-augmented generation (RAG) setups. Most dual-encoder models use transformer models like BERT to map input queries and output targets to a common vector space encoding the semantic similarity. Despite their prevalence and impressive performance, little is known about the inner workings of dense encoders for retrieval. We investigate neural retrievers using the probing paradigm to identify well-understood IR properties that causally result in ranking performance. Unlike existing works that have probed cross-encoders to show query-document interactions, we provide a principled approach to probe dual-encoders. Importantly, we employ causal probing to avoid correlation effects that might be artefacts of vanilla probing. We conduct extensive experiments on one such dual encoder (TCT-ColBERT) to check for the existence and relevance of six properties: term importance, lexical matching (BM25), semantic matching, question classification, and the two linguistic properties of named entity recognition and coreference resolution. Our layer-wise analysis shows important differences between re-rankers and dual encoders, establishing which tasks are not only understood by the model but also used for inference.

AB - Dual encoders are highly effective and widely deployed in the retrieval phase for passage and document ranking, question answering, or retrieval-augmented generation (RAG) setups. Most dual-encoder models use transformer models like BERT to map input queries and output targets to a common vector space encoding the semantic similarity. Despite their prevalence and impressive performance, little is known about the inner workings of dense encoders for retrieval. We investigate neural retrievers using the probing paradigm to identify well-understood IR properties that causally result in ranking performance. Unlike existing works that have probed cross-encoders to show query-document interactions, we provide a principled approach to probe dual-encoders. Importantly, we employ causal probing to avoid correlation effects that might be artefacts of vanilla probing. We conduct extensive experiments on one such dual encoder (TCT-ColBERT) to check for the existence and relevance of six properties: term importance, lexical matching (BM25), semantic matching, question classification, and the two linguistic properties of named entity recognition and coreference resolution. Our layer-wise analysis shows important differences between re-rankers and dual encoders, establishing which tasks are not only understood by the model but also used for inference.

KW - information retrieval

KW - interpretability

KW - language models

KW - probing

UR - http://www.scopus.com/inward/record.url?scp=85209995253&partnerID=8YFLogxK

U2 - 10.1145/3627673.3679556

DO - 10.1145/3627673.3679556

M3 - Conference contribution

AN - SCOPUS:85209995253

SP - 2292

EP - 2303

BT - CIKM 2024

T2 - 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024

Y2 - 21 October 2024 through 25 October 2024

ER -
