Details
| Original language | English |
| --- | --- |
| Title of host publication | Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings |
| Editors | Claudia Hauff, Craig Macdonald, Dietmar Jannach, Gabriella Kazai, Franco Maria Nardini, Fabio Pinelli, Fabrizio Silvestri, Nicola Tonellotto |
| Publisher | Springer Science and Business Media Deutschland GmbH |
| Pages | 230-246 |
| Number of pages | 17 |
| ISBN (electronic) | 978-3-031-88708-6 |
| ISBN (print) | 978-3-031-88707-9 |
| Publication status | Published - 3 Apr 2025 |
| Event | 47th European Conference on Information Retrieval, ECIR 2025 - Lucca, Italy. Duration: 6 Apr 2025 → 10 Apr 2025 |
Publication series
| Name | Lecture Notes in Computer Science |
| --- | --- |
| Volume | 15572 LNCS |
| ISSN (print) | 0302-9743 |
| ISSN (electronic) | 1611-3349 |
Abstract
Large Language Models (LLMs) have shown strong promise as rerankers, especially in “listwise” settings where an LLM is prompted to rerank several search results at once. However, this “cascading” retrieve-and-rerank approach is limited by the bounded recall problem: relevant documents not retrieved initially are permanently excluded from the final ranking. Adaptive retrieval techniques address this problem, but do not work with listwise rerankers because they assume a document’s score is computed independently from other documents. In this paper, we propose an adaptation of an existing adaptive retrieval method that supports the listwise setting and helps guide the retrieval process itself (thereby overcoming the bounded recall problem for LLM rerankers). Specifically, our proposed algorithm merges results both from the initial ranking and feedback documents provided by the most relevant documents seen up to that point. Through extensive experiments across diverse LLM rerankers, first-stage retrievers, and feedback sources, we demonstrate that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%, all while keeping the total number of LLM inferences constant and overheads due to the adaptive process minimal. The work opens the door to leveraging LLM-based search in settings where the initial pool of results is limited, e.g., by legacy systems, or by the cost of deploying a semantic first stage.
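In outline (a reader's sketch, not the authors' published implementation): the reranking budget is spent in small windows drawn alternately from the first-stage ranking and from a queue of feedback documents, i.e. neighbours of the best documents ranked so far, with each window ordered by a single listwise LLM call. A minimal Python sketch of that loop follows; all identifiers (`guided_listwise_rerank`, `neighbours`, `llm_rerank`, `window_size`, `budget`) are illustrative assumptions, and the single call over the current top documents plus the new batch stands in for whatever sliding-window prompting the actual listwise reranker uses.

```python
from collections import deque

def guided_listwise_rerank(initial_ranking, neighbours, llm_rerank,
                           window_size=20, budget=100):
    """Alternate between the initial ranking and feedback documents
    (neighbours of the best documents seen so far), ordering each batch
    with one listwise LLM call so the total LLM budget stays fixed.

    initial_ranking : list of doc ids from the first-stage retriever
    neighbours      : doc id -> list of similar doc ids (feedback source)
    llm_rerank      : list of doc ids -> same ids in LLM-preferred order
    """
    pool = deque(initial_ranking)   # unscored documents from the first stage
    frontier = deque()              # feedback documents awaiting scoring
    ranked, seen, scored = [], set(), 0
    use_feedback = False            # toggle between the two sources each round

    while scored < budget and (pool or frontier):
        source = frontier if (use_feedback and frontier) else pool
        batch = []
        while source and len(batch) < window_size:
            doc = source.popleft()
            if doc not in seen:
                seen.add(doc)
                batch.append(doc)
        if batch:
            # One listwise call orders the new batch against the current top docs.
            ordered = llm_rerank(ranked[:window_size] + batch)
            scored += len(batch)
            ranked = ordered + [d for d in ranked if d not in ordered]
            # The best documents seen so far supply feedback candidates,
            # e.g. via a corpus graph or another document-similarity source.
            for doc in ranked[:window_size]:
                for nb in neighbours(doc):
                    if nb not in seen:
                        frontier.append(nb)
        use_feedback = not use_feedback

    # Documents never sent to the LLM keep their first-stage order at the tail.
    return ranked + [d for d in pool if d not in seen]
```

The alternation is what keeps the number of LLM inferences constant: feedback documents are scored in place of, never in addition to, documents from the initial pool.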
Keywords
- Adaptive Retrieval
- LLM
- Reranking
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
Cite this
Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings. ed. / Claudia Hauff; Craig Macdonald; Dietmar Jannach; Gabriella Kazai; Franco Maria Nardini; Fabio Pinelli; Fabrizio Silvestri; Nicola Tonellotto. Springer Science and Business Media Deutschland GmbH, 2025. p. 230-246 (Lecture Notes in Computer Science; Vol. 15572 LNCS).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Guiding Retrieval Using LLM-Based Listwise Rankers
AU - Rathee, Mandeep
AU - MacAvaney, Sean
AU - Anand, Avishek
N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025/4/3
Y1 - 2025/4/3
N2 - Large Language Models (LLMs) have shown strong promise as rerankers, especially in “listwise” settings where an LLM is prompted to rerank several search results at once. However, this “cascading” retrieve-and-rerank approach is limited by the bounded recall problem: relevant documents not retrieved initially are permanently excluded from the final ranking. Adaptive retrieval techniques address this problem, but do not work with listwise rerankers because they assume a document’s score is computed independently from other documents. In this paper, we propose an adaptation of an existing adaptive retrieval method that supports the listwise setting and helps guide the retrieval process itself (thereby overcoming the bounded recall problem for LLM rerankers). Specifically, our proposed algorithm merges results both from the initial ranking and feedback documents provided by the most relevant documents seen up to that point. Through extensive experiments across diverse LLM rerankers, first stage retrievers, and feedback sources, we demonstrate that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%–all while keeping the total number of LLM inferences constant and overheads due to the adaptive process minimal. The work opens the door to leveraging LLM-based search in settings where the initial pool of results is limited, e.g., by legacy systems, or by the cost of deploying a semantic first-stage.
AB - Large Language Models (LLMs) have shown strong promise as rerankers, especially in “listwise” settings where an LLM is prompted to rerank several search results at once. However, this “cascading” retrieve-and-rerank approach is limited by the bounded recall problem: relevant documents not retrieved initially are permanently excluded from the final ranking. Adaptive retrieval techniques address this problem, but do not work with listwise rerankers because they assume a document’s score is computed independently from other documents. In this paper, we propose an adaptation of an existing adaptive retrieval method that supports the listwise setting and helps guide the retrieval process itself (thereby overcoming the bounded recall problem for LLM rerankers). Specifically, our proposed algorithm merges results both from the initial ranking and feedback documents provided by the most relevant documents seen up to that point. Through extensive experiments across diverse LLM rerankers, first stage retrievers, and feedback sources, we demonstrate that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%–all while keeping the total number of LLM inferences constant and overheads due to the adaptive process minimal. The work opens the door to leveraging LLM-based search in settings where the initial pool of results is limited, e.g., by legacy systems, or by the cost of deploying a semantic first-stage.
KW - Adaptive Retrieval
KW - LLM
KW - Reranking
UR - http://www.scopus.com/inward/record.url?scp=105003307382&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-88708-6_15
DO - 10.1007/978-3-031-88708-6_15
M3 - Conference contribution
AN - SCOPUS:105003307382
SN - 9783031887079
T3 - Lecture Notes in Computer Science
SP - 230
EP - 246
BT - Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings
A2 - Hauff, Claudia
A2 - Macdonald, Craig
A2 - Jannach, Dietmar
A2 - Kazai, Gabriella
A2 - Nardini, Franco Maria
A2 - Pinelli, Fabio
A2 - Silvestri, Fabrizio
A2 - Tonellotto, Nicola
PB - Springer Science and Business Media Deutschland GmbH
T2 - 47th European Conference on Information Retrieval, ECIR 2025
Y2 - 6 April 2025 through 10 April 2025
ER -