
Guiding Retrieval Using LLM-Based Listwise Rankers

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Mandeep Rathee
  • Sean MacAvaney
  • Avishek Anand

Research Organisations

External Research Organisations

  • University of Glasgow
  • Delft University of Technology

Details

Original language: English
Title of host publication: Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings
Editors: Claudia Hauff, Craig Macdonald, Dietmar Jannach, Gabriella Kazai, Franco Maria Nardini, Fabio Pinelli, Fabrizio Silvestri, Nicola Tonellotto
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 230-246
Number of pages: 17
ISBN (electronic): 978-3-031-88708-6
ISBN (print): 978-3-031-88707-9
Publication status: Published - 3 Apr 2025
Event: 47th European Conference on Information Retrieval, ECIR 2025 - Lucca, Italy
Duration: 6 Apr 2025 - 10 Apr 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15572 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Large Language Models (LLMs) have shown strong promise as rerankers, especially in “listwise” settings where an LLM is prompted to rerank several search results at once. However, this “cascading” retrieve-and-rerank approach is limited by the bounded recall problem: relevant documents not retrieved initially are permanently excluded from the final ranking. Adaptive retrieval techniques address this problem, but do not work with listwise rerankers because they assume a document’s score is computed independently from other documents. In this paper, we propose an adaptation of an existing adaptive retrieval method that supports the listwise setting and helps guide the retrieval process itself (thereby overcoming the bounded recall problem for LLM rerankers). Specifically, our proposed algorithm merges results both from the initial ranking and feedback documents provided by the most relevant documents seen up to that point. Through extensive experiments across diverse LLM rerankers, first-stage retrievers, and feedback sources, we demonstrate that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%, all while keeping the total number of LLM inferences constant and overheads due to the adaptive process minimal. The work opens the door to leveraging LLM-based search in settings where the initial pool of results is limited, e.g., by legacy systems, or by the cost of deploying a semantic first-stage.
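The loop described in the abstract can be sketched as follows: reranking batches alternate between the first-stage ranking and a feedback pool fed by neighbours of the best documents seen so far, under a fixed document budget. This is a minimal illustrative sketch only; all names are hypothetical, and the paper's actual algorithm (including how inference budget and reranking windows are managed) may differ.

```python
def adaptive_listwise_rerank(initial_ranking, neighbours, listwise_rerank,
                             batch_size=3, budget=9):
    """Hypothetical sketch of adaptive retrieval with a listwise reranker.

    initial_ranking: doc ids ordered by the first-stage retriever.
    neighbours: dict mapping a doc id to similar docs (the feedback source).
    listwise_rerank: callable that orders a list of doc ids, best first
                     (stands in for the LLM listwise reranker).
    budget: total number of documents the reranker may consume.
    """
    frontier = list(initial_ranking)  # unseen docs from the initial ranking
    feedback = []                     # unseen docs suggested by neighbours
    ranked, seen = [], set()
    use_feedback = False              # alternate between the two pools
    while budget > 0 and (frontier or feedback):
        pool = feedback if (use_feedback and feedback) else frontier
        batch = [d for d in pool[:batch_size] if d not in seen]
        del pool[:batch_size]
        if not batch:
            use_feedback = not use_feedback
            continue
        seen.update(batch)
        budget -= len(batch)
        # Simplification: rerank the new batch together with everything
        # ranked so far (the paper works with fixed-size windows instead).
        ranked = listwise_rerank(ranked + batch)
        # Feedback step: pull in neighbours of the current best documents.
        for d in ranked[:batch_size]:
            feedback.extend(n for n in neighbours.get(d, ())
                            if n not in seen and n not in feedback)
        use_feedback = not use_feedback
    return ranked
```

In this toy form, a highly relevant document absent from the initial ranking can still surface: if it is a neighbour of a top-ranked document, the feedback pool injects it into a later batch, which is the intuition behind overcoming bounded recall.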

Keywords

    Adaptive Retrieval, LLM, Reranking


Cite this

Guiding Retrieval Using LLM-Based Listwise Rankers. / Rathee, Mandeep; MacAvaney, Sean; Anand, Avishek.
Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings. ed. / Claudia Hauff; Craig Macdonald; Dietmar Jannach; Gabriella Kazai; Franco Maria Nardini; Fabio Pinelli; Fabrizio Silvestri; Nicola Tonellotto. Springer Science and Business Media Deutschland GmbH, 2025. p. 230-246 (Lecture Notes in Computer Science; Vol. 15572 LNCS).


Rathee, M, MacAvaney, S & Anand, A 2025, Guiding Retrieval Using LLM-Based Listwise Rankers. in C Hauff, C Macdonald, D Jannach, G Kazai, FM Nardini, F Pinelli, F Silvestri & N Tonellotto (eds), Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings. Lecture Notes in Computer Science, vol. 15572 LNCS, Springer Science and Business Media Deutschland GmbH, pp. 230-246, 47th European Conference on Information Retrieval, ECIR 2025, Lucca, Italy, 6 Apr 2025. https://doi.org/10.1007/978-3-031-88708-6_15, https://doi.org/10.48550/arXiv.2501.09186
Rathee, M., MacAvaney, S., & Anand, A. (2025). Guiding Retrieval Using LLM-Based Listwise Rankers. In C. Hauff, C. Macdonald, D. Jannach, G. Kazai, F. M. Nardini, F. Pinelli, F. Silvestri, & N. Tonellotto (Eds.), Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings (pp. 230-246). (Lecture Notes in Computer Science; Vol. 15572 LNCS). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-88708-6_15, https://doi.org/10.48550/arXiv.2501.09186
Rathee M, MacAvaney S, Anand A. Guiding Retrieval Using LLM-Based Listwise Rankers. In Hauff C, Macdonald C, Jannach D, Kazai G, Nardini FM, Pinelli F, Silvestri F, Tonellotto N, editors, Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings. Springer Science and Business Media Deutschland GmbH. 2025. p. 230-246. (Lecture Notes in Computer Science). doi: 10.1007/978-3-031-88708-6_15, 10.48550/arXiv.2501.09186
Rathee, Mandeep ; MacAvaney, Sean ; Anand, Avishek. / Guiding Retrieval Using LLM-Based Listwise Rankers. Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings. editor / Claudia Hauff ; Craig Macdonald ; Dietmar Jannach ; Gabriella Kazai ; Franco Maria Nardini ; Fabio Pinelli ; Fabrizio Silvestri ; Nicola Tonellotto. Springer Science and Business Media Deutschland GmbH, 2025. pp. 230-246 (Lecture Notes in Computer Science).
@inproceedings{e4f3262beae94f89a991b7dcfc079909,
title = "Guiding Retrieval Using LLM-Based Listwise Rankers",
abstract = "Large Language Models (LLMs) have shown strong promise as rerankers, especially in “listwise” settings where an LLM is prompted to rerank several search results at once. However, this “cascading” retrieve-and-rerank approach is limited by the bounded recall problem: relevant documents not retrieved initially are permanently excluded from the final ranking. Adaptive retrieval techniques address this problem, but do not work with listwise rerankers because they assume a document{\textquoteright}s score is computed independently from other documents. In this paper, we propose an adaptation of an existing adaptive retrieval method that supports the listwise setting and helps guide the retrieval process itself (thereby overcoming the bounded recall problem for LLM rerankers). Specifically, our proposed algorithm merges results both from the initial ranking and feedback documents provided by the most relevant documents seen up to that point. Through extensive experiments across diverse LLM rerankers, first stage retrievers, and feedback sources, we demonstrate that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%–all while keeping the total number of LLM inferences constant and overheads due to the adaptive process minimal. The work opens the door to leveraging LLM-based search in settings where the initial pool of results is limited, e.g., by legacy systems, or by the cost of deploying a semantic first-stage.",
keywords = "Adaptive Retrieval, LLM, Reranking",
author = "Mandeep Rathee and Sean MacAvaney and Avishek Anand",
note = "Publisher Copyright: {\textcopyright} The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.; 47th European Conference on Information Retrieval, ECIR 2025, ECIR 2025 ; Conference date: 06-04-2025 Through 10-04-2025",
year = "2025",
month = apr,
day = "3",
doi = "10.1007/978-3-031-88708-6_15",
language = "English",
isbn = "9783031887079",
series = "Lecture Notes in Computer Science",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "230--246",
editor = "Claudia Hauff and Craig Macdonald and Dietmar Jannach and Gabriella Kazai and Nardini, {Franco Maria} and Fabio Pinelli and Fabrizio Silvestri and Nicola Tonellotto",
booktitle = "Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings",
address = "Germany",

}


TY - GEN

T1 - Guiding Retrieval Using LLM-Based Listwise Rankers

AU - Rathee, Mandeep

AU - MacAvaney, Sean

AU - Anand, Avishek

N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.

PY - 2025/4/3

Y1 - 2025/4/3

N2 - Large Language Models (LLMs) have shown strong promise as rerankers, especially in “listwise” settings where an LLM is prompted to rerank several search results at once. However, this “cascading” retrieve-and-rerank approach is limited by the bounded recall problem: relevant documents not retrieved initially are permanently excluded from the final ranking. Adaptive retrieval techniques address this problem, but do not work with listwise rerankers because they assume a document’s score is computed independently from other documents. In this paper, we propose an adaptation of an existing adaptive retrieval method that supports the listwise setting and helps guide the retrieval process itself (thereby overcoming the bounded recall problem for LLM rerankers). Specifically, our proposed algorithm merges results both from the initial ranking and feedback documents provided by the most relevant documents seen up to that point. Through extensive experiments across diverse LLM rerankers, first stage retrievers, and feedback sources, we demonstrate that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%–all while keeping the total number of LLM inferences constant and overheads due to the adaptive process minimal. The work opens the door to leveraging LLM-based search in settings where the initial pool of results is limited, e.g., by legacy systems, or by the cost of deploying a semantic first-stage.

AB - Large Language Models (LLMs) have shown strong promise as rerankers, especially in “listwise” settings where an LLM is prompted to rerank several search results at once. However, this “cascading” retrieve-and-rerank approach is limited by the bounded recall problem: relevant documents not retrieved initially are permanently excluded from the final ranking. Adaptive retrieval techniques address this problem, but do not work with listwise rerankers because they assume a document’s score is computed independently from other documents. In this paper, we propose an adaptation of an existing adaptive retrieval method that supports the listwise setting and helps guide the retrieval process itself (thereby overcoming the bounded recall problem for LLM rerankers). Specifically, our proposed algorithm merges results both from the initial ranking and feedback documents provided by the most relevant documents seen up to that point. Through extensive experiments across diverse LLM rerankers, first stage retrievers, and feedback sources, we demonstrate that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%–all while keeping the total number of LLM inferences constant and overheads due to the adaptive process minimal. The work opens the door to leveraging LLM-based search in settings where the initial pool of results is limited, e.g., by legacy systems, or by the cost of deploying a semantic first-stage.

KW - Adaptive Retrieval

KW - LLM

KW - Reranking

UR - http://www.scopus.com/inward/record.url?scp=105003307382&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-88708-6_15

DO - 10.1007/978-3-031-88708-6_15

M3 - Conference contribution

AN - SCOPUS:105003307382

SN - 9783031887079

T3 - Lecture Notes in Computer Science

SP - 230

EP - 246

BT - Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Proceedings

A2 - Hauff, Claudia

A2 - Macdonald, Craig

A2 - Jannach, Dietmar

A2 - Kazai, Gabriella

A2 - Nardini, Franco Maria

A2 - Pinelli, Fabio

A2 - Silvestri, Fabrizio

A2 - Tonellotto, Nicola

PB - Springer Science and Business Media Deutschland GmbH

T2 - 47th European Conference on Information Retrieval, ECIR 2025

Y2 - 6 April 2025 through 10 April 2025

ER -