Details
Original language | English |
---|---|
Title of host publication | JCDL 2024 - Proceedings of the 24th ACM/IEEE Joint Conference on Digital Libraries |
Editors | Jian Wu, Xiao Hu, Terhi Nurmikko-Fuller, Sam Chu, Ruixian Yang, J. Stephen Downie |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (electronic) | 9798400710933 |
Publication status | Published - 13 Mar 2025 |
Event | 24th ACM/IEEE Joint Conference on Digital Libraries, JCDL 2024 - Hong Kong, Hong Kong Duration: 16 Dec 2024 → 20 Dec 2024 |
Publication series
Name | Proceedings of the ACM/IEEE Joint Conference on Digital Libraries |
---|---|
ISSN (Print) | 1552-5996 |
Abstract
With the vast number of scientific articles published every year, it is increasingly challenging for researchers to maintain oversight and track scientific progress. Meanwhile, Large Language Models (LLMs) have revolutionized natural language processing tasks. This research focuses on generating summaries from research paper abstracts using LLMs and comprehensively evaluating the summarization performance. LLMs offer customizable outputs through Prompt Engineering, leveraging descriptive instructions including instructive examples and the injection of context knowledge. We investigate the performance of various prompting techniques across various LLMs using both GPT-4 and human evaluation. For that purpose, we created a comprehensive benchmark dataset for scholarly summarization covering multiple scientific domains. We integrated our approach into the Open Research Knowledge Graph (ORKG) to enable quicker synthesis of research findings and trends across multiple studies, facilitating the dissemination of scientific knowledge to policymakers, practitioners, and the public.
Keywords
- Large Language Model, Open Research Knowledge Graph (ORKG), Summarization
ASJC Scopus subject areas
- Engineering(all)
- General Engineering
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
JCDL 2024 - Proceedings of the 24th ACM/IEEE Joint Conference on Digital Libraries. ed. / Jian Wu; Xiao Hu; Terhi Nurmikko-Fuller; Sam Chu; Ruixian Yang; J. Stephen Downie. Institute of Electrical and Electronics Engineers Inc., 2025. 9 (Proceedings of the ACM/IEEE Joint Conference on Digital Libraries).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Leveraging LLMs for Scientific Abstract Summarization
T2 - 24th ACM/IEEE Joint Conference on Digital Libraries, JCDL 2024
AU - Keya, Farhana
AU - Jaradeh, Mohamad Yaser
AU - Auer, Sören
N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s).
PY - 2025/3/13
Y1 - 2025/3/13
N2 - With the vast number of scientific articles published every year, it is increasingly challenging for researchers to maintain oversight and track scientific progress. Meanwhile, Large Language Models (LLMs) have revolutionized natural language processing tasks. This research focuses on generating summaries from research paper abstracts using LLMs and comprehensively evaluating the summarization performance. LLMs offer customizable outputs through Prompt Engineering, leveraging descriptive instructions including instructive examples and the injection of context knowledge. We investigate the performance of various prompting techniques across various LLMs using both GPT-4 and human evaluation. For that purpose, we created a comprehensive benchmark dataset for scholarly summarization covering multiple scientific domains. We integrated our approach into the Open Research Knowledge Graph (ORKG) to enable quicker synthesis of research findings and trends across multiple studies, facilitating the dissemination of scientific knowledge to policymakers, practitioners, and the public.
AB - With the vast number of scientific articles published every year, it is increasingly challenging for researchers to maintain oversight and track scientific progress. Meanwhile, Large Language Models (LLMs) have revolutionized natural language processing tasks. This research focuses on generating summaries from research paper abstracts using LLMs and comprehensively evaluating the summarization performance. LLMs offer customizable outputs through Prompt Engineering, leveraging descriptive instructions including instructive examples and the injection of context knowledge. We investigate the performance of various prompting techniques across various LLMs using both GPT-4 and human evaluation. For that purpose, we created a comprehensive benchmark dataset for scholarly summarization covering multiple scientific domains. We integrated our approach into the Open Research Knowledge Graph (ORKG) to enable quicker synthesis of research findings and trends across multiple studies, facilitating the dissemination of scientific knowledge to policymakers, practitioners, and the public.
KW - Large Language Model
KW - Open Research Knowledge Graph (ORKG)
KW - Summarization
U2 - 10.1145/3677389.3702588
DO - 10.1145/3677389.3702588
M3 - Conference contribution
AN - SCOPUS:105001141869
T3 - Proceedings of the ACM/IEEE Joint Conference on Digital Libraries
BT - JCDL 2024 - Proceedings of the 24th ACM/IEEE Joint Conference on Digital Libraries
A2 - Wu, Jian
A2 - Hu, Xiao
A2 - Nurmikko-Fuller, Terhi
A2 - Chu, Sam
A2 - Yang, Ruixian
A2 - Downie, J. Stephen
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 16 December 2024 through 20 December 2024
ER -