Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections

Publication: Contribution to journal › Article › Research › Peer review

Authors

  • Ming Jiang
  • Jennifer D’Souza
  • Sören Auer
  • J. Stephen Downie

Organisational units

External organisations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek
  • University of Illinois Urbana-Champaign (UIUC)

Details

Original language: English
Pages (from-to): 197-215
Number of pages: 19
Journal: International Journal on Digital Libraries
Volume: 23
Issue number: 2
Early online date: 2 Nov 2021
Publication status: Published - June 2022

Abstract

The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To meet these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been widely explored for automatic relation classification. Despite significant progress, most of them were evaluated under different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, ignoring the digitization context of early scholarly publications, which were machine-scanned and processed with optical character recognition (OCR). Such texts may contain OCR noise, which in turn creates uncertainty about existing classifiers’ performance. To address these limitations, we first created OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models, focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR-noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant for identifying scientific relations. The strategy of predicting a single relation at a time generally outperforms the one that identifies multiple relations simultaneously. The optimal classifier’s performance can decline by around 10% to 20% in F-score on the noisy corpora. The insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.
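
To make the evaluated setup concrete, the sketch below illustrates the kind of pipeline the abstract describes: a concept pair in a sentence is marked, OCR-style noise is optionally injected, and a BERT variant predicts the relation. This is a minimal illustration under stated assumptions, not the authors' code: the SciBERT checkpoint, the label set, the entity-marker format, and the character-confusion noise model are all assumptions standing in for the paper's actual corpora and noise-generation procedure.

    # Hypothetical sketch of BERT-based scientific relation classification
    # on clean vs. OCR-noisy input, using Hugging Face `transformers`.
    import random

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Assumed label set: typical scientific relation types, for illustration only.
    LABELS = ["USAGE", "RESULT", "COMPARE", "PART_WHOLE", "TOPIC", "MODEL-FEATURE"]

    # A domain-specific BERT variant (SciBERT) stands in for "the
    # domain-specific pre-trained BERT" the abstract finds strongest.
    MODEL_NAME = "allenai/scibert_scivocab_uncased"

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=len(LABELS)
    )  # the classification head is randomly initialized; fine-tuning is required

    def mark_entities(sentence: str, e1: str, e2: str) -> str:
        """Wrap the two concept mentions in marker tokens, a common input
        format for relation classifiers. Real setups usually register the
        markers as special tokens in the tokenizer vocabulary."""
        return (sentence.replace(e1, f"<e1> {e1} </e1>")
                        .replace(e2, f"<e2> {e2} </e2>"))

    def add_ocr_noise(text: str, rate: float = 0.05) -> str:
        """Crude stand-in for OCR degradation: randomly substitute characters
        with visually confusable ones. The paper instead derives noise from
        real scanning/OCR pipelines over three clean corpora."""
        confusions = {"l": "1", "o": "0", "e": "c", "m": "rn", "i": "l"}
        return "".join(
            confusions[ch] if ch in confusions and random.random() < rate else ch
            for ch in text
        )

    sentence = "We apply a neural network to relation classification."
    marked = mark_entities(sentence, "neural network", "relation classification")
    noisy = add_ocr_noise(marked)

    for variant in (marked, noisy):
        inputs = tokenizer(variant, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        print(variant, "->", LABELS[int(logits.argmax(dim=-1))])

Note that the loop follows the single-relation-per-prediction strategy the abstract reports as generally stronger; the multi-relation alternative would instead score every concept pair in a sentence in one pass.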

ASJC Scopus subject areas

Cite

Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections. / Jiang, Ming; D’Souza, Jennifer; Auer, Sören et al.
In: International Journal on Digital Libraries, Vol. 23, No. 2, 06.2022, p. 197-215.

Jiang M, D’Souza J, Auer S, Downie JS. Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections. International Journal on Digital Libraries. 2022 Jun;23(2):197-215. Epub 2021 Nov 2. doi: 10.1007/s00799-021-00313-y, 10.48550/arXiv.2305.02291
Jiang, Ming ; D’Souza, Jennifer ; Auer, Sören et al. / Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections. In: International Journal on Digital Libraries. 2022 ; Vol. 23, No. 2. pp. 197-215.
BibTeX
@article{1d2b034695c143098a7bedb4e22de255,
title = "Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections",
abstract = "The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To meet these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been widely explored for automatic relation classification. Despite significant progress, most of them were evaluated under different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, ignoring the digitization context of early scholarly publications, which were machine-scanned and processed with optical character recognition (OCR). Such texts may contain OCR noise, which in turn creates uncertainty about existing classifiers{\textquoteright} performance. To address these limitations, we first created OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models, focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR-noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant for identifying scientific relations. The strategy of predicting a single relation at a time generally outperforms the one that identifies multiple relations simultaneously. The optimal classifier{\textquoteright}s performance can decline by around 10% to 20% in F-score on the noisy corpora. The insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.",
keywords = "Digital library, Information extraction, Knowledge graphs, Neural machine learning, Scholarly text mining, Semantic relation classification",
author = "Ming Jiang and Jennifer D{\textquoteright}Souza and S{\"o}ren Auer and Downie, {J. Stephen}",
note = "Funding Information: This material is based upon work supported by the National Science Foundation under Grant No. OAC 1939929 and by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) ",
year = "2022",
month = jun,
doi = "10.1007/s00799-021-00313-y",
language = "English",
volume = "23",
pages = "197--215",
number = "2",
journal = "International Journal on Digital Libraries",
issn = "1432-5012",

}

RIS

TY - JOUR

T1 - Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections

AU - Jiang, Ming

AU - D’Souza, Jennifer

AU - Auer, Sören

AU - Downie, J. Stephen

N1 - Funding Information: This material is based upon work supported by the National Science Foundation under Grant No. OAC 1939929 and by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536)

PY - 2022/6

Y1 - 2022/6

AB - The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To meet these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been widely explored for automatic relation classification. Despite significant progress, most of them were evaluated under different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, ignoring the digitization context of early scholarly publications, which were machine-scanned and processed with optical character recognition (OCR). Such texts may contain OCR noise, which in turn creates uncertainty about existing classifiers’ performance. To address these limitations, we first created OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models, focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR-noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant for identifying scientific relations. The strategy of predicting a single relation at a time generally outperforms the one that identifies multiple relations simultaneously. The optimal classifier’s performance can decline by around 10% to 20% in F-score on the noisy corpora. The insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.

KW - Digital library

KW - Information extraction

KW - Knowledge graphs

KW - Neural machine learning

KW - Scholarly text mining

KW - Semantic relation classification

UR - http://www.scopus.com/inward/record.url?scp=85118454711&partnerID=8YFLogxK

U2 - 10.1007/s00799-021-00313-y

DO - 10.1007/s00799-021-00313-y

M3 - Article

AN - SCOPUS:85118454711

VL - 23

SP - 197

EP - 215

JO - International Journal on Digital Libraries

JF - International Journal on Digital Libraries

SN - 1432-5012

IS - 2

ER -
