Details
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | WWW '24 |
| Subtitle | Proceedings of the ACM Web Conference 2024 |
| Pages | 4534-4543 |
| Number of pages | 10 |
| ISBN (electronic) | 9798400701719 |
| Publication status | Published - 13 May 2024 |
| Event | 33rd ACM Web Conference, WWW 2024 - Singapore, Singapore. Duration: 13 May 2024 → 17 May 2024 |
Abstract
Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works focused on human annotation of rationales to train and extract supporting evidence for model interpretability. However, such annotations are usually expensive, require considerable effort, and are not always available in real-time situations of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained on a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, but its performance is slightly lower (~3-6%) than that of a model trained on human-annotated rationale data, and (b) an active learning setup can help reduce the burden of manual annotation and maintain a trade-off between performance and data size.
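The abstract describes two components: prompting an LLM for rationale annotations in a few-shot setup, and an active-learning loop that trains an interpretable classifier from a small annotated pool. The Python sketch below is purely illustrative and is not the authors' implementation; the prompt wording, the toy examples, the `query_llm` stub, and the uncertainty-sampling criterion are all assumptions.

```python
# Illustrative sketch only -- not the implementation from the paper.
# Assumes a hypothetical query_llm() wrapper around a chat-completion API
# and plain uncertainty sampling for the active-learning selection step.

from typing import List

FEW_SHOT_EXAMPLES = [
    # (tweet, humanitarian category, rationale tokens) -- toy examples, not real data
    ("Bridge on 5th Ave collapsed, several people trapped",
     "infrastructure_damage", ["Bridge", "collapsed", "trapped"]),
    ("Donate blood at the central clinic for earthquake victims",
     "donation_request", ["Donate", "blood", "earthquake", "victims"]),
]

def build_rationale_prompt(tweet: str) -> str:
    """Compose a few-shot prompt asking the LLM to mark rationale words in a tweet."""
    lines = ["Mark the words in the tweet that justify its crisis category."]
    for text, label, rationale in FEW_SHOT_EXAMPLES:
        lines.append(f"Tweet: {text}\nCategory: {label}\nRationale: {', '.join(rationale)}")
    lines.append(f"Tweet: {tweet}\nCategory:")
    return "\n\n".join(lines)

def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat-completion endpoint (e.g. ChatGPT)."""
    raise NotImplementedError("plug in an LLM client here")

def select_for_annotation(probs: List[List[float]], k: int = 10) -> List[int]:
    """Uncertainty sampling: pick the k tweets the current classifier is least sure about."""
    confidences = [max(p) for p in probs]            # probability of the top class
    ranked = sorted(range(len(probs)), key=lambda i: confidences[i])
    return ranked[:k]                                # lowest-confidence tweets first

if __name__ == "__main__":
    print(build_rationale_prompt("Power lines down across the east side after the storm"))
```

In an active-learning round one would classify the unlabeled pool, pass the predicted class probabilities to `select_for_annotation`, send the selected tweets through `build_rationale_prompt` and `query_llm` (or to human annotators), and retrain on the enlarged labeled set.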
ASJC Scopus subject areas
- Computer Science (all)
- Computer Networks and Communications
- Computer Science (all)
- Software
Cite this
- Standard
- Harvard
- Apa
- Vancouver
- BibTex
- RIS
WWW '24: Proceedings of the ACM Web Conference 2024. 2024. pp. 4534-4543.
Publication: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Human vs ChatGPT
T2 - 33rd ACM Web Conference, WWW 2024
AU - Nguyen, Thi Huyen
AU - Rudra, Koustav
N1 - Publisher Copyright: © 2024 ACM.
PY - 2024/5/13
Y1 - 2024/5/13
N2 - Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works focused on human annotation of rationales to train and extract supporting evidence for model interpretability. However, such annotations are usually expensive, require considerable effort, and are not always available in real-time situations of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained on a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, but its performance is slightly lower (~3-6%) than that of a model trained on human-annotated rationale data, and (b) an active learning setup can help reduce the burden of manual annotation and maintain a trade-off between performance and data size.
AB - Recent studies have highlighted the vital role of microblogging platforms, such as Twitter, in crisis situations. Various machine-learning approaches have been proposed to identify and prioritize crucial information from different humanitarian categories for preparation and rescue purposes. In the crisis domain, the explanation of models' output decisions is gaining significant research momentum. Some previous works focused on human annotation of rationales to train and extract supporting evidence for model interpretability. However, such annotations are usually expensive, require considerable effort, and are not always available in real-time situations of a new crisis event. In this paper, we investigate recent advances in large language models (LLMs) as data annotators for informal tweet text. We perform a detailed qualitative and quantitative evaluation of ChatGPT rationale annotations in a few-shot setup. ChatGPT annotations are quite close to human annotations but less precise. Further, we propose an active learning-based interpretable classification model trained on a small set of annotated data. Our experiments show that (a) ChatGPT has the potential to extract rationales for crisis tweet classification tasks, but its performance is slightly lower (~3-6%) than that of a model trained on human-annotated rationale data, and (b) an active learning setup can help reduce the burden of manual annotation and maintain a trade-off between performance and data size.
KW - active learning
KW - crisis events
KW - interpretability
KW - large language model
KW - semi-supervised learning
KW - twitter
UR - http://www.scopus.com/inward/record.url?scp=85194082785&partnerID=8YFLogxK
U2 - 10.1145/3589334.3648141
DO - 10.1145/3589334.3648141
M3 - Conference contribution
AN - SCOPUS:85194082785
SP - 4534
EP - 4543
BT - WWW '24
Y2 - 13 May 2024 through 17 May 2024
ER -