Details
| Original language | English |
| --- | --- |
| Pages | 9294-9313 |
| Publication status | Published - Aug 2024 |
| Event | Findings of the Association for Computational Linguistics ACL 2024 - Bangkok, Thailand |
| Duration | 11 Aug 2024 → 16 Aug 2024 |
| Internet address | https://2024.aclweb.org/ |
Conference
| Conference | Findings of the Association for Computational Linguistics ACL 2024 |
| --- | --- |
| Country/Territory | Thailand |
| City | Bangkok |
| Period | 11 Aug 2024 → 16 Aug 2024 |
| Internet address | https://2024.aclweb.org/ |
Abstract

Dialects introduce syntactic and lexical variations in language that occur in regional or social groups. Most NLP methods are not sensitive to such variations. This may lead to unfair behavior of the methods, conveying negative bias towards dialect speakers. While previous work has studied dialect-related fairness for aspects like hate speech, other aspects of biased language, such as lewdness, remain fully unexplored. To fill this gap, we investigate performance disparities between dialects in the detection of five aspects of biased language and how to mitigate them. To alleviate bias, we present a multitask learning approach that models dialect language as an auxiliary task to incorporate syntactic and lexical variations. In our experiments with African-American English dialect, we provide empirical evidence that complementing common learning approaches with dialect modeling improves their fairness. Furthermore, the results suggest that multitask learning achieves state-of-the-art performance and helps to detect properties of biased language more reliably.
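The multitask idea named in the abstract can be illustrated with a short sketch: a shared text encoder feeds two classification heads, one for the main biased-language detection task and one for auxiliary dialect classification, trained with a joint loss. The encoder name, label counts, and auxiliary loss weight below are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal multitask sketch: shared encoder, main bias-detection head,
# auxiliary dialect-classification head. All names, sizes, and the
# aux_weight value are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultitaskBiasModel(nn.Module):
    def __init__(self, encoder_name="roberta-base",
                 num_bias_labels=2, num_dialect_labels=2, aux_weight=0.5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.bias_head = nn.Linear(hidden, num_bias_labels)        # main task
        self.dialect_head = nn.Linear(hidden, num_dialect_labels)  # auxiliary task
        self.aux_weight = aux_weight
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, input_ids, attention_mask,
                bias_labels=None, dialect_labels=None):
        # Use the first-token representation as a shared sentence embedding.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]
        bias_logits = self.bias_head(pooled)
        dialect_logits = self.dialect_head(pooled)
        loss = None
        if bias_labels is not None and dialect_labels is not None:
            # Joint objective: main loss plus down-weighted auxiliary loss,
            # so dialect modeling shapes the shared encoder representations.
            loss = (self.loss_fn(bias_logits, bias_labels)
                    + self.aux_weight * self.loss_fn(dialect_logits, dialect_labels))
        return loss, bias_logits, dialect_logits

# Usage sketch with hypothetical labels:
tok = AutoTokenizer.from_pretrained("roberta-base")
model = MultitaskBiasModel()
batch = tok(["example sentence"], return_tensors="pt", padding=True)
loss, bias_logits, _ = model(batch["input_ids"], batch["attention_mask"],
                             bias_labels=torch.tensor([0]),
                             dialect_labels=torch.tensor([1]))
```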
Cite this
Spliethöver, M., Menon, S. N., & Wachsmuth, H. (2024). Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness. 9294-9313. Paper presented at Findings of the Association for Computational Linguistics ACL 2024, Bangkok, Thailand.
Research output: Contribution to conference › Paper › Research › peer review
TY - CONF
T1 - Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness.
AU - Spliethöver, Maximilian
AU - Menon, Sai Nikhil
AU - Wachsmuth, Henning
PY - 2024/8
Y1 - 2024/8
N2 - Dialects introduce syntactic and lexical variations in language that occur in regional or social groups. Most NLP methods are not sensitive to such variations. This may lead to unfair behavior of the methods, conveying negative bias towards dialect speakers. While previous work has studied dialect-related fairness for aspects like hate speech, other aspects of biased language, such as lewdness, remain fully unexplored. To fill this gap, we investigate performance disparities between dialects in the detection of five aspects of biased language and how to mitigate them. To alleviate bias, we present a multitask learning approach that models dialect language as an auxiliary task to incorporate syntactic and lexical variations. In our experiments with African-American English dialect, we provide empirical evidence that complementing common learning approaches with dialect modeling improves their fairness. Furthermore, the results suggest that multitask learning achieves state-of-the-art performance and helps to detect properties of biased language more reliably.
AB - Dialects introduce syntactic and lexical variations in language that occur in regional or social groups. Most NLP methods are not sensitive to such variations. This may lead to unfair behavior of the methods, conveying negative bias towards dialect speakers. While previous work has studied dialect-related fairness for aspects like hate speech, other aspects of biased language, such as lewdness, remain fully unexplored. To fill this gap, we investigate performance disparities between dialects in the detection of five aspects of biased language and how to mitigate them. To alleviate bias, we present a multitask learning approach that models dialect language as an auxiliary task to incorporate syntactic and lexical variations. In our experiments with African-American English dialect, we provide empirical evidence that complementing common learning approaches with dialect modeling improves their fairness. Furthermore, the results suggest that multitask learning achieves state-of-the-art performance and helps to detect properties of biased language more reliably.
M3 - Paper
SP - 9294
EP - 9313
T2 - Findings of the Association for Computational Linguistics ACL 2024
Y2 - 11 August 2024 through 16 August 2024
ER -