Details
Original language | English |
---|---|
Journal | Journal of Applied Philosophy |
Early online date | 5 Feb 2025 |
Publication status | E-pub ahead of print - 5 Feb 2025 |
Abstract
Discrimination is usually thought of in terms of socially salient groups, such as race or gender. Some scholars argue that the rise of algorithmic decision-making poses a challenge to this notion. Algorithms are not bound by a social view of the world. Therefore, they may not only inherit pre-existing social biases and injustices but may also discriminate based on entirely new categories that have little or no meaning to humans at all, such as ‘being born on a Tuesday’. Should this prospect change how we theorize about discrimination, and should we protect these algorithmic groups, as some have suggested? I argue that the phenomenon is adequately described as ‘discrimination’ when a group is systematically disadvantaged. At present, we lack information about whether any algorithmic group meets this criterion, so it is difficult to protect such groups. Instead, we should prevent algorithms from disproportionately disadvantaging certain individuals, and I outline strategies for doing so.
ASJC Scopus subject areas
- Arts and Humanities(all)
- Philosophy
Cite this
In: Journal of Applied Philosophy, 05.02.2025.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Emergent Discrimination
T2 - Should We Protect Algorithmic Groups?
AU - Zeiser, Jannik
N1 - Publisher Copyright: © 2025 The Author(s). Journal of Applied Philosophy published by John Wiley & Sons Ltd on behalf of Society for Applied Philosophy.
PY - 2025/2/5
Y1 - 2025/2/5
UR - http://www.scopus.com/inward/record.url?scp=85216974172&partnerID=8YFLogxK
U2 - 10.1111/japp.12793
DO - 10.1111/japp.12793
M3 - Article
AN - SCOPUS:85216974172
JO - Journal of Applied Philosophy
JF - Journal of Applied Philosophy
SN - 0264-3758
ER -