Details
| Original language | English |
| --- | --- |
| Title of host publication | Database and Expert Systems Applications |
| Subtitle of host publication | 30th International Conference, DEXA 2019, Linz, Austria, August 26–29, 2019, Proceedings |
| Editors | Sven Hartmann, Josef Küng, Gabriele Anderst-Kotsis, Ismail Khalil, Sharma Chakravarthy, A Min Tjoa |
| Pages | 261-276 |
| Number of pages | 16 |
| Volume | I |
| ISBN (electronic) | 9783030276157 |
| Publication status | Published - 3 Aug 2019 |
| Event | 30th International Conference on Database and Expert Systems Applications, DEXA 2019, Linz, Austria, 26 Aug 2019 – 29 Aug 2019 |
Publication series
| Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| --- | --- |
| Volume | 11706 |
| ISSN (print) | 0302-9743 |
| ISSN (electronic) | 1611-3349 |
Abstract
The widespread use of automated data-driven decision support systems has raised concerns regarding the accountability and fairness of the employed models in the absence of human supervision. Existing fairness-aware approaches tackle fairness as a batch learning problem and aim at learning a fair model which can then be applied to future instances of the problem. In many applications, however, the data arrives sequentially and its characteristics might evolve over time. In such a setting, it is counter-intuitive to “fix” a (fair) model over the data stream, as changes in the data might incur changes in the underlying model, thereby affecting its fairness. In this work, we propose fairness-enhancing interventions that modify the input data so that the outcome of any stream classifier applied to that data will be fair. Experiments on real and synthetic data show that our approach achieves good predictive performance and low discrimination scores over the course of the stream.
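To illustrate the pre-processing idea sketched in the abstract, the following is a minimal, hypothetical per-chunk intervention on a stream: within each chunk, labels are swapped pairwise (a "massaging"-style heuristic) until the chunk no longer favors the privileged group, before the chunk is handed to any incremental classifier. The function names, the two-group binary-label setup, and the use of statistical parity difference as the discrimination measure are assumptions for this sketch, not the paper's exact algorithm.

```python
# Illustrative sketch only: per-chunk label "massaging" for a data stream.
# Each chunk is a list of (group, label) pairs; features are omitted for brevity.

def statistical_parity_difference(chunk):
    """P(y=1 | privileged) - P(y=1 | protected); positive values favor 'priv'."""
    priv = [y for g, y in chunk if g == "priv"]
    prot = [y for g, y in chunk if g == "prot"]
    p_priv = sum(priv) / len(priv) if priv else 0.0
    p_prot = sum(prot) / len(prot) if prot else 0.0
    return p_priv - p_prot

def massage_chunk(chunk):
    """Swap labels pairwise (demote one privileged positive, promote one
    protected negative) until the chunk no longer favors the privileged group."""
    chunk = list(chunk)
    while statistical_parity_difference(chunk) > 0:
        i = next((k for k, (g, y) in enumerate(chunk) if g == "prot" and y == 0), None)
        j = next((k for k, (g, y) in enumerate(chunk) if g == "priv" and y == 1), None)
        if i is None or j is None:  # no eligible pair left to swap
            break
        chunk[i] = ("prot", 1)
        chunk[j] = ("priv", 0)
    return chunk
```

In a stream setting, each arriving chunk would be massaged this way and then fed to an incremental learner, so the intervention is classifier-agnostic, matching the abstract's claim that the output of *any* stream classifier on the modified data becomes fairer.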
Keywords
- Data mining, Fairness-aware learning, Stream classification
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
Cite this
Database and Expert Systems Applications: 30th International Conference, DEXA 2019, Linz, Austria, August 26–29, 2019, Proceedings. ed. / Sven Hartmann; Josef Küng; Gabriele Anderst-Kotsis; Ismail Khalil; Sharma Chakravarthy; A Min Tjoa. Vol. I. 1. ed. 2019. p. 261-276 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11706).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Fairness-Enhancing Interventions in Stream Classification
AU - Iosifidis, Vasileios
AU - Tran, Thi Ngoc Han
AU - Ntoutsi, Eirini
N1 - Funding information: The work is inspired by the German Research Foundation (DFG) project OSCAR (Opinion Stream Classification with Ensembles and Active leaRners), for which the last author is Co-Principal Investigator.
PY - 2019/8/3
Y1 - 2019/8/3
N2 - The widespread use of automated data-driven decision support systems has raised concerns regarding the accountability and fairness of the employed models in the absence of human supervision. Existing fairness-aware approaches tackle fairness as a batch learning problem and aim at learning a fair model which can then be applied to future instances of the problem. In many applications, however, the data arrives sequentially and its characteristics might evolve over time. In such a setting, it is counter-intuitive to “fix” a (fair) model over the data stream, as changes in the data might incur changes in the underlying model, thereby affecting its fairness. In this work, we propose fairness-enhancing interventions that modify the input data so that the outcome of any stream classifier applied to that data will be fair. Experiments on real and synthetic data show that our approach achieves good predictive performance and low discrimination scores over the course of the stream.
AB - The widespread use of automated data-driven decision support systems has raised concerns regarding the accountability and fairness of the employed models in the absence of human supervision. Existing fairness-aware approaches tackle fairness as a batch learning problem and aim at learning a fair model which can then be applied to future instances of the problem. In many applications, however, the data arrives sequentially and its characteristics might evolve over time. In such a setting, it is counter-intuitive to “fix” a (fair) model over the data stream, as changes in the data might incur changes in the underlying model, thereby affecting its fairness. In this work, we propose fairness-enhancing interventions that modify the input data so that the outcome of any stream classifier applied to that data will be fair. Experiments on real and synthetic data show that our approach achieves good predictive performance and low discrimination scores over the course of the stream.
KW - Data mining
KW - Fairness-aware learning
KW - Stream classification
UR - http://www.scopus.com/inward/record.url?scp=85077112018&partnerID=8YFLogxK
U2 - 10.48550/arXiv.1907.07223
DO - 10.48550/arXiv.1907.07223
M3 - Conference contribution
AN - SCOPUS:85077112018
SN - 9783030276140
VL - I
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 261
EP - 276
BT - Database and Expert Systems Applications
A2 - Hartmann, Sven
A2 - Küng, Josef
A2 - Anderst-Kotsis, Gabriele
A2 - Khalil, Ismail
A2 - Chakravarthy, Sharma
A2 - Tjoa, A Min
T2 - 30th International Conference on Database and Expert Systems Applications, DEXA 2019
Y2 - 26 August 2019 through 29 August 2019
ER -