Aligning AI Systems with Human Values

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed


Details

Original language: English
Title of host publication: Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 442-444
Number of pages: 3
ISBN (electronic): 979-8-3315-3834-7
ISBN (print): 979-8-3315-3835-4
Publication status: Published - 1 Sept. 2025
Event: 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025 - Valencia, Spain
Duration: 1 Sept. 2025 - 5 Sept. 2025

Publication series

Name: IEEE International Requirements Engineering Conference Workshops
ISSN (print): 2770-6826
ISSN (electronic): 2770-6834

Abstract

Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust.

Our approach centers on two key principles: 1) identifying human values is essential for developing AI systems that align with user expectations and needs, which involves prioritizing these values and resolving conflicts to ensure AI systems are ethically sound; and 2) employing large language models (LLMs) to uncover and analyze these values allows us to leverage broad knowledge bases.

Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate AI personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure that underrepresented voices are not overlooked during the early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.
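
The record contains no code, but the elicitation step the abstract describes (using an LLM to tag values in stakeholder feedback and flag conflicts between them) can be sketched. The following is a minimal illustration, not the authors' implementation: it assumes the OpenAI Python client (v1.x), and the model name, prompt wording, example value labels, and the surface_value_conflicts helper are all illustrative assumptions.

import json
from openai import OpenAI  # assumption: openai Python client >= 1.0

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompt; the paper does not specify one.
PROMPT = """You are assisting a requirements engineer.
For each stakeholder statement below, list the human values it expresses
(e.g. privacy, fairness, sustainability), and flag any pair of statements
whose values conflict. Respond as JSON:
{"values": {"<id>": ["<value>", ...]}, "conflicts": [["<id>", "<id>", "<reason>"], ...]}

Statements:
"""

def surface_value_conflicts(statements: dict[str, str]) -> dict:
    """Ask the LLM to tag values per statement and flag value conflicts."""
    numbered = "\n".join(f"{sid}: {text}" for sid, text in statements.items())
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        response_format={"type": "json_object"},  # request valid JSON output
        messages=[{"role": "user", "content": PROMPT + numbered}],
    )
    return json.loads(response.choices[0].message.content)

feedback = {
    "s1": "The app should always pick the cheapest courier, whoever that is.",
    "s2": "Deliveries should only go to local, low-emission couriers.",
}
print(surface_value_conflicts(feedback))  # e.g. flags a cost-vs-sustainability conflict

In this framing the JSON output would feed a human-led prioritization and conflict-resolution step, consistent with the abstract's position that LLMs support rather than replace human judgment.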


Cite

Aligning AI Systems with Human Values. / Herrmann, Marc; Mircea, Michael; Schneider, Kurt.
Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025. Institute of Electrical and Electronics Engineers Inc., 2025. pp. 442-444 (IEEE International Requirements Engineering Conference Workshops).


Herrmann, M, Mircea, M & Schneider, K 2025, Aligning AI Systems with Human Values. in Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025. IEEE International Requirements Engineering Conference Workshops, Institute of Electrical and Electronics Engineers Inc., pp. 442-444, 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025, Valencia, Spain, 1 Sept. 2025. https://doi.org/10.1109/REW66121.2025.00065
Herrmann, M., Mircea, M., & Schneider, K. (2025). Aligning AI Systems with Human Values. In Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025 (pp. 442-444). (IEEE International Requirements Engineering Conference Workshops). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/REW66121.2025.00065
Herrmann M, Mircea M, Schneider K. Aligning AI Systems with Human Values. In: Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025. Institute of Electrical and Electronics Engineers Inc. 2025. pp. 442-444. (IEEE International Requirements Engineering Conference Workshops). doi: 10.1109/REW66121.2025.00065
Herrmann, Marc; Mircea, Michael; Schneider, Kurt. / Aligning AI Systems with Human Values. Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025. Institute of Electrical and Electronics Engineers Inc., 2025. pp. 442-444 (IEEE International Requirements Engineering Conference Workshops).
BibTeX
@inproceedings{464c1909137d4b10969df5e3f7992097,
title = "Aligning AI Systems with Human Values",
abstract = "Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust. Our approach centers on two key principles: 1) Identifying human values is essential for developing AI systems that align with user expectations and needs. This involves prioritizing these values and resolving conflicts to ensure AI systems are ethically sound, and 2) Employing large language models to uncover and analyze these values allows us to leverage broad knowledge bases. Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate (AI-)personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure underrepresented voices are not overlooked during early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.",
keywords = "AI systems, human centered design, Human values, large language models, requirements elicitation, user needs",
author = "Marc Herrmann and Michael Mircea and Kurt Schneider",
note = "Publisher Copyright: {\textcopyright} 2025 IEEE.; 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025; Conference date: 01-09-2025 Through 05-09-2025",
year = "2025",
month = sep,
day = "1",
doi = "10.1109/REW66121.2025.00065",
language = "English",
isbn = "979-8-3315-3835-4",
series = "IEEE International Requirements Engineering Conference Workshops",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "442--444",
booktitle = "Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025",
address = "United States",
}

RIS

TY - GEN

T1 - Aligning AI Systems with Human Values

AU - Herrmann, Marc

AU - Mircea, Michael

AU - Schneider, Kurt

N1 - Publisher Copyright: © 2025 IEEE.

PY - 2025/9/1

Y1 - 2025/9/1

N2 - Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust. Our approach centers on two key principles: 1) Identifying human values is essential for developing AI systems that align with user expectations and needs. This involves prioritizing these values and resolving conflicts to ensure AI systems are ethically sound, and 2) Employing large language models to uncover and analyze these values allows us to leverage broad knowledge bases. Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate (AI-)personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure underrepresented voices are not overlooked during early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.

AB - Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust. Our approach centers on two key principles: 1) Identifying human values is essential for developing AI systems that align with user expectations and needs. This involves prioritizing these values and resolving conflicts to ensure AI systems are ethically sound, and 2) Employing large language models to uncover and analyze these values allows us to leverage broad knowledge bases. Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate (AI-)personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure underrepresented voices are not overlooked during early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.

KW - AI systems

KW - human centered design

KW - Human values

KW - large language models

KW - requirements elicitation

KW - user needs

UR - http://www.scopus.com/inward/record.url?scp=105020992636&partnerID=8YFLogxK

U2 - 10.1109/REW66121.2025.00065

DO - 10.1109/REW66121.2025.00065

M3 - Conference contribution

AN - SCOPUS:105020992636

SN - 979-8-3315-3835-4

T3 - IEEE International Requirements Engineering Conference Workshops

SP - 442

EP - 444

BT - Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025

Y2 - 1 September 2025 through 5 September 2025

ER -
