Details
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 442-444 |
| Number of pages | 3 |
| ISBN (electronic) | 9798331538347 |
| ISBN (print) | 979-8-3315-3835-4 |
| Publication status | Published - 1 Sep 2025 |
| Event | 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025 - Valencia, Spain. Duration: 1 Sep 2025 → 5 Sep 2025 |
Publication series
| Name | IEEE International Requirements Engineering Conference Workshops |
|---|---|
| ISSN (print) | 2770-6826 |
| ISSN (electronic) | 2770-6834 |
Abstract
Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust.

Our approach centers on two key principles: 1) Identifying human values is essential for developing AI systems that align with user expectations and needs. This involves prioritizing these values and resolving conflicts to ensure AI systems are ethically sound, and 2) Employing large language models to uncover and analyze these values allows us to leverage broad knowledge bases.

Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate (AI-)personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure underrepresented voices are not overlooked during early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.
ASJC Scopus subject areas
- Computer Science (all)
- Artificial Intelligence
- Computer Science (all)
- Software
- Engineering (all)
- Safety, Risk, Reliability and Quality
- Mathematics (all)
- Modelling and Simulation
Cite this
Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025. Institute of Electrical and Electronics Engineers Inc., 2025. pp. 442-444 (IEEE International Requirements Engineering Conference Workshops).
Publication: Contribution to book/report/collection/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Aligning AI Systems with Human Values
AU - Herrmann, Marc
AU - Mircea, Michael
AU - Schneider, Kurt
N1 - Publisher Copyright: © 2025 IEEE.
PY - 2025/9/1
Y1 - 2025/9/1
N2 - Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust. Our approach centers on two key principles: 1) Identifying human values is essential for developing AI systems that align with user expectations and needs. This involves prioritizing these values and resolving conflicts to ensure AI systems are ethically sound, and 2) Employing large language models to uncover and analyze these values allows us to leverage broad knowledge bases. Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate (AI-)personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure underrepresented voices are not overlooked during early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.
AB - Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust. Our approach centers on two key principles: 1) Identifying human values is essential for developing AI systems that align with user expectations and needs. This involves prioritizing these values and resolving conflicts to ensure AI systems are ethically sound, and 2) Employing large language models to uncover and analyze these values allows us to leverage broad knowledge bases. Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate (AI-)personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure underrepresented voices are not overlooked during early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.
KW - AI systems
KW - human centered design
KW - Human values
KW - large language models
KW - requirements elicitation
KW - user needs
UR - http://www.scopus.com/inward/record.url?scp=105020992636&partnerID=8YFLogxK
U2 - 10.1109/REW66121.2025.00065
DO - 10.1109/REW66121.2025.00065
M3 - Conference contribution
AN - SCOPUS:105020992636
SN - 979-8-3315-3835-4
T3 - IEEE International Requirements Engineering Conference Workshops
SP - 442
EP - 444
BT - Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025
Y2 - 1 September 2025 through 5 September 2025
ER -