Details
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 442-444 |
| Number of pages | 3 |
| ISBN (electronic) | 979-8-3315-3834-7 |
| ISBN (print) | 979-8-3315-3835-4 |
| Publication status | Published - 1 Sept 2025 |
| Event | 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025, Valencia, Spain, 1–5 Sept 2025 |
Publication series
| Name | IEEE International Requirements Engineering Conference Workshops |
|---|---|
| ISSN (print) | 2770-6826 |
| ISSN (electronic) | 2770-6834 |
Abstract
Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust.

Our approach centers on two key principles: 1) identifying human values is essential for developing AI systems that align with user expectations and needs; this involves prioritizing those values and resolving conflicts to ensure AI systems are ethically sound; and 2) employing large language models (LLMs) to uncover and analyze these values allows us to leverage broad knowledge bases.

Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate (AI-)personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure underrepresented voices are not overlooked during the early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.
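The abstract stays at the level of principles, and this three-page workshop contribution does not spell out an implementation here. Purely as an illustration of what the LLM-assisted value elicitation it describes might look like in practice, below is a minimal Python sketch. Everything in it (the `call_llm` stub, the value list, the prompt wording, and the JSON schema) is a hypothetical assumption, not taken from the paper:

```python
import json

# Candidate values to probe for. Sustainability, fairness, and privacy are
# named in the abstract; the rest of the list is an assumed placeholder
# taxonomy and could be swapped for a richer one.
VALUES = ["sustainability", "fairness", "privacy", "inclusiveness", "trust"]

PROMPT_TEMPLATE = """\
You are assisting with requirements elicitation. Given the stakeholder
statement below, identify which of these human values it touches on
({values}), quote the supporting phrase, and note any tension between
the values you find.

Statement: "{statement}"

Respond only with JSON of the form:
{{"values": [...], "evidence": [...], "conflicts": [...]}}"""


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call.

    Returns a canned response so the sketch runs end to end; replace the
    body with a call to whatever model you use.
    """
    return json.dumps({
        "values": ["privacy"],
        "evidence": ["don't want the app tracking everything I do"],
        "conflicts": [["privacy", "personalization"]],
    })


def elicit_values(statement: str) -> dict:
    """Map one piece of stakeholder feedback onto candidate human values."""
    prompt = PROMPT_TEMPLATE.format(values=", ".join(VALUES), statement=statement)
    # The model's output is a suggestion for a human analyst to review,
    # not an automated requirements decision.
    return json.loads(call_llm(prompt))


if __name__ == "__main__":
    feedback = ("I want the recommendations to be accurate, but I don't "
                "want the app tracking everything I do to get there.")
    print(json.dumps(elicit_values(feedback), indent=2))
```

In the workflow the abstract implies, output like this would feed into human review and prioritization, keeping the model in a supporting role for elicitation rather than making it the decision-maker.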
Keywords
- AI systems
- human-centered design
- Human values
- large language models
- requirements elicitation
- user needs
ASJC Scopus subject areas
- Computer Science (all)
- Artificial Intelligence
- Software
- Engineering (all)
- Safety, Risk, Reliability and Quality
- Mathematics (all)
- Modelling and Simulation
Cite this
Herrmann, M., Mircea, M., & Schneider, K. (2025). Aligning AI Systems with Human Values. In Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025 (pp. 442-444). (IEEE International Requirements Engineering Conference Workshops). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/REW66121.2025.00065
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Aligning AI Systems with Human Values
AU - Herrmann, Marc
AU - Mircea, Michael
AU - Schneider, Kurt
N1 - Publisher Copyright: © 2025 IEEE.
PY - 2025/9/1
Y1 - 2025/9/1
N2 - Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust. Our approach centers on two key principles: 1) identifying human values is essential for developing AI systems that align with user expectations and needs; this involves prioritizing those values and resolving conflicts to ensure AI systems are ethically sound; and 2) employing large language models (LLMs) to uncover and analyze these values allows us to leverage broad knowledge bases. Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate (AI-)personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure underrepresented voices are not overlooked during the early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.
AB - Human values shape what stakeholders need, expect, and trust from AI systems. These values, such as sustainability, fairness, or privacy, often differ between groups and can conflict in ways that traditional requirements engineering struggles to address. When systems are designed without recognizing these differences, the result can be miscommunication, misalignment, and loss of trust. Our approach centers on two key principles: 1) identifying human values is essential for developing AI systems that align with user expectations and needs; this involves prioritizing those values and resolving conflicts to ensure AI systems are ethically sound; and 2) employing large language models (LLMs) to uncover and analyze these values allows us to leverage broad knowledge bases. Rather than replacing human judgment, LLMs serve as tools to scale and support value elicitation and analysis. For example, they can be used to generate (AI-)personas, extract concerns from interviews, or surface value conflicts in stakeholder feedback. This helps ensure underrepresented voices are not overlooked during the early stages of design. Our goal is to enable AI systems that are not only technically robust but also socially responsible, trusted, and inclusive.
KW - AI systems
KW - human-centered design
KW - Human values
KW - large language models
KW - requirements elicitation
KW - user needs
UR - http://www.scopus.com/inward/record.url?scp=105020992636&partnerID=8YFLogxK
U2 - 10.1109/REW66121.2025.00065
DO - 10.1109/REW66121.2025.00065
M3 - Conference contribution
AN - SCOPUS:105020992636
SN - 979-8-3315-3835-4
T3 - IEEE International Requirements Engineering Conference Workshops
SP - 442
EP - 444
BT - Proceedings - 2025 IEEE 33rd International Requirements Engineering Conference Workshops, REW 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 33rd IEEE International Requirements Engineering Conference Workshops, REW 2025
Y2 - 1 September 2025 through 5 September 2025
ER -