Details
| Original language | English |
|---|---|
| Title of host publication | 13th International Conference on Learning Representations, ICLR 2025 |
| Pages | 92036-92065 |
| Number of pages | 30 |
| ISBN (electronic) | 9798331320850 |
| Publication status | Published - 24 Apr 2025 |
| Event | 13th International Conference on Learning Representations, ICLR 2025, Singapore, Singapore. Duration: 24 Apr 2025 → 28 Apr 2025 |
Abstract
Contrastive learning models have demonstrated impressive abilities to capture semantic similarities by aligning representations in the embedding space. However, their performance can be limited by the quality of the training data and its inherent biases. While Preference Optimization (PO) methods such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) have been applied to align generative models with human preferences, their use in contrastive learning has yet to be explored. This paper introduces a novel method for training contrastive learning models using different PO methods to break down complex concepts. Our method systematically aligns model behavior with desired preferences, enhancing performance on the targeted task. In particular, we focus on enhancing model robustness against typographic attacks and inductive biases, commonly seen in contrastive vision-language models like CLIP. Our experiments demonstrate that models trained using PO outperform standard contrastive learning techniques while retaining their ability to handle adversarial challenges and maintain accuracy on other downstream tasks. This makes our method well-suited for tasks requiring fairness, robustness, and alignment with specific preferences. We evaluate our method for tackling typographic attacks on images and explore its ability to disentangle gender concepts and mitigate gender bias, showcasing the versatility of our approach.
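The abstract describes adapting Preference Optimization objectives such as DPO to contrastive vision-language models. Since the record itself contains no implementation details, the following is a minimal, hypothetical sketch of that general idea: a DPO-style loss computed over a CLIP-like model's image-text similarity scores instead of generative log-probabilities. The `encode_image`/`encode_text` interface, the temperature-scaled cosine similarity used as the preference score, and the `beta` hyperparameter are illustrative assumptions, not the paper's published method.

```python
# Hypothetical sketch of a DPO-style preference loss on CLIP similarity scores.
# `model` / `ref_model` are assumed to be CLIP-like encoders exposing
# `encode_image` and `encode_text`; all names here are illustrative.
import torch
import torch.nn.functional as F

def clip_scores(encoder, images, texts, temperature=0.07):
    """Temperature-scaled cosine similarity for matched (image, text) pairs."""
    img = F.normalize(encoder.encode_image(images), dim=-1)
    txt = F.normalize(encoder.encode_text(texts), dim=-1)
    return (img * txt).sum(dim=-1) / temperature  # one score per pair

def dpo_preference_loss(model, ref_model, images,
                        preferred_texts, rejected_texts, beta=0.1):
    """DPO-style objective: -log sigmoid(beta * (margin - reference margin)),
    where the margin is the score gap between preferred and rejected captions."""
    with torch.no_grad():  # the pretrained reference model stays frozen
        ref_margin = (clip_scores(ref_model, images, preferred_texts)
                      - clip_scores(ref_model, images, rejected_texts))
    margin = (clip_scores(model, images, preferred_texts)
              - clip_scores(model, images, rejected_texts))
    return -F.logsigmoid(beta * (margin - ref_margin)).mean()
```

For the typographic-attack setting the abstract mentions, `preferred_texts` would describe the true image content and `rejected_texts` the overlaid attack text, so minimizing the loss widens the model's preference margin relative to the frozen pretrained reference.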
ASJC Scopus Subject Areas
- Arts and Humanities (all)
  - Language and Linguistics
- Computer Science (all)
  - Computer Science Applications
- Social Sciences (all)
  - Education
  - Linguistics and Language
Cite
Afzali, A., Khodabandeh, B., JafariNodeh, M., Gottschalk, S., Rasekh, A., & Kazemi, S. (2025). Aligning Visual Contrastive learning models via Preference Optimization. In 13th International Conference on Learning Representations, ICLR 2025 (pp. 92036-92065). https://doi.org/10.48550/arXiv.2411.08923
Publication: Contribution to book/report/collection/conference proceedings › Conference paper › Research › Peer-reviewed
TY - GEN
T1 - Aligning Visual Contrastive learning models via Preference Optimization
AU - Afzali, Amirabbas
AU - Khodabandeh, Borna
AU - JafariNodeh, Mahyar
AU - Gottschalk, Simon
AU - Rasekh, Ali
AU - Kazemi, Sepehr
N1 - Publisher Copyright: © 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.
PY - 2025/4/24
Y1 - 2025/4/24
AB - Contrastive learning models have demonstrated impressive abilities to capture semantic similarities by aligning representations in the embedding space. However, their performance can be limited by the quality of the training data and its inherent biases. While Preference Optimization (PO) methods such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) have been applied to align generative models with human preferences, their use in contrastive learning has yet to be explored. This paper introduces a novel method for training contrastive learning models using different PO methods to break down complex concepts. Our method systematically aligns model behavior with desired preferences, enhancing performance on the targeted task. In particular, we focus on enhancing model robustness against typographic attacks and inductive biases, commonly seen in contrastive vision-language models like CLIP. Our experiments demonstrate that models trained using PO outperform standard contrastive learning techniques while retaining their ability to handle adversarial challenges and maintain accuracy on other downstream tasks. This makes our method well-suited for tasks requiring fairness, robustness, and alignment with specific preferences. We evaluate our method for tackling typographic attacks on images and explore its ability to disentangle gender concepts and mitigate gender bias, showcasing the versatility of our approach.
UR - http://www.scopus.com/inward/record.url?scp=105010273475&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2411.08923
DO - 10.48550/arXiv.2411.08923
M3 - Conference contribution
AN - SCOPUS:105010273475
SP - 92036
EP - 92065
BT - 13th International Conference on Learning Representations, ICLR 2025
T2 - 13th International Conference on Learning Representations, ICLR 2025
Y2 - 24 April 2025 through 28 April 2025
ER -