Aligning Visual Contrastive learning models via Preference Optimization

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Amirabbas Afzali
  • Borna Khodabandeh
  • Mahyar JafariNodeh
  • Simon Gottschalk
  • Ali Rasekh
  • Sepehr Kazemi


External organisations

  • Massachusetts Institute of Technology (MIT)

Details

Original language: English
Title of host publication: 13th International Conference on Learning Representations, ICLR 2025
Pages: 92036-92065
Number of pages: 30
ISBN (electronic): 9798331320850
Publication status: Published - 24 Apr 2025
Event: 13th International Conference on Learning Representations, ICLR 2025 - Singapore
Duration: 24 Apr 2025 - 28 Apr 2025

Abstract

Contrastive learning models have demonstrated impressive abilities to capture semantic similarities by aligning representations in the embedding space. However, their performance can be limited by the quality of the training data and its inherent biases. While Preference Optimization (PO) methods such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) have been applied to align generative models with human preferences, their use in contrastive learning has yet to be explored. This paper introduces a novel method for training contrastive learning models using different PO methods to break down complex concepts. Our method systematically aligns model behavior with desired preferences, enhancing performance on the targeted task. In particular, we focus on enhancing model robustness against typographic attacks and inductive biases, commonly seen in contrastive vision-language models like CLIP. Our experiments demonstrate that models trained using PO outperform standard contrastive learning techniques while retaining their ability to handle adversarial challenges and maintain accuracy on other downstream tasks. This makes our method well-suited for tasks requiring fairness, robustness, and alignment with specific preferences. We evaluate our method for tackling typographic attacks on images and explore its ability to disentangle gender concepts and mitigate gender bias, showcasing the versatility of our approach.
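The abstract gives no implementation details, but its core idea, a DPO-style preference objective applied to a contrastive model, can be illustrated. The following is a minimal, hypothetical sketch rather than the authors' method: it assumes a trainable CLIP-like model and a frozen reference copy, each exposing encode_image and encode_text (as open_clip models do), and it substitutes image-text cosine similarities for the token log-probabilities used in standard DPO for language models.

# Minimal, hypothetical sketch of a DPO-style preference loss adapted to
# CLIP similarity scores. Written from the abstract alone, not from the
# authors' released code.

import torch
import torch.nn.functional as F

def clip_scores(model, images, texts):
    """Cosine similarity between matched image/text embedding pairs."""
    img = F.normalize(model.encode_image(images), dim=-1)
    txt = F.normalize(model.encode_text(texts), dim=-1)
    return (img * txt).sum(dim=-1)  # one score per (image, text) pair

def dpo_preference_loss(model, ref_model, images,
                        preferred_texts, dispreferred_texts, beta=0.1):
    # Score each image against the preferred caption (e.g. the true class)
    # and the dispreferred one (e.g. the typographic-attack text).
    s_w = clip_scores(model, images, preferred_texts)
    s_l = clip_scores(model, images, dispreferred_texts)
    with torch.no_grad():  # frozen reference anchors the fine-tuned model
        ref_w = clip_scores(ref_model, images, preferred_texts)
        ref_l = clip_scores(ref_model, images, dispreferred_texts)
    # DPO objective: increase the preference margin relative to the reference.
    margin = (s_w - s_l) - (ref_w - ref_l)
    return -F.logsigmoid(beta * margin).mean()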


Cite

Aligning Visual Contrastive learning models via Preference Optimization. / Afzali, Amirabbas; Khodabandeh, Borna; JafariNodeh, Mahyar et al.
13th International Conference on Learning Representations, ICLR 2025. 2025. pp. 92036-92065.


Afzali, A, Khodabandeh, B, JafariNodeh, M, Gottschalk, S, Rasekh, A & Kazemi, S 2025, Aligning Visual Contrastive learning models via Preference Optimization. in 13th International Conference on Learning Representations, ICLR 2025. pp. 92036-92065, 13th International Conference on Learning Representations, ICLR 2025, Singapore, Singapore, 24 Apr 2025. https://doi.org/10.48550/arXiv.2411.08923
Afzali, A., Khodabandeh, B., JafariNodeh, M., Gottschalk, S., Rasekh, A., & Kazemi, S. (2025). Aligning Visual Contrastive learning models via Preference Optimization. In 13th International Conference on Learning Representations, ICLR 2025 (pp. 92036-92065). https://doi.org/10.48550/arXiv.2411.08923
Afzali A, Khodabandeh B, JafariNodeh M, Gottschalk S, Rasekh A, Kazemi S. Aligning Visual Contrastive learning models via Preference Optimization. In: 13th International Conference on Learning Representations, ICLR 2025. 2025. p. 92036-92065. doi: 10.48550/arXiv.2411.08923
Afzali, Amirabbas ; Khodabandeh, Borna ; JafariNodeh, Mahyar et al. / Aligning Visual Contrastive learning models via Preference Optimization. 13th International Conference on Learning Representations, ICLR 2025. 2025. pp. 92036-92065
@inproceedings{a7ccf2ac1005450e8e8e660d3f829f6f,
title = "Aligning Visual Contrastive learning models via Preference Optimization",
abstract = "Contrastive learning models have demonstrated impressive abilities to capture semantic similarities by aligning representations in the embedding space. However, their performance can be limited by the quality of the training data and its inherent biases. While Preference Optimization (PO) methods such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) have been applied to align generative models with human preferences, their use in contrastive learning has yet to be explored. This paper introduces a novel method for training contrastive learning models using different PO methods to break down complex concepts. Our method systematically aligns model behavior with desired preferences, enhancing performance on the targeted task. In particular, we focus on enhancing model robustness against typographic attacks and inductive biases, commonly seen in contrastive vision-language models like CLIP. Our experiments demonstrate that models trained using PO outperform standard contrastive learning techniques while retaining their ability to handle adversarial challenges and maintain accuracy on other downstream tasks. This makes our method well-suited for tasks requiring fairness, robustness, and alignment with specific preferences. We evaluate our method for tackling typographic attacks on images and explore its ability to disentangle gender concepts and mitigate gender bias, showcasing the versatility of our approach.",
author = "Amirabbas Afzali and Borna Khodabandeh and Mahyar JafariNodeh and Simon Gottschalk and Ali Rasekh and Sepehr Kazemi",
note = "Publisher Copyright: {\textcopyright} 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.; 13th International Conference on Learning Representations, ICLR 2025, ICLR 2025 ; Conference date: 24-04-2025 Through 28-04-2025",
year = "2025",
month = apr,
day = "24",
doi = "10.48550/arXiv.2411.08923",
language = "English",
pages = "92036--92065",
booktitle = "13th International Conference on Learning Representations, ICLR 2025",

}


TY - GEN

T1 - Aligning Visual Contrastive learning models via Preference Optimization

AU - Afzali, Amirabbas

AU - Khodabandeh, Borna

AU - JafariNodeh, Mahyar

AU - Gottschalk, Simon

AU - Rasekh, Ali

AU - Kazemi, Sepehr

N1 - Publisher Copyright: © 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.

PY - 2025/4/24

Y1 - 2025/4/24

N2 - Contrastive learning models have demonstrated impressive abilities to capture semantic similarities by aligning representations in the embedding space. However, their performance can be limited by the quality of the training data and its inherent biases. While Preference Optimization (PO) methods such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) have been applied to align generative models with human preferences, their use in contrastive learning has yet to be explored. This paper introduces a novel method for training contrastive learning models using different PO methods to break down complex concepts. Our method systematically aligns model behavior with desired preferences, enhancing performance on the targeted task. In particular, we focus on enhancing model robustness against typographic attacks and inductive biases, commonly seen in contrastive vision-language models like CLIP. Our experiments demonstrate that models trained using PO outperform standard contrastive learning techniques while retaining their ability to handle adversarial challenges and maintain accuracy on other downstream tasks. This makes our method well-suited for tasks requiring fairness, robustness, and alignment with specific preferences. We evaluate our method for tackling typographic attacks on images and explore its ability to disentangle gender concepts and mitigate gender bias, showcasing the versatility of our approach.

AB - Contrastive learning models have demonstrated impressive abilities to capture semantic similarities by aligning representations in the embedding space. However, their performance can be limited by the quality of the training data and its inherent biases. While Preference Optimization (PO) methods such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) have been applied to align generative models with human preferences, their use in contrastive learning has yet to be explored. This paper introduces a novel method for training contrastive learning models using different PO methods to break down complex concepts. Our method systematically aligns model behavior with desired preferences, enhancing performance on the targeted task. In particular, we focus on enhancing model robustness against typographic attacks and inductive biases, commonly seen in contrastive vision-language models like CLIP. Our experiments demonstrate that models trained using PO outperform standard contrastive learning techniques while retaining their ability to handle adversarial challenges and maintain accuracy on other downstream tasks. This makes our method well-suited for tasks requiring fairness, robustness, and alignment with specific preferences. We evaluate our method for tackling typographic attacks on images and explore its ability to disentangle gender concepts and mitigate gender bias, showcasing the versatility of our approach.

UR - http://www.scopus.com/inward/record.url?scp=105010273475&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2411.08923

DO - 10.48550/arXiv.2411.08923

M3 - Conference contribution

AN - SCOPUS:105010273475

SP - 92036

EP - 92065

BT - 13th International Conference on Learning Representations, ICLR 2025

T2 - 13th International Conference on Learning Representations, ICLR 2025

Y2 - 24 April 2025 through 28 April 2025

ER -
