
LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

Timon Ziegenbein, Gabriella Skitalinska, Alireza Bayat Makou, Henning Wachsmuth

Research Organisations

Details

Original language: English
Title of host publication: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Editors: Lun-Wei Ku, Andre F. T. Martins, Vivek Srikumar
Publisher: Association for Computational Linguistics (ACL)
Pages: 4455-4476
Number of pages: 22
Volume: 1
ISBN (electronic): 9798891760943
Publication status: Published - Aug 2024
Event: 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) - Bangkok, Thailand
Duration: 11 Aug 2024 - 16 Aug 2024

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
Volume: 1
ISSN (Print): 0736-587X

Abstract

Ensuring that online discussions are civil and productive is a major challenge for social media platforms. Such platforms usually rely both on users and on automated detection tools to flag inappropriate arguments of other users, which moderators then review. However, this kind of post-hoc moderation is expensive and time-consuming, and moderators are often overwhelmed by the amount and severity of flagged content. Instead, a promising alternative is to prevent negative behavior during content creation. This paper studies how inappropriate language in arguments can be computationally mitigated. We propose a reinforcement learning-based rewriting approach that balances content preservation and appropriateness based on existing classifiers, prompting an instruction-finetuned large language model (LLM) as our initial policy. Unlike related style transfer tasks, rewriting inappropriate arguments allows deleting and adding content permanently. It is therefore tackled on document level rather than sentence level. We evaluate different weighting schemes for the reward function in both absolute and relative human assessment studies. Systematic experiments on non-parallel data provide evidence that our approach can mitigate the inappropriateness of arguments while largely preserving their content. It significantly outperforms competitive baselines, including few-shot learning, prompting, and humans.
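
The abstract describes a reward that trades off appropriateness against content preservation, scored by existing classifiers. As a minimal illustration only, the sketch below shows one way such a weighted reward could be composed from two classifier scores; the function names, the linear weighting scheme, and the assumed [0, 1] score ranges are hypothetical and are not taken from the paper.

# Hypothetical sketch of a reward balancing appropriateness and content
# preservation. Both scorer functions are placeholders standing in for the
# "existing classifiers" mentioned in the abstract, not the paper's models.

def appropriateness_score(rewrite: str) -> float:
    """Placeholder appropriateness classifier; assumed to return a score in [0, 1]."""
    raise NotImplementedError

def content_preservation_score(original: str, rewrite: str) -> float:
    """Placeholder content-preservation scorer (e.g. semantic similarity); assumed range [0, 1]."""
    raise NotImplementedError

def rewrite_reward(original: str, rewrite: str, weight: float = 0.5) -> float:
    """Convex combination of appropriateness and content preservation.

    The paper evaluates different weighting schemes for its reward function;
    this linear trade-off is only one possible instantiation.
    """
    return (weight * appropriateness_score(rewrite)
            + (1.0 - weight) * content_preservation_score(original, rewrite))

In a reinforcement learning setup, a scalar of this kind would be computed for each rewrite generated by the LLM policy and used as the reward signal; this arrangement is again an assumption, not the paper's exact training procedure.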


Cite this

LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback. / Ziegenbein, Timon; Skitalinska, Gabriella; Bayat Makou, Alireza et al.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). ed. / Lun-Wei Ku; Andre F. T. Martins; Vivek Srikumar. Vol. 1 Association for Computational Linguistics (ACL), 2024. p. 4455-4476 (Proceedings of the Annual Meeting of the Association for Computational Linguistics; Vol. 1).


Ziegenbein, T, Skitalinska, G, Bayat Makou, A & Wachsmuth, H 2024, LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback. in L-W Ku, AFT Martins & V Srikumar (eds), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). vol. 1, Proceedings of the Annual Meeting of the Association for Computational Linguistics, vol. 1, Association for Computational Linguistics (ACL), pp. 4455-4476, 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), Bangkok, Thailand, 11 Aug 2024. https://doi.org/10.48550/arXiv.2406.03363, https://doi.org/10.18653/v1/2024.acl-long.244
Ziegenbein, T., Skitalinska, G., Bayat Makou, A., & Wachsmuth, H. (2024). LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback. In L.-W. Ku, A. F. T. Martins, & V. Srikumar (Eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (Vol. 1, pp. 4455-4476). (Proceedings of the Annual Meeting of the Association for Computational Linguistics; Vol. 1). Association for Computational Linguistics (ACL). https://doi.org/10.48550/arXiv.2406.03363, https://doi.org/10.18653/v1/2024.acl-long.244
Ziegenbein T, Skitalinska G, Bayat Makou A, Wachsmuth H. LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback. In Ku LW, Martins AFT, Srikumar V, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vol. 1. Association for Computational Linguistics (ACL). 2024. p. 4455-4476. (Proceedings of the Annual Meeting of the Association for Computational Linguistics). doi: 10.48550/arXiv.2406.03363, 10.18653/v1/2024.acl-long.244
Ziegenbein, Timon ; Skitalinska, Gabriella ; Bayat Makou, Alireza et al. / LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). editor / Lun-Wei Ku ; Andre F. T. Martins ; Vivek Srikumar. Vol. 1 Association for Computational Linguistics (ACL), 2024. pp. 4455-4476 (Proceedings of the Annual Meeting of the Association for Computational Linguistics).
@inproceedings{dcd9498cc0024bd294a91a8c9b320de2,
title = "LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback",
abstract = "Ensuring that online discussions are civil and productive is a major challenge for social media platforms. Such platforms usually rely both on users and on automated detection tools to flag inappropriate arguments of other users, which moderators then review. However, this kind of post-hoc moderation is expensive and time-consuming, and moderators are often overwhelmed by the amount and severity of flagged content. Instead, a promising alternative is to prevent negative behavior during content creation. This paper studies how inappropriate language in arguments can be computationally mitigated. We propose a reinforcement learning-based rewriting approach that balances content preservation and appropriateness based on existing classifiers, prompting an instruction-finetuned large language model (LLM) as our initial policy. Unlike related style transfer tasks, rewriting inappropriate arguments allows deleting and adding content permanently. It is therefore tackled on document level rather than sentence level. We evaluate different weighting schemes for the reward function in both absolute and relative human assessment studies. Systematic experiments on non-parallel data provide evidence that our approach can mitigate the inappropriateness of arguments while largely preserving their content. It significantly outperforms competitive baselines, including few-shot learning, prompting, and humans.",
author = "Timon Ziegenbein and Gabriella Skitalinska and {Bayat Makou}, Alireza and Henning Wachsmuth",
year = "2024",
month = aug,
doi = "10.48550/arXiv.2406.03363",
language = "English",
volume = "1",
series = "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
publisher = "Association for Computational Linguistics (ACL)",
pages = "4455--4476",
editor = "Lun-Wei Ku and Martins, {Andre F. T.} and Vivek Srikumar",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
address = "Australia",
note = "62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), ACL 2024 ; Conference date: 11-08-2024 Through 16-08-2024",

}


TY - GEN

T1 - LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback

AU - Ziegenbein, Timon

AU - Skitalinska, Gabriella

AU - Bayat Makou, Alireza

AU - Wachsmuth, Henning

PY - 2024/8

Y1 - 2024/8

N2 - Ensuring that online discussions are civil and productive is a major challenge for social media platforms. Such platforms usually rely both on users and on automated detection tools to flag inappropriate arguments of other users, which moderators then review. However, this kind of post-hoc moderation is expensive and time-consuming, and moderators are often overwhelmed by the amount and severity of flagged content. Instead, a promising alternative is to prevent negative behavior during content creation. This paper studies how inappropriate language in arguments can be computationally mitigated. We propose a reinforcement learning-based rewriting approach that balances content preservation and appropriateness based on existing classifiers, prompting an instruction-finetuned large language model (LLM) as our initial policy. Unlike related style transfer tasks, rewriting inappropriate arguments allows deleting and adding content permanently. It is therefore tackled on document level rather than sentence level. We evaluate different weighting schemes for the reward function in both absolute and relative human assessment studies. Systematic experiments on non-parallel data provide evidence that our approach can mitigate the inappropriateness of arguments while largely preserving their content. It significantly outperforms competitive baselines, including few-shot learning, prompting, and humans.

AB - Ensuring that online discussions are civil and productive is a major challenge for social media platforms. Such platforms usually rely both on users and on automated detection tools to flag inappropriate arguments of other users, which moderators then review. However, this kind of post-hoc moderation is expensive and time-consuming, and moderators are often overwhelmed by the amount and severity of flagged content. Instead, a promising alternative is to prevent negative behavior during content creation. This paper studies how inappropriate language in arguments can be computationally mitigated. We propose a reinforcement learning-based rewriting approach that balances content preservation and appropriateness based on existing classifiers, prompting an instruction-finetuned large language model (LLM) as our initial policy. Unlike related style transfer tasks, rewriting inappropriate arguments allows deleting and adding content permanently. It is therefore tackled on document level rather than sentence level. We evaluate different weighting schemes for the reward function in both absolute and relative human assessment studies. Systematic experiments on non-parallel data provide evidence that our approach can mitigate the inappropriateness of arguments while largely preserving their content. It significantly outperforms competitive baselines, including few-shot learning, prompting, and humans.

UR - http://www.scopus.com/inward/record.url?scp=85204450138&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2406.03363

DO - 10.48550/arXiv.2406.03363

M3 - Conference contribution

VL - 1

T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics

SP - 4455

EP - 4476

BT - Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

A2 - Ku, Lun-Wei

A2 - Martins, Andre F. T.

A2 - Srikumar, Vivek

PB - Association for Computational Linguistics (ACL)

T2 - 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)

Y2 - 11 August 2024 through 16 August 2024

ER -
