Using ChatGPT to Enhance Students’ Behavior in Social Media via the Moral Foundation Theory

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Daniele Schicchi
  • Apoorva Upadhyaya
  • Marco Fisichella
  • Davide Taibi

Research Organisations

External Research Organisations

  • National Research Council Italy (CNR)

Details

Original language: English
Title of host publication: AIxEDU 2023
Subtitle of host publication: High-performance Artificial Intelligence Systems in Education
Number of pages: 8
Publication status: Published - 2 Jan 2024
Event: 1st International Workshop on High-Performance Artificial Intelligence Systems in Education (AIxEDU 2023) - Rome, Italy
Duration: 6 Nov 2023 → 6 Nov 2023

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR Workshop Proceedings
Volume: 3605
ISSN (Print): 1613-0073

Abstract

Social media poses numerous dangers, such as the spread of toxic content, hate speech, false information, and moral outrage. In particular, the propagation of moral outrage on social media can result in harmful outcomes like promoting conspiracy theories, violent protests, and political polarization. Teenagers frequently use social media and are especially susceptible to these threats, being exposed to several risks that can impact their lives. Recent studies have confirmed that a combination of human and AI efforts can be highly effective in several domains. In this study, we examine human-AI collaboration’s potential and efficacy in detecting morality in online posts. To this end, we conducted a pilot study with teenagers to determine their ability to recognize moral content in online posts. Consequently, we exploited Prompt Engineering to program ChatGPT to recognize morality and comprehend its potential as an intelligent tutor in supporting students in such a context.
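
The abstract describes prompting ChatGPT, via prompt engineering, to recognize moral content in online posts. Below is a minimal illustrative sketch of such a setup, assuming the OpenAI Python client (openai >= 1.0) and the five foundations of Moral Foundation Theory plus a non-moral class as the label set; the model name, prompt wording, and labels are assumptions for illustration, since the paper's actual prompts and configuration are not reproduced in this record.

# Minimal sketch (not the authors' actual setup): ask a chat model to label a
# social-media post with one Moral Foundation Theory foundation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FOUNDATIONS = [
    "care/harm", "fairness/cheating", "loyalty/betrayal",
    "authority/subversion", "sanctity/degradation", "non-moral",
]

def classify_moral_foundation(post: str) -> str:
    """Return the single foundation label the model assigns to the post."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # assumed model; the paper does not specify one here
        temperature=0,           # deterministic labels, suitable for tutoring feedback
        messages=[
            {"role": "system",
             "content": ("You are a tutor helping students recognize moral "
                         "content in social media posts. Reply with exactly "
                         "one label from: " + ", ".join(FOUNDATIONS) + ".")},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_moral_foundation(
        "They betrayed their own community just to save a little money."))

A constrained label set and a fixed system prompt of this kind are one common way to turn a general chat model into the sort of classifier-plus-tutor role the abstract describes.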

Keywords

    ChatGPT, Human-AI collaboration, moral classification, social media platforms

ASJC Scopus subject areas

Cite this

Using ChatGPT to Enhance Students’ Behavior in Social Media via the Moral Foundation Theory. / Schicchi, Daniele; Upadhyaya, Apoorva; Fisichella, Marco et al.
AIxEDU 2023: High-performance Artificial Intelligence Systems in Education. 2024. (CEUR Workshop Proceedings; Vol. 3605).

Schicchi, D, Upadhyaya, A, Fisichella, M & Taibi, D 2024, Using ChatGPT to Enhance Students’ Behavior in Social Media via the Moral Foundation Theory. in AIxEDU 2023: High-performance Artificial Intelligence Systems in Education. CEUR Workshop Proceedings, vol. 3605, 1st International Workshop on High-Performance Artificial Intelligence Systems in Education (AIxEDU 2023), Rome, Italy, 6 Nov 2023. <https://ceur-ws.org/Vol-3605/7.pdf>
Schicchi, D., Upadhyaya, A., Fisichella, M., & Taibi, D. (2024). Using ChatGPT to Enhance Students’ Behavior in Social Media via the Moral Foundation Theory. In AIxEDU 2023: High-performance Artificial Intelligence Systems in Education (CEUR Workshop Proceedings; Vol. 3605). https://ceur-ws.org/Vol-3605/7.pdf
Schicchi D, Upadhyaya A, Fisichella M, Taibi D. Using ChatGPT to Enhance Students’ Behavior in Social Media via the Moral Foundation Theory. In AIxEDU 2023: High-performance Artificial Intelligence Systems in Education. 2024. (CEUR Workshop Proceedings).
Schicchi, Daniele ; Upadhyaya, Apoorva ; Fisichella, Marco et al. / Using ChatGPT to Enhance Students’ Behavior in Social Media via the Moral Foundation Theory. AIxEDU 2023: High-performance Artificial Intelligence Systems in Education. 2024. (CEUR Workshop Proceedings).
BibTeX
@inproceedings{ea4bd8fd80ab49a889fb97bdaf8c0b10,
title = "Using ChatGPT to Enhance Students{\textquoteright} Behavior in Social Media via the Moral Foundation Theory",
abstract = "Social media poses numerous dangers, such as the spread of toxic content, hate speech, false information, and moral outrage. In particular, the propagation of moral outrage on social media can result in harmful outcomes like promoting conspiracy theories, violent protests, and political polarization. Teenagers frequently use social media and are especially susceptible to these threats, being exposed to several risks that can impact their lives. Recent studies have confirmed that a combination of human and AI efforts can be highly effective in several domains. In this study, we examine human-AI collaboration{\textquoteright}s potential and efficacy in detecting morality in online posts. To this end, we conducted a pilot study with teenagers to determine their ability to recognize moral content in online posts. Consequently, we exploited Prompt Engineering to program ChatGPT to recognize morality and comprehend its potential as an intelligent tutor in supporting students in such a context.",
keywords = "ChatGPT, Human-AI collaboration, moral classification, social media platforms",
author = "Daniele Schicchi and Apoorva Upadhyaya and Marco Fisichella and Davide Taibi",
year = "2024",
month = jan,
day = "2",
language = "English",
series = "CEUR Workshop Proceedings",
publisher = "CEUR Workshop Proceedings",
booktitle = "AIxEDU 2023",
note = "1st International Workshop on High-Performance Artificial Intelligence Systems in Education (AIxEDU 2023), AIxEDU 2023 ; Conference date: 06-11-2023 Through 06-11-2023",

}

RIS

TY - GEN

T1 - Using ChatGPT to Enhance Students’ Behavior in Social Media via the Moral Foundation Theory

AU - Schicchi, Daniele

AU - Upadhyaya, Apoorva

AU - Fisichella, Marco

AU - Taibi, Davide

PY - 2024/1/2

Y1 - 2024/1/2

N2 - Social media poses numerous dangers, such as the spread of toxic content, hate speech, false information, and moral outrage. In particular, the propagation of moral outrage on social media can result in harmful outcomes like promoting conspiracy theories, violent protests, and political polarization. Teenagers frequently use social media and are especially susceptible to these threats, being exposed to several risks that can impact their lives. Recent studies have confirmed that a combination of human and AI efforts can be highly effective in several domains. In this study, we examine human-AI collaboration’s potential and efficacy in detecting morality in online posts. To this end, we conducted a pilot study with teenagers to determine their ability to recognize moral content in online posts. Consequently, we exploited Prompt Engineering to program ChatGPT to recognize morality and comprehend its potential as an intelligent tutor in supporting students in such a context.

AB - Social media poses numerous dangers, such as the spread of toxic content, hate speech, false information, and moral outrage. In particular, the propagation of moral outrage on social media can result in harmful outcomes like promoting conspiracy theories, violent protests, and political polarization. Teenagers frequently use social media and are especially susceptible to these threats, being exposed to several risks that can impact their lives. Recent studies have confirmed that a combination of human and AI efforts can be highly effective in several domains. In this study, we examine human-AI collaboration’s potential and efficacy in detecting morality in online posts. To this end, we conducted a pilot study with teenagers to determine their ability to recognize moral content in online posts. Consequently, we exploited Prompt Engineering to program ChatGPT to recognize morality and comprehend its potential as an intelligent tutor in supporting students in such a context.

KW - ChatGPT

KW - Human-AI collaboration

KW - moral classification

KW - social media platforms

UR - http://www.scopus.com/inward/record.url?scp=85183204894&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85183204894

T3 - CEUR Workshop Proceedings

BT - AIxEDU 2023

T2 - 1st International Workshop on High-Performance Artificial Intelligence Systems in Education (AIxEDU 2023)

Y2 - 6 November 2023 through 6 November 2023

ER -
