Details
Original language | English
---|---
Title of host publication | CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
Publisher | Association for Computing Machinery (ACM)
ISBN (electronic) | 9781450367080
Publication status | Published - 21 Apr 2020
Event | 2020 ACM CHI Conference on Human Factors in Computing Systems, CHI 2020 - Honolulu, United States
Event duration | 25 Apr 2020 → 30 Apr 2020
Publication series
Name | Conference on Human Factors in Computing Systems - Proceedings
---|---
Abstract
The rise in popularity of conversational agents has enabled humans to interact with machines more naturally. Recent work has shown that crowd workers in microtask marketplaces can complete a variety of human intelligence tasks (HITs) through conversational interfaces, with output quality similar to that of traditional Web interfaces. In this paper, we investigate the effectiveness of using conversational interfaces to improve worker engagement in microtask crowdsourcing. We designed a text-based conversational agent that assists workers in task execution, and tested the performance of workers when interacting with agents having different conversational styles. We conducted a rigorous experimental study on Amazon Mechanical Turk with 800 unique workers to explore whether output quality, worker engagement, and the perceived cognitive load of workers are affected by the conversational agent and its conversational style. Our results show that conversational interfaces can be effective in engaging workers, and that a suitable conversational style has the potential to further improve worker engagement.
Keywords
- cognitive task load, conversational interface, conversational style, microtask crowdsourcing, user engagement
ASJC Scopus subject areas
- Computer Science (all)
- Computer Graphics and Computer-Aided Design
- Human-Computer Interaction
- Software
Cite this
CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM), 2020. 3376403 (Conference on Human Factors in Computing Systems - Proceedings).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Improving Worker Engagement Through Conversational Microtask Crowdsourcing
AU - Qiu, Sihang
AU - Gadiraju, Ujwal
AU - Bozzon, Alessandro
PY - 2020/4/21
Y1 - 2020/4/21
N2 - The rise in popularity of conversational agents has enabled humans to interact with machines more naturally. Recent work has shown that crowd workers in microtask marketplaces can complete a variety of human intelligence tasks (HITs) through conversational interfaces, with output quality similar to that of traditional Web interfaces. In this paper, we investigate the effectiveness of using conversational interfaces to improve worker engagement in microtask crowdsourcing. We designed a text-based conversational agent that assists workers in task execution, and tested the performance of workers when interacting with agents having different conversational styles. We conducted a rigorous experimental study on Amazon Mechanical Turk with 800 unique workers to explore whether output quality, worker engagement, and the perceived cognitive load of workers are affected by the conversational agent and its conversational style. Our results show that conversational interfaces can be effective in engaging workers, and that a suitable conversational style has the potential to further improve worker engagement.
AB - The rise in popularity of conversational agents has enabled humans to interact with machines more naturally. Recent work has shown that crowd workers in microtask marketplaces can complete a variety of human intelligence tasks (HITs) through conversational interfaces, with output quality similar to that of traditional Web interfaces. In this paper, we investigate the effectiveness of using conversational interfaces to improve worker engagement in microtask crowdsourcing. We designed a text-based conversational agent that assists workers in task execution, and tested the performance of workers when interacting with agents having different conversational styles. We conducted a rigorous experimental study on Amazon Mechanical Turk with 800 unique workers to explore whether output quality, worker engagement, and the perceived cognitive load of workers are affected by the conversational agent and its conversational style. Our results show that conversational interfaces can be effective in engaging workers, and that a suitable conversational style has the potential to further improve worker engagement.
KW - cognitive task load
KW - conversational interface
KW - conversational style
KW - microtask crowdsourcing
KW - user engagement
UR - http://www.scopus.com/inward/record.url?scp=85085657281&partnerID=8YFLogxK
U2 - 10.1145/3313831.3376403
DO - 10.1145/3313831.3376403
M3 - Conference contribution
AN - SCOPUS:85085657281
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery (ACM)
T2 - 2020 ACM CHI Conference on Human Factors in Computing Systems, CHI 2020
Y2 - 25 April 2020 through 30 April 2020
ER -