Can AI Enhance People’s Support for Online Moderation and Their Openness to Dissimilar Political Views?
Dublin Core
Title
Can AI Enhance People’s Support for Online Moderation and Their Openness to Dissimilar Political Views?
Subject
Artificial Intelligence, AI, algorithms, content moderation, news recommendations, polarization, biased information processing, social media, counter-attitudinal views, news, bias, online moderation, perceived justice
Description
Although artificial intelligence is blamed for many societal challenges, it also has underexplored potential in political contexts online. We rely on six preregistered experiments in three countries (N = 6,728) to test the expectation that AI and AI-assisted humans would be perceived more favorably than humans (a) across various content moderation, generation, and recommendation scenarios and (b) when exposing individuals to counter-attitudinal political information. Contrary to the preregistered hypotheses, participants see human agents as more just than AI across the scenarios tested, with the exception of news recommendations. At the same time, participants are not more open to counter-attitudinal information attributed to AI rather than a human or an AI-assisted human. These findings, which, with minor variations, emerged across countries, scenarios, and issues, suggest that human intervention is preferred online and that people reject dissimilar information regardless of its source. We discuss the theoretical and practical implications of these findings.
Creator
Magdalena Wojcieszak, Arti Thakur, João Fernando Ferreira Gonçalves, Andreu Casas, Ericka Menchen-Trevino, & Miriam Boon
Source
https://academic.oup.com/jcmc/article/26/4/223/6298304
Publisher
Oxford University Press
Date
3 February 2021
Contributor
Sri Wahyuni
Format
PDF
Language
English
Type
Text
Coverage
Journal of Computer-Mediated Communication 26 (2021)
Files
Collection
Citation
Magdalena Wojcieszak, Arti Thakur, João Fernando Ferreira Gonçalves, Andreu Casas, Ericka Menchen-Trevino, & Miriam Boon, “Can AI Enhance People’s Support for Online Moderation and Their Openness to Dissimilar Political Views?,” Repository Horizon University Indonesia, accessed May 20, 2025, https://repository.horizon.ac.id/items/show/8711.