Navigating Trust and Engagement: The Impact of Misinformation Moderation on User Perception and Biometric Response

Major

Business Administration

Anticipated Graduation Year

2025

Access Type

Open Access

Abstract

Misinformation on social media shapes public opinion, influencing beliefs, behaviors, and policy decisions. Platforms have implemented a variety of misinformation warning messages to curb “fake news,” but the impact of these warnings on user trust and engagement remains unclear. Our research examines how different misinformation content moderation approaches influence users’ perceptions and biometric responses related to attention and affective states. We propose a within-subjects experimental study with three conditions: platform-driven moderation, community-driven moderation, and no moderation. By comparing these conditions, we assess user reactions to misinformation warnings. Using eye tracking, galvanic skin response (GSR), and facial expression analysis, we explore how these responses relate to users’ trust in moderation and their willingness to engage with flagged content (like, comment, share). This research contributes to a deeper understanding of how different moderation approaches shape user trust and engagement, informing evidence-based strategies for balancing misinformation control and freedom of expression.

Faculty Mentors & Instructors

Dinko Bačić

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.
