How Does Crowdsourced Fact-Checking Affect Audience and Author Responses? Evidence from Twitter’s Community Notes

Abstract

The spread of misinformation online remains a significant concern, prompting various online platforms to adopt crowdsourced fact-checking as a countermeasure. While prior research has focused on its short-term impact on user responses to flagged (or labeled) misinformation content, a more comprehensive assessment of its efficacy requires examining its long-term impact across a broader content scope. Using Twitter’s Community Notes program (formerly known as Birdwatch), a crowdsourced fact-checking intervention, as the research context, we investigate the long-term impact of crowdsourced fact-checking on user responses to labeled authors’ subsequent content. Specifically, we examine how crowdsourced fact-checking influences audiences’ subsequent engagement with labeled authors and how it affects labeled authors’ subsequent content-generation behavior over the long term. Exploiting the staggered timing of labels as our identification strategy, we find that community notes decrease audience engagement with labeled authors. Moreover, while labeling prompts labeled authors to reduce their post quantity, it does not significantly affect their post quality. This study contributes to the literature on crowdsourced fact-checking by extending the evaluation of its efficacy to a longer time horizon and a broader content scope, and by bridging the responses of both misinformation consumers (the audience) and producers (labeled authors).
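
The staggered roll-out of community notes naturally suggests a staggered difference-in-differences design. The snippet below is a minimal illustrative sketch, not the paper’s actual code or data: it assumes a hypothetical author-week panel file `author_week_panel.csv` with columns `author_id`, `week`, `engagement`, and a post-label indicator `treated`; all of these names and the outcome measure are assumptions made purely for illustration.

```python
# Illustrative sketch of a two-way fixed-effects (staggered DiD) regression.
# Hypothetical panel: one row per author per week; `treated` switches from 0
# to 1 in the first week after an author receives a community note and stays
# 1 thereafter; never-noted authors remain 0 throughout.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("author_week_panel.csv")  # hypothetical data file

# Author fixed effects absorb time-invariant author traits; week fixed
# effects absorb platform-wide shocks. The coefficient on `treated` is the
# average post-label change in engagement for labeled authors.
model = smf.ols(
    "engagement ~ treated + C(author_id) + C(week)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["author_id"]})

print("post-label effect:", model.params["treated"],
      "clustered SE:", model.bse["treated"])
```

Standard errors are clustered at the author level because repeated observations of the same author are unlikely to be independent over time.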

Date
Feb 6, 2025
Joint Work with Yingxin Zhou, Yi Gao, and Pei-Yu Chen
Jingbo Hou
Assistant Professor in ISA

Enjoying my life now!