Crowdsourced fact-checking, in which users collaboratively label or flag potentially misleading content, has been increasingly adopted by online platforms to combat misinformation. However, its practical effectiveness remains unclear. This study evaluates that effectiveness along two key dimensions, content scope and time horizon, by examining spillover effects across content and temporal effects over both the short and long term. Using the context of Twitter’s Community Notes program, this study investigates the impacts of crowdsourced fact-checking on (1) audience engagement with labeled authors and (2) labeled authors’ content generation. Our findings reveal that community notes reduce audience engagement with labeled authors’ subsequent unlabeled content, likely because the notes act as a negative signal that undermines audience perceptions of the labeled authors’ credibility and social image. This audience disengagement effect is more pronounced for less popular authors, suggesting that crowdsourced fact-checking has limited power to discipline popular authors. Additionally, labeled authors reduce the volume of their future posts, although the effort they invest in each post remains largely unchanged. These findings highlight the effectiveness of crowdsourced fact-checking in two respects: (1) reducing audience engagement with labeled information sources and (2) fostering a degree of self-discipline in labeled authors’ content generation, while also revealing a key boundary condition, author popularity, that shapes these effects. This study contributes to the theoretical understanding of the effectiveness of crowdsourced fact-checking and provides practical insights for platforms seeking to curb misinformation.