Work in Progress

Reducing Social Media Usage During Elections: Evidence from a Multi-Country WhatsApp Deactivation Experiment

Under Review

Joint work with Rajeshwari Majumdar, Shelley Liu, Carolina Torreblanco, and Joshua Tucker.

Winner, 2025 MPSA Best Paper Award in Political Behavior

  • Presented at APSA 2024
Abstract Recent research has investigated how social media platforms may spread misinformation and encourage harmful political discourse, fueling political polarization, prejudice, and offline violence. We deploy online field experiments in Brazil, India, and South Africa to examine how restricting the use of WhatsApp, the world’s most widely used messaging app, affects information exposure, political attitudes, and individual well-being. We incentivize participants to either (1) stop consuming multimedia content on WhatsApp or (2) limit overall WhatsApp usage to 10 minutes per day for four weeks ahead of each country’s elections. We find that our interventions significantly reduced participants’ exposure to misinformation, online toxicity, and uncivil discussions about politics, but at the expense of keeping up with true political news. Using a wide range of measures, we detected no changes in political attitudes but uncovered substantial gains in individual well-being as treated participants replaced WhatsApp usage with other activities. The results highlight the complex trade-offs that social media use poses for information consumption and its downstream effects.


The Relationship Between Offline Partisan Geographical Segregation and Online Partisan Segregation

Under Review

Joint work with Megan Brown, Tiago Ventura, Joshua A. Tucker, and Jonathan Nagler.

  • Presented at MPSA and APSA 2023
Abstract Social media is often blamed for creating echo chambers. However, these claims fail to consider the prevalence of offline echo chambers resulting from high levels of partisan segregation in the United States. Our article empirically assesses these online versus offline dynamics by linking a novel dataset of voters’ offline partisan segregation, extracted from publicly available voter files for 180 million US voters, with their online network segregation on Twitter. We measure offline and online partisan segregation using the geographical and network isolation of every matched voter-Twitter user from their co-partisans offline and online. Our results show that while social media users tend to form politically homogeneous online networks, these levels of partisan sorting are significantly lower than those found in offline settings. Notably, Democrats are more isolated than Republicans in both settings, and only older Republicans exhibit higher online than offline segregation. Our results contribute to the emerging literature on political communication and the homophily of online networks, providing novel evidence on partisan sorting both online and offline.

Understanding Beliefs in Misinformation: Repetition, Partisan Signals and Bayesian Processing

In preparation

Joint work with Jim Bisbee, Sarah Graham, and Joshua A. Tucker.

Abstract Partisan motivations and repeated exposure are two dominant explanations for how individuals form beliefs about political misinformation. Yet little research integrates these processes, even though each points to different interventions for combating the spread of false information, especially in online information environments. In this paper, we situate both frameworks within a unified Bayesian model of belief formation and design survey experiments to explore several implications of this theoretical framework. We find that both partisan motivated reasoning and prior exposure (“illusory truth effects”) manifest in our data and that they exacerbate each other, painting a bleak picture of how the steady drumbeat of partisan-flavored misinformation online influences public beliefs. However, we also find that both biases attenuate sharply over time and that attaching warning labels to false information mitigates both of them. The findings suggest that partisan motivations dominate belief formation in political settings, with prior exposure to misinformation playing a secondary role. These results contribute to a deeper understanding of cognitive biases in political information processing and provide a structured way of thinking about online misinformation, shifting the focus from mass-level belief in falsehoods to the role of political elites and partisan media in spreading rumors.