This comprehensive megastudy, conducted by Stanford-affiliated researchers and others, tested 25 interventions designed to reduce partisan animosity and antidemocratic attitudes within the polarized American political landscape. CommonAlly was one of the selected partners, collaborating with YOUnify on an intervention that used chatbot technology to bridge divides.
CommonAlly's intervention aimed to reduce partisan animosity by encouraging participants to find common ground on critical policy issues. Together with YOUnify, we developed a chat-based quiz that asked participants to guess where Democrats and Republicans stood on topics like gun control, immigration, and climate change. After each response, participants received data from Beyond Conflict, a research organization studying toxic polarization, revealing the parties' actual positions. This approach highlighted how much participants overestimated the divisions between political groups, emphasizing areas of unexpected consensus and reframing polarized issues.
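As a rough illustration of that ask-then-reveal flow, here is a minimal sketch in Python. The question wording and the percentages are placeholders invented for illustration, not the actual Beyond Conflict figures or the production chatbot's logic.

```python
# A minimal sketch of the ask-guess-then-reveal quiz loop.
# The question text and actual_support values below are placeholders
# for illustration only, not the Beyond Conflict data used in the study.

QUESTIONS = [
    {
        "party": "Republicans",
        "prompt": "What percentage of Republicans do you think support "
                  "universal background checks for gun purchases?",
        "actual_support": 70,  # placeholder, not a real figure
    },
    {
        "party": "Democrats",
        "prompt": "What percentage of Democrats do you think support "
                  "stronger border security measures?",
        "actual_support": 60,  # placeholder, not a real figure
    },
]


def run_quiz() -> None:
    """Ask the participant to guess each figure, then reveal the stored value."""
    for q in QUESTIONS:
        guess = int(input(f"{q['prompt']} (0-100): "))
        gap = guess - q["actual_support"]
        if gap == 0:
            print(f"Spot on: {q['actual_support']}% of {q['party']} agree.")
        elif gap < 0:
            print(
                f"Actually, {q['actual_support']}% of {q['party']} agree: "
                f"you underestimated agreement by {-gap} points."
            )
        else:
            print(
                f"Actually, {q['actual_support']}% of {q['party']} agree: "
                f"you overestimated agreement by {gap} points."
            )


if __name__ == "__main__":
    run_quiz()
```

In the actual intervention, each reveal was followed by context framing the result around unexpected areas of consensus rather than a simple right-or-wrong score.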
This case study examines CommonAlly's role in the megastudy, focusing on how our chatbot intervention allowed participants to reassess policy misperceptions and gain insight into areas of bipartisan agreement. While results showed moderate effectiveness, they offer valuable lessons on the potential for thoughtful approaches to temper partisan hostility. Our contribution underscores the importance of continued exploration of interactive, data-driven methods for fostering understanding in polarized settings, even where initial impacts are incremental.
Within the study framework, our cohort’s specific objective was to use chatbot conversational technology to address widespread misperceptions about policy positions across the political spectrum, revealing unexpected areas of consensus on key issues among Americans. This intervention was designed to help participants view political divides more accurately and recognize points of agreement, contributing to a reduction in partisan hostility.
Key metrics reported: average completion rate, average placement among the other interventions, and average number of participants who completed the intervention.
Participate in Stanford University's megastudy to identify interventions to bolster democratic attitudes among Americans.
The full study engaged 35,252 participants, of whom 31,835 completed an intervention. The chatbot quiz was started by 1,131 participants and completed, on average, by 1,017.
The Correcting Policy Misperceptions Chatbot was designed to challenge misperceptions about where each party stands on gun control, immigration, and climate change.
The chatbot treatment was effective in reducing biased evaluations, but the effect was weaker than that of several other interventions. For instance, treatments such as "Common National Identity" and "Correcting Democracy Misperceptions" produced stronger effects with higher statistical significance.
The chatbot intervention had limited impact on reducing support for undemocratic practices. By comparison, treatments like "Democratic Collapse Threat" and "Befriending Meditation" achieved stronger reductions on this outcome.
The chatbot treatment ranked lower in effectiveness at reducing partisan animosity. Top-performing treatments included "Positive Contact Video" and "Common National Identity," which significantly reduced animosity across partisan lines.
Between Republicans and Democrats, the chatbot's effects showed no significant difference on partisan animosity, support for undemocratic practices, support for partisan violence, support for undemocratic candidates, opposition to bipartisan cooperation, social distrust, or social distance. In other words, the chatbot had no unintended, one-sided effects on participants from either party; most other interventions were the same in this regard.
The chatbot treatment showed a moderate effect on a composite score of all measured outcomes. Still, it was generally outperformed by treatments targeting empathy, identity, or fears of democratic collapse, which showed more robust effectiveness across metrics.
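For context on how a composite of this kind can be formed, here is a brief sketch assuming, hypothetically, that each outcome is standardized into a z-score across participants and the z-scores are then averaged per participant. This is a generic illustration with made-up numbers, not the megastudy's actual scoring procedure or data.

```python
import statistics
from typing import Dict, List


def composite_scores(outcomes: Dict[str, List[float]]) -> List[float]:
    """Standardize each outcome across participants, then average per participant."""
    z_by_outcome = {}
    for name, values in outcomes.items():
        mean = statistics.fmean(values)
        sd = statistics.stdev(values)
        z_by_outcome[name] = [(v - mean) / sd for v in values]

    n_participants = len(next(iter(outcomes.values())))
    return [
        statistics.fmean(z_by_outcome[name][i] for name in outcomes)
        for i in range(n_participants)
    ]


# Toy example: three participants, two hypothetical outcomes (higher = worse).
example = {
    "partisan_animosity": [72.0, 55.0, 40.0],
    "support_undemocratic_practices": [3.0, 2.0, 1.0],
}
print(composite_scores(example))
```

Standardizing before averaging keeps outcomes measured on different scales from dominating the composite, which is why this kind of aggregation is common when comparing treatments across several metrics.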
This study helps fill knowledge gaps on depolarization interventions, but it has limitations. Some demographic target quotas fell short, including the share of participants with a high school diploma or no high school degree and the share of partisan "leaners." In addition, the intervention's effects did not endure as robustly as those of other treatments, and its impact was weaker on key metrics such as reduced support for partisan violence.
The Correcting Policy Misperceptions Chatbot contributed meaningful insights to the megastudy but ranked in the middle range of effectiveness relative to other interventions. This helped identify which user-experience formats are more or less effective at reducing partisan animosity and undemocratic attitudes. Future research using chatbots could examine how the avatar, phrasing of questions, tone of language, or GIFs did or did not affect participants' perspectives.