Language Models as Mirrors and Bridges for Intergroup Communication
Author(s)
Jiang, Hang
Advisor
Roy, Deb K.
Abstract
This dissertation explores how large language models (LLMs) can serve dual roles in intergroup communication: as mirrors that reflect intergroup differences, and as bridges that facilitate communication across group boundaries. Intergroup communication refers to interactions between individuals from different social groups, such as political, cultural, or professional communities, where divergent perspectives often lead to misunderstandings, unequal access to information, and social fragmentation.
The first part of the dissertation presents LLMs as mirrors that reveal intergroup differences. We first introduce CommunityLM, a novel framework for probing public opinion by fine-tuning LLMs on social media posts from specific communities. Our case study comparing Republican and Democratic groups reveals that model predictions align well with human survey responses, substantially outperforming established baselines. Building on this foundation, we develop PersonaLLM to investigate whether prompt-based LLM agents can generate content aligned with assigned personas, which has emerged as a popular approach for modeling the behaviors of social groups. Through automated and human evaluations, we demonstrate that these agents can complete personality tests and write stories that reflect the distinctive behavioral patterns of specific personality profiles. Together, these complementary projects illustrate how LLMs can effectively capture and simulate the unique perspectives and behaviors that characterize diverse social groups.
The second part of the dissertation presents LLMs as bridges that facilitate communication across group boundaries. First, we introduce Bridging Dictionary, an interactive tool that uses retrieval-augmented generation (RAG) techniques with LLMs to identify polarized language and suggest more inclusive alternatives. In collaboration with PBS Frontline, we demonstrate the potential of LLMs to reduce misunderstanding in journalism and political communication. Second, we present Legal Storytelling, a human-LLM collaboration framework that generates accessible narratives to explain complex legal concepts to non-experts. Through randomized controlled trials (RCTs), we find that LLM-generated narratives can improve legal literacy and help bridge communication gaps between experts and laypeople, particularly among non-native English speakers. Third, we develop FaciliTrain, a voice-based, LLM-powered system that enables facilitators to learn and practice intergroup dialogue skills with multiple LLM agents representing diverse social backgrounds and personas in a small-group setting. User studies with campus participants show encouraging early results, suggesting that LLMs can effectively support the development of communication skills essential for constructive intergroup dialogue. Together, these projects illustrate how LLMs can actively foster mutual understanding across social divides by promoting more inclusive, accessible, and constructive communication.
Date issued
2025-05
Department
Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Publisher
Massachusetts Institute of Technology