Harnessing Accurate Bias in Large-Scale Language Models to Study Human Psychology

This recently NSF-funded project explores the use of artificial intelligence, in particular large-scale language models such as GPT-3, to study human psychology. Below is the abstract of the first paper from this project, “Out of One, Many: Using Language Models to Simulate Human Samples,” which is currently under review.

Machine learning models often exhibit problematic biases (such as racism or sexism), which are typically treated as a uniform property of the model. We show that the “bias” within the GPT-3 language model is instead both fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups. We term this property “algorithmic fidelity” and explore its extent in GPT-3 by conditioning the model on thousands of socio-demographic backstories from real human participants in two large research studies. We then demonstrate that the correspondence between these “silicon samples” and samples from genuine humans goes far beyond simple surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and socio-cultural context that characterizes human attitudes, raising the possibility that language models can be studied as effective proxies for specific human sub-populations despite their macro-level weaknesses.
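
To make the “conditioning” step concrete, here is a minimal sketch (not the project’s actual code) of how a first-person socio-demographic backstory might be prepended to a survey-style prompt and sampled repeatedly to produce a “silicon” response distribution. It assumes the legacy openai Python client; the backstory text, the question, and the model name are illustrative placeholders, not details from the paper.

import os
import openai

# Assumes an API key in the environment; this is a sketch, not the authors' pipeline.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical backstory and survey item, standing in for the real study materials.
backstory = (
    "I am a 45-year-old woman from rural Ohio. I work as a nurse, "
    "attend church weekly, and usually vote for the Republican Party."
)
question = "In the 2020 presidential election, I voted for"

response = openai.Completion.create(
    model="text-davinci-002",   # any GPT-3 completion model
    prompt=f"{backstory}\n\n{question}",
    max_tokens=5,
    temperature=1.0,            # sample rather than take the argmax, to recover a distribution
    n=20,                       # multiple samples approximate a response distribution
)

# Collect the sampled completions; in the study, distributions like this are
# compared against responses from demographically matched human participants.
votes = [choice.text.strip() for choice in response.choices]
print(votes)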

Project collaborators/co-authors: Lisa Argyle, Ethan Busby, Nancy Fulda, Chris Rytting, Taylor Sorenson, David Wingate

Joshua Gubler
Associate Professor of Political Science

Joshua Gubler is a comparative political psychologist at BYU studying intergroup cooperation and conflict, affect, emotion, persuasion, motivation, and political communication. He is also Program Coordinator for the Middle Eastern Studies/Arabic program in the Kennedy Center for International Studies.