My Research and Writings

Exemplifying Our Virtues or Rectifying Our Iniquity? 

National Self-Understanding and Natives' Attitudes Towards Immigration Policy

(Under review)

The attitudes that natives hold towards immigration policy are highly consequential, partially determining what policies are ultimately enacted as well as how immigrants are treated upon their arrival. Previous work suggests the way natives understand their nation is an important factor in determining their immigration policy preferences. I further this work by interrogating public opinion among a nationally representative sample of U.S. natives (N = 778) towards eight concrete immigration policy proposals and by operationalizing their national self-understandings as decisions made during a novel concept association task. First, I find robust support for the claim that how U.S. natives understand "America" strongly predicts their attitudes towards these various policies above and beyond their partisan identity, political ideology, and demographic characteristics. Second, I show that affectively negative cognitive associations with the nation (e.g., "Racism" and "Violence") are more predictive of immigration policy attitudes than are affectively positive ones (e.g., "Openness" and "Bravery"). Finally, I perform a set of inductive analyses that reveal the specific ways in which U.S. natives' national self-understandings structure the immigration debate in the U.S. today. This work has broad implications for the nations and nationalism literature, work on public opinion towards immigration policy, and the sociology of culture.
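
For readers curious what "above and beyond" means analytically, here is a minimal sketch of the nested-model comparison, assuming a tidy survey file; the file and column names (survey.csv, policy_support, assoc_racism, and so on) are hypothetical stand-ins rather than the paper's actual variables.

    # Sketch: does adding the concept-association measures improve fit
    # over controls alone? (hypothetical file and column names)
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey.csv")  # one row per respondent

    controls_only = smf.ols(
        "policy_support ~ party_id + ideology + age + education", data=df
    ).fit()
    full = smf.ols(
        "policy_support ~ party_id + ideology + age + education"
        " + assoc_racism + assoc_openness", data=df
    ).fit()

    print("incremental R^2:", full.rsquared - controls_only.rsquared)
    print(full.compare_f_test(controls_only))  # (F statistic, p-value, df difference)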


Machine Learning and Deductive Social Science: An Introduction to Predictability Hypotheses

(In press in the Oxford Handbook of the Sociology of Machine Learning)

Sociologists have long evaluated models against benchmarks of prediction. Until recently, however, prediction was more often treated as a measure of model fit than as the goal of sociological inference. Advances in machine learning and shifts in sociological praxis are fundamentally reshaping how prediction is used. We distinguish a growing class of hypotheses we term “content agnostic,” which focus not on the form, magnitude, or even direction of the causal effects of independent variables on dependent variables, but instead treat the predictability of the latter from the former as a theoretically important quantity. This class of hypotheses is especially amenable to sociological theorizing; we demonstrate their diversity and utility by highlighting existing work whose core research questions are content agnostic, drawn from subfields as diverse as inequality, the sociology of culture, and social psychology. Thinking about such hypotheses analytically yields three important insights. First, incorporating certain practices from machine learning (e.g., the use of out-of-sample predictions to evaluate predictability) is necessary for validly testing them. Second, highly expressive models common in machine learning (e.g., random forests or neural networks) are, under most conditions, at least as well suited to evaluating them as traditional sociological workhorses such as OLS. Finally, we argue that sociology as a discipline will benefit from pursuing such hypotheses more frequently, and we discuss emergent directions for their future use.
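
As a toy illustration (not code from the chapter) of why out-of-sample evaluation matters for such hypotheses: the quantity of interest below is held-out R^2 itself rather than any coefficient, so an expressive learner is evaluated on the same footing as OLS.

    # Sketch: estimating predictability out of sample with two model classes
    # (synthetic data for illustration only).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = X[:, 0] * X[:, 1] + rng.normal(size=500)  # nonlinear signal

    for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(type(model).__name__, round(r2, 2))
    # Whichever model predicts better gives the tighter estimate of predictability.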

Word embeddings reveal how fundamental sentiments structure natural language

With Jeremy Freese

(Lead article in American Behavioral Scientist)

Central to affect control theory are culturally shared meanings of concepts. That these sentiments overlap among members of a culture presumably reflects their roots in the language use that members observe. Yet the degree to which the affective meaning of a concept is encoded in the way linguistic representations of that concept are used in everyday symbolic exchange has yet to be demonstrated. The question has methodological as well as theoretical significance for affect control theory, as language may provide an unobtrusive, behavioral method of obtaining evaluation-potency-activity (EPA) ratings complementary to those heretofore obtained via questionnaires. We pursue a series of studies that evaluate whether tools from machine learning and computational linguistics can capture the fundamental affective meaning of concepts from large text corpora. We develop an algorithm that uses word embeddings to predict the EPA profiles available from a recent questionnaire-derived EPA dictionary, as well as those of novel concepts collected using an open-source web app we developed. Across both a held-out portion of the available data and the novel data, our best predictions correlate with survey-based measures of the E, P, and A ratings of concepts at magnitudes greater than 0.85, 0.80, and 0.75, respectively.
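
A minimal sketch of the prediction approach, assuming pretrained embeddings and a survey-based EPA dictionary on disk; the file and column names are hypothetical, and the algorithm reported in the paper is more elaborate than this ridge-regression baseline.

    # Sketch: predict survey-based E, P, and A ratings from word embeddings.
    import numpy as np
    import pandas as pd
    from gensim.models import KeyedVectors
    from scipy.stats import pearsonr
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)
    ratings = pd.read_csv("epa_dictionary.csv")  # hypothetical columns: concept, E, P, A
    ratings = ratings[ratings["concept"].isin(set(vectors.key_to_index))]

    X = np.stack([vectors[c] for c in ratings["concept"]])
    X_tr, X_te, r_tr, r_te = train_test_split(X, ratings, test_size=0.2, random_state=0)

    for dim in ["E", "P", "A"]:
        model = Ridge(alpha=1.0).fit(X_tr, r_tr[dim])
        r, _ = pearsonr(model.predict(X_te), r_te[dim])
        print(f"held-out correlation, {dim}: {r:.2f}")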


Since the beginning of this millennium, data in the form of human-generated, machine-readable text has become increasingly available to social scientists, presenting a unique window into social life. However, harnessing vast quantities of this highly unstructured data in a systematic way presents a distinctive combination of analytical and methodological challenges. Fortunately, our understanding of how to overcome these challenges has also developed greatly over the same period. In this article, I present a novel typology of the methods social scientists have used to analyze text data at scale in the interest of testing and developing social theory. I describe three “families” of methods: analyses of (1) term frequency, (2) document structure, and (3) semantic similarity. For each family, I discuss its logical and statistical foundations, its analytical strengths and weaknesses, and prominent variants and applications.
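
A toy demonstration of the three families, not drawn from the article itself, using stock scikit-learn and gensim tools: TF-IDF for term frequency, LDA for document structure, and word embeddings for semantic similarity.

    # (1) term frequency, (2) document structure, (3) semantic similarity
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from gensim.models import Word2Vec

    docs = ["the senate passed the bill", "the team won the game", "voters backed the bill"]

    # (1) Weight words by how distinctive they are across documents.
    tfidf = TfidfVectorizer().fit_transform(docs)

    # (2) Model each document as a mixture of latent topics.
    counts = CountVectorizer().fit_transform(docs)
    topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

    # (3) Learn dense vectors whose geometry encodes word meaning.
    emb = Word2Vec([d.split() for d in docs], vector_size=10, min_count=1, seed=0)
    print(emb.wv.similarity("senate", "voters"))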

Negative Associations in Word Embeddings Predict Anti-Black Bias across Regions – but Only via Name Frequency

With Salvatore Giorgi, Robb Willer, and Johannes Eichstaedt

(Published in the Proceedings of ICWSM 2022)

The word embedding association test (WEAT) is an important method for measuring linguistic biases against social groups such as ethnic minorities in large text corpora. It does so by comparing the semantic relatedness of words prototypical of the groups (e.g., names unique to those groups) and attribute words (e.g., ‘pleasant’ and ‘unpleasant’ words). We show that anti-Black WEAT estimates from geo-tagged social media data at the level of metropolitan statistical areas strongly correlate with several measures of racial animus—even when controlling for sociodemographic covariates. However, we also show that every one of these correlations is explained by a third variable: the frequency of Black names in the underlying corpora relative to White names. This occurs because word embeddings tend to group positive (negative) words and frequent (rare) words together in the estimated semantic space. As the frequency of Black names on social media is strongly correlated with Black Americans’ prevalence in the population, this results in spuriously high anti-Black WEAT estimates wherever few Black Americans live. This suggests that research using the WEAT to measure bias should consider term frequency, and also demonstrates the potential consequences of using black-box models like word embeddings to study human cognition and behavior. 
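
For concreteness, here is a sketch of the core WEAT quantity in the style of standard implementations; the embedding file and word lists are hypothetical placeholders, and the paper's analyses additionally condition on the relative frequency of the group names.

    # Sketch: WEAT effect size, i.e., the standardized difference in how strongly
    # two sets of target words associate with pleasant versus unpleasant words.
    import numpy as np
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format("corpus_embeddings.bin", binary=True)

    def association(word, pleasant, unpleasant):
        # s(w, A, B): mean similarity to pleasant words minus mean to unpleasant words
        return (np.mean([vectors.similarity(word, a) for a in pleasant])
                - np.mean([vectors.similarity(word, b) for b in unpleasant]))

    def weat_effect_size(names_x, names_y, pleasant, unpleasant):
        sx = [association(w, pleasant, unpleasant) for w in names_x]
        sy = [association(w, pleasant, unpleasant) for w in names_y]
        pooled_sd = np.std(sx + sy, ddof=1)
        return (np.mean(sx) - np.mean(sy)) / pooled_sd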

Explaining the ‘Trump Gap’ in Social Distancing Using COVID Discourse 

With Sheridan Stewart, Brandon Walder, Shrinidhi K Lakshmikanth, Ishan Shah, Sharath Chandra Guntuku, Garrick Sherman, Johannes Eichstaedt, and James Zhou

(Published in the Proceedings of EMNLP 2020)

Our ability to limit the future spread of COVID-19 will in part depend on our understanding of the psychological and sociological processes that lead people to follow or reject coronavirus health behaviors. We argue that the virus has taken on heterogeneous meanings in communities across the United States and that these disparate meanings shaped communities' responses to the virus during the early, vital stages of the U.S. outbreak. Using word embeddings, we demonstrate that counties where residents socially distanced less on average (as measured by residential mobility) associated the virus more closely in their COVID discourse with concepts of fraud, the political left, and more benign illnesses such as the flu. We also show that the different meanings the virus took on in different communities explain a substantial fraction of what we call the “Trump Gap,” or the empirical tendency for more Trump-supporting counties to socially distance less. This work demonstrates that community-level processes of meaning-making determined behavioral responses to the COVID-19 pandemic and that these processes can be measured unobtrusively using Twitter.
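
A simplified sketch of the kind of measurement involved, not our actual pipeline: train a small embedding on one county's tweets, then average the cosine similarity between virus terms and the terms of a concept such as fraud.

    # Sketch: per-county semantic association between the virus and a concept.
    import numpy as np
    from gensim.models import Word2Vec

    def virus_association(tweets, concept_words, virus_words=("covid", "coronavirus")):
        model = Word2Vec([t.lower().split() for t in tweets],
                         vector_size=50, min_count=1, seed=0)
        sims = [model.wv.similarity(v, c)
                for v in virus_words for c in concept_words
                if v in model.wv and c in model.wv]
        return float(np.mean(sims)) if sims else float("nan")

    # Toy usage with three stand-in tweets from one county:
    toy_tweets = ["covid is a hoax", "this coronavirus fraud again", "covid numbers are a scam"]
    print(virus_association(toy_tweets, ["hoax", "fraud", "scam"]))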

Exposure to the Views of Opposing Others with Latent Cognitive Differences Results in Social Influence – But Only When Those Differences Remain Obscured

With Douglas Guilbeault (first author), Katharina Lix, Amir Goldberg, and Sameer Srivastava

(Published in Management Science)

Cognitive diversity is often assumed to catalyze creativity and innovation by promoting social learning among group members. Yet, in many contexts, the learning benefits of cognitive diversity fail to materialize. Why does cognitive diversity promote social learning in some contexts but not in others? We propose that the answer partly lies in the complex interplay between cognitive diversity and cognitive homophily: the likelihood of individuals learning from one another, and thus changing their views about a substantive issue, depends crucially on whether they are aware of the cognitive similarities and differences that exist between them. When social identities and cognitive associations about salient concepts related to a substantive issue are obscured, we theorize that cognitive diversity will promote social learning by exposing people to novel ideas. When cognitive diversity is instead made visible and salient, we anticipate that a cognitive homophily response is activated that extinguishes cognitive diversity’s learning benefits—even when social identity cues and other categorical distinctions are suppressed. To evaluate these ideas, we introduce a novel experimental paradigm and report the results of four preregistered studies (N = 1,325) that lend support to our theory. We discuss implications for research on social influence, collective intelligence, and cognitive diversity in groups.

Imagined otherness fuels blatant dehumanization of outgroups

With Amir Goldberg and Sameer Srivastava

(Published in Communications Psychology)

Dehumanization of others has been attributed to institutional processes that spread dehumanizing norms and narratives, as well as to individuals’ denial of mind to others. We propose that blatant dehumanization also arises when people actively contemplate others’ minds. We introduce the construct of imagined otherness—perceiving that a prototypical member of a social group construes an important facet of the social world in ways that diverge from the way most humans understand it—and argue that such attributions catalyze blatant dehumanization beyond the effects of general perceived difference and group identification. Measuring perceived schematic difference relative to the concept of America, we examine how this measure relates to the tendency of U.S. Republicans and Democrats to blatantly dehumanize members of the other political party. We report the results of two pre-registered studies—one correlational (N = 771) and one experimental (N = 398)—that together lend support to our theory. We discuss implications of these findings for research on social boundaries, political polarization, and the measurement of meaning.

Virtual reality perspective-taking increases cognitive empathy for specific others

With Jeremy Bailenson, Jamil Zaki, Joshua Bostick, and Robb Willer

(Published in PLOS ONE)

Previous research shows that virtual reality perspective-taking (VRPT) experiences can increase prosocial behavior toward others. We extend this research by exploring whether this effect of VRPT is driven by increased empathy and whether it extends to ostensibly real-stakes behavioral games. In a pre-registered laboratory experiment (N = 180), participants interacted with an ostensible partner (a student from their own university) in a series of real-stakes economic games after (a) taking the perspective of the partner in a virtual reality “day-in-the-life” simulation, (b) taking the perspective of a different person in a “day-in-the-life” simulation, or (c) doing a neutral activity in a virtual environment. The VRPT experience successfully increased participants’ subsequent propensity to take the perspective of their partner (a facet of empathy), but only when the partner was the same person whose perspective participants had assumed in the virtual reality simulation. Further, this effect of VRPT on perspective-taking was moderated by participants’ reported feeling of immersion in the virtual environment. However, we found no effects of the VRPT experience on behavior in the economic games.

Health Behavior Disparities Along Party Lines and Associative Diffusion

(Published in ASA Culture Section Newsletter)

A striking pattern in Americans’ response to the coronavirus pandemic is the extent to which that response varies by political party identification. Specifically, American Republicans are much less likely than American Democrats to engage in and endorse health behaviors that are, at the time of writing, recommended by the World Health Organization (Kushner Gadarian et al. 2020). From the perspective of associative diffusion (Goldberg and Stein 2018), this division can be explained by a parsimonious set of initial conditions, including animosity between the two political parties and the salient political leanings of sources of opinions concerning the pandemic. Here, I briefly describe polarization in response to the pandemic from the perspective of associative diffusion, contrast this perspective with an alternative explanation that revolves around the idea of “political echo chambers,” and offer interventions, suggested by the associative diffusion model, that might mend the American divide.

Measuring the predictability of life outcomes with a scientific mass collaboration

With Matthew Salganik (first author), Ian Lundberg (second author), and the Fragile Families Challenge Consortium

(Published in the Proceedings of the National Academy of Sciences)

How predictable are life trajectories? We investigated this question with a scientific mass collaboration using the common task method; 160 teams built predictive models for six life outcomes using data from the Fragile Families and Child Wellbeing Study, a high-quality birth cohort study. Despite using a rich dataset and applying machine-learning methods optimized for prediction, the best predictions were not very accurate and were only slightly better than those from a simple benchmark model. Within each outcome, prediction error was strongly associated with the family being predicted and weakly associated with the technique used to generate the prediction. Overall, these results suggest practical limits to the predictability of life outcomes in some settings and illustrate the value of mass collaborations in the social sciences. 
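
A schematic of the comparison at the heart of the common task method, with hypothetical file and column names: a flexible learner trained on the full set of survey variables versus a simple benchmark regression with a handful of predictors, both scored on held-out families.

    # Sketch: flexible model vs. simple benchmark on held-out data
    # (hypothetical file and column names; numeric predictors assumed).
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("training_data.csv")          # one row per family
    y = df["gpa"]                                  # one of the six outcomes
    X_full = df.drop(columns=["gpa"])              # thousands of predictors
    X_simple = df[["mother_education", "household_income", "prior_gpa"]]

    Xf_tr, Xf_te, Xs_tr, Xs_te, y_tr, y_te = train_test_split(
        X_full, X_simple, y, random_state=0)

    ml = GradientBoostingRegressor(random_state=0).fit(Xf_tr, y_tr)
    benchmark = LinearRegression().fit(Xs_tr, y_tr)
    print("ML R^2:       ", r2_score(y_te, ml.predict(Xf_te)))
    print("benchmark R^2:", r2_score(y_te, benchmark.predict(Xs_te)))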

Insights into the accuracy of social scientists' forecasts of societal change

With Igor Grossmann (first author), Amanda Rotella (second author), and many others (especially Jan Voelkel)

(Published in Nature Human Behaviour)

How well can social scientists predict societal change, and what processes underlie their predictions? To answer these questions, we ran two forecasting tournaments testing the accuracy of predictions of societal change in domains commonly studied in the social sciences: ideological preferences, political polarization, life satisfaction, sentiment on social media, and gender-career and racial bias. After being provided with historical trend data on each domain, social scientists submitted pre-registered monthly forecasts for a year (Tournament 1; N = 86 teams, 359 forecasts), with an opportunity to update forecasts based on new data six months later (Tournament 2; N = 120 teams, 546 forecasts). Benchmarking forecasting accuracy revealed that social scientists’ forecasts were on average no more accurate than simple statistical models (historical means, random walk, or linear regressions) or the aggregate forecasts of a sample from the general public (N = 802). However, scientists were more accurate if they had scientific expertise in a prediction domain, were interdisciplinary, used simpler models, and based their predictions on prior data.
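
For concreteness, here is what the three simple statistical benchmarks look like for a single monthly series; the numbers are illustrative, not tournament data.

    # Sketch: historical mean, random walk, and linear-trend forecasts.
    import numpy as np

    history = np.array([51.2, 50.8, 52.1, 53.0, 52.6, 53.4])  # illustrative monthly values
    horizon = 12

    mean_forecast = np.full(horizon, history.mean())   # historical mean
    walk_forecast = np.full(horizon, history[-1])      # random walk: repeat last value

    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)   # linear trend
    trend_forecast = intercept + slope * np.arange(len(history), len(history) + horizon)

    # Forecasts are then scored against realized values, e.g., by mean absolute error.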