
Computational approaches to the neuroscience of social perception

Jeffrey A. Brooks
New York University

Ryan M. Stolier
Columbia University

Jonathan B. Freeman
New York University

Corresponding author:
Jeffrey A. Brooks or Jonathan B. Freeman, Ph.D.
Department of Psychology
New York University
6 Washington Place
New York, NY 10003
Email: [email protected] or [email protected]

Abstract

Across multiple domains of social perception - including social categorization, emotion perception, impression formation, and mentalizing - multivariate pattern analysis (MVPA) of fMRI data has permitted a more detailed understanding of how social information is processed and represented in the brain. As in other fields, the neuroscientific study of social perception initially relied on broad structure-function associations derived from univariate fMRI analysis to map neural regions involved in these processes. In this review, we trace the ways that social neuroscience studies using MVPA have built on these neuroanatomical associations to better characterize the computational relevance of different brain regions, and how MVPA allows explicit tests of the correspondence between psychological models and the neural representation of social information. We also describe current and future advances in methodological approaches to multivariate fMRI data and their theoretical value for the neuroscience of social perception.

Computational approaches to the neuroscience of social perception

Other people are tremendously complex, and humans must navigate their relationships and interactions with others under conditions of high uncertainty. Whether meeting a stranger, reading a description of someone, or trying to determine how a friend is feeling, we rely on a set of perceptions and inferences about the person to determine our behavior. Understanding how we form impressions about others has been a central focus in social psychology for decades, and more recently the topic has proven to be well-suited to methods from computational neuroscience, which can readily leverage the inherently high-dimensional nature of neuroimaging data alongside behavioral measures of social perception. In this article, we review recent advances in computational approaches to the neuroscience of social perception. We focus particularly on multivariate analyses of fMRI data; computational analyses of behavioral data used in conjunction with fMRI, such as using fMRI and behavioral responses to estimate parameters of computational models, are an increasingly popular approach as well and are reviewed elsewhere (e.g., Cheong et al., 2017; Gonzalez & Chang, 2019; Hackel & Amodio, 2018).

As in other areas of social neuroscience, early fMRI studies on social perception generally focused on univariate activation-based analyses to associate relevant social cognitive processes with particular brain regions, an approach that continues to be a valuable mainstay. This research identified a number of regions important for social perception, establishing the primacy of the fusiform gyrus (FG) in face processing (Haxby et al., 2000; 2002; Kanwisher et al., 1997); the superior temporal sulcus (STS) in dynamic face and body perception (Grossman et al., 2000; Haxby et al., 2000; Said et al., 2010); and regions such as the medial prefrontal cortex (MPFC; Amodio & Frith, 2006; Mitchell, 2008) and temporo-parietal junction (TPJ; Saxe & Kanwisher, 2003) in thinking about others and representing their mental states ("mentalizing"). Multivariate pattern analysis (MVPA) was first introduced to neuroimaging research by Haxby and colleagues (2001), expanding the conceptual and methodological toolkit of neuroimaging researchers by providing a different way to conceptualize the patterns of activation that emerge in fMRI data. In the present review, we first briefly introduce MVPA techniques and discuss their theoretical basis as well as some related advantages and limitations. We then review in turn three domains of social perception research where these techniques have proven highly valuable: perceiving social group memberships, perceiving identity and associated traits and person knowledge, and perceiving others' emotional states.

MVPA Approaches

Univariate approaches primarily seek to relate the overall level of activation in a region to task conditions or experimental variables, which can provide neuroanatomical associations with those conditions or variables and their related psychological processes (although with exceptions, e.g., adaptation paradigms). However, neural regions assessed via fMRI do not only vary in their mean level of activation, but also in the spatial patterns of activation distributed across voxels. MVPA methods are sensitive to fine-grained differences in these spatial patterns of activation, whereas mass univariate testing treats each voxel individually, almost always ignoring the level of activation in contiguous voxels in statistical tests. This sensitivity enables MVPA to differentiate experimental conditions even in cases where mere differences in mean activation of a voxel cannot (Haxby et al., 2001; 2014).

A key assumption about MVPA, and one that influences how MVPA analyses are generally interpreted, is that neural response patterns inherently contain information about an associated cognitive state (Davis et al., 2014; Haynes, 2015; Lewis-Peacock & Norman, 2014; Popov et al., 2018). Researchers can thus probe condition response patterns to see how they may differ between brain regions, revealing the involvement of different regions in processing information or representing states relevant for a given task. Thus, multivariate analyses often target regions already known to be involved in specific tasks to specify how those regions represent information throughout the task and what computational processes they may support. Neuronal recordings in nonhuman primates have long shown that the aggregate activity of an assembly of neurons can provide a 'code' for various kinds of sensory and abstract cognitive information in the brain (i.e., a "population code"; Averbeck et al., 2006). Although fMRI voxels are far too large to be sensitive to individual neurons, neurons belonging to different neuronal assemblies (e.g., related to state 1 vs. state 2) may be distributed in different ways such that the precise assembly of neurons related to each state may vary across voxels. This could thereby give rise to distinct fMRI multi-voxel patterns associated with distinct states, despite a lack of sensitivity to individual neurons (Logothetis, 2008; Chaimow et al., 2011). In this way, MVPA provides a way to extend the population coding approach from systems neuroscience to the macro-scale populations measured with fMRI (Haynes, 2015; Haxby et al., 2001; 2014; Lewis-Peacock & Norman, 2014; but see the Conclusions section for a discussion of related limitations).
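A brief simulation can make this logic concrete: two conditions with identical mean activation in a region can nonetheless be discriminated from their voxel-wise spatial patterns. The sketch below uses entirely hypothetical, simulated data (all variable names and parameter values are arbitrary illustration choices, not taken from any study reviewed here).

```python
# Hypothetical simulation: two conditions evoke the same mean activation in a
# 50-voxel region but carry opposite, zero-mean condition-specific spatial patterns.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
pattern = rng.normal(size=n_voxels)
pattern -= pattern.mean()                      # zero-mean pattern: no univariate signal

cond_a = 1.0 + 0.2 * pattern + rng.normal(size=(n_trials, n_voxels))
cond_b = 1.0 - 0.2 * pattern + rng.normal(size=(n_trials, n_voxels))

# Univariate comparison of the region's mean activation: no reliable difference
t, p = ttest_ind(cond_a.mean(axis=1), cond_b.mean(axis=1))
print(f"univariate t = {t:.2f}, p = {p:.2f}")

# Multivariate decoding of the voxel patterns: accuracy above chance (0.5)
X = np.vstack([cond_a, cond_b])
y = np.repeat([0, 1], n_trials)
accuracy = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"decoding accuracy = {accuracy:.2f}")
```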

In its original application, Haxby and colleagues (2001) used MVPA to demonstrate that ventral temporal cortex shows spatially distributed response patterns to visual object categories that can be discriminated from the region's face response pattern using a classifier. Importantly, this included the fusiform face area (FFA), a region of the fusiform gyrus (FG) that reliably shows higher mean activation for faces compared to other visual categories (Kanwisher et al., 1997). This approach demonstrated that regions which show selectivity in univariate signal for one category can also hold (perhaps even equivalent amounts of) information about other categories. Such a classification (or "decoding") analysis enables researchers to use neural response patterns to predict an associated cognitive state or stimulus condition (Lewis-Peacock & Norman, 2014). A classifier is typically trained on one set of the data and tested on at least one other held-out set. By training a classifier to discriminate between any given experimental factors (i.e., conditions or stimulus characteristics), testing the classifier on held-out data can reveal which regions are involved in representing those conditions or characteristics of the stimuli. Another common way to interpret the results of a classifier is that categorical boundaries between stimuli are "computationally relevant" in a given brain region if that region shows high classification accuracy for that category boundary. This interpretation assumes that if a brain region's patterns of activity discriminate between two stimulus categories, information about that category boundary is retained in that region's spatial response patterns because it is relevant for whatever computation that region is performing in that particular cognitive context.

In addition to being used to classify brain states by experimental conditions or stimulus categories, multi-voxel patterns associated with different conditions can be directly compared by measuring pairwise similarities in their response patterns. While sometimes quite illuminating in and of itself (showing, for example, that White individuals with a strong pro-White bias have more dissimilar neural response patterns for the Black and White race categories; Brosch et al., 2013), similarity between neural patterns can also be leveraged to test explicit theories about how the brain represents information. This technique, called Representational Similarity Analysis (RSA; Kriegeskorte et al., 2008), provides a shared framework for testing and comparing diverse models of neural computation (computational, theoretical, behavioral) by examining their second-order isomorphisms (i.e., similarities in the structure of their similarity spaces). This technique was developed by computational neuroscientists, who have used it to assess how well the representational structure of the ventral visual stream corresponds with various computer vision models (Kriegeskorte et al., 2008; Khaligh-Razavi & Kriegeskorte, 2014; Jozwik et al., 2017).

Assuming that multi-voxel response patterns contain information about certain cognitive factors, RSA starts by computing how similar (or dissimilar) the neural response patterns associated with each condition or category are to one another throughout the brain. The resulting similarity space can be directly compared with any other second-order similarity space, most commonly those derived from computational models or behavioral task responses. Thus, while decoding and classification approaches can reveal which regions contain or process information about cognitive dimensions, RSA is able to directly test hypotheses about how that information is organized and represented, which can in turn address questions about which psychological or stimulus dimensions are computationally relevant in a given brain region (Kriegeskorte et al., 2008; Popal et al., 2019). Despite its explanatory power, RSA is highly computationally tractable and thus broadly approachable for researchers. RSA is especially relevant for social and affective neuroscience due to the large number of models proposed in the literature describing how social groups, emotion categories, and traits relate to one another along dimensions such as stereotype content, facial cues, and affective properties (e.g., Fiske et al., 2002; 2007; Oosterhof & Todorov, 2008; Russell, 1980). RSA allows researchers to specifically adjudicate between such models, testing competing and complementary explanations for how the brain represents and computes social information. We now turn our attention to specific areas of social perception research where MVPA and RSA have been leveraged to make important new insights.

Social Categories and Groups

An illustrative example of the contrast between univariate and multivariate fMRI analyses in the domain of social categorization comes from a set of papers that used both techniques to analyze the same dataset. In the experiment, participants were assigned to one of two arbitrarily defined mixed-race groups and then had to categorize faces from the two groups along either in-group vs. out-group or Black vs. White dimensions. The first paper (Van Bavel et al., 2011) reported an in-group selectivity effect in the FFA, such that a univariate in-group vs. out-group contrast showed greater mean activity in response to in-group vs. out-group members, despite the fact that both groups were mixed-race. However, a follow-up report showed that, despite the overall difference in activation in response to in-group members, the race of the faces could still be discriminated by a multivariate pattern classifier (Ratner, Kaul, & Van Bavel, 2013).

These results indicated that the race of the faces was still computationally relevant in the FFA, regardless of the difference in mean activation driven by the social context and current processing goals.

Many studies examining univariate social category responses in regions such as the FG/FFA interpreted greater activation for one category vs. another as enhanced processing of or greater attention to the target category in the contrast (Golby et al., 2001; Lieberman et al., 2005). Some have interpreted such results as providing a neural basis for long-standing out-group deficit effects in social psychology, such as better memory for in-group vs. out-group faces along a number of dimensions (Hugenberg et al., 2010; Meissner & Brigham, 2001). Multivariate decoding approaches simultaneously challenge and complement these results by demonstrating that even if a brain region does not show univariate selectivity for a given category, it still might represent and process information about that category.

Still, to date, relatively few studies have used multivariate decoding on social category responses. In one of the earliest applications of MVPA to social categorization, Kaul et al. (2011) found that face gender could be reliably decoded from an array of brain regions commonly associated with various levels of face processing. Classifier performance was highest in the medial orbitofrontal cortex (mOFC), FG, and inferior occipital gyrus (IOG). Contreras and colleagues (2013) showed that both face gender and race could be decoded from neural response patterns in a social categorization task, but after controlling for low-level differences in the stimuli from each category, the only region that showed accurate decoding was the FFA, further distinguishing the importance of this region in representing faces at the level of social categories. In a study that aimed to decode multi-voxel patterns associated with the broadest social group distinction possible (i.e., "us" vs. "them"), classification was used to show that the dorsal anterior cingulate cortex/middle cingulate cortex (dACC/MCC) and anterior insula (AI) contain high-level information about group boundaries ranging from arbitrarily defined "minimal" groups to political groups (Cikara et al., 2017).

Reconciling and integrating findings from univariate and multivariate fMRI is an important ongoing task across all fields that use neuroimaging (Davis et al., 2014). RSA has recently proven quite useful for these purposes, since it can not only help reveal which brain regions represent social category-relevant information, but also the nature and organization of those representations.

In social psychology, social categorization has traditionally been considered an automatic and obligatory perceptual process that precedes any more "cognitive" social processes such as stereotyping (Allport, 1954). However, alternative approaches emphasize the idea that social-conceptual knowledge (i.e., stereotypical associations) and other top-down social cognitive processes can weigh in on face processing before a percept has solidified, thereby allowing implicit stereotypes to shape face perception (Freeman & Ambady, 2011; Freeman & Johnson, 2016). Recent work applying RSA has examined whether an individual's stereotypical associations are reflected in how the brain represents others' faces, an approach which can also reveal how "deeply" such top-down associations reach (i.e., regions that would suggest perceptual vs. post-perceptual processing of faces). Stolier and Freeman (2016) found that neural representations of faces' social categories (e.g., Black, Female, Happy) in the FG and orbitofrontal cortex (OFC) demonstrated a similarity structure that was predicted by a subject's own unique stereotype knowledge about those categories. That is, if someone held more similar stereotypes about the categories 'Male' and 'Anger', the multi-voxel patterns associated with those categories exhibited a greater similarity when subjects passively viewed faces belonging to those categories (Figure 1).

These findings suggest that the way face category representations are organized in regions important for face perception, such as the FG, is partially determined by stereotypes about those categories. Moreover, the fact that this similarity structure was also observed in the OFC suggests that domain-general perceptual processes associated with the OFC may be involved in driving the impact of stereotypes on face perception (Freeman & Johnson, 2016). In particular, the object recognition literature suggests that the OFC is recruited in perceptual categorization tasks when incoming visual input matches a pre-existing visual association or heuristic in memory (Bar, 2003; Bar et al., 2006; Summerfield & Egner, 2009). This effect is strengthened when the visual input is ambiguous or impoverished, suggesting that the OFC is involved in exerting visual predictions about category membership before those categorizations have fully solidified. One possibility is that the OFC plays a similar predictive role in perceiving social categories, supplying the FG with top-down visual predictions or expectations about social categories. In this case, the use of RSA permits inferences about the way social category representations are organized in visual processing regions as well as the high-level regions that may be involved in providing top-down social information to them.
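As a schematic illustration of the RSA logic used in studies like this one, the sketch below rank-correlates a neural representational dissimilarity matrix (RDM) computed from condition-wise multi-voxel patterns with a model RDM that could, for example, be derived from a participant's stereotype or conceptual ratings of the same categories. All data and variable names are simulated placeholders; this is not the authors' actual analysis code.

```python
# Schematic RSA sketch with simulated, hypothetical data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_voxels = 8, 200   # e.g., 8 social categories

# Condition-wise multi-voxel response patterns from a region of interest
# (in practice, per-condition beta estimates from a GLM; simulated here)
neural_patterns = rng.normal(size=(n_conditions, n_voxels))

# A model/behavioral dissimilarity structure for the same conditions, e.g.,
# derived from a participant's stereotype or conceptual ratings (simulated here)
model_rdm = pdist(rng.normal(size=(n_conditions, 5)), metric="euclidean")

# Neural representational dissimilarity: 1 - Pearson correlation between patterns
neural_rdm = pdist(neural_patterns, metric="correlation")

# Second-order comparison: rank-correlate the two dissimilarity structures
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RSA: Spearman rho = {rho:.2f}, p = {p:.2f}")
```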

It is sensible that visual face processing regions would partially depend on such rapid top-down input, as social categorization poses a number of challenges, particularly when faces have atypical or ambiguous features. Any given facial feature can be more or less related to any set of social categories at once, and faces naturally vary on featural continua related to gender, race, and a host of other category dimensions. For example, a White face may not only have more or less White category-associated cues but may bear partial cues related to the Black category as well (Locke et al., 2005). Previous work suggests such multiple cues co-activate multiple category representations regardless of the ultimate categorization (Freeman et al., 2008; Freeman et al., 2010). MVPA has recently been used to investigate how the perceptual system is guided towards a final categorization despite features that may initially activate multiple social categories. Based on prior computational models of social categorization (Freeman & Ambady, 2011), Stolier and Freeman (2017) tested the possibility that the perceptual system initially co-activates any categories associated with features on a face, which then must compete and resolve over hundreds of milliseconds. They additionally examined whether cognitive monitoring processes may be recruited to help resolve the competition, either to flag more attentional resources to be directed to the stimulus or perhaps to play an inhibitory role in the competition. For example, a feminine male face may initially elicit simultaneous partial activation of the categories Male and Female; cognitive monitoring processes may then help resolve the competition such that one category ('Male') wins out and the other category ('Female') is cleared from processing. A likely candidate for such processes would be the pre-supplementary motor area/dorsal anterior cingulate cortex (pre-SMA/dACC), a region central to cognitive monitoring and competition between decisions in tasks (Dosenbach et al., 2007).

In the study, subjects were presented with faces manipulated to vary in the typicality of their gender or race features, such that one category (e.g., Male or White) could have features more or less related to the alternate category (e.g., Female or Black). To measure perceivers' co-activation of multiple categories while viewing each face, participants performed a mouse-tracking task in the scanner, in which they made speeded face categorization decisions with a computer mouse while the trajectories of their mouse movements were recorded. The deviation of mouse trajectories toward unselected categories has been well validated as a measure of multiple category co-activation during perception (Freeman, 2018; Freeman & Ambady, 2010). For instance, a mouse trajectory that deviated towards the 'Male' response option en route to a categorization of 'Female' putatively reflects co-activation of the Male category despite a final categorization of 'Female'.

To measure how brain regions involved in face perception held information pertaining to both competing categories during the mouse-tracking task, the multivariate FG response pattern to each individual trial's face (e.g., a 'Male' trial) was compared to the mean response pattern for the alternate category (e.g., 'Female'). The results showed that, during trials in which mouse trajectories showed greater category co-activation (i.e., deviation toward the alternate category), multivariate response patterns in the FG for those trials showed a greater similarity to the mean response pattern for the alternate category. For instance, when subjects steered the mouse towards the 'Female' response option en route to 'Male' for a given face, the FG multi-voxel pattern in response to that face was more similar to the average multi-voxel response pattern for 'Female'. To explore how cognitive monitoring may assist perceivers in converging on their ultimate percept, the researchers also examined univariate activation in the pre-SMA/dACC. They found that, on trials where mouse-tracking showed more category co-activation (and overlapping neural response patterns), the pre-SMA/dACC became additionally engaged, suggesting a recruitment of conflict resolution processes to help the FG converge on a stable percept of a face. These findings suggest that other people's complex and sometimes ambiguous facial cues lead the FG to temporarily co-represent multiple categories, which, through the help of the pre-SMA/dACC, rapidly resolve over time to drive the stable categorization of other people.
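The trial-level similarity logic in this kind of analysis can be sketched as follows, using simulated data and hypothetical variable names (a simplified stand-in rather than the published analysis pipeline): each trial's pattern is correlated with the alternate category's mean pattern, and those similarity values are then related to a behavioral measure of category co-activation such as mouse-trajectory deviation.

```python
# Schematic sketch of trial-level pattern similarity (simulated, hypothetical data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_trials, n_voxels = 60, 300

# Single-trial FG response patterns to faces ultimately categorized as 'Male'
trial_patterns = rng.normal(size=(n_trials, n_voxels))

# Mean FG response pattern for the alternate category ('Female'),
# estimated from independent trials
female_mean_pattern = rng.normal(size=n_voxels)

# Mouse-trajectory deviation toward the unselected ('Female') response on each trial
mouse_deviation = rng.normal(size=n_trials)

# Similarity of each trial's pattern to the alternate category's mean pattern
similarity_to_alternate = np.array(
    [pearsonr(trial, female_mean_pattern)[0] for trial in trial_patterns]
)

# Does pattern-level co-representation track behavioral category co-activation?
r, p = pearsonr(similarity_to_alternate, mouse_deviation)
print(f"trial similarity vs. mouse deviation: r = {r:.2f}, p = {p:.2f}")
```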

Traits, identity, and person knowledge

Perhaps more than in any other domain of social neuroscience, enormous effort has been invested in determining which brain regions are most involved in thinking about other minds (i.e., mentalizing and theory of mind). This research has identified a set of brain regions that are reliably engaged by storing, retrieving, and updating knowledge about others, such as the medial prefrontal cortex (MPFC), temporo-parietal junction (TPJ), precuneus, and posterior cingulate cortex (PCC). The MPFC has been a particular focus in social neuroscience for its frequent associations with inferences about other minds, both in terms of fairly stable qualities like personality traits and more fleeting or situational mental states.

Our ability to recognize and tailor our behavior toward numerous personally familiar individuals would be computationally intractable without an extremely efficient coding scheme in brain regions associated with thinking about others. Univariate fMRI approaches have shown that individual identities can activate associated person knowledge in regions such as the MPFC (Cloutier et al., 2011; Todorov et al., 2007), but that does not necessarily mean that the brain stores individual representations for each known identity. Recently, multivariate approaches have extended this work by demonstrating the complexity and flexibility of these identity representations. MVPA and RSA have been particularly well-suited to studying representations of personality and identity due to the multitude of theories in social psychology about the dimensions underlying our representations of others. In particular, RSA allows explicit tests of how well such dimensional theories of personality predict the brain's representational structure during social perception or mentalizing tasks (e.g., Tamir et al., 2016; Thornton & Mitchell, 2018). Due to the highly consistent set of regions associated with high-level social cognition in the univariate fMRI literature (i.e., MPFC, TPJ, PCC/precuneus; see Figure 2), MVPA investigations in this domain have had a well-defined set of a priori regions of interest to explore. Research using classification and RSA approaches has begun targeted assessments of the representational structure in these regions, revealing the granularity or discriminability of identity or personality representations, which category boundaries seem to be relevant for processing in these regions, and which aspects of social cognition each region is most computationally relevant for.

For example, research using a searchlight-based classification approach showed that multi-voxel response patterns in the MPFC, TPJ, and precuneus could discriminate between personally familiar and unfamiliar individuals, and further, that neural patterns in the MPFC could distinguish individual identities from both the familiar and unfamiliar conditions, potentially reflecting rapid encoding of coarse personality representations of the unfamiliar identities during the course of the experiment (Castello et al., 2017). Research introducing biographical information about novel targets found that such information rapidly shapes multi-voxel patterns in response to their faces in the bilateral FG, where face representations were grouped by the amount of biographical information participants learned about each individual (Verosky, Todorov, & Turk-Browne, 2013). Thornton and Mitchell (2017) similarly found that identity and person knowledge about personally familiar others are represented in discriminable patterns of neural activity in regions such as the MPFC, TPJ, and precuneus, but their task involved imagining these individuals in various contexts rather than looking at their faces. Subjects in this experiment also made judgments about how accurate and vivid they felt their mental simulations were, and these judgments were predicted by how typical (for that target individual) the neural response pattern in the medial parietal cortex/precuneus was on the relevant trial.

Moreover, the medial parietal cortex/precuneus was the only region that reliably encoded information about the situational context of the mental simulation in addition to information about the individual imagined in the simulation. While speculative, this possibly indicates different computational roles for these regions, such that the MPFC stores and organizes person knowledge, whereas medial parietal regions are more involved in integrating person knowledge into social-cognitive judgments and contextual aspects of mental prospection.

But what is the nature of these representations of other minds, and how are they organized? While a comprehensive answer to this question remains elusive, recent work has used RSA to characterize the organization of person knowledge, indicating a number of complex yet efficient ways of organizing knowledge about numerous individuals along shared dimensions. Building off the extensive history of dimensional models in the person perception literature (e.g., Fiske et al., 2002; 2007; Oosterhof & Todorov, 2008), Tamir and colleagues (2016) tested how well a large set of hypothesized social-cognitive dimensions from the literature explained the representational space of multi-voxel patterns elicited when subjects thought abstractly about mental states such as "awe" and "self-consciousness". Using principal components analysis, the researchers found that a smaller set of three dimensions explained a majority of the variance in mental state representations: rationality, social impact, and valence. Complementary work used RSA to test how well four specific models of person perception (including the big five factor model of personality traits and the warmth-competence model of social cognition) predicted the neural representational space of response patterns during mentalizing (Thornton & Mitchell, 2018). The researchers found that all four models significantly predicted the brain's representational structure, but not as well as a synthetic model derived from a combination of all four candidate models. Thus, while there may be a low-dimensional space that largely explains how we represent information about others, ultimately our understanding of the nature of these representations is only emerging. Related work recently found that neural representations of others hold information pertaining to the mental states most attributed to each individual, with those mental states weighted more strongly in a given target's multi-voxel response patterns (Thornton et al., 2019a; see Figure 2).
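As a hypothetical sketch of the dimensionality-reduction step in analyses like that of Tamir et al. (2016), ratings of many mental states on a set of candidate psychological dimensions can be reduced to a few principal components, whose scores then serve as a candidate model of the neural representational space. The data, dimension counts, and names below are simulated placeholders, not the published materials.

```python
# Hypothetical sketch of reducing candidate psychological dimensions with PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_states, n_dimensions = 60, 16   # e.g., 60 mental states rated on 16 dimensions

ratings = rng.normal(size=(n_states, n_dimensions))
ratings_z = StandardScaler().fit_transform(ratings)

pca = PCA()
state_scores = pca.fit_transform(ratings_z)
print("variance explained by first 3 components:",
      np.round(pca.explained_variance_ratio_[:3], 2))

# The low-dimensional state scores could then be turned into a model RDM and
# compared with neural RDMs, as in the RSA sketch shown earlier.
```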

One particularly informative approach for researchers studying mental state and trait representation has been to compare representations of self and other, and of many familiar others, within naturalistic groups of friends or acquaintances. Recently, Thornton and colleagues (2019) examined the relationship between a given individual's representation of their own self-knowledge and the neural representations of others' mental states. The researchers used RSA to show that the neural patterns associated with one's own self-concept are more distinct or discriminable than those associated with knowledge about others, as reflected in regions such as the MPFC, TPJ, and dorsolateral prefrontal cortex (DLPFC). Other work employed a round-robin design in an fMRI study on a sample of close friends, finding that multi-voxel representations of a given individual's self-concept in the MPFC were correlated with their friends' MPFC representations of that individual's personality (Chavez & Wagner, 2020). The strength of this relationship was associated with how similar trait ratings were between the target individual (rating their own traits) and the friend in question, suggesting that one factor driving "accuracy" in personality judgments is how well our mental models of an individual's personality match their own self-assessments.

Another fMRI study on a group of familiar individuals used a large sample drawn from a cohort of first-year MBA students (Parkinson et al., 2017), using RSA to show that representations of familiar others are organized along social network dimensions. Multiple social network attributes – including distance (the number of intermediate paths between any given two people in the network) and eigenvector centrality (generally characterizing how well connected a given person is within their network) – were used to construct dissimilarity matrices reflecting pairwise relationships between each individual in the social network on these metrics. These network characteristics were found to predict the representational structure of neural response patterns in regions such as the MPFC, superior temporal cortex (STC), inferior parietal lobule (IPL), and precuneus/PCC when subjects viewed videos of their classmates introducing themselves in the scanner. This work suggests that representations of familiar others are organized in a manner partly determined by the overall structure of the social network in which an individual is embedded.
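The construction of such network-based model RDMs can be sketched as follows, using a simulated network and hypothetical variable names intended only to illustrate the logic of this approach, not to reproduce the published analysis.

```python
# Hypothetical sketch: model RDMs built from social-network attributes.
import numpy as np
import networkx as nx

# A stand-in social network of 20 people (connected small-world graph)
G = nx.connected_watts_strogatz_graph(n=20, k=4, p=0.3, seed=4)
people = sorted(G.nodes())

# Model 1: pairwise social distance (shortest path length between two people)
path_lengths = dict(nx.shortest_path_length(G))
distance_rdm = np.array([[path_lengths[i][j] for j in people] for i in people])

# Model 2: dissimilarity in eigenvector centrality (how well connected each person is)
centrality = nx.eigenvector_centrality_numpy(G)
cvals = np.array([centrality[i] for i in people])
centrality_rdm = np.abs(cvals[:, None] - cvals[None, :])

# Either model RDM could then be compared against neural RDMs computed from
# multi-voxel responses to each person, as in the earlier RSA sketch.
print(distance_rdm.shape, centrality_rdm.shape)
```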

The brain's representational structure of identity and personality concepts is consistent with a number of different models of high-level social cognition that predict how different minds relate to one another. Together, these studies demonstrate that person representations (identity, traits, personality) in the brain are highly structured, with high-level social dimensions such as personality similarity and social network position partly organizing neural response patterns. It is likely that a smaller set of underlying social cognitive dimensions explains most of the variance in pattern responses in brain regions such as the MPFC and TPJ, rather than representations in these regions being simultaneously constrained by several different psychological models of personality and behavior. However, research investigating any such low-dimensional representational space is only emerging.

Emotion Perception

Understanding how others are feeling is an important tool for navigating the social world safely and effectively, and understanding which emotion categories perceivers can successfully "recognize" in others has been a central focus of the literature on emotion perception. Early, seminal models of face perception (Bruce & Young, 1986) emphasized a processing dissociation between static and dynamic facial cues, and since facial emotion is often categorized based on dynamic facial movements, it has largely been treated separately in the literature from dimensions of social perception that have categorical boundaries defined by static cues (e.g., race and sex). The particular neuroanatomical dissociation is between the fusiform gyrus (FG), thought to be more important in processing configurations of static facial cues, and the superior temporal sulcus (STS), known to be involved in processing dynamic facial actions as well as socially relevant actions more broadly, such as body movements (Haxby et al., 2000; 2002). The neuroimaging literature on emotion perception has been further complicated by studies that aimed to isolate regions most involved in perception of specific discrete emotions, for example by showing participants facial expressions typically categorized as Angry, Disgusted, and Neutral, and computing univariate Angry > Neutral and Disgusted > Neutral contrasts to determine which regions are preferentially engaged by Angry and Disgusted facial expressions, respectively. Numerous studies associated different possible brain regions with different specific emotions, most famously associating the amygdala with perceiving fear (Adolphs et al., 1995; Adolphs, 2008), although neuroimaging meta-analyses have been unable to find specific associations that are consistent across the literature (Lindquist et al., 2012).

In an analogous manner to the fMRI literature on social categorization, multivariate approaches have helped shift the neuroscientific study of emotion perception from a focus on associating brain regions with emotion categories to a greater understanding of the relevant information different regions process.

As with the study of traits and identity, this makes MVPA particularly beneficial to the study of emotion perception because of the degree of debate in the field surrounding different candidate models of emotion perception. The core of this debate concerns how to specify the relationship between emotion categories (e.g., Anger) and facial actions, with some assuming that specific facial expressions map directly onto corresponding emotion categories and others assuming a more context- and perceiver-dependent relationship (Barrett et al., 2019). Not only can diverse candidate models be directly tested against each other using RSA, but classification and decoding approaches can also demonstrate which aspects of emotion expressions determine boundaries between emotion categories and perceiver impressions, addressing theoretical predictions about the determinants of emotion percepts.

Multivariate classification approaches demonstrate that facial emotion categories can be decoded in regions such as early visual cortex (V1; Petro et al., 2013), the posterior superior temporal sulcus (pSTS; Said et al., 2010), and the fusiform gyrus (FG; Harry et al., 2013; Wegrzyn et al., 2015a). For static images of facial expressions, Wegrzyn et al. (2015a) compared classification performance in multiple brain regions, finding that classification performance was highest in the FG, indicating that this region contains and processes emotion category-relevant information. This is consistent with work showing that facial emotion categories exhibit category selectivity effects in a manner consistent with social category perception (Calder et al., 1996; de Gelder et al., 1997; Etcoff et al., 1992; Wegrzyn et al., 2015b). A recent study further challenged the common FG/STS distinction by showing that emotion category representations in response to dynamic facial movements could be decoded in the FG, and that emotion category representations in response to static facial expressions could be decoded in the STS (Liang et al., 2017). However, in studies characterizing the representation of fine-grained facial movements associated with emotion, the pSTS seems to be the most computationally relevant region (Deen & Saxe, 2019; Srinivasan et al., 2016).

As in the domain of perceiving social categories and groups, the idea that social cognitive processes and semantic representations of emotion categories may shape face processing has become important in understanding facial emotion perception. Classic categorical approaches such as basic emotion theory would predict a neural representational space in which different categories are primarily represented on the basis of their differences in facial cues.

In a recent study using RSA, we measured the neural representational space of emotion categories from a task in which subjects passively viewed emotional facial expressions, and also measured each subject's conceptual space of emotion categories (Brooks et al., 2019). We found that each individual's unique conceptual model of emotion categories was reflected in multi-voxel pattern structure in the FG when viewing faces belonging to those categories, even when accounting for intrinsic visual differences in the emotion expressions themselves. Specifically, when an individual believed any given pair of emotions (e.g., Angry and Sad) were more conceptually similar, there was a corresponding similarity in the multi-voxel response patterns in the FG to faces belonging to those categories (see Figure 1c). Of relevance for comparing conceptually shaped models of emotion perception with more categorical models, these results emerged even when controlling for three different models of similarity between categories in their displayed facial cues and low-level visual properties, strengthening the claim of conceptually structured representations in the FG. These findings suggest that individual differences in conceptual understanding of what different emotions mean may impact the visual representation of facial expressions commonly associated with those categories.

Beyond using RSA to assess the correspondence between idiosyncratic conceptual structure and neural pattern structure in the FG, RSA has also proven useful in adjudicating between different theoretical models of emotion in terms of how well they predict the brain's representational structure of emotion categories. In the domain of emotion perception, the dominant models have been the categorical "basic emotion" model and a dimensional "circumplex" model. Basic emotion theory posits a small set of psychologically distinct emotion categories thought to yield associated facial expressions that are universally recognized in a categorical fashion (Ekman, 1993; Ekman et al., 1969; Ekman & Friesen, 1971; 1976). The circumplex model emphasizes two underlying dimensions, valence (positive vs. negative) and arousal (high vs. low physiological activation or perceived intensity), onto which all emotion judgments map (Russell, 1980; 2003).

In a study using RSA to explicitly compare how well these models fit representations of 20 emotion categories inferred from reading stories about individuals, Skerry and Saxe (2015) found that neither model fit multi-voxel representations in the "theory of mind network" (including multiple sub-regions of the MPFC as well as the rTPJ) as well as a higher-dimensional model generated from behavioral responses. This model contained 38 dimensions describing various contextual and situational factors that constrain perceiver appraisals.

While these brain regions, more often associated with mentalizing and theory of mind, have not been extensively studied in the context of emotion perception, emotion perception is, of course, fundamentally a case of mental state attribution, just as mentalizing is. Indeed, an additional study (Skerry & Saxe, 2014) showed that the representational space of emotion categories in the MPFC was shared across situational inferences and facial emotion perception, indicating that these regions implicated in theory of mind may represent and compute high-level aspects of emotion perception across multiple modalities. Given that valence has also been found to partly organize the brain's mental state representations (Tamir et al., 2016), future work is needed to disentangle the representations, and possibly shared underlying dimensions, of emotion and mental states.

Conclusions

Across multiple domains of social perception, multivariate analyses of fMRI data have permitted a more fine-grained understanding of how social information is processed and represented in the brain, and an increased understanding of the computational relevance of specific brain regions in social-perceptual processes. Such an approach is equally informative from neuroscientific and psychological perspectives, better characterizing the computational characteristics of specific brain regions as well as allowing explicit tests of how well psychological models fit the brain's representational structure of social information. It is important to note that this review was relatively limited to classification and RSA approaches, but there are a number of other data-driven methods and advanced analysis techniques that have been similarly useful in moving the field past localization approaches (see Wagner et al., 2019 for a relevant review). Additional methodological tools are rapidly accumulating and gaining sophistication. For example, some recent work has expanded the use of multivariate pattern classifiers to include multivariate patterns of connectivity between brain regions, which can also be leveraged to make stronger causal claims about the relationships between distant brain regions in their representational spaces (Anzellotti et al., 2017). In the domain of social perception, this has already been used to decode wide-scale patterns of connectivity associated with facial emotion categories (Liang et al., 2018). Future work is needed to better understand the relationship between these various approaches.

While promising, these approaches are not without limitations. Notably, RSA is inherently correlational and does not permit causal inferences about the determinants of neural representational structure. Additionally, decoding approaches are sometimes particularly sensitive to idiosyncratic factors such as the specific sample, stimuli, and task context (Davis et al., 2014; Todd et al., 2013). For this reason, it is essential to test decoding models outside of the original sample, and recent applications have made progress incorporating this level of rigor. Large-scale, openly shared fMRI datasets, and the concurrent development of sophisticated tools to analyze them, also serve to mitigate these issues. It is also important to note that MVPA analyses are not immune to several issues faced by univariate fMRI. In particular, following a searchlight or whole-brain MVPA analysis that maps representational similarity or classification accuracy at each voxel in the brain, the resulting maps undergo voxelwise statistical tests, making MVPA vulnerable to the same statistical correction and reporting pitfalls observed in univariate fMRI research.

Another challenge of multivariate fMRI lies in interpreting MVPA results and using them to inform and develop psychological theory. What does it really mean to be able to discriminate patterns of fMRI signal along a cognitive dimension? Multi-voxel patterns associated with a particular category are often reported as the neural representation of that category, but this interpretation requires care. The precise relationship between neuronal activity and differences in the spatial distribution of fMRI signal is unknown (but see Haynes, 2015 for a summary of existing knowledge about the relationship between multi-voxel patterns and the activity of neurons). However, empirical and theoretical efforts are underway to better characterize the nature of multi-voxel response patterns and their precise relationship with underlying neural activity as well as the cognitive dimensions they seem to encode (Davis et al., 2014; Grootswagers et al., 2018; Popov et al., 2018; Ritchie et al., 2019). The idiosyncrasies of sample and stimuli can also introduce issues of interpretation. Researchers should take caution when interpreting a classifier trained and tested on one sample as reflecting the "neural code" for a cognitive dimension. As noted above, the interpretability and theoretical utility of decoding models principally depends on their generalizability out of sample (Kriegeskorte & Douglas, 2019). Increased use of RSA and related methods may elucidate some of these open questions about the nature of representation in fMRI response patterns. In particular, while RSA does not permit causal inferences, its ability to test the dimensionality and informational content of neural activation patterns in a targeted way affords stronger conclusions about how a given brain region encodes information.
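As noted above, the value of a decoding result hinges in part on whether it generalizes beyond the data used to build it. One common form of out-of-sample testing is to hold out entire participants; a minimal, hypothetical sketch of this cross-subject validation scheme (simulated data, arbitrary parameters) is shown below.

```python
# Hypothetical sketch: leave-one-subject-out decoding with simulated data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(5)
n_subjects, trials_per_subject, n_voxels = 10, 40, 100

y = np.tile(np.repeat([0, 1], trials_per_subject // 2), n_subjects)
subject_ids = np.repeat(np.arange(n_subjects), trials_per_subject)

# A condition-related pattern shared across subjects, plus trial-wise noise
shared_pattern = rng.normal(size=n_voxels)
X = rng.normal(size=(n_subjects * trials_per_subject, n_voxels))
X += np.where(y[:, None] == 1, 0.3, -0.3) * shared_pattern

# Train on N-1 subjects, test on the held-out subject, repeat for each subject
scores = cross_val_score(SVC(kernel="linear"), X, y,
                         groups=subject_ids, cv=LeaveOneGroupOut())
print(f"cross-subject decoding accuracy: {scores.mean():.2f}")
```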

While more work is needed to address these and other questions, the field stands to benefit from continued use of MVPA and related approaches. Techniques are constantly evolving and improving, opening an exciting set of new avenues for computational neuroimaging research on social perception, such as MVPA in conjunction with naturalistic stimuli and tasks such as movies and real-time interactions (Wagner et al., 2019). Future work using RSA would benefit from choosing diverse candidate models to test at the level of neural representation. Of particular relevance to social perception are the astoundingly rapid advances in "deep" neural network models of computer vision, which approach or approximate human performance in a number of object recognition and categorization tasks (Goodfellow et al., 2016; Hassabis et al., 2017; Kriegeskorte & Golan, 2019). Comparing internal representations from advanced computer vision models to neural and behavioral category representations has already proven useful in the object recognition literature (Jozwik et al., 2017; Kriegeskorte & Golan, 2019; Storrs et al., 2017), and could benefit the neuroscientific study of visual social perception as well. These approaches, and other theoretical and methodological advancements, show promise in improving our understanding of how visual input from another person's face and body is transformed into a socially relevant category representation in the brain.

References

Amodio, D. M., & Frith, C. D. (2006). Meeting of minds: The medial frontal cortex and social cognition. Nature Reviews Neuroscience, 7(4), 268–277.

Anzellotti, S., Caramazza, A., & Saxe, R. (2017). Multivariate pattern dependence. PLOS Computational Biology, 13(11), e1005799.

Averbeck, B., Latham, P., & Pouget, A. (2006). Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7, 358–366.

Bar, M. (2003). A cortical mechanism for triggering top-down facilitation in visual object recognition. Journal of Cognitive Neuroscience, 15(4), 600-609.

Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., Hämäläinen, M. S., Marinkovic, K., Schacter, D. L., Rosen, B. R., & Halgren, E. (2006). Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences, 103(2), 449-454.

Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20, 1–68.

Brooks, J. A., Chikazoe, J., Sadato, N., & Freeman, J. B. (2019). The neural representation of facial emotion categories reflects conceptual structure. Proceedings of the National Academy of Sciences.

Bruce, V., & Young, A. W. (1986). A theoretical perspective for understanding face recognition. British Journal of Psychology, 77, 305–327.

Calder, A. J., Young, A. W., Perrett, D. I., Etcoff, N. L., & Rowland, D. (1996). Categorical perception of morphed facial expressions. Visual Cognition, 3, 81–118.

Chaimow, D., Yacoub, E., Ugurbil, K., & Shmuel, A. (2011). Modeling and analysis of mechanisms underlying fMRI-based decoding of information conveyed in cortical columns. NeuroImage, 56(2), 627–642.

Chavez, R. S., & Wagner, D. D. (2020). The neural representation of self is recapitulated in the brains of friends: A round-robin fMRI study. Journal of Personality and Social Psychology, 118(3), 407-416.

Cheong, J. H., Jolly, E., Sul, S., & Chang, L. J. (2017). Computational models in social neuroscience. In Computational Models of Brain and Behavior, Moustafa, A. (Ed.), Wiley-Blackwell.

Cikara, M., Van Bavel, J. J., Ingbretsen, Z., & Lau, T. (2017). Decoding "us" and "them": Neural representations of generalized group concepts. Journal of Experimental Psychology: General, 146, 621-631.

Cloutier, J., Kelley, W. M., & Heatherton, T. F. (2011). The influence of perceptual and knowledge-based familiarity on the neural substrates of face perception. Social Neuroscience, 6(1), 63–75.

Contreras, J. M., Banaji, M. R., & Mitchell, J. P. (2013). Multivoxel patterns in fusiform face area differentiate faces by sex and race. PLoS ONE, 8(7), e69684.

Davis, T., LaRocque, K. F., Mumford, J. A., Norman, K. A., Wagner, A. D., & Poldrack, R. A. (2014). What do differences between multi-voxel and univariate analysis mean? How subject-, voxel-, and trial-level variance impact fMRI analysis. NeuroImage, 97, 271-283.

Deen, B., & Saxe, R. (2019). Parts-based representations of perceived face movements in the superior temporal sulcus. Human Brain Mapping.

De Gelder, B., Teunisse, J.-P., & Benson, P. J. (1997). Categorical perception of facial expressions: Categories and their internal structure. Cognition and Emotion, 11, 1–23.

Dobs, K., Isik, L., Pantazis, D., & Kanwisher, N. (2019). How face perception unfolds over time. Nature Communications, 10(1), 1258.

Dosenbach, N. U., Fair, D. A., Miezin, F. M., Cohen, A. L., Wenger, K. K., Dosenbach, R. A., Fox, M. D., Snyder, A. Z., Vincent, J. L., Raichle, M. E., Schlaggar, B. L., & Petersen, S. E. (2007). Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences, 104, 11073–11078.

Ekman, P. (1993). Facial expression of emotion. American Psychologist, 48, 384–392.

Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17, 124–129.

Ekman, P., & Friesen, W. V. (1976). Pictures of Facial Affect. Palo Alto, CA: Consulting Psychologists Press.

Ekman, P., Sorenson, E. R., & Friesen, W. V. (1969). Pan-cultural elements in facial displays of emotions. Science, 164, 86–88.

Etcoff, N. L., & Magee, J. J. (1992). Categorical perception of facial expressions. Cognition, 44, 227–240.

Fiske, S. T., Cuddy, A. J., & Glick, P. (2007). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83.

Fiske, S. T., Cuddy, A. J., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878–902.

Freeman, J. B. (2018). Doing psychological science by hand. Current Directions in Psychological Science, 27(5), 315–323.

Freeman, J. B., & Ambady, N. (2010). MouseTracker: Software for studying real-time mental processing using a computer mouse-tracking method. Behavior Research Methods, 42(1), 226–241.

Freeman, J. B., & Ambady, N. (2011). A dynamic interactive theory of person construal. Psychological Review, 118, 247–279.

Freeman, J. B., & Johnson, K. L. (2016). More than meets the eye: Split-second social perception. Trends in Cognitive Sciences, 20, 362–374.

Golby, A. J., Gabrieli, J. D. E., Chiao, J. Y., & Eberhardt, J. L. (2001). Differential responses in the fusiform region to same-race and other-race faces. Nature Neuroscience, 4, 845–850.

Gonzalez, B., & Chang, L. J. (2019, July 26). Computational models of mentalizing. https://doi.org/10.31234/osf.io/4tyd9

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. Cambridge: MIT Press.

Grootswagers, T., Cichy, R. M., & Carlson, T. A. (2018). Finding decodable information that can be read out in behaviour. NeuroImage, 179, 252–262.

Hackel, L. M., & Amodio, D. M. (2018). Computational neuroscience approaches to social cognition. Current Opinion in Psychology, 24, 92–97.

Harry, B., Williams, M. A., Davis, C., & Kim, J. (2013). Emotional expressions evoke a differential response in the fusiform face area. Frontiers in Human Neuroscience, 7, 692.

Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95, 245–258.

Haxby, J. V., Connolly, A. C., & Guntupalli, J. S. (2014). Decoding neural representational spaces using multivariate pattern analysis. Annual Review of Neuroscience, 37, 435–456.

Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539), 2425–2430.

Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233.

Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biological Psychiatry, 51, 59–67.

Haynes, J.-D. (2015). A primer on pattern-based approaches to fMRI: principles, pitfalls, and perspectives. Neuron, 87, 257–270.

Hugenberg, K., Young, S. G., Bernstein, M. J., & Sacco, D. F. (2010). The categorization-individuation model: an integrative account of the other-race recognition deficit. Psychological Review, 117(4), 1168–1187.

Józwik, K. M., Kriegeskorte, N., Storrs, K. R., & Mur, M. (2017). Deep convolutional neural networks outperform feature-based but not categorical models in explaining object similarity judgements. Frontiers in Psychology, 8, 1726.

Khaligh-Razavi, S.-M., & Kriegeskorte, N. (2014). Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Computational Biology, 10, e1003915.

Kriegeskorte, N., & Douglas, P. K. (2019). Interpreting encoding and decoding models. Current Opinion in Neurobiology, 55, 167–179.

Kriegeskorte, N., & Golan, T. (2019). Neural network models and deep learning. Current Biology, 29(7), R231–R236.

Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational similarity analysis – connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 4.

Lewis-Peacock, J. A., & Norman, K. A. (2014). Multi-voxel pattern analysis of fMRI data. In M. S. Gazzaniga & G. R. Mangun (Eds.), The Cognitive Neurosciences (5th ed.). MIT Press.

Liang, Y., Liu, B., Xu, J., Zhang, G., Li, X., Wang, P., & Wang, B. (2017). Decoding facial expressions based on face-selective and motion-sensitive areas. Human Brain Mapping, 38, 3113–3125.

Liang, Y., Liu, B., Li, X., & Wang, P. (2018). Multivariate pattern classification of facial expressions based on large-scale functional connectivity. Frontiers in Human Neuroscience, 12, 94.

Lieberman, M. D., Hariri, A., Jarcho, J. M., Eisenberger, N. I., & Bookheimer, S. Y. (2005). An fMRI investigation of race-related amygdala activity in African-American and Caucasian-American individuals. Nature Neuroscience, 8, 720–722.

Lindquist, K. A., Wager, T. D., Kober, H., Bliss-Moreau, E., & Barrett, L. F. (2012). The brain basis of emotion: A meta-analytic review. Behavioral and Brain Sciences, 35, 121–143.

Locke, V., Macrae, C. N., & Eaton, J. L. (2005). Is person categorization modulated by exemplar typicality? Social Cognition, 23, 417–428.

Logothetis, N. K. (2008). What we can do and what we cannot do with fMRI. Nature, 453, 869–878.

Meissner, C. A., & Brigham, J. C. (2001). Thirty years of investigating the own-race bias in memory for faces: a meta-analytic review. Psychology, Public Policy, and Law, 7, 3–35.

Mitchell, J. P. (2008). Contributions of functional neuroimaging to the study of social cognition. Current Directions in Psychological Science, 17(2), 142–146.

Oosterhof, N. N., & Todorov, A. (2008). The functional basis of face evaluation. Proceedings of the National Academy of Sciences, 105, 11087–11092.

Parkinson, C., Kleinbaum, A. M., & Wheatley, T. (2017). Spontaneous neural encoding of social network position. Nature Human Behaviour, 1, 72.

Petro, L. S., Smith, F. W., Schyns, P. G., & Muckli, L. (2013). Decoding face categories in diagnostic subregions of primary visual cortex. European Journal of Neuroscience, 37, 1130–1139.

Popal, H., Wang, Y., & Olson, I. R. (2019). A guide to representational similarity analysis for social neuroscience. Social Cognitive and Affective Neuroscience, 14(11), 1243–1253.

Popov, V., Ostarek, M., & Tenison, C. (2018). Practices and pitfalls in inferring neural representations. NeuroImage, 174, 340–351.

Ritchie, J. B., Kaplan, D. M., & Klein, C. (2019). Decoding the brain: neural representation and the limits of multivariate pattern analysis in cognitive neuroscience. The British Journal for the Philosophy of Science, 70, 581–607.

Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161.

Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological Review, 110, 145–172.

Said, C. P., Moore, C. D., Engell, A. D., Todorov, A., & Haxby, J. V. (2010). Distributed representations of dynamic facial expressions in the superior temporal sulcus. Journal of Vision, 10, 11.

Saxe, R., & Kanwisher, N. (2003). People thinking about thinking people: The role of the temporo-parietal junction in “theory of mind”. NeuroImage, 19(4), 1835–1842.

Srinivasan, R., Golomb, J. D., & Martinez, A. M. (2016). A neural basis of facial action recognition in humans. Journal of Neuroscience, 36(16), 4434–4442.

Stolier, R. M., & Freeman, J. B. (2016). Neural pattern similarity reveals the inherent intersection of social categories. Nature Neuroscience, 19, 795–797.

Stolier, R. M., & Freeman, J. B. (2017). A neural mechanism of social categorization. Journal of Neuroscience, 37(23), 5711–5721.

Storrs, K., Mehrer, J., Walter, A., & Kriegeskorte, N. (2017). Category-specialised neural networks best explain representations in category-selective visual areas. Perception, 46, 1217–1218.

Summerfield, C., & Egner, T. (2009). Expectation (and attention) in visual cognition. Trends in Cognitive Sciences, 13(9), 403–409.

Tamir, D. I., Thornton, M. A., Contreras, J. M., & Mitchell, J. P. (2016). Neural evidence that three dimensions organize mental state representation: rationality, social impact, and valence. Proceedings of the National Academy of Sciences, 113(1), 194–199.

Thornton, M. A., & Mitchell, J. P. (2017). Consistent neural activity patterns represent personally familiar people. Journal of Cognitive Neuroscience, 29(9), 1583–1594.

Thornton, M. A., & Mitchell, J. P. (2018). Theories of person perception predict patterns of neural activity during mentalizing. Cerebral Cortex, 28(10), 3505–3520.

Thornton, M. A., Weaverdyck, M. E., & Tamir, D. I. (2019a). The brain represents people as the mental states they habitually experience. Nature Communications, 10, 2291.

Thornton, M. A., Weaverdyck, M. E., Mildner, J. N., & Tamir, D. I. (2019b). People represent their own mental states more distinctly than those of others. Nature Communications, 10, 2117.

Todd, M. T., Nystrom, L. E., & Cohen, J. D. (2013). Confounds in multivariate pattern analysis: Theory and rule representation case study. NeuroImage, 77, 157–165.

Verosky, S. C., Todorov, A., & Turk-Browne, N. B. (2013). Representations of individuals in ventral temporal cortex defined by faces and biographies. Neuropsychologia, 51(11), 2100–2108.

Visconti di Oleggio Castello, M., Halchenko, Y. O., Guntupalli, J. S., Gors, J. D., & Gobbini, M. I. (2017). The neural representation of personally familiar and unfamiliar faces in the distributed system for face perception. Scientific Reports, 7(1), 12237.

Wagner, D. D., Chavez, R. S., & Broom, T. W. (2019). Decoding the neural representation of self and person knowledge with multivariate pattern analysis and data-driven approaches. Wiley Interdisciplinary Reviews: Cognitive Science, 10(1), e1482.

Wegrzyn, M., et al. (2015a). Investigating the brain basis of facial expression perception using multi-voxel pattern analysis. Cortex, 69, 131–140.

Wegrzyn, M., Bruckhaus, I., & Kissler, J. (2015b). Categorical perception of fear and anger expressions in whole, masked and composite faces. PLoS ONE, 10(8), e0134790. doi:10.1371/journal.pone.0134790

Figure Legends

Figure 1. MVPA shows that stereotypes and emotion concepts shape representations of other people’s faces in the FG. Stolier and Freeman (2016) used multiple regression RSA on fMRI data from a task in which subjects viewed faces, showing that stereotypes partially structure how faces’ social categories are represented in regions important for face perception such as the FG. a) An example pair of corresponding dissimilarity matrices (DMs), depicting the corresponding representational structures of social categories in both stereotypes and subjective face perception. b) Results from a whole-brain searchlight analysis that performed multiple regression RSA at each searchlight sphere, measuring the correspondence between the subjective perceptual and neural DMs while controlling for three models of visual similarity. This analysis revealed that the right fusiform gyrus (rFG) and orbitofrontal cortex (OFC) represent social categories in a manner consistent with the influence of stereotypes on the processing of faces’ social categories. c) Similar results from Brooks and colleagues (2019), who reported an fMRI study in which subjects passively viewed faces varying in emotion expression. The researchers also measured subjects’ conceptual similarity between the emotion categories Anger, Disgust, Fear, Happiness, Sadness, and Surprise (corresponding to the facial expressions shown in the scanner). The correspondence between this idiosyncratic conceptual DM and the brain’s representational structure (neural DM) was measured using multiple regression RSA in a whole-brain searchlight analysis, also controlling for three visual similarity models. This analysis revealed that the rFG represents facial emotion categories in a manner consistent with the influence of a perceiver’s conceptual knowledge on the processing of facial emotion. Figure adapted from Stolier and Freeman (2016) and Brooks and colleagues (2019). (An illustrative sketch of the multiple regression RSA step is provided after these legends.)

Figure 2. MVPA sheds new light on how “social brain network” regions represent other people. The “social brain network”, including regions such as the medial prefrontal cortex (MPFC), precuneus/posterior cingulate cortex (PCC), anterior temporal lobe (ATL), and temporoparietal junction (TPJ), has long been known to be involved in social cognition and person perception. MVPA and RSA have been important tools in recent progress toward understanding how this network of regions represents and computes social information. In a study by Thornton and colleagues (2019a), RSA was used to show that these regions represent individual identities in a manner consistent with the sum of a person’s mental state representations. Other studies have found that person and identity representations in these regions are structured by high-level social cognitive factors such as social network characteristics (Parkinson et al., 2017). In separate lines of work, MVPA and RSA have also proven helpful in disentangling the computational roles of these regions, e.g., suggesting that the MPFC is involved in representing information about individual people, while medial parietal regions such as the precuneus/PCC are more involved in representing information about the social context (Thornton & Mitchell, 2017). Figure adapted from Thornton and colleagues (2019a).
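As a supplement to the Figure 1 legend, the following minimal sketch illustrates the core computation behind multiple regression RSA under simplified, illustrative assumptions: the lower triangles of a neural DM (e.g., from one searchlight sphere) and of several model DMs (a conceptual or stereotype DM plus visual-control DMs) are rank-transformed and entered into an ordinary least-squares regression, so that the coefficient for the conceptual DM indexes its unique correspondence with the neural representational structure. The function names, the random example data, and the specific choices (rank transformation, OLS) are assumptions made for illustration, not the exact pipelines used in the studies reviewed here.

import numpy as np

def lower_triangle(dm):
    """Vectorize the lower triangle (excluding the diagonal) of a square DM."""
    i, j = np.tril_indices(dm.shape[0], k=-1)
    return dm[i, j]

def rank_z(x):
    """Rank-transform and z-score a vector of dissimilarities."""
    ranks = np.argsort(np.argsort(x)).astype(float)
    return (ranks - ranks.mean()) / ranks.std()

def regression_rsa(neural_dm, model_dms):
    """Regress a neural DM onto several model DMs.

    Returns one beta per model DM, reflecting each model's unique
    correspondence with the neural representational structure while
    controlling for the other models (a minimal stand-in for the
    multiple regression RSA described in the figure legend).
    """
    y = rank_z(lower_triangle(neural_dm))
    X = np.column_stack([rank_z(lower_triangle(dm)) for dm in model_dms])
    X = np.column_stack([np.ones(len(y)), X])      # add intercept column
    betas, *_ = np.linalg.lstsq(X, y, rcond=None)
    return betas[1:]                               # drop the intercept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_conditions = 6                               # e.g., six emotion categories

    def random_dm(n):
        """Make a symmetric, zero-diagonal DM from random data (for illustration only)."""
        d = rng.random((n, n))
        d = (d + d.T) / 2
        np.fill_diagonal(d, 0)
        return d

    neural = random_dm(n_conditions)               # hypothetical searchlight DM
    conceptual = random_dm(n_conditions)           # hypothetical conceptual DM
    visual_controls = [random_dm(n_conditions) for _ in range(3)]

    betas = regression_rsa(neural, [conceptual] + visual_controls)
    print("conceptual beta:", betas[0])
    print("visual-control betas:", betas[1:])

In practice, the neural DM would be computed from condition-wise activation patterns within each searchlight sphere (e.g., as correlation distances), the regression would be repeated at every sphere, and the resulting coefficient maps would be tested across subjects.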

