
Emotion Perception in Habitual Players of Action Video Games
Swann Pichon, Benoit Bediou, Lia Antico, Rachael Jack, Oliver Garrod, Chris Sims, C. Shawn Green, Philippe Schyns, and Daphne Bavelier
Online First Publication, July 6, 2020. http://dx.doi.org/10.1037/emo0000740

CITATION: Pichon, S., Bediou, B., Antico, L., Jack, R., Garrod, O., Sims, C., Green, C. S., Schyns, P., & Bavelier, D. (2020, July 6). Emotion Perception in Habitual Players of Action Video Games. Emotion. Advance online publication. http://dx.doi.org/10.1037/emo0000740

Emotion
© 2020 American Psychological Association. 2020, Vol. 2, No. 999, 000. ISSN: 1528-3542. http://dx.doi.org/10.1037/emo0000740

Emotion Perception in Habitual Players of Action Video Games

Swann Pichon, Benoit Bediou, and Lia Antico (University of Geneva, Campus Biotech)
Rachael Jack and Oliver Garrod (Glasgow University)
Chris Sims (Rensselaer Polytechnic Institute)
C. Shawn Green (University of Wisconsin–Madison)
Philippe Schyns (Glasgow University)
Daphne Bavelier (University of Geneva, Campus Biotech)

Action video game players (AVGPs) display superior performance in various aspects of cognition, especially in perception and top-down attention. The existing literature has examined these performance advantages almost exclusively with stimuli and tasks devoid of any emotional content. Thus, whether the superior performance documented in the cognitive domain extends to the emotional domain remains unknown. We present 2 cross-sectional studies contrasting AVGPs and nonvideo game players (NVGPs) in their ability to perceive facial emotions. Under an enhanced perception account, AVGPs should outperform NVGPs when processing facial emotion. Yet, alternative accounts exist. For instance, under some social accounts, exposure to action video games, which often contain violence, may lower sensitivity for empathy-related expressions such as happiness, sadness, and pain, while increasing sensitivity to aggression signals. Finally, under the view that AVGPs excel at learning new tasks (in contrast to the view that they are immediately better at all new tasks), the use of stimuli that participants are already experts at predicts little to no group differences. Study 1 uses drift-diffusion modeling and establishes that AVGPs are comparable to NVGPs in every decision-making stage mediating the discrimination of facial emotions, despite showing group differences in aggressive behavior. Study 2 uses the reverse inference technique to assess the mental representation of facial emotion expressions, and again documents no group differences. These results indicate that the perceptual benefits associated with action video game play do not extend to overlearned stimuli such as facial emotions, and rather point to equivalent facial emotion perception skills in AVGPs and NVGPs.

Keywords: action video game, emotion perception, drift diffusion modeling, reverse inference

Supplemental materials: http://dx.doi.org/10.1037/emo0000740.supp

Editor's Note. Piercarlo Valdesolo served as the action editor for this article.—PRP

This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

Swann Pichon, Faculty of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Campus Biotech; Benoit Bediou, Faculty of Psychology and Educational Sciences, University of Geneva, Campus Biotech; Lia Antico, Faculty of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Campus Biotech; Rachael Jack and Oliver Garrod, School of Psychology and Institute of Neuroscience and Psychology, Glasgow University; Chris Sims, Department of Cognitive Science, Rensselaer Polytechnic Institute; C. Shawn Green, Department of Psychology, University of Wisconsin–Madison; Philippe Schyns, School of Psychology and Institute of Neuroscience and Psychology, Glasgow University; Daphne Bavelier, Faculty of Psychology and Educational Sciences, University of Geneva, Campus Biotech.

This work was funded by the Swiss National Science Foundation (Ambizione SNF Grant N° PZ00P1_148035 to Swann Pichon, SNF Grant N° 100014_172574 to Daphne Bavelier). The authors would like to thank Jean Decety for sharing his stimuli database. We thank Nuhamin Gebrewold Petros (SNF Grant N° 100014_140676) for the help provided during recruitment. We also thank the National Centre of Competence in Research for Affective Sciences, financed by the Swiss National Science Foundation (SNF Grant N° 51NF40_104897), which supported the initial stages of the project.

Swann Pichon and Benoit Bediou share co-first/equal authorship.

Daphne Bavelier declares she is a member of the scientific advisory board of Akili Interactive, Boston, and acts as a scientific advisor to RedBull, Los Angeles. There is no other conflict of interest to declare.

Correspondence concerning this article should be addressed to Swann Pichon, Faculty of Psychology and Educational Sciences, University of Geneva, Campus Biotech, H8-3, Chemin des Mines 9, 1202 Geneva, Switzerland. E-mail: [email protected]

PICHON ET AL.

Over the past 15 years, a growing body of work within the domain of cognitive psychology has examined the effects of playing video games on cognitive function. One subtype of video game, termed action video games (AVG), has been a primary focus in the field. Recent meta-analytic work indicates that habitual action video game players outperform nonaction video game players by approximately half of a standard deviation across a wide variety of cognitive skills (Bediou et al., 2018). Consistent with broader theories in cognitive psychology, action games have been distinguished from other game types based upon the load that they place on core cognitive processes (Cardoso-Leite, Joessel, & Bavelier, 2020). This differs from other areas of psychology where games are selected for research based upon differences in content (e.g., violent content, prosocial content, educational content, etc.). Key characteristics of interest to cognitive psychologists that are inherent in action games include the need to track multiple targets simultaneously (cognitive load), to select targets from within cluttered displays (attentional load), to identify targets based upon fine-grained features (perceptual load), and to perform these computations in the service of producing effective decisions and actions (planning/motor load). The most common game types that fit these characteristics are first- and third-person shooter video games, which together comprise the majority of games that would be labeled as action video games.

Playing action video games has been shown to be both associated with (i.e., in cross-sectional research designs), and to cause (i.e., in experimental research designs), a range of enhancements in perceptual, top-down attentional, and spatial cognitive skills. Critically, these enhancements are not seen with other types of video games, such as life simulation games or puzzle games, for example. The perceptual benefits associated with AVG play range from enhanced contrast sensitivity (Li, Polat, Makous, & Bavelier, 2009), to higher spatial resolution of vision (Green & Bavelier, 2006a, 2007; Latham, Patston, & Tippett, 2014; Schubert et al., 2015), to reduced lateral and backward masking (Li, Polat, Scalzo, & Bavelier, 2010; Pohl et al., 2014). Action video game play has also been associated with superior performance in top-down attention tasks (Feng & Spence, 2018; Green & Bavelier, 2003; but see Roque & Boot, 2018), from demanding visual searches (Chisholm, Hickey, Theeuwes, & Kingstone, 2010; Chisholm & Kingstone, 2012; Wu & Spence, 2013), to tasks meant to measure attentional control (Cain, Prinzmetal, Shimamura, & Landau, 2014; Dye, Green, & Bavelier, 2009; Föcker, Mortazavi, Khoe, Hillyard, & Bavelier, 2019), or tasks that measure flexible allocation of attention to objects (Green & Bavelier, 2006b). These instances of superior performance reflect a greater ability to focus on the task and important stimuli at hand while ignoring sources of noise, distractions, or interruptions. For example, using the SSVEP technique, AVGPs have been observed to better filter out distractors during perceptually demanding tasks (Krishnan, Kang, Sperling, & Srinivasan, 2013; Mishra, Zinni, Bavelier, & Hillyard, 2011). Finally, in the domain of visuospatial cognition, AVGPs have been found to outperform NVGPs on mental rotation tasks as well as short-term memory (STM) and working memory tasks using visuospatial stimuli (Blacker & Curby, 2013; Feng, Spence, & Pratt, 2007; Green & Bavelier, 2006b). This work points to action gaming being associated with more precise and flexible memory representations.

A recurrent finding in the AVGP literature is that of enhanced information processing. Dye et al. (2009) showed that AVGPs responded about 10% faster than NVGPs across a wide range of cognitive tasks. This same advantage was seen for tasks where RTs were very fast (a few hundreds of milliseconds) to those where responses were relatively slower (RTs on the order of seconds). It was further shown that these faster RTs did not entail a cost in terms of task accuracy. Later work used drift-diffusion modeling to partition the overall RTs into component processes and to examine which processes were altered in AVGPs. Here, action video game play was associated primarily with greater evidence accumulation rates. For example, a higher rate of evidence accumulation in AVGPs has been observed for motion perception and auditory discrimination (Green, Pouget, & Bavelier, 2010; but see van Ravenzwaaij, Boekel, Forstmann, Ratcliff, & Wagenmakers, 2014), for contrast sensitivity (Li et al., 2009, critical duration task), and for top-down attentional mechanisms (Belchior et al., 2013; Hubert-Wallander, Green, Sugarman, & Bavelier, 2011). Higher capacity in information processing in AVGPs is also illustrated by more precise memory representations, as exemplified by performance on tasks involving memory for motion (Pavan et al., 2019; Wilms, Petersen, & Vangkilde, 2013) and memory for color (Sungur & Boduroglu, 2012). Altogether, this work points to a rather global advantage in information processing, with AVGPs extracting information from the environment at a faster rate than NVGPs, along with showing greater precision of the encoded memory representations during cognitive tasks.

An unresolved question concerns the breadth of the changes observed in AVGPs. Thus far, behavioral differences between AVGPs and NVGPs have been almost exclusively examined in the context of well-known tasks in cognitive psychology. Yet, whether this advantage extends to the emotional domain remains largely uncharted. Chisholm and Kingstone (2015) found that the greater top-down attentional control abilities of AVGPs extend to visual search tasks using schematic facial emotional expressions as distractors. Specifically, AVGPs were better than NVGPs at ignoring abrupt onset distractors, irrespective of whether the distractor was a neutral, happy, or inverted face. This result, a replication of previous work with nonemotional stimuli (Chisholm & Kingstone, 2012), was interpreted as a sign of greater top-down attentional control (as measured by the ability to ignore distraction) in AVGPs. However, because facial emotion in this study was always task-irrelevant, these results are not necessarily informative with regard to the processing of facial emotions in AVGPs per se. Bailey and West (2013) studied changes in event-related potential responses to emotional faces following 10 hr of playing an action versus a nonaction video game. Participants performed a visual search task with schematic emotional faces. Their task was to look for a happy or an angry facial target in an array of neutral distractors. Ten hours of action video game training resulted in increased neural responses to both emotional targets (happy and angry faces) and nonemotional nontargets (neutral faces) in right frontal and occipito-parietal brain areas compared to nonaction video game training. While shorter P3 latency at posttest compared to pretest was observed in action trainees only, there was no visible impact on behavioral measures and no differences across stimulus category (i.e., emotional vs. nonemotional). Although certainly not diagnostic, these few results are in line with the hypothesis of enhanced information processing that extends from cognitive to

emotional tasks. In that respect, we may expect similar findings in the emotional domain as in the cognitive domain, namely a greater rate of evidence accumulation and higher memory precision in AVGPs when processing emotional faces.

An alternative prediction, however, comes from a line of research in social psychology, which proposes that repeated exposure to video games with violent content might be associated with individual differences, or possibly changes in social behavior, together with increased sensitivity to social cues signaling aggression and decreased sensitivity to social cues signaling distress. While the cognitive psychology literature has focused on differentiating games based upon cognitive processing demands, it is almost certainly the case that individuals who are categorized as AVGPs have had exposure to different video game content than individuals categorized as NVGPs. In particular, although violence is not a necessary characteristic of action games (i.e., there are nonviolent action video games, such as MarioKart or Splatoon), it is nonetheless the case that most action video games do contain violence. Furthermore, because NVGPs (by definition) will have less total game experience than AVGPs, they will also, necessarily, have had less total exposure to violent video game content. Thus, while the AVGP/NVGP categorization scheme utilized in cognitive psychology is not a perfect match to the violent/nonviolent categorization scheme used in social psychology, it is interesting to also consider the predictions regarding emotional processing arising from this line of research, as they contrast with the predictions arising from the cognitive domain.

For instance, the General Aggression Model (GAM) predicts that exposure to rewarded violence in video games is likely to increase gamers' aggressive affect (i.e., hostile feelings), aggressive cognition (i.e., hostile thoughts), and aggressive behaviors, at least in the short term (Anderson & Bushman, 2002; Carnagey & Anderson, 2005), while prosocial games seem to promote prosocial behaviors in the short term (Gentile, 2009; Greitemeyer, Osswald, & Brauer, 2010). In the long term, the same model predicts that repeated exposure to media violence may influence aggressive behavior by promoting aggressive beliefs and reinforcing aggressive expectations and scripts, as well as desensitizing individuals to aggression. In line with the desensitization view, some studies have reported reduced amplitudes of brain responses to pictures depicting violent content among chronic players of violent video games and in players transiently exposed to violent video games (Bailey, West, & Anderson, 2011; Bartholow, Bushman, & Sestir, 2006; Engelhardt, Bartholow, Kerr, & Bushman, 2011). Another extension of this model proposes that, when activated, these hostile representations may affect the processing of perceptual information by creating an attentional bias, which increases the processing of information signaling social threat. In that view, repeated exposure to violent video games may influence social perception in real life, as individuals may be more likely to pay attention to social cues and interpret them negatively as signaling hostile intentions. At the same time, these hostile representations are proposed to reduce individuals' sensitivity to emotional signals that have the potential to trigger empathic responding, such as sad, painful, or happy expressions.

Consistent with these ideas, Kirsh, Mounts, and Olczak (2006) found that individuals who were high in violent media consumption were faster at identifying anger and/or slower at identifying happiness compared to individuals who were low in violent media consumption. Similar results were found a few minutes after playing a violent video game for 15 min (Kirsh & Mounts, 2007). While these studies document differences in the speed of emotion processing, others have reported differences in emotion recognition accuracy as a function of violent video game experience, looking at a larger range of emotions. Diaz, Wong, Hodgins, Chiu, and Goghari (2016) found that violent video game players recognized disgusted faces less accurately, and fearful faces more accurately (and also faster), than nonvideo game players. Thus, the effects of violent video games on emotion processing are likely subtler than just assuming general blunting. Instead, the model predicts enhanced anger perception, and possibly enhanced perception of other negatively valenced emotions, at the expense of hindered happiness, pain, or sadness perception.

Finally, a third possible prediction comes from the proposal that the cognitive benefits documented in AVGPs discussed above arise through a learning mechanism that is common across a variety of tasks. In this "learning to learn" hypothesis proposed by Bavelier, Green, Pouget, and Schrater (2012), AVGPs would more readily learn the particular statistical regularities of the new tasks and stimuli they are presented with. This would then, in turn, manifest as AVGPs outperforming NVGPs, and would include AVGPs exhibiting faster responses or more precise memory representations. A distinctive feature of this learning to learn view is that group differences should be the greatest at intermediate stages of learning, when using stimuli and task configurations that are rather unfamiliar, such as random dot kinematograms or the N-back task. Because differences between AVGPs and NVGPs in this framework arise due to differences in learning rate, this theory anticipates that AVGPs are unlikely to outperform NVGPs for tasks or stimuli that are highly overlearned in adults, such as categorizing facial emotions.

Here we sought to adjudicate between the three overarching hypotheses discussed above: enhanced information processing, GAM, and learning to learn. In a first study (n = 97), we contrasted AVGPs and NVGPs as they discriminated static facial expressions with different intensities of anger, happiness, sadness, and pain. We applied a drift-diffusion model (DDM) to characterize the different stages of decision-making in that emotional task. A key advantage of DDMs over standard methods is that they aggregate the complex pattern of RTs and accuracies into a relatively small number of parameters that together capture different stages of the decision-making process. Another advantage of DDMs is that they are more sensitive for detecting potential group-level differences than standard behavioral measures (e.g., White, Ratcliff, Vasey, & McKoon, 2010). We can thus ask whether potential differences in discrimination performance arise from the rate of evidence accumulation, or from response biases such as response caution (boundary separation), motor execution (nondecision time), or a possible bias in the starting point of the diffusion process (starting point bias). The enhanced information processing view predicts group differences in evidence accumulation for all emotions. In contrast, the GAM predicts higher evidence accumulation rates and higher starting point bias for angry expressions, and possibly lower evidence accumulation rates and lower starting point bias for happiness, sadness, and other empathy-related emotions such as pain, in AVGPs compared to NVGPs. Finally, the learning to learn hypothesis predicts no group differences in any of the aspects of decision-making, given that the

task decisions involve well-tuned templates for expressions that have been overlearned. Moreover, as small to medium cross-sectional differences in lab-based measures of aggression have been reported by meta-analyses between frequent players of violent video games and NVGPs (Hilgard, Engelhardt, & Rouder, 2017), a further distinctive feature of this study was to ask whether the same cross-sectional difference was observed between AVGPs and NVGPs. To this end, we made use of the Competitive Reaction-Time (CRT) task. This is a lab-based paradigm that has often been used to measure reactive aggression, which has been shown to correlate with certain dimensions of trait aggression (Giancola & Zeichner, 1995; but see Ferguson & Rueda, 2009).

In a second study (n = 54), we contrasted the mental representations that underlie the categorization of facial emotions in another sample of AVGPs and NVGPs using the reverse inference technique. This technique has its foundation in the well-accepted finding that facial emotional expressions are typically communicated through complex patterns of muscle movements rather than just the end-points of facial expressions, which is what is examined when using static photographs. The reverse inference technique provides a characterization of the spatial and temporal patterns of facial muscle activation, also called action units (AUs), that give rise to the recognition of dynamic facial expressions signaling happiness, surprise, fear, anger, disgust, and sadness. Previous work suggests that this technique is sufficiently sensitive to identify large-scale differences in experiences. For instance, it has been previously used to characterize cultural differences in facial emotion representations (Jack, Garrod, Yu, Caldara, & Schyns, 2012). On this task, to the extent that AVGPs benefit from enhanced information processing, we would expect group differences, especially higher fidelity of each emotion representation in AVGPs as compared to NVGPs. In contrast, the GAM again predicts a differential view, with sharper representations of anger (and possibly fear and surprise), but more blunted ones for happiness, sadness, and possibly disgust. Finally, if the noted cognitive advantage of AVGPs was due to learning to learn, the high expertise of humans in facial emotion recognition predicts equivalent representations across groups.

Study 1

Method and Materials

In Study 1, participants completed 2 sessions separated by several weeks. Sessions were composed of tasks assessing emotion perception skills. Session 1 (N = 97) included two emotion perception tasks, using sad-to-angry and pain-to-happy morph continua. Participants discriminated stimuli in a two-alternative forced-choice task, as in Qiao-Tasserit et al. (2017). Stimuli were presented in blocks of 240 faces per continuum (16 identities per continuum, 15 levels per morph continuum, time of presentation 500 ms, without a response time limit; see Figure 1). Participants also completed the Competitive Reaction Time task, which evaluates how participants react to unjustified aggression (Whitaker & Bushman, 2012), as well as social tests and questionnaires assessing different aspects of personality (e.g., aggressive personality, competitiveness, and empathy), which will be reported elsewhere (Pichon, Antico, Chanal, Singer, & Bavelier, 2020). At Session 2, 84 of the original 97 participants returned. Session 2 included four emotion perception tasks using the same four emotions as in Session 1, but with the discrimination being performed against neutral expressions.

Figure 1. Stimuli and protocol used in Study 1. (A) Example of pain-happy and sad-angry continua used in Session 1 (15 levels of morph, 16 identities; endpoints 100% pain, 100% happy, 100% sad, and 100% angry). (B) Time course of a trial for the emotion perception tasks of Sessions 1 and 2 (fixation, 500-ms face presentation, then a "Sadness or Anger?" response screen with no time limit; face classification task, 240 trials per continuum). Stimuli were taken from a database created and maintained by J. Decety and colleagues (Lamm, Batson, & Decety, 2007); all actors within the database consented to the use of their image for research purposes and in publications.

Sample size. A recent meta-analysis in AVG research has highlighted superior performance (d = 0.55) in cross-sectional studies comparing AVGPs and NVGPs in various cognitive and perceptual tasks (Bediou et al., 2018). Estimation of sample size with G*Power 3 (Faul, Erdfelder, Lang, & Buchner, 2007) comparing two group means showed that Ns of 37 and 28 participants per group were needed for the designs of Study 1 Session 1 and Session 2, respectively, to achieve 80% power at a 5% significance level (repeated-measures ANOVA with a 2-level group factor and a 2-level (Session 1) or 4-level (Session 2) repeated-measures emotion factor, d = 0.55, correlation r = .35 among repeated measures based on an independent dataset (n = 160) with the same stimuli).

Participants. The final sample consisted of 47 AVGPs (mean age ± SD: 22.3 ± 4.3 years old) and 50 NVGPs (24.2 ± 5.1 y.o.) at Session 1. At Session 2, 38 AVGPs (22.3 ± 4.3 y.o.) and 46 NVGPs (24.1 ± 5.3 y.o.) returned. All participants were male and of Caucasian descent. The two groups were roughly matched in age (at Session 1, t(95) = -2, p = .044; and at Session 2, t(82) = -1.6, p = .1). This study was approved by the ethics committee of the University of Geneva, which abides by the Declaration of Helsinki. Subjects received compensation for their participation. We recruited a total of 104 male subjects. Seven subjects were excluded for the following reasons: 3 participants were erroneously classified as AVGPs or NVGPs at the recruitment stage (see criteria below), 1 participant was an outlier on age (45 y.o., while the age range of other participants was 18-35 y.o.), and 3 participants did not comply with instructions, responding randomly.
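The power figures reported under Sample size can be checked with a noncentral-F calculation. The sketch below assumes the formula G*Power uses for the between-groups effect in a repeated-measures ANOVA (Cohen's f = d/2 for two groups, noncentrality scaled by m/(1 + (m - 1)r) for m repeated measures correlated at r); the function name is ours, and this is an illustration rather than the authors' exact G*Power session.

```python
from scipy import stats

def rm_between_power(n_per_group, d=0.55, r=0.35, m=2, alpha=0.05, k=2):
    """Approximate power for the between-groups effect in a repeated-measures
    ANOVA with k groups, m repeated measures, and correlation r among them,
    following the G*Power 'repeated measures, between factors' convention."""
    f = d / 2.0                                   # Cohen's f for two groups
    n_total = n_per_group * k
    lam = f**2 * n_total * m / (1 + (m - 1) * r)  # noncentrality parameter
    df1, df2 = k - 1, n_total - k
    f_crit = stats.f.ppf(1 - alpha, df1, df2)     # critical F at alpha
    return 1 - stats.ncf.cdf(f_crit, df1, df2, lam)

# Session 1 (2 emotion levels, 37 per group) and Session 2 (4 levels, 28
# per group) should both land close to the targeted 80% power.
print(rm_between_power(37, m=2))
print(rm_between_power(28, m=4))
```

With these inputs both calls return values near 0.8, consistent with the Ns of 37 and 28 reported above.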

Our laboratory maintains a database of participants populated each of the four emotions used at Session 1. Each continuum was via responses from local ads. Some of these ads specifically target composed of 15 levels of morphs. experienced and less experienced video-game players, as numbers The Competitive Reaction Time task (CRT). The CRT task in these extreme categories are otherwise too low to carry out a is one of the most commonly used measures of laboratory aggres- well-powered study. Participants are recruited year-round in that sion (McCarthy & Elson, 2018). It is a 25-trial competitive game database by a lab assistant. As part of this process, participants fill that requires participants to respond to a visual cue faster than their out a battery of questionnaires, including questionnaires on leisure partner (Whitaker & Bushman, 2012). We used the computerized activities (sports practice, , media habits and video-game version of the CRT task (Version 3.4.2). Given that our goal was play) and cultural background. A list of participants that matched to assess whether AVGPs/NVGPs (as defined in the cognitive the inclusion/exclusion criteria of the present study was generated psychology literature) differed from one another in the same ways from that database. To avoid introducing demand characteristics, as has been previously reported for violent/nonviolent game play- these participants were invited in the study by an experimenter ers (as defined in the social psychology literature), the CRT task unrelated to the lab assistant responsible for the database. Further- was thus well-suited for these needs. We used the computerized more, participants were told that the study aimed to investigate version of the CRT task developed by Brad Bushman and Scott cross-cultural differences in interpersonal behaviors. The experi- Saults (Version 3.4.2, https://uk.groups.yahoo.com/neo/groups/ menters were kept blind to participants’ video-game play status CRTRP/info). 
during the study. Participants were told they would be connected via our online platform to a same-sex opponent from another research institute in Europe (actually a computer confederate). Before playing each trial, the player determines the sound intensity and duration of the noise blast that the opponent would receive in case he loses the trial. Noise blasts range between 60 dB and 105 dB (in 5 dB increments; a 0 dB nonaggressive level was also available) and could last between 0 and 5 s. Unknown to the participant, the sequence of wins and losses is predetermined. To provoke aggression from the participant, the first trial ends in a loss for the participant, who receives a punishing noise blast with intensity and duration set at maximum. The preprogrammed sequence of won or lost trials (W/L) was the same for all subjects. We used the following sequence, with the corresponding levels of punishment intensities and durations set by the opponent: [L-10–10, L-9–10, W-8–7, W-7–8, L-6–6, W-5–4, L-4–2, L-3–3, W-2–5, W-2–5, W-4–3, L-3–2, W-5–4, L-6–6, L-8–8, W-9–9, L-7–7, W-7–9, L-9–8, W-8–6, W-6–7, L-5–5, L-3–4, W-4–3, L-2–2]. Noise blasts were calibrated by measuring the volume of the headphone system with a sound level meter (NTI-XL2, with an impulse measure LAImax and a temporal integration window of 35 ms).

Different strategies have been used to quantify aggressive behavior from the CRT task (Elson, Mohseni, Breuer, Scharkow, & Quandt, 2014; Hyatt, Chester, Zeichner, & Miller, 2019). Here we report the mean volume intensity and mean duration averaged across all 25 trials separately, which is one of the most frequently used strategies according to Elson (2017) and has been proposed as a standardized measurement for the CRT task (Ferguson, Smith, Miller-Stratton, Fritz, & Heinrich, 2008). Among the 97 participants who completed the CRT task in Session 1, 13 participants (4 AVGPs, 9 NVGPs) expressed suspicions and were excluded from its analysis (N = 84 total, 43 AVGPs and 41 NVGPs).

The inclusion/exclusion criteria to be considered an AVGP or an NVGP were the same as those used in the Bavelier lab in the past. In order to be classified as an AVGP, an individual would need to have played at least 5 hr per week of first- or third-person shooter video games in the past year and to have reported at most 1–3 hr per week of play in each of these other game genres: turn-based strategy games, action-sports games, real-time strategy games, fantasy/role-playing games, and music games (i.e., they needed to be what we refer to as "genre pure" players, only playing action video games and no other types of games; see Dale & Shawn Green, 2017). Participants who played 3–5 hr of AVG a week in the past year were also classified as AVGPs if they played first- or third-person shooters more than 5 hr a week before the past year. The criterion to be considered a NVGP was to play at most 1 hr per week for each game genre listed above and no more than 5 hr total per week across all game genres in the past year as well as the year before past. Note that only males were tested because of the relative scarcity of female AVGPs.

Face stimuli. We used black and white pictures extracted from a dataset of video clips containing actors expressing pain, sad, happy, and angry facial expressions. This dataset has been validated for recognition and intensity in previous behavioral and functional studies (Decety, Echols, & Correll, 2010; Gleichgerrcht & Decety, 2014; Lamm, Batson, & Decety, 2007). We extracted one frame at the apex of each expression and selected expressions recognized above 70% accuracy (chance level 20%) in a 5-AFC discrimination pilot study. For Session 1, we generated sad and angry composite images from 16 identities (8 female actors), showing either a sad or an angry expression, using the Fantamorph software (Abrosoft Co.). We also generated pain and happy composite images from the same identities expressing either a pain or a happy expression. These prototypical images were used as endpoints to generate, for each identity, morph sequences with 15 steps, with intermediate images changing incrementally from unambiguously sad (respectively pain) to unambiguously angry (respectively happy), with emotionally ambiguous images in the middle. A gray mask surrounded each face. Luminance and contrast were equated for all faces. We used a similar approach to generate morphed stimuli at Session 2, using the same identities to generate four continua composed of linear morph sequences between a set of neutral expressions and each of the four emotional expressions.
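As an illustration, the 15-step linear morphing described above can be sketched in Python. This toy blend operates on raw pixel arrays, whereas the actual Fantamorph stimuli also involve geometric warping of facial features:

```python
import numpy as np

def morph_continuum(face_a, face_b, n_steps=15):
    """Linear 15-step morph between two expression endpoints.
    A pixel-space blend standing in for the Fantamorph morphs,
    which additionally warp facial geometry."""
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1 - w) * face_a + w * face_b for w in weights]

# Toy endpoints: tiny grayscale arrays (the real endpoints are photographs)
sad = np.zeros((2, 2))
angry = np.ones((2, 2))
steps = morph_continuum(sad, angry)  # steps[0] is "sad", steps[-1] is "angry"
```

The midpoint of the sequence (step 8 of 15) is the maximally ambiguous 50/50 blend, matching the description of emotionally ambiguous images in the middle of each continuum.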

Drift-diffusion modeling of facial emotion discrimination. For each emotional continuum, the distributions of classification responses and reaction times (RTs) were jointly modeled using a DDM (Ratcliff & McKoon, 2008; Ratcliff & Rouder, 1998). The DDM assumes that evidence is integrated until a decision boundary is reached, at which point a response is generated. Because of noise in the drift process, stochasticity in RT and in accuracy emerges even for fixed stimuli. The important parameters of the DDM are the mean evidence accumulation rate (the rate at which evidence accumulates toward the decision boundary); the decision boundary separation (the amount of evidence required to commit to one response, which reflects response caution); an additive component to response time called "nondecision time," which reflects all the combined effects on RT that are not due to evidence accumulation, such as motor planning and execution; and finally a possible initial bias (i.e., starting point) toward one of the two response alternatives. The latter parameter captures the fact that the diffusion process may start closer to one of the two decision boundaries, which results in faster responses to that alternative, as well as potentially higher error rates. We provide the details of the DDM in the online supplementary material.

Previous work (Ratcliff & Tuerlinckx, 2002) has shown that outliers (trials with unusually fast or slow response times) can bias the estimation of parameters within the DDM. Hence, we removed the slowest and fastest 2.5% of trials for each subject and continuum.

We estimated parameters of the DDM using Bayesian inference. More specifically, we utilized the diffusion model extensions developed for the JAGS statistical software package (Wabersich & Vandekerckhove, 2014). Bayesian parameter estimation allows a posterior distribution over parameter estimates to be obtained, in contrast to other approaches that yield only a point estimate. Although it is also possible to develop hierarchical models within the Bayesian framework (Wiecki, Sofer, & Frank, 2013), for this application we adopted a nonhierarchical analysis framework in which parameters were estimated independently for each participant.

Code implementing the DDM was written in the JAGS model description language (Plummer, 2003). The model specifies a prior probability distribution over each of the model parameters, as well as the likelihood of the observed data given the model and its parameterization. Markov Chain Monte Carlo (MCMC) is then used to approximate the Bayesian posterior distribution over parameter values. In our application, we assumed that all model parameters had flat (uniform) priors over a plausible range of values. Complete code implementing the model is provided. Parameters obtained from the analysis were examined at both the individual level (95% credible intervals around parameter estimates) and the group level. For group-level analyses, we computed the posterior mean parameter estimates for each subject and then performed standard statistical analyses (ANOVAs) on the estimated subject-level parameters.

In our implementation of the DDM, we assumed that the evidence accumulation rate depended on the specific morph value of the visual stimulus. As an intuitive example, some images of faces are clearly angry or sad, while others are more ambiguous. This difference between stimuli is modeled as a difference in how quickly evidence is accumulated in the diffusion process. In order to minimize the number of parameters in the model, we further assumed that the relationship between the stimulus morph value and the evidence accumulation rate followed a power-law relationship:

δ(j) = sign(j − μ) · κ · |j − μ|^γ

In the above, δ(j) indicates the evidence accumulation rate (also called the drift rate) for stimulus morph value j, μ indicates the morph value of subjective equality along the continuum (the morph value which is perceived as intermediate between the two alternatives), κ controls the scaling of the power function, and γ is a scaling exponent which determines the curvature of the power-law relationship. When γ equals 1, the model reduces to a linear relationship. A power-law relationship was previously shown to provide an excellent fit to perceptual decision making in other domains (Palmer, Huk, & Shadlen, 2005). In summary, this equation parameterizes the evidence accumulation rate for the 15 morph values in terms of three underlying parameters. This reduces the number of model parameters and avoids potential overfitting. We further increased the statistical power of the model by enforcing that stimuli from all morph values constrain the estimates of the common underlying parameters, that is, response bias, boundary separation, and nondecision time. Indeed, and unlike the accumulation rate, these were held constant across all stimuli within each emotional continuum.

Data analysis. Parameters of the DDMs (evidence accumulation rates, decision boundaries, nondecision times, response bias) and the intensity and duration variables from the CRT task were analyzed with ANOVAs and t tests to test for effects of group, emotion, and morph. Frequentist analyses were complemented with Bayesian statistics to examine the strength of evidence in favor of or against our main hypotheses about the group factor (AVGP/NVGP). By convention, a Bayes factor BF10 > 3 represents substantial evidence for the alternative hypothesis, while BF10 < 0.3 represents substantial evidence for the null hypothesis. Anything between these values expresses weak or anecdotal evidence (Dienes, 2014).

Results Session 1 (N = 97)

Emotion discrimination task. Group differences in mean evidence accumulation rates obtained from the DDM were assessed through an omnibus ANOVA (n = 97) carried out with group (AVGP/NVGP), emotion continuum (pain-happy/sad-angry), and morph levels (15 levels). As expected, we observed a main effect of morph level (F(14, 1330) = 1185.4, p < .0001, Greenhouse-Geisser [GG] corrected p < .0001, ηp² = 0.84; see Figure 2, panels A-B), indicating greater evidence accumulation rates at the extremes of the morph continua. There was no effect of emotion, F(1, 95) = 2.3, p = .13. The only significant interaction that survived GG correction was that of emotion by morph levels, F(14, 1330) = 80.7, p < .0001, GG corrected p < .0001, ηp² = 0.12, signaling higher evidence accumulation rates at extreme values of morph for the pain-happy continuum as compared to the sad-angry continuum.

Importantly, video game group had no reliable effect, as indicated by the absence of a group effect, F(1, 95) = 0.5, p = .5, ηp² = 0.001, BF10 = 0.07. Interactions with group, when present in the raw analyses, did not survive the GG corrections: group-by-morph, F(14, 1330) = 1.8, p = .032, GG corrected p = .18, ηp² = 0.008; group-by-emotion-by-morph, F(14, 1330) = 2.5, p = .001, GG corrected p = .08, ηp² = 0.004. These nonsignificant group trends arise from slightly higher evidence accumulation rates at the extremes of the morph levels in AVGPs as compared to NVGPs, especially in the sad-angry continuum.

In addition to evidence accumulation rate, the DDM also provides estimates of decision boundary, nondecision time, and response bias. Separate ANOVAs for each of these dependent variables with the factors group and emotion continuum revealed a main effect of emotion in boundary separation, F(1, 95) = 10.8, p = .001, ηp² = 0.02, and in nondecision time, F(1, 95) = 31, p < .0001, ηp² = 0.06, diagnostic of more cautious responding (mean: 1.65 vs. 1.55) and slower motor execution times (0.54 vs. 0.49) in the sad-angry versus the pain-happy continuum.
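For intuition, the decision model described above can be sketched as a small forward simulation. The parameter values below are arbitrary illustrations; the study itself estimated parameters with Bayesian MCMC in JAGS rather than by simulation:

```python
import numpy as np

def drift_rate(j, mu, kappa, gamma):
    """Power-law drift: delta(j) = sign(j - mu) * kappa * |j - mu|**gamma."""
    return np.sign(j - mu) * kappa * np.abs(j - mu) ** gamma

def simulate_trial(delta, boundary, bias, ndt, rng, dt=0.001, noise=1.0):
    """Euler simulation of one diffusion trial: evidence starts at
    bias * boundary and drifts until it crosses 0 or boundary.
    Returns (choice, RT in seconds, nondecision time included)."""
    x = bias * boundary
    t = 0.0
    while 0.0 < x < boundary:
        x += delta * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t + ndt

rng = np.random.default_rng(0)
# Assumed illustrative values: morph value 13 of 15, subjective equality at 8
delta = drift_rate(13, mu=8.0, kappa=0.5, gamma=1.2)
rts = np.array([simulate_trial(delta, 1.5, 0.5, 0.45, rng)[1]
                for _ in range(200)])

# Outlier handling as in the text: drop the fastest and slowest 2.5% of trials
lo, hi = np.quantile(rts, [0.025, 0.975])
trimmed = rts[(rts >= lo) & (rts <= hi)]
```

At the point of subjective equality (j = μ) the drift is zero and choices are driven by noise alone, which is why accumulation rates are lowest for the ambiguous middle morphs and highest at the extremes.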


Figure 2. Results of Study 1: Each column plots psychometric, chronometric, and evidence accumulation rates for the 15 morph values. (A) Session 1 data and DDM fit illustrating higher evidence accumulation at the extremities of the pain-happy as compared to the sad-angry continuum. (B) Session 1 data and DDM fits illustrating the comparable emotion decision-making skills of AVGPs and NVGPs. (C) Session 2 data and DDM fits illustrating higher evidence accumulation for pain-neutral and happy-neutral continua as compared to sad-neutral and angry-neutral continua. (D) Session 2 data and DDM fits illustrating the comparable emotion decision-making skills of AVGPs and NVGPs. Model fits and empirical means are displayed with error bars/shaded areas indicating 95% confidence intervals. AVG = action video games; NVG = nonvideo game. See the online article for the color version of this figure.

A main effect of emotion for response bias, F(1, 95) = 11, p = .001, ηp² = 0.06, was also visible (sad-angry: 0.53, happy-pain: 0.51); we note this effect is difficult to interpret, as bias estimates are specific to each emotion continuum. As with the accumulation rate, these results are in line with the greater discriminability of the pain-happy as compared to the sad-angry continuum. More relevant to our aim, there was yet again no effect of group in boundary separation (p > .99, BF10 = 0.29), in nondecision time (p > .53, BF10 = 0.6), or in response bias (p = .06, BF10 = 0.57), nor any interaction between group and emotion (all ps > .14).

The Competitive Reaction Time (CRT) task. As mentioned above, only nonsuspicious subjects were analyzed (N = 84). AVGPs inflicted higher levels of aggression intensity than NVGPs (AVGPs: 5.5 ± 0.34 vs. NVGPs: 4.45 ± 0.36, t(82) = 2.14, p = .035, d = 0.47). This difference remained significant after accounting for auditory discomfort (ANCOVA: F[1, 81] = 4.1, p = .046, ηp² = 0.05), trait anxiety, F(1, 81) = 4.8, p = .03, ηp² = 0.05, state anxiety, F(1, 81) = 4.7, p = .03, ηp² = 0.05, and depression, F(1, 81) = 4.3, p = .04, ηp² = 0.05. Duration of noise blasts was numerically higher in AVGPs compared to NVGPs; however, this difference was not significant (AVGPs: 5.06 ± 0.31 vs. NVGPs: 4.26 ± 0.34, t(82) = 1.73, p = .09, d = 0.37).

Results Session 2 (N = 84)

In Session 2, participants performed four additional emotion discrimination tasks, with each continuum varying from neutral faces to our four emotions (pain, happy, fear, sad). We entered estimated evidence accumulation rates into an ANOVA (n = 84) with the factors group (AVGP, NVGP), emotion continuum (neutral-pain, neutral-happy, neutral-fear, neutral-sad), and morph levels (15 levels). As expected, there was a main effect of morph level, F(14, 1148) = 1317.7, p < .0001, GG corrected p < .0001, ηp² = 0.84. There was also a main effect of emotion, F(3, 246) = 121.3, p < .0001, GG corrected p < .0001, ηp² = 0.24, which reflected higher drift rates in the Neutral-Happy and Neutral-Pain continua compared with Neutral-Sad and Neutral-Anger, and an interaction between morph level and emotion, F(42, 3444) = 139.5, p < .0001, GG corrected p < .0001, ηp² = 0.28. Of interest to our primary goals, though, we again found no effect of group (p = .43, BF10 = 0.59), nor any interaction involving the factor group (all ps > .45), suggesting equivalent evidence accumulation about facial emotions in AVGPs and NVGPs (see Figure 2, panels C-D).

The boundary, nondecision time, and bias parameters were analyzed via separate ANOVAs with the factors group and emotion. We found a main effect of emotion in boundary separation, F(3, 246) = 29.4, p < .0001, GG corrected p < .0001, ηp² = .05, highlighting more cautious responding for the Neutral-Happy (mean: 1.51) and Neutral-Pain continuum (1.58) compared with the Neutral-Anger (1.40) and Neutral-Sad (1.42) continuum. We also found an effect of emotion in nondecision time, F(3, 246) = 54.3, p < .0001, GG corrected p < .0001, ηp² = .17, highlighting longer nondecision times in the Neutral-Sad (0.5) and Neutral-Anger (0.47) continuum compared to the Neutral-Pain (0.41) and Neutral-Happy (0.41) continuum. We found an effect of emotion in the bias parameter, F(3, 246) = 3.5, p < .01, GG corrected p < .02, ηp² = .02, highlighting slightly different bias across the four continua. Importantly, we found again no effects involving the group factor (all ps > .15, all BF10s < 0.56), nor any interaction between emotion and group (all ps > .44) in any of these parameters.

Discussion

Six different emotion discrimination tasks were used to test the predicted difference in facial emotion processing between AVGPs and NVGPs. Contrary to predictions from the "overall better perception" framework as well as from the GAM framework, we did not observe any evidence for a faster rate of information accumulation in AVGPs, nor did any other processing stage of decision-making differ. Overall, we found highly similar facial emotion perception performance between the two groups.

The higher intensity in reactive aggression found in AVGPs, though, is consistent with the higher aggression found in recent meta-analyses of cross-sectional studies contrasting this same aggression measure in frequent and infrequent players of violent video games (Hilgard et al., 2017). According to Hilgard et al. (2017), this cross-sectional effect ranges between d = 0.44 and d = 0.6, which is similar to the effect sizes we found in both of our aggression measures (d = .47 and d = .37, respectively). This cross-sectional difference is consistent with the fact that our AVGP categorization scheme at least partly overlaps with the violent video game categorization scheme.

Note that this cross-sectional difference in aggression between AVGPs and NVGPs must not be interpreted as causal evidence that frequent AVG play influences aggression. Evidence that long-term violent video game play is durably associated with increases in aggression remains highly controversial (Hilgard et al., 2017; Kepes, Bushman, & Anderson, 2017; Prescott, Sargent, & Hull, 2018; but see Ferguson, 2015; Kühn et al., 2018). For instance, a recent intervention study showed that playing a violent game (GTA-V) for more than 30 hr over 8 weeks did not cause any increase in impulsivity, in aggression, in empathy or interpersonal competencies, in mental health (anxiety and depression), nor any change in the tasks the authors used to assess executive functions, compared with an active group who played a social game (The Sims) or a passive group (Kühn et al., 2018). Cross-sectional differences in aggression between users of violent media and video games might thus reflect influences from other factors that typically co-occur with video game play. As extensively discussed by Ferguson and colleagues, family environment, mental health problems, or personality factors such as competitiveness (Ferguson, Cruz, et al., 2008; Ferguson, San Miguel, Garza, & Jerabeck, 2012; Lobel, Engels, Stone, Burk, & Granic, 2017) may influence participants' willingness to play violent games or their willingness to respond to provocation. The present study cannot speak to this issue.

Importantly for our aim, our results highlight that despite being able to detect a small-to-medium effect size in the CRT task between AVGPs and NVGPs, the two groups displayed comparable facial emotion processing abilities. The null effect of group in terms of emotion perception thus appears unlikely to be due to a lack of sensitivity in our approach. Our null finding dovetails with recent brain imaging reports which found no evidence of reduced functioning in brain networks important for emotion perception and for evaluating the emotional content of social situations in frequent users of violent video games (Szycik, Mohammadi, Hake, et al., 2017; Szycik, Mohammadi, Münte, & Te Wildt, 2017). Our null finding, however, contrasts with the proposal of enhanced perceptual abilities in AVGPs. Indeed, enhanced perception in AVGPs as a result of less noisy processing predicts better facial emotion discrimination, such as in the task used here. To better understand the robustness of this null finding, Study 2 uses a different approach, reverse inference, which more directly probes the quality of perceptual representations, in this case the mental models for each of the 6 basic facial emotions.

Study 2

Participants were presented with faces displaying random combinations of dynamically animated action units and were asked to categorize the dynamic facial expressions along the six possible basic emotions of happiness, fear, surprise, anger, disgust, and sadness (see Figure 3). Based on the participants' responses, which included not only categorization but also intensity judgments, to 2400 dynamic facial stimuli (6 hr of in-laboratory testing per participant, spread over 2 to 3 sessions), the unique spatiotemporal patterns of action unit activations associated with each of the six basic emotions were recovered separately for each participant (Jack et al., 2012). Using reverse correlation, we then estimated the internal models of our participants for each of the six basic emotions and analyzed these as a function of video game group. As for Study 1, to the extent that AVGPs benefit from enhanced perception, we could expect more precise emotional representations in AVGPs than in NVGPs. Yet, the GAM rather predicted a differential view, with sharper representations of anger and possibly fear and surprise, but more blunted ones for happiness, sadness, and possibly disgust. Finally, if the perceptual advantage of AVGPs was due to their capacity for learning to learn, the already high expertise of humans in facial emotion recognition predicted little difference across groups.

Sample Size

Estimation of sample size comparing two group means showed that an N of 25 participants per group was needed to achieve 80% power at a 5% significance level (repeated-measures ANOVA with a 2-level group factor and a 6-level repeated-measures emotion factor, d = 0.55, correlation r = .35 among repeated measures).
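A rough way to check a figure of this kind is to compute the noncentrality parameter with the G*Power-style formula for the between-groups effect of a repeated-measures design and then approximate power with a normal approximation. This is a sketch of one plausible calculation, not the authors' actual procedure:

```python
from statistics import NormalDist

def power_between_groups_rm(d, n_per_group, m, rho, alpha=0.05):
    """Approximate power for the between-groups effect of a repeated-measures
    ANOVA (2 groups, m repeated measures, correlation rho among measures).
    Uses the G*Power-style noncentrality
        lam = f**2 * N * m / (1 + (m - 1) * rho),  with f = d / 2,
    and a normal approximation to the noncentral distribution (df1 = 1)."""
    f = d / 2.0                                   # Cohen's f for two groups
    lam = f ** 2 * (2 * n_per_group) * m / (1 + (m - 1) * rho)
    delta = lam ** 0.5                            # equivalent noncentral-t parameter
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    return NormalDist().cdf(delta - z_crit)

power = power_between_groups_rm(d=0.55, n_per_group=25, m=6, rho=0.35)
```

With these inputs the approximation comes out near 0.82, in line with the reported 80% power; repeated measures that correlate only moderately (r = .35) boost the effective sample size well beyond what a simple two-group t test on 25 participants per group would give.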


Figure 3. Stimuli used in Study 2. In each trial, a Generative Face Grammar (GFG) platform (Yu et al., 2012) randomly selects a subset of AUs (red, green, and blue labels), in this case Upper Lid Raiser (AU5), Nose Wrinkler (AU9), and Upper Lip Raiser (AU10), from a core set of 42 AUs and assigns a random movement to each AU individually using six temporal parameters: onset latency, acceleration, peak amplitude, peak latency, deceleration, and offset latency (labels over the red curve). The GFG then combines the randomly activated AUs to produce a photorealistic facial animation (shown with four snapshots across time). As in Figures 1 and 2, the receiver categorizes the stimulus as socially meaningful (disgust) and rates the intensity of the perceived emotion (strong) when the dynamic pattern correlates with their mental representation of that facial expression. Building a relationship between the dynamic AUs presented in each trial and the receiver’s categorical responses produces a mathematical model of each dynamic facial expression of emotion. See the online article for the color version of this figure.

Method and Materials

Participants. The final sample included in the analyses consists of 27 AVGPs (mean age ± SD: 22.81 ± 2.69 years) and 27 NVGPs (22.96 ± 3.66 years), who were similar in age (t < 1, d = 0.05). This study was approved by the ethical committee of the University of Geneva and that of the University of Wisconsin (each group includes 18 participants from Geneva and 9 from Wisconsin), which abide by the Declaration of Helsinki. Subjects received compensation for their participation.

Recruitment proceeded as in Study 1, with some participants recruited at the University of Geneva and others at the University of Wisconsin. Initially, 88 participants were recruited. Based on previous work showing cultural differences in this task (Jack et al., 2012), our inclusion/exclusion criteria called for male, Caucasian participants only (12 AVGPs and 8 NVGPs who were non-Caucasian had to be excluded) between 18 and 35 years of age (one 47-year-old NVGP was excluded). Video game status was assessed as in Study 1. Participants were also selected so that they were not high media multitaskers, as per the Media Multitasking Inventory (Ophir, Nass, & Wagner, 2009). Four subjects (1 AVGP and 3 NVGPs) who failed to complete the Media Multitasking Inventory (MMI), and 1 AVGP and 1 NVGP whose MMI scores were higher than 6, were excluded. Finally, participants with incomplete or corrupted data were removed: five subjects (2 AVGPs and 3 NVGPs) who did not complete the 2400 trials of the task; 1 NVGP who gave the same response throughout the 2400 trials of the task; and 1 NVGP whose first analysis steps could not identify all six emotion categories.

Study procedure. The stimuli and task were identical to Jack et al. (2012). On each of the 2400 trials (see Figure 3), a facial animation was shown, consisting of 3 facial muscles (action units, AUs) chosen randomly from a set of 42 possible AUs. Each AU was animated according to a specific dynamic sequence characterized by 6 temporal parameters: onset/peak/offset latency, peak amplitude, acceleration, and deceleration, as in Yu, Garrod, and Schyns (2012). After each animation, participants were asked to categorize the expression among 7 possible choices (happy, sad, angry, fearful, surprise, disgust, or other) and then to rate its intensity (using a 5-point scale, with anchors: very weak, weak, medium, strong, very strong).
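A minimal sketch of this trial-generation logic follows; the parameter ranges are illustrative placeholders, not those of the actual GFG platform, which renders photorealistic animations:

```python
import random

AU_LABELS = [f"AU{i}" for i in range(1, 43)]  # stand-ins for the 42 AUs
PARAMS = ("onset_latency", "acceleration", "peak_amplitude",
          "peak_latency", "deceleration", "offset_latency")

def random_facial_animation(rng, n_aus=3, duration=1.25):
    """One GFG-style trial: pick AUs at random and give each a random
    temporal profile within the 1.25 s animation window.
    Parameter ranges here are illustrative, not those of the platform."""
    animation = {}
    for au in rng.sample(AU_LABELS, n_aus):
        onset = rng.uniform(0, duration / 2)
        peak = rng.uniform(onset, duration)
        animation[au] = {
            "onset_latency": onset,
            "peak_latency": peak,
            "offset_latency": rng.uniform(peak, duration),
            "peak_amplitude": rng.uniform(0, 1),
            "acceleration": rng.uniform(0, 1),
            "deceleration": rng.uniform(0, 1),
        }
    return animation

stim = random_facial_animation(random.Random(42))
```

Because the AU combinations and dynamics are random rather than posed, any systematic mapping between stimulus parameters and a participant's responses must come from the participant's own internal model of each expression, which is what the reverse-correlation analysis exploits.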

We used 8 different identities (4 women and 4 men) to generate random facial gestures by random selection of AUs and parameters.

Data analysis. Data analysis consisted of reverse correlating the spatial (42 AUs) and temporal parameters (6 parameters for each AU, defining the time course) on participants' responses. The responses were z-scored for each participant to reduce individual and identity biases.

First, we extracted spatial and temporal components. Spatial components reflect the nonparametric correlation (i.e., Spearman's rho) between the presence of a specific AU and a given emotion (e.g., how much the presence of AU-12 is associated with a greater probability of responding "Happy"), whereas temporal components relate to the six parameters defining the dynamics of each AU (e.g., whether the peak amplitude latency correlates with the perceived intensity judgment for a given emotion). In each case, these component matrices were derived by conducting a multivariate regression analysis and measuring the semipartial correlation (i.e., the unique contribution in terms of variance) of the specific spatial or temporal parameters, after partialing out all other parameters.

Next, these component matrices were used to evaluate (a) the number of diagnostic units (defined by those with significant positive correlations), (b) the dissimilarity between individual emotional models (corresponding to a vector of 42 correlation coefficients for each participant and emotion), and (c) their clustering into emotions (K-means clustering).

For these various dependent measures (number of diagnostic AUs, euclidean distances between models, or confusion matrices), classical statistical tests, including ANOVAs, t tests, or chi-square tests, were used to test for effects of group and/or emotion. Classical (i.e., frequentist) statistical analyses were complemented with Bayesian statistics to examine the strength of evidence in favor of or against our main hypotheses on group differences.

Results

Number of diagnostic AUs (amount of information per model). Figure 4 shows the mean number of diagnostic AUs for each emotion per group or, in other words, the number of AUs that correlated significantly with a given emotion response. Fear was associated with the fewest diagnostic AUs, whereas disgust was associated with the greatest number of diagnostic AUs. An ANOVA with emotion and group as factors indicated an effect of emotion (F(5, 260) = 16.62, p < .001, GG corrected p < .001, ηp² = 0.24, BF10 = 2.30E+12), but no effect of group and no interaction (Fs < 1, all BF10s < 0.2).

Figure 4. Mean number of diagnostic AUs per emotion. Error bars are standard deviations (hap = happiness; sur = surprise; fea = fear; dis = disgust; ang = anger). AVGPs = action video game players; NVGPs = nonvideo game players. See the online article for the color version of this figure.

Configuration of AUs (homogeneity of models across individuals). The interindividual variability of spatial configurations provides an index of the homogeneity of the representations within each group (AVGPs vs. NVGPs). Thus, smaller distances indicate greater model similarities across the considered individuals.

Dissimilarity matrices between individual emotional models (i.e., one model per emotion and per participant) were computed using euclidean distances, considering AVGPs and NVGPs separately (Figure 5A). The mean euclidean distance for each within-emotion model (i.e., the diagonal of the dissimilarity matrix) in each group was compared with an omnibus ANOVA, followed by independent-sample t tests (Figure 5B). The ANOVA revealed an overall main effect of emotion (F[5, 260] = 69.75, p < .001, GG corrected p < .001, ηp² = 0.57, BF10 = 8.88E+43), reflecting smaller distances for happiness (i.e., more similar models) than for other emotions across all participants. More relevant to our aim, this analysis revealed no main effect of group (F < 1, BF10 = 0.162). A significant group × emotion interaction, F(5, 260) = 2.50, p = .03, GG corrected p = .04, ηp² = 0.57, BF10 = 1.13, was found, which was driven by a pattern of nonsignificant group differences, with numerically smaller values (i.e., greater homogeneity) in the NVGP group for both sad and fear expressions (respectively p = .09, BF10 = 0.92 and p = .12, BF10 = 0.76), and numerically, though nonsignificantly, greater homogeneity in the AVGP group for happiness, anger, disgust, and surprise (all ps > .15, all BF10s < 0.64).

Confusions (quality of models). In order to examine the distinctiveness of individual emotional models, we performed a K-means clustering analysis of the euclidean distances between the emotional models recovered for each participant (Figure 5C) and examined common confusions, that is, the percentage of models that were misclassified as belonging to another emotion's cluster (Figure 5D). An ANOVA of the within-emotion clusters (i.e., the diagonal of the K-means matrix in Figure 5D) with emotion and group as factors revealed a main effect of emotion (F(5, 260) = 39.71, p < .001, GG corrected p < .001, ηp² = 0.43, BF10 = 3.25E+29), reflecting differences in classification accuracy across emotions. The effect of emotion was qualified by a group × emotion interaction, F(5, 260) = 2.74, p = .02, GG corrected p = .05, ηp² = 0.05, BF10 = 2.57. This interaction was driven by AVGPs having better models of surprise (i.e., fewer confusions, p = .04, BF10 = 1.71) and a trend toward NVGPs having better models for anger relative to AVGPs (p = .09, BF10 = 0.90). No other group difference approached significance (all other ps > .16).
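The spatial component of this analysis (correlating the presence of each AU with a given categorical response) can be sketched as a toy example on synthetic data; the fixed correlation threshold below stands in for the study's significance criterion, and all data are fabricated for illustration:

```python
import numpy as np

def spearman(x, y):
    """Spearman's rho as the Pearson correlation of (tie-averaged) ranks."""
    def rank(v):
        v = np.asarray(v, dtype=float)
        order = np.argsort(v, kind="stable")
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):          # average ranks over ties
            r[v == val] = r[v == val].mean()
        return r
    return np.corrcoef(rank(x), rank(y))[0, 1]

def diagnostic_aus(au_present, responses, emotion, threshold=0.1):
    """Count AUs whose presence correlates positively with choosing `emotion`.
    au_present: trials x 42 binary matrix; responses: one label per trial.
    The fixed threshold stands in for the study's significance test."""
    chose = (np.asarray(responses) == emotion).astype(float)
    rhos = [spearman(au_present[:, k], chose)
            for k in range(au_present.shape[1])]
    return sum(r > threshold for r in rhos)

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(400, 42)).astype(float)  # random AU on/off per trial
resp = np.where(X[:, 11] == 1, "happy", "other")      # AU-12 drives "happy" here
n_diagnostic = diagnostic_aus(X, resp, "happy")
```

In this synthetic setup only AU-12 genuinely predicts the "happy" response, so the count of diagnostic AUs stays small; in the study, the analogous per-participant counts (one vector of 42 correlations per emotion) form the emotional models compared across groups.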

Figure 5. Participants' emotional models, separately for AVGPs and NVGPs. (A) Dissimilarity matrices using euclidean distances applied to the vectors of correlations estimating the relationship between the presence of a specific AU and participants' categorical responses (27 participants by 6 emotions). (B) Mean euclidean distance (±SD) for each within-emotion model (i.e., mean and SD for the diagonal of the dissimilarity matrix displayed in panel A). (C) K-means clustering of individuals' emotion models based on euclidean distances. (D) Confusion matrices based on K-means clustering of euclidean distances (the color scale indicates the number of subjects). AVGPs = action video game players; NVGPs = nonvideo game players. See the online article for the color version of this figure.

Discussion

Study 2 aligns with and extends the results seen in Study 1, which indicated similar facial emotion representations in AVGPs and NVGPs. The absence of group effects raises the question of whether these may have been masked by some of our experimental choices. For example, the present study used photorealistic facial animations as stimuli, whose ecological validity remains unknown. Yet, several studies have shown that emotional expressions from both static (Dyck et al., 2008) and dynamic virtual computer-animated avatars (Faita et al., 2015) are comparable to those from real human emotions in terms of recognition accuracy and other social judgments, including the ability to replicate impairments found in clinical populations (Dyck, Winbeck, Leiberg, Chen, & Mathiak, 2010). Moreover, this set of stimuli is known to be sufficiently sensitive to reveal the existence of cultural and individual differences in how information from faces is extracted in order to make emotional judgments (Jack & Schyns, 2017). In sum, Study 2 reinforces the lack of group differences documented in Study 1 when it comes to facial emotion representations.
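Concretely, the confusion matrices of Figure 5D count how each group's individual emotion models distribute over the six K-means clusters, and the group comparison reduces to a chi-square test on the resulting counts. The sketch below is our illustration with hypothetical counts; the paper's exact test and Bayes factor computation may differ.

```python
import numpy as np

def confusion_matrix(emotion_labels, cluster_labels, k=6):
    """Rows: the emotion a model was built for; columns: the K-means
    cluster it was assigned to. Diagonal entries (with clusters matched
    to emotions) are correct classifications, as in Figure 5D."""
    m = np.zeros((k, k), dtype=int)
    for e, c in zip(emotion_labels, cluster_labels):
        m[e, c] += 1
    return m

def chi2_stat(counts_a, counts_b):
    """Pearson chi-square statistic for a 2 x k table of per-emotion
    counts, one row per group (df = k - 1). A plain-numpy stand-in for
    a standard contingency test."""
    table = np.vstack([counts_a, counts_b]).astype(float)
    expected = (table.sum(1, keepdims=True) @ table.sum(0, keepdims=True)
                / table.sum())
    return float(((table - expected) ** 2 / expected).sum())

# Hypothetical correct-classification counts (out of 27) per emotion:
avgp = np.array([27, 27, 22, 26, 20, 26])
nvgp = np.array([26, 23, 25, 27, 24, 27])
print(round(chi2_stat(avgp, nvgp), 2))
```

With six emotions the statistic is compared against a chi-square distribution with 5 degrees of freedom, matching the χ²(5) reported above.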
General Discussion

In a series of two studies, we investigated whether AVGPs process static and dynamic emotional stimuli better, similarly, or worse than NVGPs, using two complementary psychophysical methods that are sensitive to interindividual differences in emotion perception. In doing so, we contrasted three hypotheses. Less noisy perceptual processing across all domains of perception in AVGPs predicted that AVGPs would be globally more accurate at recognizing facial emotions than NVGPs (Study 1), and accordingly display more precise mental models or templates of facial emotion (Study 2). Our results did not support either of these predictions. Our results similarly failed to support the hypothesis, from social theories of media violence, that AVGPs may display higher sensitivity to anger due to a learned attention bias for hostile emotions, and/or blunted processing of emotions signaling distress. We found no evidence for such emotion-specific group differences. Instead, our results are more in line with the learning to learn hypothesis, or the proposal that AVGPs should outperform NVGPs due to higher learning abilities when faced with new information. To the extent that facial emotion perception is an overlearned visual function in adults, this view predicted that no group difference would be observed in either study. However, we note that predicted nulls in some ways offer less strong support for a theory than predicted positive results. Thus, future work may wish to directly contrast overlearned emotional expressions with emotional content to which participants are naïve.

The results reported in both studies do not support heightened emotional facial processing in AVGPs relative to NVGPs. Overall, our study contrasts with a growing body of work showing that AVGPs outperform NVGPs on a variety of perceptual, attentional, and cognitive skills (Bediou et al., 2018; but see Sala, Tatlidil, & Gobet, 2018). Regarding visual processing in particular, AVGPs have shown superior performance in a number of perceptual decision-making tasks measuring contrast sensitivity (Li et al., 2009), orientation identification (Bejjanki et al., 2014; Berard et al., 2015), or motion perception (Green et al., 2010; Hutchinson & Stocks, 2013).

These broad improvements in low-level aspects of visual perception are thought to arise through an increased capacity to separate relevant (i.e., diagnostic) information from distraction (i.e., noise), allowing perceptual information to accumulate faster in the service of decision making (Bavelier et al., 2012; Green et al., 2010). It could simply be that the benefits of AVG play, in terms of information processing, do not extend to the high-level perception of visual stimuli such as emotional faces. Another possibility, which is compatible with the learning to learn hypothesis, is that while the benefits of AVG practice generalize to tasks and stimuli that are relatively novel, the expertise reached by adults in processing facial emotions is likely to be the same, independent of video game expertise. In the case of such overlearned stimuli, AVGPs would perform no differently than NVGPs.

The absence of group differences across each and every emotion considered also speaks against the idea of emotion-specific differences, as would be predicted by some authors in the violent video game literature. Unlike Diaz and colleagues (2016), we found no difference in the mental representations of fear and disgust facial expressions between AVGPs and NVGPs. These authors compared the performance of violent video game players (n = 83) and NVGPs (n = 69) in a facial emotion recognition task using static images of real faces. Violent video game players performed better than NVGPs for fear (increased speed and accuracy), whereas they performed worse for disgust (accuracy only). Although Study 1 cannot speak to this issue, as it did not probe fear or disgust, Study 2 did, and provides no support for such a group difference. If any group effects were to be present concerning these two emotions, they concerned NVGPs exhibiting more similar representations for fear as a group, and AVGPs showing more similar representations for disgust, again as a group. Yet, in terms of the quality of the models, which is best measured by confusions, there was no evidence for any group differences along these two emotions. A worthwhile consideration in future studies is the gender balance across groups. Our study exclusively included males, as very few females are avid players of action video games. While players of violent video games are also predominantly male, many violent media studies do not systematically match for gender, making it difficult to evaluate the role of media use versus gender in the effects reported. For example, the study by Diaz and colleagues discussed above included 69% males in the violent game players group, but only 23% males in the NVGPs group. Moreover, the gamer group was also more anxious and more depressed than the nongamer group. It would seem best practice to tightly control for gender and mental health status when comparing groups selected based on their gaming habits.

Another proposal in the violent media literature is that of a reduced happy face advantage in violent media users. For example, Kirsh et al. (2006) contrasted the speeded recognition of happy and angry facial expressions in participants scoring high (N = 58) versus low (N = 58) on a violent media usage questionnaire. They reported a reduced happy face advantage in the high-scoring participants. In a follow-up study, Kirsh and Mounts (2007) also reported a reduced happy face advantage after having participants (N = 197) play 15 min of a violent as compared to a nonviolent video game. Overall, these studies reported that RTs to happy faces are typically shorter than those to angry faces, a result also observed in Study 1. Of greater interest is the size of this happy face advantage as a function of gaming group. Unlike our modeling approach of RT and accuracy, which considered each emotion separately, the Kirsh et al. studies used a composite score in which the mean RT for recognizing happy faces was subtracted from the mean RT for recognizing angry faces. These authors reported that RTs to happy faces were more similar to the angry-face RTs in the group that played violent video games (either cross-sectionally or in their intervention study). The interpretation of this difference score is not straightforward. While the authors argued for a slowing down of RTs to happy faces in the violent gaming group, no pretest was performed, and their result could have been due to a speeding of RTs to angry faces in that same group. In our research, we could not detect any group differences in the processing or representations of happy or angry facial expressions as a function of video game group. This is despite our experimental approach being able to replicate the main happy face advantage.

A more recent 25-min intervention study contrasted the effect of playing a violent (n = 23, 8 females) or a neutral video game (n = 22, 3 females) on behavioral and neural markers during two search tasks with happy, angry, and neutral faces. The authors anticipated that violent video game play might reduce the attentional manifestation of the happy face advantage (Liu, Lan, Teng, Guo, & Yao, 2017). One task assessed attentional facilitation by emotional faces and required participants to search for an emotional face among neutral faces. A second task, which measured the difficulty of disengaging attention from emotional distractors, required participants to search for a neutral face among emotional ones. In the facilitation task, no group effect was detectable on RTs, accuracy, or the N2pc, an evoked potential indexing attention allocation. In the disengagement task, no group effect was found on RTs or accuracy, and only the neutral video game group displayed an N2pc in response to happy faces, providing weak evidence that violent game play may specifically reduce the happy face advantage (or, alternatively, may potentiate an attention bias toward angry faces). At the level of behavioral markers, however, these results more directly mirror those of the present study of no difference as a function of video game play type. It should be noted that our results are not in contradiction with previous work showing that chronic violent video game play is linked with reduced P300 responses to negative violent scenes (but not to negative nonviolent scenes), which was interpreted as consistent with the hypothesis that chronic exposure to violent scenes may lead to desensitized responses to violence (Bailey et al., 2011; Bartholow et al., 2006).

The overall lack of group difference across all emotions tested in the present study is all the more striking given that we used two robust approaches known to sample decision making and the representations they engage exhaustively. Accordingly, both Studies 1 and 2 demonstrate multiple core known effects in emotion processing. For instance, our DDM results are in line with reports of a happy-face advantage, compared to the identification of angry or sad faces (Kirita & Endo, 1995; Kirsh et al., 2006; Leppänen, Tenhunen, & Hietanen, 2003). Indeed, we found greater accumulation rates for happy faces as compared to angry and sad ones in Study 1. Likewise, the greater homogeneity in mental representations of happy faces, as indicated by the reduced euclidean distances relative to other emotions found in Study 2, is also compatible with the happy-face advantage.
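The accumulation-rate account can be made concrete with a small simulation. The sketch below is our illustration of a two-choice drift-diffusion process (Euler discretization, made-up parameter values), not the hierarchical model fitted in Study 1: a larger drift rate, as estimated for happy faces, yields both faster and more accurate decisions.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, start=0.5, dt=0.002, sigma=1.0,
                 n_trials=500, rng=None):
    """Euler simulation of a two-choice drift-diffusion process.
    Evidence starts at `start` and accumulates with rate `drift` plus
    Gaussian noise until it hits 0 (error) or `boundary` (correct).
    Returns (accuracy, mean decision time). Parameter values are
    illustrative, not those estimated in Study 1."""
    rng = rng or np.random.default_rng(7)
    n_correct, decision_times = 0, []
    for _ in range(n_trials):
        x, t = start, 0.0
        while 0.0 < x < boundary:
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        n_correct += x >= boundary
        decision_times.append(t)
    return n_correct / n_trials, float(np.mean(decision_times))

# A higher accumulation (drift) rate, as reported for happy faces,
# produces faster and more accurate decisions than a lower one.
acc_happy, rt_happy = simulate_ddm(drift=2.0)
acc_angry, rt_angry = simulate_ddm(drift=1.0)
print(acc_happy > acc_angry, rt_happy < rt_angry)
```

In the full model, nondecision time and boundary separation are estimated alongside drift, which is how Study 1 could compare groups at every decision-making stage.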

Although the drift-diffusion model approach has not been commonly applied to facial emotion decision making, its usefulness as a tool to diagnose group differences in information processing is increasingly recognized (White et al., 2010). Accordingly, Study 1 illustrates its sensitivity to such known effects in the literature.

Conclusion

In conclusion, our studies show that AVG experience is associated with differences neither in facial emotion decision making nor in the mental representations of dynamic facial emotions. While this null effect could be attributed to our subject selection, we used the same criterion used in the past and confirmed that, in doing so, AVGPs do differ in their response to provocation in a competitive laboratory task. Thus, the lack of group differences in emotion processing reported here is unlikely attributable to a failure to properly select our samples. Rather, this work suggests that the cognitive benefits documented in AVGPs do not readily extend to the emotional domain, at least when using a task at which participants are already experts, such as facial emotion recognition. This comes as a surprise in the context of the enhanced perceptual skills documented in AVGPs; indeed, facial emotion identification is, at its core, a perceptual skill. Yet, the proposal that AVGPs may not benefit perceptually from the onset, but rather may learn perceptual tasks faster, as argued in Bavelier et al. (2012) and tested in Bejjanki et al. (2014), seems aligned with the current null result. Indeed, group differences, if due to faster learning in AVGPs, are expected to be detectable at intermediate stages of learning, before asymptotic performance or expertise has been achieved. The present results thus invite further studies comparing AVGPs and NVGPs using tasks and stimuli ranging from very novel to highly familiar. It is also possible that the domain of emotional processing differs from that of perceptual processing in terms of its sensitivity to AVG play. This is an important avenue for future work, as it would provide important boundary conditions to further characterize the way AVG play affects information processing.

References

Anderson, C. A., & Bushman, B. J. (2002). Human aggression. Annual Review of Psychology, 53, 27–51. http://dx.doi.org/10.1146/annurev.psych.53.100901.135231

Bailey, K., & West, R. (2013). The effects of an action video game on visual and affective information processing. Brain Research, 1504, 35–46. http://dx.doi.org/10.1016/j.brainres.2013.02.019

Bailey, K., West, R., & Anderson, C. A. (2011). The association between chronic exposure to video game violence and affective picture processing: An ERP study. Cognitive, Affective, & Behavioral Neuroscience, 11, 259–276. http://dx.doi.org/10.3758/s13415-011-0029-y

Bartholow, B. D., Bushman, B. J., & Sestir, M. A. (2006). Chronic violent video game exposure and desensitization to violence: Behavioral and event-related brain potential data. Journal of Experimental Social Psychology, 42, 532–539. http://dx.doi.org/10.1016/j.jesp.2005.08.006

Bavelier, D., Green, C. S., Pouget, A., & Schrater, P. (2012). Brain plasticity through the life span: Learning to learn and action video games. Annual Review of Neuroscience, 35, 391–416. http://dx.doi.org/10.1146/annurev-neuro-060909-152832

Bediou, B., Adams, D. M., Mayer, R. E., Tipton, E., Green, C. S., & Bavelier, D. (2018). Meta-analysis of action video game impact on perceptual, attentional, and cognitive skills. Psychological Bulletin, 144, 77–110. http://dx.doi.org/10.1037/bul0000130

Bejjanki, V. R., Zhang, R., Li, R., Pouget, A., Green, C. S., Lu, Z. L., & Bavelier, D. (2014). Action video game play facilitates the development of better perceptual templates. Proceedings of the National Academy of Sciences of the United States of America, 111, 16961–16966. http://dx.doi.org/10.1073/pnas.1417056111

Belchior, P., Marsiske, M., Sisco, S. M., Yam, A., Bavelier, D., Ball, K., & Mann, W. C. (2013). Video game training to improve selective visual attention in older adults. Computers in Human Behavior, 29, 1318–1324. http://dx.doi.org/10.1016/j.chb.2013.01.034

Berard, A. V., Cain, M. S., Watanabe, T., & Sasaki, Y. (2015). Frequent video game players resist perceptual interference. PLoS ONE, 10(3), e0120011. http://dx.doi.org/10.1371/journal.pone.0120011

Blacker, K. J., & Curby, K. M. (2013). Enhanced visual short-term memory in action video game players. Attention, Perception, & Psychophysics, 75, 1128–1136. http://dx.doi.org/10.3758/s13414-013-0487-0

Cain, M. S., Prinzmetal, W., Shimamura, A. P., & Landau, A. N. (2014). Improved control of exogenous attention in action video game players. Frontiers in Psychology, 5, 69. http://dx.doi.org/10.3389/fpsyg.2014.00069

Cardoso-Leite, P., Joessel, A., & Bavelier, D. (2020). Games for enhancing cognitive abilities. In J. L. Plass, R. E. Mayer, & B. D. Homer (Eds.), MIT handbook of game-based learning. Cambridge, MA: MIT Press. Retrieved from https://mitpress.mit.edu/books/handbook-game-based-learning

Carnagey, N. L., & Anderson, C. A. (2005). The effects of reward and punishment in violent video games on aggressive affect, cognition, and behavior. Psychological Science, 16, 882–889. http://dx.doi.org/10.1111/j.1467-9280.2005.01632.x

Chisholm, J. D., Hickey, C., Theeuwes, J., & Kingstone, A. (2010). Reduced attentional capture in action video game players. Attention, Perception, & Psychophysics, 72, 667–671. http://dx.doi.org/10.3758/APP.72.3.667

Chisholm, J. D., & Kingstone, A. (2012). Improved top-down control reduces oculomotor capture: The case of action video game players. Attention, Perception, & Psychophysics, 74, 257–262. http://dx.doi.org/10.3758/s13414-011-0253-0

Chisholm, J. D., & Kingstone, A. (2015). Action video game players' advantage extends to biologically relevant stimuli. Acta Psychologica, 159, 93–99. http://dx.doi.org/10.1016/j.actpsy.2015.06.001

Dale, G., & Shawn Green, C. (2017). The changing face of video games and video gamers: Future directions in the scientific study of video game play and cognitive performance. Journal of Cognitive Enhancement, 1, 280–294. http://dx.doi.org/10.1007/s41465-017-0015-6

Decety, J., Echols, S., & Correll, J. (2010). The blame game: The effect of responsibility and social stigma on empathy for pain. Journal of Cognitive Neuroscience, 22, 985–997. http://dx.doi.org/10.1162/jocn.2009.21266

Diaz, R. L., Wong, U., Hodgins, D. C., Chiu, C. G., & Goghari, V. M. (2016). Violent video game players and non-players differ on facial emotion recognition. Aggressive Behavior, 42, 16–28. http://dx.doi.org/10.1002/ab.21602

Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in Psychology, 5, 781. http://dx.doi.org/10.3389/fpsyg.2014.00781

Dyck, M., Winbeck, M., Leiberg, S., Chen, Y., Gur, R. C., & Mathiak, K. (2008). Recognition profile of emotions in natural and virtual faces. PLoS ONE, 3(11), e3628. http://dx.doi.org/10.1371/journal.pone.0003628

Dyck, M., Winbeck, M., Leiberg, S., Chen, Y., & Mathiak, K. (2010). Virtual faces as a tool to study emotion recognition deficits in schizophrenia. Psychiatry Research, 179, 247–252. http://dx.doi.org/10.1016/j.psychres.2009.11.004

Dye, M. W. G., Green, C. S., & Bavelier, D. (2009). The development of attention skills in action video game players. Neuropsychologia, 47(8–9), 1780–1789. http://dx.doi.org/10.1016/j.neuropsychologia.2009.02.002

Elson, M. (2017). Competitive reaction time task. Retrieved from http://www.flexiblemeasures.com/crtt/

Elson, M., Mohseni, M. R., Breuer, J., Scharkow, M., & Quandt, T. (2014). Press CRTT to measure aggressive behavior: The unstandardized use of the competitive reaction time task in aggression research. Psychological Assessment, 26, 419–432. http://dx.doi.org/10.1037/a0035569

Engelhardt, C. R., Bartholow, B. D., Kerr, G. T., & Bushman, B. J. (2011). This is your brain on violent video games: Neural desensitization to violence predicts increased aggression following violent video game exposure. Journal of Experimental Social Psychology, 47, 1033–1036. http://dx.doi.org/10.1016/j.jesp.2011.03.027

Faita, C., Vanni, F., Lorenzini, C., Carrozzino, M., Tanca, C., & Bergamasco, M. (2015). Perception of basic emotions from facial expressions of dynamic virtual avatars. In L. T. De Paolis & A. Mongelli (Eds.), Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics) (Vol. 9254, pp. 409–419). Lecce, Italy: Springer. http://dx.doi.org/10.1007/978-3-319-22888-4_30

Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175–191. http://dx.doi.org/10.3758/BF03193146

Feng, J., & Spence, I. (2018). Playing action video games boosts visual attention. In C. J. Ferguson (Ed.), Video game influences on aggression, cognition, and attention (pp. 93–104). Cham, Switzerland: Springer International Publishing. http://dx.doi.org/10.1007/978-3-319-95495-0_8

Feng, J., Spence, I., & Pratt, J. (2007). Playing an action video game reduces gender differences in spatial cognition. Psychological Science, 18, 850–855. http://dx.doi.org/10.1111/j.1467-9280.2007.01990.x

Ferguson, C. J. (2015). Do angry birds make for angry children? A meta-analysis of video game influences on children's and adolescents' aggression, mental health, prosocial behavior, and academic performance. Perspectives on Psychological Science, 10, 646–666. http://dx.doi.org/10.1177/1745691615592234

Ferguson, C. J., Cruz, A. M., Martinez, D., Rueda, S. M., Ferguson, D. E., & Negy, C. (2008). Personality, parental, and media influences on aggressive personality and violent crime in young adults. Journal of Aggression, Maltreatment & Trauma, 17, 395–414. http://dx.doi.org/10.1080/10926770802471522

Ferguson, C. J., & Rueda, S. M. (2009). Examining the validity of the modified Taylor competitive reaction time test of aggression. Journal of Experimental Criminology, 5, 121–137. http://dx.doi.org/10.1007/s11292-009-9069-5

Ferguson, C. J., San Miguel, C., Garza, A., & Jerabeck, J. M. (2012). A longitudinal test of video game violence influences on dating and aggression: A 3-year longitudinal study of adolescents. Journal of Psychiatric Research, 46, 141–146. http://dx.doi.org/10.1016/j.jpsychires.2011.10.014

Ferguson, C. J., Smith, S., Miller-Stratton, H., Fritz, S., & Heinrich, E. (2008). Aggression in the laboratory: Problems with the validity of the modified Taylor competitive reaction time test as a measure of aggression in media violence studies. Journal of Aggression, Maltreatment & Trauma, 17, 118–132. http://dx.doi.org/10.1080/10926770802250678

Föcker, J., Mortazavi, M., Khoe, W., Hillyard, S. A., & Bavelier, D. (2019). Neural correlates of enhanced visual attentional control in action video game players: An event-related potential study. Journal of Cognitive Neuroscience, 31, 377–389. http://dx.doi.org/10.1162/jocn_a_01230

Gentile, D. (2009). Pathological video-game use among youth ages 8 to 18: A national study. Psychological Science, 20, 594–602. http://dx.doi.org/10.1111/j.1467-9280.2009.02340.x

Giancola, P. R., & Zeichner, A. (1995). Construct validity of a competitive reaction-time aggression paradigm. Aggressive Behavior, 21, 199–204. http://dx.doi.org/10.1002/1098-2337(1995)21:3<199::AID-AB2480210303>3.0.CO;2-Q

Gleichgerrcht, E., & Decety, J. (2014). The relationship between different facets of empathy, pain perception and compassion fatigue among physicians. Frontiers in Behavioral Neuroscience, 8, 243. http://dx.doi.org/10.3389/fnbeh.2014.00243

Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423, 534–537. http://dx.doi.org/10.1038/nature01647

Green, C. S., & Bavelier, D. (2006a). Effect of action video games on the spatial distribution of visuospatial attention. Journal of Experimental Psychology: Human Perception and Performance, 32, 1465–1478. http://dx.doi.org/10.1037/0096-1523.32.6.1465

Green, C. S., & Bavelier, D. (2006b). Enumeration versus multiple object tracking: The case of action video game players. Cognition, 101, 217–245. http://dx.doi.org/10.1016/j.cognition.2005.10.004

Green, C. S., & Bavelier, D. (2007). Action-video-game experience alters the spatial resolution of vision. Psychological Science, 18, 88–94. http://dx.doi.org/10.1111/j.1467-9280.2007.01853.x

Green, C. S., Pouget, A., & Bavelier, D. (2010). Improved probabilistic inference as a general learning mechanism with action video games. Current Biology, 20, 1573–1579. http://dx.doi.org/10.1016/j.cub.2010.07.040

Greitemeyer, T., Osswald, S., & Brauer, M. (2010). Playing prosocial video games increases empathy and decreases schadenfreude. Emotion, 10, 796–802. http://dx.doi.org/10.1037/a0020194

Hilgard, J., Engelhardt, C. R., & Rouder, J. N. (2017). Overstated evidence for short-term effects of violent games on affect and behavior: A reanalysis of Anderson et al. (2010). Psychological Bulletin, 143, 757–774. http://dx.doi.org/10.1037/bul0000074

Hubert-Wallander, B., Green, C. S., Sugarman, M., & Bavelier, D. (2011). Changes in search rate but not in the dynamics of exogenous attention in action videogame players. Attention, Perception, & Psychophysics, 73, 2399–2412. http://dx.doi.org/10.3758/s13414-011-0194-7

Hutchinson, C. V., & Stocks, R. (2013). Selectively enhanced motion perception in core video gamers. Perception, 42, 675–677. http://dx.doi.org/10.1068/p7411

Hyatt, C. S., Chester, D. S., Zeichner, A., & Miller, J. D. (2019). Analytic flexibility in laboratory aggression paradigms: Relations with personality traits vary (slightly) by operationalization of aggression. Aggressive Behavior, 45, 377–388. http://dx.doi.org/10.1002/ab.21830

Jack, R. E., Garrod, O. G. B., Yu, H., Caldara, R., & Schyns, P. G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences of the United States of America, 109, 7241–7244. http://dx.doi.org/10.1073/pnas.1200155109

Jack, R. E., & Schyns, P. G. (2017). Toward a social psychophysics of face communication. Annual Review of Psychology, 68, 269–297. http://dx.doi.org/10.1146/annurev-psych-010416-044242

Kepes, S., Bushman, B. J., & Anderson, C. A. (2017). Violent video game effects remain a societal concern: Reply to Hilgard, Engelhardt, and Rouder (2017). Psychological Bulletin, 143, 775–782. http://dx.doi.org/10.1037/bul0000112

Kirita, T., & Endo, M. (1995). Happy face advantage in recognizing facial expressions. Acta Psychologica, 89, 149–163. http://dx.doi.org/10.1016/0001-6918(94)00021-8

Kirsh, S. J., & Mounts, J. R. W. (2007). Violent video game play impacts facial emotion recognition. Aggressive Behavior, 33, 353–358. http://dx.doi.org/10.1002/ab.20191

Kirsh, S. J., Mounts, J. R. W., & Olczak, P. V. (2006). Violent media consumption and the recognition of dynamic facial expressions. Journal of Interpersonal Violence, 21, 571–584. http://dx.doi.org/10.1177/0886260506286840

Krishnan, L., Kang, A., Sperling, G., & Srinivasan, R. (2013). Neural strategies for selective attention distinguish fast-action video game players. Brain Topography, 26, 83–97. http://dx.doi.org/10.1007/s10548-012-0232-3

Kühn, S., Kugler, D. T., Schmalen, K., Weichenberger, M., Witt, C., & Gallinat, J. (2018). Does playing violent video games cause aggression? A longitudinal intervention study. Molecular Psychiatry, 24, 1220–1234. http://dx.doi.org/10.1038/s41380-018-0031-7

Lamm, C., Batson, C. D., & Decety, J. (2007). The neural substrate of human empathy: Effects of perspective-taking and cognitive appraisal. Journal of Cognitive Neuroscience, 19, 42–58. http://dx.doi.org/10.1162/jocn.2007.19.1.42

Latham, A. J., Patston, L. L. M., & Tippett, L. J. (2014). The precision of experienced action video-game players: Line bisection reveals reduced leftward response bias. Attention, Perception, & Psychophysics, 76, 2193–2198. http://dx.doi.org/10.3758/s13414-014-0789-x

Leppänen, J. M., Tenhunen, M., & Hietanen, J. K. (2003). Faster choice-reaction times to positive than to negative facial expressions: The role of cognitive and motor processes. Journal of Psychophysiology, 17, 113–123. http://dx.doi.org/10.1027//0269-8803.17.3.113

Li, R., Polat, U., Makous, W., & Bavelier, D. (2009). Enhancing the contrast sensitivity function through action video game training. Nature Neuroscience, 12, 549–551. http://dx.doi.org/10.1038/nn.2296

Li, R., Polat, U., Scalzo, F., & Bavelier, D. (2010). Reducing backward masking through action game training. Journal of Vision, 10(14), 33. http://dx.doi.org/10.1167/10.14.33

Liu, Y., Lan, H., Teng, Z., Guo, C., & Yao, D. (2017). Facilitation or disengagement? Attention bias in facial affect processing after short-term violent video game exposure. PLoS ONE, 12(3), e0172940. http://dx.doi.org/10.1371/journal.pone.0172940

Lobel, A., Engels, R. C. M. E., Stone, L. L., Burk, W. J., & Granic, I. (2017). Video gaming and children's psychosocial wellbeing: A longitudinal study. Journal of Youth and Adolescence, 46, 884–897. http://dx.doi.org/10.1007/s10964-017-0646-z

McCarthy, R. J., & Elson, M. (2018). A conceptual review of lab-based aggression paradigms. Collabra: Psychology, 4, 4. http://dx.doi.org/10.1525/collabra.104

Mishra, J., Zinni, M., Bavelier, D., & Hillyard, S. A. (2011). Neural basis of superior performance of action videogame players in an attention-demanding task. The Journal of Neuroscience, 31, 992–998. http://dx.doi.org/10.1523/JNEUROSCI.4834-10.2011

Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences of the United States of America, 106, 15583–15587. http://dx.doi.org/10.1073/pnas.0903620106

Palmer, J., Huk, A. C., & Shadlen, M. N. (2005). The effect of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision, 5(5), 376–404. http://dx.doi.org/10.1167/5.5.1

Pavan, A., Hobaek, M., Blurton, S. P., Contillo, A., Ghin, F., & Greenlee, M. W. (2019). Visual short-term memory for coherent motion in video game players: Evidence from a memory-masking paradigm. Scientific Reports, 9, 6027. http://dx.doi.org/10.1038/s41598-019-42593-0

Pichon, S., Antico, L., Chanal, J., Singer, T., & Bavelier, D. (2020). The link between competitive personality, aggressive and altruistic behaviors in action video game players. Manuscript submitted for publication.

Plummer, M. (2003). JAGS: A program for analysis of Bayesian models using Gibbs sampling. Proceedings of the 3rd International Workshop on Distributed Statistical Computing, 20–22. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.3406

Pohl, C., Kunde, W., Ganz, T., Conzelmann, A., Pauli, P., & Kiesel, A. (2014). Gaming to see: Action video gaming is associated with enhanced processing of masked stimuli. Frontiers in Psychology, 5, 70. http://dx.doi.org/10.3389/fpsyg.2014.00070

Prescott, A. T., Sargent, J. D., & Hull, J. G. (2018). Metaanalysis of the relationship between violent video game play and physical aggression over time. Proceedings of the National Academy of Sciences of the United States of America, 115, 9882–9888. http://dx.doi.org/10.1073/pnas.1611617114

Qiao-Tasserit, E., Garcia Quesada, M., Antico, L., Bavelier, D., Vuilleumier, P., & Pichon, S. (2017). Transient emotional events and individual affective traits affect emotion recognition in a perceptual decision-making task. PLoS ONE, 12(2), e0171375. http://dx.doi.org/10.1371/journal.pone.0171375

Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20, 873–922. http://dx.doi.org/10.1162/neco.2008.12-06-420

Ratcliff, R., & Rouder, J. N. (1998). Modeling response times for two-choice decisions. Psychological Science, 9, 347–356. http://dx.doi.org/10.1111/1467-9280.00067

Ratcliff, R., & Tuerlinckx, F. (2002). Estimating parameters of the diffusion model: Approaches to dealing with contaminant reaction times and parameter variability. Psychonomic Bulletin & Review, 9, 438–481. http://dx.doi.org/10.3758/BF03196302

Roque, N. A., & Boot, W. R. (2018). Action video games DO NOT promote visual attention. In C. Ferguson (Ed.), Video game influences on aggression, cognition, and attention (pp. 105–118). Cham, Switzerland: Springer International Publishing. http://dx.doi.org/10.1007/978-3-319-95495-0_9

Sala, G., Tatlidil, K. S., & Gobet, F. (2018). Video game training does not enhance cognitive ability: A comprehensive meta-analytic investigation. Psychological Bulletin, 144, 111–139. http://dx.doi.org/10.1037/bul0000139

Schubert, T., Finke, K., Redel, P., Kluckow, S., Müller, H., & Strobach, T. (2015). Video game experience and its influence on visual attention parameters: An investigation using the framework of the Theory of Visual Attention (TVA). Acta Psychologica, 157, 200–214. http://dx.doi.org/10.1016/j.actpsy.2015.03.005

Sungur, H., & Boduroglu, A. (2012). Action video game players form more detailed representation of objects. Acta Psychologica, 139, 327–334. http://dx.doi.org/10.1016/j.actpsy.2011.12.002

Szycik, G. R., Mohammadi, B., Hake, M., Kneer, J., Samii, A., Münte, T. F., & Te Wildt, B. T. (2017). Excessive users of violent video games do not show emotional desensitization: An fMRI study. Brain Imaging and Behavior, 11, 736–743. http://dx.doi.org/10.1007/s11682-016-9549-y

Szycik, G. R., Mohammadi, B., Münte, T. F., & Te Wildt, B. T. (2017). Lack of evidence that neural empathic responses are blunted in excessive users of violent video games: An fMRI study. Frontiers in Psychology, 8, 174. http://dx.doi.org/10.3389/fpsyg.2017.00174

van Ravenzwaaij, D., Boekel, W., Forstmann, B. U., Ratcliff, R., & Wagenmakers, E. J. (2014). Action video games do not improve the speed of information processing in simple perceptual tasks. Journal of Experimental Psychology: General, 143, 1794–1805. http://dx.doi.org/10.1037/a0036923

Wabersich, D., & Vandekerckhove, J. (2014). Extending JAGS: A tutorial on adding custom distributions to JAGS (with a diffusion model example). Behavior Research Methods, 46, 15–28. http://dx.doi.org/10.3758/s13428-013-0369-3

Whitaker, J. L., & Bushman, B. J. (2012). "Remain calm. Be kind." Effects of relaxing video games on aggressive and prosocial behavior. Social

Psychological and Personality Science, 3, 88–92. http://dx.doi.org/10 Wu, S., & Spence, I. (2013). Playing shooter and driving videogames .1177/1948550611409760 improves top-down guidance in visual search. Attention, Perception, & White, C. N., Ratcliff, R., Vasey, M. W., & McKoon, G. (2010). Using Psychophysics, 75, 673–686. http://dx.doi.org/10.3758/s13414-013- diffusion models to understand clinical disorders. Journal of Mathemat- 0440-2 ical Psychology, 54, 39–52. http://dx.doi.org/10.1016/j.jmp.2010.01.004 Yu, H., Garrod, O. G. B., & Schyns, P. G. (2012). Perception-driven facial Wiecki, T. V., Sofer, I., & Frank, M. J. (2013, August). HDDM: Hierar- expression synthesis. Computers and Graphics, 36, 152–162. http://dx chical Bayesian estimation of the drift-diffusion model in Python. Fron- .doi.org/10.1016/j.cag.2011.12.002 tiers in Neuroinformatics, 7, 14. http://dx.doi.org/10.3389/fninf.2013 .00014 Wilms, I. L., Petersen, A., & Vangkilde, S. (2013). Intensive video gaming improves encoding speed to visual short-term memory in young male Received January 22, 2019 adults. Acta Psychologica, 142, 108–118. http://dx.doi.org/10.1016/j Revision received October 16, 2019 .actpsy.2012.11.003 Accepted January 17, 2020 Ⅲ This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.