Emotion perception in habitual players of action video games

Swann Pichon*,1,2,6, Benoit Bediou*,1,6, Lia Antico1,2,6, Rachael Jack3, Oliver Garrod3, Chris Sims4, C. Shawn Green5, Philippe Schyns3, Daphne Bavelier1,6

1 Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland

2 Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland

3 School of Psychology and Institute of Neuroscience and Psychology, Glasgow University, UK

4 Cognitive Science Department, Rensselaer Polytechnic Institute, USA

5 Department of Psychology, University of Wisconsin-Madison, USA

6 Campus Biotech, Geneva, Switzerland

* Co-First/Equal authorship

Running title: Facial emotion perception in action game players

Keywords: Action video game, emotion perception, drift diffusion modelling, reverse inference

Corresponding author

Swann Pichon, PhD

Faculty of Psychology and Educational Sciences

University of Geneva, Campus Biotech H8-3,

1 Chemin des Mines 9, 1202 Geneva, Switzerland

Abstract

Action video game players (AVGPs) display superior performance in various aspects of cognition, especially in perception and top-down attention. The existing literature has examined these performance advantages almost exclusively with stimuli and tasks devoid of any emotional content. Thus, whether the superior performance documented in the cognitive domain extends to the emotional domain remains unknown. We present two cross-sectional studies contrasting AVGPs and non-video game players (NVGPs) in their ability to perceive facial emotions. Under an enhanced perception account, AVGPs should outperform NVGPs when processing facial emotion. Yet, alternative accounts exist. For instance, under some social accounts, exposure to action video games, which often contain violence, may lower sensitivity for empathy-related expressions such as happiness, sadness, and pain, while increasing sensitivity to aggression signals. Finally, under the view that AVGPs excel at learning new tasks (in contrast to the view that they are immediately better at all new tasks), the use of stimuli that participants are already expert at predicts little to no group differences. Study 1 uses drift-diffusion modeling and establishes that AVGPs are comparable to NVGPs at every decision-making stage mediating the discrimination of facial emotions, despite showing a group difference in aggressive behavior. Study 2 uses the reverse inference technique to assess the mental representation of facial emotion expressions, and again documents no group differences. These results indicate that the perceptual benefits associated with action video game play do not extend to overlearned stimuli such as facial emotions, and rather point to equivalent facial emotion skills in AVGPs and NVGPs.

1.1 Introduction

Over the past fifteen years, a growing body of work within the domain of cognitive psychology has examined the effects of playing video games on cognitive function. One sub-type of video game, termed action video games (AVG), has been a primary focus in the field. Recent meta-analytic work indicates that habitual action video game players outperform non-action video game players by approximately two-thirds of a standard deviation across a wide variety of cognitive skills (Bediou et al., 2018). Consistent with broader theories in cognitive psychology, action games have been distinguished from other game types based upon the load that they place on core cognitive processes (Cardoso-Leite, Joessel, & Bavelier, 2020). This differs from other areas of psychology, where games are selected for research based upon differences in content (e.g., violent content, pro-social content, educational content, etc.). Key characteristics relevant to cognitive processing that are inherent in action games include the need to track multiple targets simultaneously (cognitive load), to select targets from within cluttered displays (attentional load), to identify targets based upon fine-grained features (perceptual load), and to perform these computations in the service of producing effective decisions and actions (planning/motor load). The most common game types that fit these characteristics are first- and third-person shooter video games, which together comprise the majority of games that would be labeled as action video games.

Playing action video games has been shown to be both associated with (i.e., in cross-sectional research designs), and to cause (i.e., in experimental research designs), a range of enhancements in perceptual, top-down attentional, and spatial cognitive skills. Critically, these enhancements are not seen with other types of video games, such as life simulation games or puzzle games, for example. The perceptual benefits associated with AVG play range from enhanced contrast sensitivity (Li, Polat, Makous, & Bavelier, 2009), to higher spatial resolution of vision (Green & Bavelier, 2006a, 2007; Latham, Patston, & Tippett, 2014; Schubert et al., 2015), to reduced lateral and backward masking (Li, Polat, Scalzo, & Bavelier, 2010; Pohl et al., 2014). Action video game play has also been associated with superior performance in top-down attention tasks (Feng & Spence, 2018; Green & Bavelier, 2003; but see Roque & Boot, 2018), from demanding visual searches (Chisholm, Hickey, Theeuwes, & Kingstone, 2010; Chisholm & Kingstone, 2012; Wu & Spence, 2013), to tasks meant to measure attentional control (Cain, Prinzmetal, Shimamura, & Landau, 2014; Dye, Green, & Bavelier, 2009; Föcker, Mortazavi, Khoe, Hillyard, & Bavelier, 2018), to tasks that measure flexible allocation of attention to objects (Green & Bavelier, 2006b). These instances of superior performance reflect a greater ability to focus on the task and important stimuli at hand while ignoring sources of noise, distractions or interruptions. For example, using the SSVEP technique, AVGPs have been observed to better filter out distractors during perceptually demanding tasks (Krishnan, Kang, Sperling, & Srinivasan, 2013; Mishra, Zinni, Bavelier, & Hillyard, 2011). Finally, in the domain of visuo-spatial cognition, AVGPs have been found to outperform NVGPs on mental rotation tasks as well as on short-term memory and working memory tasks using visuo-spatial stimuli (Blacker & Curby, 2013; Feng, Spence, & Pratt, 2007; Green & Bavelier, 2006b). This work points to action gaming being associated with more precise and flexible memory representations.

A recurrent finding in the AVGP literature is that of enhanced information processing. Dye, Green and Bavelier (2009) showed that AVGPs responded about 10% faster than NVGPs across a wide range of cognitive tasks. This same advantage was seen from tasks where reaction times were very fast (a few hundreds of milliseconds) to those where responses were relatively slower (reaction times on the order of seconds). It was further shown that these faster reaction times did not entail a cost in terms of task accuracy. Later work used drift-diffusion modeling to partition the overall reaction times into component processes and to examine which processes were altered in AVGPs. Here, action video game play was associated primarily with greater evidence accumulation rates. For example, a higher rate of evidence accumulation in AVGPs has been observed for motion perception and auditory discrimination (Green, Pouget, & Bavelier, 2010; but see van Ravenzwaaij, Boekel, Forstmann, Ratcliff, & Wagenmakers, 2014), for contrast sensitivity (Li et al., 2009, critical duration task), and for top-down attentional mechanisms (Belchior et al., 2013; Hubert-Wallander, Green, Sugarman, & Bavelier, 2011). Higher capacity of information processing in AVGPs is also illustrated by more precise memory representations, as exemplified by performance on tasks involving memory for motion (Pavan et al., 2019; Wilms, Petersen, & Vangkilde, 2013) and memory for color (Sungur & Boduroglu, 2012). Altogether, this work points to a rather global advantage in information processing, with AVGPs extracting information from the environment at a faster rate than NVGPs, along with showing greater precision of the encoded memory representations during cognitive tasks.

An unresolved question concerns the breadth of the changes observed in AVGPs. Thus far, behavioral differences between AVGPs and NVGPs have been almost exclusively examined in the context of well-known tasks in cognitive psychology. Yet, whether this advantage extends to the emotional domain remains largely uncharted. Chisholm and Kingstone (2015) found that the greater top-down attentional control abilities of AVGPs extend to tasks using schematic facial emotional expressions as distractors. Specifically, AVGPs were better than NVGPs at ignoring abrupt-onset distractors, irrespective of whether the distractor was a neutral, happy, or inverted face. This result, a replication of previous work with non-emotional stimuli (Chisholm & Kingstone, 2012), was interpreted as a sign of greater top-down attentional control (as measured by the ability to ignore distraction) in AVGPs. However, because facial emotion in this study was always task-irrelevant, these results are not necessarily informative with regard to the processing of facial emotions in AVGPs per se. Bailey and West (2013) studied changes in event-related potential responses to emotional faces following 10 hours of playing an action versus a non-action video game. Participants performed a visual search task with schematic emotional faces. Their task was to look for a happy or an angry facial target in an array of neutral distractors. Ten hours of action video game training resulted in increased neural responses to both emotional targets (happy and angry faces) and non-emotional non-targets (neutral faces) in right frontal and occipito-parietal brain areas compared to non-action video game training. While a shorter P3 latency at posttest compared to pretest was observed in action trainees only, there was no visible impact on behavioral measures and no difference across stimulus categories (i.e., emotional vs. non-emotional). Although certainly not diagnostic, these few results are in line with the hypothesis of enhanced information processing that extends from cognitive to emotional tasks. In that respect, we may expect similar findings in the emotional domain as in the cognitive domain, namely a greater rate of evidence accumulation and higher memory precision in AVGPs when processing emotional faces.

An alternative prediction, however, comes from a line of research in social psychology, which proposes that repeated exposure to video games with violent content might be associated with individual differences, or possibly changes, in social behavior, together with increased sensitivity to social cues signaling aggression and decreased sensitivity to social cues signaling distress. While the cognitive psychology literature has focused on differentiating games based upon cognitive processing demands, it is almost certainly the case that individuals who are categorized as AVGPs have had exposure to different video game content than individuals categorized as NVGPs. In particular, although violence is not a necessary characteristic of action games (i.e., there are non-violent action video games, such as MarioKart or Splatoon), it is nonetheless the case that most action video games do contain violence. Furthermore, because NVGPs (by definition) will have less total game experience than AVGPs, they will also, necessarily, have had less total exposure to violent video game content. Thus, while the AVGP/NVGP categorization scheme utilized in cognitive psychology is not a perfect match to the violent/non-violent categorization scheme used in social psychology, it is interesting to also consider the predictions regarding emotional processing arising from this line of research, as they contrast with the predictions arising from the cognitive domain.

For instance, the General Aggression Model (GAM) predicts that exposure to rewarded violence in video games is likely to increase gamers' aggressive affect (i.e., hostile feelings), aggressive cognitions (i.e., hostile thoughts), and aggressive behaviors, at least in the short term (Anderson & Bushman, 2002; Carnagey & Anderson, 2005), while prosocial games seem to promote prosocial behaviors in the short term (Gentile, 2009; Greitemeyer, Osswald, & Brauer, 2010). In the long term, the same model predicts that repeated exposure to media violence may influence aggressive behavior by promoting aggressive beliefs and reinforcing aggressive expectations and scripts, as well as by desensitizing individuals to aggression. In line with the desensitization view, some studies have reported reduced amplitudes of brain responses to pictures depicting violent content among chronic players of violent video games and in players transiently exposed to violent video games (Bailey, West, & Anderson, 2011; Bartholow, Bushman, & Sestir, 2006; Engelhardt, Bartholow, Kerr, & Bushman, 2011). Another extension of this model proposes that, when activated, these hostile representations may affect the processing of perceptual information by creating an attentional bias, which increases the processing of information signaling social threat. In that view, repeated exposure to violent video games may influence the processing of social cues in real life, as individuals may be more likely to pay attention to social cues and interpret them negatively as signaling hostile intentions. At the same time, these hostile representations are proposed to reduce individuals' sensitivity to emotional signals that have the potential to trigger empathic responding, such as sad, painful, or happy expressions.

Consistent with these ideas, Kirsh, Mounts, and Olczak (2006) found that individuals who were high in violent media consumption were faster at identifying anger and/or slower at identifying happiness compared to individuals who were low in violent media consumption. Similar results were found a few minutes after playing a violent video game for 15 minutes (Kirsh & Mounts, 2007). While these studies document differences in the speed of emotion processing, others have reported differences in accuracy as a function of violent video game experience, looking at a larger range of emotions. Diaz, Wong, Hodgins, Chiu, and Goghari (2016) found that violent video game players recognized disgusted faces less accurately, and fearful faces more accurately (and also faster), than non-video game players. Thus, the effects of violent video games on emotion processing are likely subtler than just assuming general blunting. Instead, the model predicts enhanced perception of anger, and possibly of other negatively valenced emotions, at the expense of hindered perception of happiness, pain, or sadness.

Finally, a third possible prediction comes from the proposal that the cognitive benefits documented in AVGPs discussed above arise through a learning mechanism that is common across a variety of tasks. Under this "learning to learn" hypothesis proposed by Bavelier et al. (2012), AVGPs would more readily learn the particular statistical regularities of the new tasks and stimuli they are presented with. This would then, in turn, manifest as AVGPs outperforming NVGPs, including AVGPs exhibiting faster response times or more precise memory representations. A distinctive feature of this learning to learn view is that group differences should be greatest at intermediate stages of learning, when using stimuli and task configurations that are rather unfamiliar, such as random dot kinematograms or the N-back task. Because differences between AVGPs and NVGPs in this framework arise from differences in learning rate, this theory anticipates that AVGPs are unlikely to outperform NVGPs on tasks or stimuli that are highly overlearned in adults, such as categorizing facial emotions.

Here we sought to adjudicate between the three overarching hypotheses discussed above: enhanced information processing, GAM, and learning to learn. In a first study (n=97), we contrasted AVGPs and NVGPs as they discriminated static facial expressions with different intensities of anger, happiness, sadness, and pain. We applied a drift-diffusion model (DDM) to characterize the different stages of decision-making in that emotional task. A key advantage of DDMs over standard methods is that they aggregate the complex pattern of RTs and accuracies into a relatively small number of parameters that together capture different stages of the decision-making process. Another advantage of DDMs is that they are more sensitive for detecting potential group-level differences than standard behavioral measures (e.g., White, Ratcliff, Vasey, & McKoon, 2010). We can thus ask whether potential differences in discrimination performance arise from the rate of evidence accumulation, or from response biases such as response caution (boundary separation), motor execution time (non-decision time), or a possible bias in the starting point of the diffusion process (starting point bias). The enhanced information processing view predicts group differences in evidence accumulation for all emotions. In contrast, the GAM predicts higher evidence accumulation rates and higher starting point bias for angry expressions, and possibly lower evidence accumulation rates and lower starting point bias for happiness, sadness, and other empathy-related emotions such as pain, in AVGPs compared to NVGPs. Finally, the learning to learn hypothesis predicts no group differences in any of the aspects of decision-making, given that the task decisions involve well-tuned templates for expressions that have been overlearned. Moreover, as small to medium cross-sectional differences in lab-based measures of aggression have been reported by meta-analyses between frequent players of violent video games and NVGPs (Hilgard, Engelhardt, & Rouder, 2017), a further distinctive feature of this study was to ask whether the same cross-sectional difference was observed between AVGPs and NVGPs. To this end, we made use of the Competitive Reaction Time (CRT) task, a lab-based paradigm that has often been used to measure reactive aggression and has been shown to correlate with certain dimensions of trait aggression (Giancola & Zeichner, 1995; but see Ferguson & Rueda, 2009).

In a second study (n=54), we contrasted the mental representations that underlie the categorization of facial emotions in another sample of AVGPs and NVGPs using the reverse inference technique. This technique has its foundation in the well-accepted finding that facial emotional expressions are typically communicated through complex patterns of muscle movements rather than just the end-points of facial expressions, which is what is examined when using static photographs. The reverse inference technique provides a characterization of the spatial and temporal patterns of facial muscle activation, also called action units (AUs), that give rise to the recognition of dynamic facial expressions signaling happiness, surprise, fear, anger, disgust, and sadness. Previous work suggests that this technique is sufficiently sensitive to identify large-scale differences in experiences. For instance, it has been previously used to characterize cultural differences in facial emotion representations (Jack, Garrod, Yu, Caldara, & Schyns, 2012). On this task, to the extent that AVGPs benefit from enhanced information processing, we would expect group differences, especially higher fidelity of each emotion representation in AVGPs as compared to NVGPs. In contrast, the GAM again predicts a differential view with sharper representations of anger (and possibly fear and surprise), but more blunted ones for happiness, sadness, and possibly disgust. Finally, if the noted cognitive advantage of AVGPs were due to learning to learn, the high expertise of humans in facial emotion recognition predicts equivalent representations across groups.

1.2 STUDY 1 - Materials and methods

Participants completed 2 sessions separated by several weeks. Sessions were composed of tasks assessing emotion perception skills. Session 1 (N = 97) included two emotion perception tasks, using sad-to-angry and pain-to-happy morph continua. Participants discriminated stimuli in a two-alternative forced-choice task as in Qiao-Tasserit et al. (2017). Stimuli were presented in blocks of 240 faces per continuum (16 identities per continuum, 15 levels per morph continuum, presentation time of 500 ms, without a response time limit; see Figure 1). Participants also completed the Competitive Reaction Time task, which evaluates how participants react to unjustified aggression (Whitaker & Bushman, 2012), as well as social tests and questionnaires assessing different aspects of personality (aggressive personality, competitiveness, and empathy, among others), which will be reported elsewhere (Pichon, Antico, Chanal, Singer, & Bavelier, submitted). At Session 2, 84 of the original 97 participants returned. Session 2 included four emotion perception tasks using the same four emotions as in Session 1, but with the discrimination being performed against neutral expressions.

Sample size: A recent meta-analysis in AVG research highlighted superior performance (d = 0.55) in cross-sectional studies comparing AVGPs and NVGPs in various cognitive and perceptual tasks (Bediou et al., 2018). Estimation of sample size with G*Power 3 (Faul, Erdfelder, Lang, & Buchner, 2007) comparing two group means showed that, for the designs of Study 1.1 and Study 1.2 respectively, Ns of 37 and 28 participants per group were needed to achieve 80% power at a 5% significance level (repeated-measures ANOVA with a 2-level group factor and a 2-level (Study 1.1) or 4-level (Study 1.2) repeated-measures emotion factor, d = 0.55, correlation r = 0.35 among repeated measures based on an independent dataset (n = 160) with the same stimuli).
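The reported per-group Ns can be approximately checked against the G*Power conventions for a repeated-measures ANOVA with a between-subjects factor. The sketch below is our reconstruction, not the authors' script: it assumes Cohen's f = d/2 for two groups and the noncentrality parameter lambda = f^2 * N * m / (1 + (m - 1) * rho) for m correlated repeated measures.

```python
from scipy.stats import f as f_dist, ncf

def rm_anova_between_power(n_per_group, d=0.55, m=2, rho=0.35, alpha=0.05):
    """Approximate power of the between-group effect in a two-group
    repeated-measures ANOVA, following G*Power 3 conventions:
    f = d/2 for two groups; lambda = f**2 * N * m / (1 + (m - 1) * rho)."""
    N = 2 * n_per_group               # total sample size
    f_effect = d / 2.0                # Cohen's f for two groups
    lam = f_effect**2 * N * m / (1.0 + (m - 1) * rho)
    df1, df2 = 1, N - 2               # df for the between-group effect
    f_crit = f_dist.ppf(1.0 - alpha, df1, df2)
    return 1.0 - ncf.cdf(f_crit, df1, df2, lam)
```

With n = 37 per group and m = 2 (Study 1.1), or n = 28 per group and m = 4 (Study 1.2), this formula yields power close to the targeted 80%, consistent with the values reported above.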

Figure 1. Stimuli and protocol used in Study 1: Panel A. Example of pain-happy and sad-angry continua used in Session 1. Panel B. Time course of a trial for the emotion perception tasks of Sessions 1 and 2.

Participants: The final sample consisted of 47 AVGPs (mean age ± SD: 22.3 ± 4.3 years old) and 50 NVGPs (24.2 ± 5.1 y.o.) at Session 1. At Session 2, 38 AVGPs (22.3 ± 4.3 y.o.) and 46 NVGPs (24.1 ± 5.3 y.o.) returned. All participants were male and of Caucasian descent. The two groups were roughly matched in age (at Session 1, t(95) = -2, p = 0.044; and at Session 2, t(82) = -1.6, p = 0.1). This study was approved by the ethical committee of the University of Geneva, which abides by the Declaration of Helsinki. Subjects received compensation for their participation. We recruited a total of 104 male subjects. Seven subjects were excluded for the following reasons: 3 participants were erroneously classified as AVGPs or NVGPs at the recruitment stage (see criteria below), 1 participant was an outlier on age (45 y.o., while the age range of other participants was 18-35 y.o.), and 3 participants did not comply with instructions, responding randomly.

Our laboratory maintains a database of participants populated via responses from local ads. Some of these ads specifically target experienced and less experienced video-game players, as numbers in these extreme categories are otherwise too low to carry out a well-powered study. Participants are recruited year-round in that database by a lab assistant. As part of this process, participants fill out a battery of questionnaires, including questionnaires on leisure activities (sports practice, media habits, and video-game play) and cultural background. A list of participants that matched the inclusion/exclusion criteria of the present study was generated from that database. To avoid introducing demand characteristics, these participants were invited to the study by an experimenter unrelated to the lab assistant responsible for the database. Furthermore, participants were told that the study aimed to investigate cross-cultural differences in interpersonal behaviors. The experimenters were kept blind to participants' video-game play status during the study.

The inclusion/exclusion criteria to be considered an AVGP or an NVGP were the same as those used in the Bavelier lab in the past. In order to be classified as an AVGP, an individual needed to have played at least 5 hours per week of first- or third-person shooter video games in the past year and to have reported at most 1-3 hours per week of play in each of these other game genres: turn-based strategy games, action-sports games, real-time strategy games, fantasy/role playing games, and music games (i.e., they needed to be what we refer to as "genre pure" players, only playing action video games and no other types of games; see Dale & Shawn Green, 2017). Participants who played 3-5 hours of AVG a week in the past year were also classified as AVGPs if they had played first- or third-person shooters more than 5 hours a week before the past year. The criteria to be considered an NVGP were to play at most 1 hour per week of each of the game genres listed above, and no more than 5 hours total per week across all game genres, in the past year as well as the year before that. Note that only males were tested because of the relative scarcity of female AVGPs.
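These verbal criteria can be summarized as a small decision rule. The sketch below is an illustrative operationalization (the genre labels and the handling of boundary cases are our assumptions, not the lab's actual screening code):

```python
ACTION = "shooter"
OTHER_GENRES = ["turn_based_strategy", "action_sports",
                "real_time_strategy", "fantasy_rpg", "music"]

def classify(hours_past_year, hours_year_before):
    """Classify a participant as 'AVGP', 'NVGP', or None (excluded),
    from dicts of weekly play hours per genre for the past two years."""
    shooter_now = hours_past_year[ACTION]
    shooter_before = hours_year_before[ACTION]

    # AVGP: "genre pure" (at most 1-3 h/week per other genre) AND either
    # >= 5 h/week of shooters in the past year, or 3-5 h/week now with
    # > 5 h/week of shooters before the past year.
    genre_pure = all(hours_past_year[g] <= 3 for g in OTHER_GENRES)
    if genre_pure and (shooter_now >= 5
                       or (3 <= shooter_now < 5 and shooter_before > 5)):
        return "AVGP"

    # NVGP: <= 1 h/week per genre and <= 5 h/week total, in both years.
    def low_play(hours):
        per_genre_ok = all(hours[g] <= 1 for g in OTHER_GENRES + [ACTION])
        return per_genre_ok and sum(hours.values()) <= 5
    if low_play(hours_past_year) and low_play(hours_year_before):
        return "NVGP"

    return None  # intermediate players are excluded from both groups
```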

Face stimuli: We used black-and-white pictures extracted from a dataset of video clips containing actors expressing pain, sad, happy, and angry facial expressions. This dataset has been validated in previous behavioral and functional studies (Decety, Echols, & Correll, 2010; Gleichgerrcht & Decety, 2014; Lamm, Batson, & Decety, 2007). We extracted one frame at the apex of each expression, and selected expressions recognized above 70% accuracy (chance level 20%) in a 5-AFC discrimination pilot study. For Session 1, we generated sad and angry composite images from 16 identities (8 female actors), showing either a sad or an angry expression, using the Fantamorph software (Abrosoft Co.). We also generated pain and happy composite images from the same identities expressing either a pain or a happy expression. These prototypical images were used as endpoints to generate, for each identity, morph sequences with 15 steps, with intermediate images changing incrementally from unambiguously sad (respectively pain) to unambiguously angry (respectively happy), with emotionally ambiguous images in the middle. A grey mask surrounded each face. Luminance and contrast were equated for all faces. We used a similar approach to generate morphed stimuli at Session 2. We used the same identities to generate four continua composed of linear morph sequences between a set of neutral expressions and each of the four emotions used at Session 1. Each continuum was composed of 15 levels of morphs.
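The logic of a 15-step continuum with equated luminance and contrast can be sketched as follows. Note this pixel-wise linear blend is a simplification: Fantamorph performs feature-based (warping) morphs, which a plain weighted average of pixel values only approximates.

```python
import numpy as np

def morph_continuum(img_a, img_b, n_levels=15):
    """Generate n_levels images blending img_a into img_b.
    Pixel-wise linear interpolation, a simplified stand-in for the
    feature-based morphing done by dedicated software."""
    weights = np.linspace(0.0, 1.0, n_levels)
    return [(1.0 - w) * img_a + w * img_b for w in weights]

def equate_luminance_contrast(img, target_mean=0.5, target_sd=0.15):
    """Rescale an image to a common mean luminance and RMS contrast
    (target values here are arbitrary illustrative choices)."""
    z = (img - img.mean()) / img.std()
    return z * target_sd + target_mean
```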

The Competitive Reaction Time task (CRT): The CRT task is one of the most commonly used measures of laboratory aggression (McCarthy & Elson, 2018). It is a 25-trial competitive game that requires participants to respond to a visual cue faster than their partner (Whitaker & Bushman, 2012). Given that our goal was to assess whether AVGPs/NVGPs (as defined in the cognitive psychology literature) differed from one another in the same ways as has been previously reported for violent/non-violent game players (as defined in the social psychology literature), the CRT task was well-suited for these needs. We used the computerized version of the CRT task developed by Brad Bushman and Scott Saults (version 3.4.2, https://uk.groups.yahoo.com/neo/groups/CRTRP/info).

Participants were told they would be connected via our on-line platform to a same-sex opponent from another research institute in Europe (actually a computer confederate). Before playing each trial, the player determines the sound intensity and duration of the noise blast that the opponent would receive in case he loses the trial. Noise blasts range between 60 dB and 105 dB (in 5 dB increments; a 0 dB non-aggressive level was also available) and could last between 0 and 5 s. Unknown to the participant, the sequence of wins and losses is predetermined. To provoke aggression from the participant, the first trial ends in a loss for the participant, who receives a punishing noise blast with intensity and duration set at maximum. The pre-programmed sequence of won or lost trials (W/L) was the same for all subjects. We used the following sequence with the corresponding levels of punishment intensities and durations set by the opponent: [L-10-10, L-9-10, W-8-7, W-7-8, L-6-6, W-5-4, L-4-2, L-3-3, W-2-5, W-2-5, W-4-3, L-3-2, W-5-4, L-6-6, L-8-8, W-9-9, L-7-7, W-7-9, L-9-8, W-8-6, W-6-7, L-5-5, L-3-4, W-4-3, L-2-2]. Noise blasts were calibrated by measuring the volume of the headphone system with a sound level meter (NTI XL2, with an impulse measure LAImax and a temporal integration window of 35 ms).

Different strategies have been used to quantify aggressive behavior in the CRT task (Elson, Mohseni, Breuer, Scharkow, & Quandt, 2014; Hyatt, Chester, Zeichner, & Miller, 2019). Here we report the mean volume intensity and the mean duration, each averaged across all 25 trials, which is one of the most frequently used strategies according to Elson (2017) and has been proposed as a standardized measurement for the CRT task (Ferguson, Smith, Miller-Stratton, Fritz, & Heinrich, 2008). Among the 97 participants who completed the CRT task in Session 1, 13 participants (4 AVGPs, 9 NVGPs) expressed suspicions and were excluded from its analysis (N = 84 total, 43 AVGPs and 41 NVGPs).
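The trial notation and the standardized scores described above can be illustrated as follows. `decode` and `mean_settings` are hypothetical helpers, applied here to the opponent's pre-programmed settings only for illustration; the reported measures are computed from each participant's own 25 settings.

```python
# Pre-programmed opponent sequence from the text: outcome-intensity-duration.
SEQUENCE = ("L-10-10 L-9-10 W-8-7 W-7-8 L-6-6 W-5-4 L-4-2 L-3-3 W-2-5 W-2-5 "
            "W-4-3 L-3-2 W-5-4 L-6-6 L-8-8 W-9-9 L-7-7 W-7-9 L-9-8 W-8-6 "
            "W-6-7 L-5-5 L-3-4 W-4-3 L-2-2").split()

def decode(trial):
    """Split 'W-8-7' into ('W', 8, 7): outcome, intensity, duration levels."""
    outcome, intensity, duration = trial.split("-")
    return outcome, int(intensity), int(duration)

def mean_settings(trials):
    """Mean intensity and mean duration across all trials, the
    standardized CRT measures described in the text."""
    decoded = [decode(t) for t in trials]
    n = len(decoded)
    return (sum(d[1] for d in decoded) / n,
            sum(d[2] for d in decoded) / n)
```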

Drift-diffusion modelling of facial emotion discrimination: For each emotional continuum, the distributions of classification responses and RTs were jointly modeled using a DDM (Ratcliff & McKoon, 2008; Ratcliff & Rouder, 1998). The DDM assumes that evidence is integrated until a decision boundary is reached, at which point a response is generated. Because of noise in the drift process, stochasticity in RTs and in accuracy emerges even for fixed stimuli. The important parameters of the DDM are the mean evidence accumulation rate (the rate at which evidence accumulates towards the decision boundary); the decision boundary separation (the amount of evidence required to commit to one response, which reflects response caution); an additive component of response time called "non-decision time", which reflects all the combined effects on RT that are not due to evidence accumulation, such as motor planning and execution; and finally a possible initial bias (i.e., starting point) towards one of the two response alternatives. The latter parameter captures the fact that the diffusion process may start closer to one of the two decision boundaries, which results in faster responses for that alternative, as well as potentially higher error rates. For a full description of our DDM, we refer the reader to the Supplementary Material and the supplementary code provided.
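To make the roles of these parameters concrete, a single DDM trial can be simulated with a simple Euler scheme. This is a textbook sketch for illustration, not the JAGS model used for fitting:

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, start_bias=0.5,
                 dt=0.001, noise=1.0, rng=None):
    """Simulate one DDM trial.

    drift      : mean evidence accumulation rate
    boundary   : boundary separation (evidence runs from 0 to boundary)
    ndt        : non-decision time in seconds, added to accumulation time
    start_bias : relative starting point in (0, 1); 0.5 = unbiased
    Returns (choice, rt), where choice is 1 for the upper boundary.
    """
    rng = rng or np.random.default_rng()
    x = start_bias * boundary          # starting point of the diffusion
    t = 0.0
    while 0.0 < x < boundary:          # accumulate noisy evidence
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t + ndt
```

With a positive drift, most trials terminate at the upper boundary, and the non-decision time acts as a floor on the simulated RTs, which illustrates how the parameters separate decision and non-decision contributions to RT.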

Previous work (Ratcliff & Tuerlinckx, 2002) has shown that outliers (trials with unusually fast or slow response times) can bias the estimation of parameters within the DDM. Hence, we removed the slowest and fastest 2.5% of trials for each subject and continuum.
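The trimming rule can be sketched as follows (a minimal version, to be applied separately per subject and continuum):

```python
import numpy as np

def trim_rts(rts, prop=0.025):
    """Drop the fastest and slowest `prop` of trials, keeping the
    central 95% of the RT distribution (for prop = 0.025)."""
    lo, hi = np.quantile(rts, [prop, 1.0 - prop])
    return rts[(rts >= lo) & (rts <= hi)]
```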

We estimated parameters of the DDM using Bayesian inference. More specifically, we utilized the diffusion model extensions developed for the JAGS statistical software package (Wabersich & Vandekerckhove, 2014). Bayesian parameter estimation allows a posterior distribution over parameter estimates to be obtained, in contrast to other approaches that yield only a point estimate. Although it is also possible to develop hierarchical models within the Bayesian framework (Wiecki, Sofer, & Frank, 2013), for this application we adopted a non-hierarchical analysis framework in which parameters were estimated independently for each participant.

The DDM was implemented in the JAGS model description language (Plummer, 2003). The model specifies a prior probability distribution over each of the model parameters, as well as the likelihood of the observed data given the model and its parameterization. Markov Chain Monte Carlo (MCMC) is then used to approximate the Bayesian posterior distribution over parameter values. In our application we assumed that all model parameters had flat (uniform) priors over a plausible range of values. Complete code implementing the model is provided.

Parameters obtained from the analysis were examined at both the individual level (95% intervals around parameter estimates) and the group level. For group-level analyses, we computed the posterior mean parameter estimate for each subject, and then performed standard statistical analyses (ANOVAs) on the estimated subject-level parameters. In our implementation of the DDM, we assumed that the evidence accumulation rate depended on the specific morph value of the visual stimulus. As an intuitive example, some images of faces are clearly angry or sad, while others are more ambiguous. This difference between stimuli is modeled as a difference in how quickly evidence is accumulated in the diffusion process. In order to minimize the number of parameters in the model, we further assumed that the relationship between the stimulus morph value and the evidence accumulation rate followed a power law:

δ(j) = sign(j − μ) · κ · |j − μ|^γ

In the above, δ(j) is the evidence accumulation rate (also called the drift rate) for stimulus morph value j, μ is the morph value of subjective equality along the continuum (the morph value perceived as intermediate between the two alternatives), κ controls the scaling of the power function, and γ is the exponent that determines the curvature of the power-law relationship. When γ equals 1, the model reduces to a linear relationship. A power-law relationship was previously shown to provide an excellent fit to perceptual decision making in other domains (Palmer, Huk, & Shadlen, 2005). In summary, this equation parameterizes the evidence accumulation rates for the 15 morph values in terms of three underlying parameters, which reduces the number of model parameters and avoids potential overfitting. We further increased the statistical power of the model by enforcing that stimuli from all morph values constrain the estimates of the common underlying parameters, namely response bias, boundary separation, and non-decision time: unlike the accumulation rate, these were held constant across all stimuli within each emotional continuum.
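For illustration, the power-law drift mapping can be written directly (the parameter values below are arbitrary, not fitted values):

```python
def drift_rate(j, mu, kappa, gamma):
    """delta(j) = sign(j - mu) * kappa * |j - mu| ** gamma.

    mu is the point of subjective equality (drift = 0), kappa scales
    the function, and gamma controls its curvature (gamma = 1 is linear).
    """
    diff = j - mu
    sign = (diff > 0) - (diff < 0)
    return sign * kappa * abs(diff) ** gamma

# Illustrative (not fitted) parameters over the 15 morph levels, PSE at 8.
rates = [drift_rate(j, mu=8.0, kappa=0.4, gamma=1.3) for j in range(1, 16)]
```

The mapping is antisymmetric around μ: drift is zero at the point of subjective equality and grows toward both extremes of the continuum.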

Data analysis: Parameters of the DDMs (evidence accumulation rates, decision boundaries, non-decision times, response bias) and the intensity and duration variables from the CRT task were analyzed with ANOVAs and T-Tests to test for effects of group, emotion, and morph. Frequentist analyses were complemented with Bayesian statistics to examine the strength of evidence for or against our main hypotheses about the effect of group (AVGP/NVGP). By convention, a Bayes factor (BF10) > 3 represents substantial evidence for the alternative hypothesis, while a Bayes factor (BF10) < 0.3 represents substantial evidence for the null hypothesis. Anything between these values expresses weak or anecdotal evidence (Dienes, 2014).
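This convention can be restated compactly (thresholds as stated above; the labels are ours):

```python
def interpret_bf10(bf10):
    """Evidence label for a Bayes factor BF10, following the convention
    above: > 3 is substantial evidence for H1, < 0.3 is substantial
    evidence for H0, otherwise weak/anecdotal (Dienes, 2014)."""
    if bf10 > 3:
        return "substantial evidence for H1"
    if bf10 < 0.3:
        return "substantial evidence for H0"
    return "weak/anecdotal evidence"

# E.g., a group effect with BF10 = 0.07 counts as substantial
# evidence for the null hypothesis of no group difference.
label = interpret_bf10(0.07)
```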

1.3 STUDY 1 – Results Session1 (N = 97)

Emotion discrimination task: Group differences in mean evidence accumulation rates obtained from the DDM were assessed through an omnibus ANOVA (n = 97) with group (AVGP/NVGP), emotion continuum (pain-happy/sad-angry), and morph level (15 levels) as factors. As expected, we observed a main effect of morph level (F(14, 1330) = 1185.4, p < 0.0001, Greenhouse-Geisser [GG] corrected p < 0.0001, ηp² = 0.84, see Figure 2, panels A-B), indicating greater evidence accumulation rates at the extremes of the morph continua. There was no effect of emotion (F(1, 95) = 2.3, p = 0.13). The only significant interaction that survived GG correction was that of emotion by morph level (F(14, 1330) = 80.7, p < 0.0001, GG corrected p < 0.0001, ηp² = 0.12), signaling higher evidence accumulation rates at extreme morph values for the pain-happy as compared to the sad-angry continuum.

Importantly, there was no reliable effect of video game group (F(1, 95) = 0.5, p = 0.5, ηp² = 0.001, BF10 = 0.07). Interactions with group, when present in the raw analyses, did not survive GG correction (group-by-morph: F(14, 1330) = 1.8, p = 0.032, GG corrected p = 0.18, ηp² = 0.008; group-by-emotion-by-morph: F(14, 1330) = 2.5, p = 0.001, GG corrected p = 0.08, ηp² = 0.004). These non-significant group trends arise from slightly higher evidence accumulation rates at the extremes of the morph levels in AVGPs as compared to NVGPs, especially in the sad-angry continuum.

In addition to evidence accumulation rate, the DDM also provides estimates of decision boundary, non-decision time, and response bias. Separate ANOVAs for each of these dependent variables with the factors group and emotion continuum revealed a main effect of emotion in boundary separation (F(1, 95) = 10.8, p = 0.001, ηp² = 0.02) and in non-decision time (F(1, 95) = 31, p < 0.0001, ηp² = 0.06), diagnostic of more cautious responding (mean: 1.65 vs. 1.55) and slower motor execution times (0.54 vs. 0.49) in the sad-angry versus the pain-happy continuum. A main effect of emotion for response bias (F(1, 95) = 11, p = 0.001, ηp² = 0.06) was also visible (sad-angry: 0.53, happy-pain: 0.51); we note this effect is difficult to interpret as bias estimates are specific to each emotion continuum. As with the accumulation rate, these results are in line with the greater discriminability of the pain-happy as compared to the sad-angry continuum. More relevant to our aim, there was yet again no effect of group in boundary separation (p > 0.99, BF10 = 0.29), in non-decision time (p > 0.53, BF10 = 0.6), or in response bias (p = 0.06, BF10 = 0.57), nor any interaction between group and emotion (all ps > 0.14).

Figure 2: Results of Study 1. Each column plots the psychometric and chronometric functions and the evidence accumulation rates for the 15 morph values. Panel A) Session1 data and DDM fit illustrating higher evidence accumulation at the extremities of the pain-happy as compared to the sad-angry continuum. Panel B) Session1 data and DDM fits illustrating the comparable emotion decision-making skills of AVGPs and NVGPs. Panel C) Session2 data and DDM fits illustrating higher evidence accumulation for the pain-neutral and happy-neutral continua as compared to the sad-neutral and angry-neutral continua. Panel D) Session2 data and DDM fits illustrating the comparable emotion decision-making skills of AVGPs and NVGPs. Model fits and empirical means are displayed with error bars/shaded areas indicating 95% confidence intervals.

Competitive Reaction Time (CRT) task: As mentioned above, only non-suspicious subjects were analyzed (N = 84). AVGPs inflicted higher levels of aggression intensity than NVGPs (AVGPs: 5.5 ± 0.34 vs. NVGPs: 4.45 ± 0.36, t(82) = 2.14, p = 0.035, d = 0.47). This difference remained significant after accounting for auditory discomfort (ANCOVA: F(1,81) = 4.1, p = 0.046, ηp² = 0.05), trait anxiety (F(1,81) = 4.8, p = 0.03, ηp² = 0.05), state anxiety (F(1,81) = 4.7, p = 0.03, ηp² = 0.05), and depression (F(1,81) = 4.3, p = 0.04, ηp² = 0.05). Duration of noise blasts was numerically higher in AVGPs than in NVGPs; however, this difference was not significant (AVGPs: 5.06 ± 0.31 vs. NVGPs: 4.26 ± 0.34, t(82) = 1.73, p = 0.09, d = 0.37).

1.4 STUDY 1 - Results Session2 (N = 84)

In session 2, participants performed 4 additional emotion discrimination tasks, with each continuum varying from neutral faces to one of four emotions (pain, happy, angry, sad). We entered the estimated evidence accumulation rates into an ANOVA (n = 84) with the factors group (AVGP, NVGP), emotion continuum (neutral-pain, neutral-happy, neutral-angry, neutral-sad), and morph level (15 levels). As expected, there was a main effect of morph level (F(14,1148) = 1317.7, p < 0.0001, GG corrected p < 0.0001, ηp² = 0.84). There was also a main effect of emotion (F(3,246) = 121.3, p < 0.0001, GG corrected p < 0.0001, ηp² = 0.24), which reflected higher drift rates in the Neutral-Happy and Neutral-Pain continua compared with the Neutral-Sad and Neutral-Anger continua, and an interaction between morph level and emotion (F(42,3444) = 139.5, p < 0.0001, GG corrected p < 0.0001, ηp² = 0.28). Of interest to our primary goals though, we again found no effect of group (p = 0.43, BF10 = 0.59), nor any interaction involving the factor group (all ps > 0.45), suggesting equivalent evidence accumulation about facial emotions in AVGPs and NVGPs (see Figure 2, panels C-D).

The boundary, non-decision time, and bias parameters were analyzed via separate ANOVAs with the factors group and emotion. We found a main effect of emotion in boundary separation (F(3,246) = 29.4, p < 0.0001, GG corrected p < 0.0001, ηp² = 0.05), highlighting more cautious responding for the Neutral-Happy (mean: 1.51) and Neutral-Pain (1.58) continua compared with the Neutral-Anger (1.40) and Neutral-Sad (1.42) continua. We also found an effect of emotion in non-decision time (F(3,246) = 54.3, p < 0.0001, GG corrected p < 0.0001, ηp² = 0.17), highlighting longer non-decision times in the Neutral-Sad (0.5) and Neutral-Anger (0.47) continua compared to the Neutral-Pain (0.41) and Neutral-Happy (0.41) continua. We found an effect of emotion in the bias parameter (F(3,246) = 3.5, p < 0.01, GG corrected p < 0.02, ηp² = 0.02), highlighting slightly different biases across the four continua. Importantly, we again found no effect involving the group factor (all ps > 0.15, all BF10s < 0.56), nor any interaction between emotion and group (all ps > 0.44) in any of these parameters.

1.5 STUDY 1 – Discussion

Six different emotion discrimination tasks were used to test the predicted difference in facial emotion processing between AVGPs and NVGPs. Contrary to predictions from the "overall better perception" framework as well as from the GAM framework, we did not observe any evidence for a faster rate of information accumulation in AVGPs, nor did any other processing stage of decision-making differ between groups. Overall, we found highly similar facial emotion perception performance in the two groups.

The higher intensity of reactive aggression found in AVGPs, though, is consistent with the higher aggression found in recent meta-analyses of cross-sectional studies contrasting this same aggression measure in frequent and infrequent players of violent video games (Hilgard et al., 2017). According to Hilgard et al. (2017), this cross-sectional effect ranges between d = 0.44 and d = 0.6, which is similar to the effect sizes we found for our two aggression measures (d = 0.47 and d = 0.37, respectively). This cross-sectional difference is consistent with the fact that our AVGP categorization scheme at least partly overlaps with the violent video game categorization scheme.

Note that this cross-sectional difference in aggression between AVGPs and NVGPs must not be interpreted as causal evidence that frequent AVG play influences aggression. Evidence that long-term violent video game play is durably associated with increases in aggression remains highly controversial (Hilgard et al., 2017; Kepes, Bushman, & Anderson, 2017; Prescott, Sargent, & Hull, 2018; but see Ferguson, 2015; Kühn et al., 2018). For instance, a recent intervention study showed that playing a violent game (GTA-V) for more than 30 hours over 8 weeks did not cause any increase in impulsivity, in aggression, in empathy or interpersonal competencies, in mental health (anxiety and depression), nor any change in the tasks the authors used to assess executive functions, compared with an active group who played a social game (The Sims) or a passive group (Kühn et al., 2018). Cross-sectional differences in aggression between users of violent media and video games might thus reflect influences from other factors that typically co-occur with video game play. As extensively discussed by Ferguson and colleagues, family environment, mental health problems, or personality factors such as competitiveness (Ferguson, Cruz, et al., 2008; Ferguson, San Miguel, Garza, & Jerabeck, 2012; Lobel, Engels, Stone, Burk, & Granic, 2017) may influence participants' willingness to play violent games or their willingness to respond to provocation. The present study cannot speak to this issue.

Importantly for our aim, our results highlight that despite being able to detect a small-to-medium effect size in the CRT task between AVGPs and NVGPs, the two groups displayed comparable facial emotion processing abilities. The null effect of group on emotion perception thus appears unlikely to be due to a lack of sensitivity in our approach. Our null finding dovetails with recent brain imaging reports which found no evidence of reduced functioning in brain networks important for emotion perception and for evaluating the emotional content of social situations in frequent users of violent video games (Szycik, Mohammadi, Hake, et al., 2017; Szycik, Mohammadi, Münte, & te Wildt, 2017). Our null finding, however, contrasts with the proposal of enhanced perceptual abilities in AVGPs. Indeed, enhanced perception in AVGPs as a result of less noisy processing predicts better facial emotion discrimination, such as in the task used here. To better understand the robustness of this null finding, Study 2 uses a different approach, reverse inference, which more directly probes the quality of perceptual representations, in this case the mental models for each of the six basic facial emotions.

2.1 STUDY 2

Participants were presented with faces displaying random combinations of dynamically animated action units and were asked to categorize the dynamic facial expressions along the six possible basic emotions of happiness, fear, surprise, anger, disgust, and sadness (Figure 3). Based on the participants' responses (categorization as well as intensity judgments) to 2400 dynamic facial stimuli (6 hours of in-laboratory testing per participant, spread over 2 to 3 sessions), the unique spatiotemporal patterns of action unit activations associated with each of the six basic emotions were recovered separately for each participant (Jack et al., 2012). Using reverse correlation, we then estimated the internal models of our participants for each of the six basic emotions and analyzed these as a function of video game group. As for Study 1, to the extent that AVGPs benefit from enhanced perception, we could expect more precise emotional representations in AVGPs than in NVGPs. Yet, the GAM rather predicted a differential pattern, with sharper representations of anger and possibly fear and surprise, but more blunted ones for happiness, sadness, and possibly disgust. Finally, if the perceptual advantage of AVGPs is due to learning to learn, the already high expertise of humans in facial emotion recognition predicted little difference across groups.

Sample size: Estimation of the sample size needed to compare two group means showed that an N of 25 participants per group was required to achieve 80% power at a 5% significance level (repeated-measures ANOVA with a 2-level group factor and a 6-level repeated-measures emotion factor, d = 0.55, correlation r = 0.35 among repeated measures).
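As a plausibility check, this figure can be approximated in a few lines (a sketch using a normal approximation and a G*Power-style repeated-measures adjustment; the exact computation presumably relied on noncentral F distributions):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(d=0.55, m=6, rho=0.35, alpha=0.05, power=0.80):
    """Approximate per-group N for the between-group effect of a
    2 (group) x m (repeated measures) design.

    The effect size is boosted by averaging over m correlated levels
    (G*Power-style adjustment), then plugged into the standard
    normal-approximation formula for a two-group comparison.
    """
    d_adj = d * sqrt(m / (1 + (m - 1) * rho))  # gain from averaging levels
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_a + z_b) / d_adj) ** 2)
```

With the values above, this approximation gives 24 per group, in line with the 25 reported (exact noncentral-F calculations are slightly more conservative).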

2.2 STUDY 2 - Materials and Methods

Participants: The final sample included in the analyses consisted of 27 AVGPs (mean age ± SD: 22.81 ± 2.69 y.o.) and 27 NVGPs (22.96 ± 3.66 y.o.), who were similar in age (t < 1, d = 0.05). This study was approved by the ethical committee of the University of Geneva and that of the University of Wisconsin (each group includes 18 participants from Geneva and 9 from Wisconsin), which abide by the Declaration of Helsinki. Subjects received compensation for their participation.

Recruitment proceeded as in Study 1, with some participants recruited at the University of Geneva and others at the University of Wisconsin. Initially, 88 participants were recruited. Based on previous work showing cultural differences in this task (Jack et al., 2012), our inclusion/exclusion criteria called for male, Caucasian participants only (12 AVGPs and 8 NVGPs who were non-Caucasian had to be excluded) between 18 and 35 years of age (one 47-year-old NVGP was excluded). Video game status was assessed as in Study 1. Participants were also selected so that they were not high media multi-taskers, as per the media multi-tasking inventory (Ophir, Nass, & Wagner, 2009). Four subjects (1 AVGP and 3 NVGPs) who failed to complete the Media Multitasking Inventory (MMI), and 1 AVGP and 1 NVGP whose MMI scores were higher than 6, were excluded. Finally, participants with incomplete or corrupted data were removed: 5 subjects (2 AVGPs and 3 NVGPs) who did not complete the 2400 trials of the task; 1 NVGP who gave the same response throughout the 2400 trials; and 1 NVGP for whom the first analysis steps could not identify all six emotion categories.

Study procedure: The stimuli and task were identical to Jack et al. (2012). On each of the 2400 trials (Figure 3), a facial animation was shown, consisting of 3 facial muscles (action units, AUs) chosen randomly from a set of 42 possible AUs. Each AU was animated according to a specific dynamic sequence characterized by 6 temporal parameters: onset/peak/offset latency, peak amplitude, acceleration, and deceleration, as in Yu, Garrod and Schyns (2012). After each animation, participants were asked to categorize the expression among 7 possible choices (happy, sad, angry, fearful, surprise, disgust, or other), and then to rate its intensity (using a 5-point scale with anchors: very weak, weak, medium, strong, very strong). We used 8 different identities (4 women and 4 men) to generate random facial gestures by random selection of AUs and parameters.

Data analysis: The analysis consisted of reverse correlating the spatial (42 AUs) and temporal (6 parameters per AU, defining its time course) parameters with participants' responses. The responses were z-scored for each participant to reduce individual and identity biases.

First, we extracted spatial and temporal components. Spatial components reflect the non-parametric correlation (i.e., Spearman's rho) between the presence of a specific AU and a given emotion (e.g., how much the presence of AU-12 is associated with a greater probability of responding "Happy"), whereas temporal components relate to the 6 parameters defining the dynamics of each AU (e.g., whether the peak amplitude latency correlates with the perceived intensity judgment for a given emotion). In each case, these component matrices were derived by conducting a multivariate regression analysis and measuring the semi-partial correlation (i.e., the unique contribution in terms of variance explained) of each specific spatial or temporal parameter, after partialling out all other parameters. Next, these component matrices were used to evaluate (i) the number of diagnostic units (defined as those with significant positive correlations), (ii) the dissimilarity between individual emotional models (corresponding to a vector of 42 correlation coefficients for each participant and emotion), and (iii) their clustering into emotions (K-means clustering).
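The spatial-component and dissimilarity steps can be sketched with toy data (pure Python; the actual analysis used semi-partial correlations from multivariate regressions over all 2400 trials, and the data below are illustrative, not real responses):

```python
from math import sqrt

def rank(xs):
    """Fractional ranks (average rank for ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# (i) Toy data: 8 trials, AU present (1/0) vs. "happy" response (1/0).
au_present   = [1, 1, 1, 0, 0, 0, 1, 0]
happy_chosen = [1, 1, 0, 0, 0, 0, 1, 0]
rho = spearman(au_present, happy_chosen)

# (ii) Dissimilarity between two participants' 42-coefficient models:
model_a = [0.1 * ((i * 7) % 5) for i in range(42)]
model_b = [0.1 * ((i * 7 + 3) % 5) for i in range(42)]
dist = sqrt(sum((a - b) ** 2 for a, b in zip(model_a, model_b)))
```

For binary AU presence and binary responses, Spearman's rho reduces to the phi coefficient of the 2x2 contingency table; the Euclidean distance in step (ii) is what the dissimilarity matrices of Figure 5A are built from.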

For these various dependent measures (number of diagnostic AUs, Euclidean distances between models, or confusion matrices), classical statistical tests, including ANOVAs, T-Tests, and Chi-square tests, were used to test for effects of group and/or emotion. Classical (i.e., frequentist) statistical analyses were complemented with Bayesian statistics to examine the strength of evidence for or against our main hypotheses on group differences.

Figure 3. Stimuli used in Study 2. On each trial, a Generative Face Grammar (GFG) platform (Yu et al., 2012) randomly selects a subset of AUs (red, green, and blue labels), in this case Upper Lid Raiser (AU5), Nose Wrinkler (AU9), and Upper Lip Raiser (AU10), from a core set of 42 AUs and assigns a random movement to each AU individually using six temporal parameters: onset latency, acceleration, peak amplitude, peak latency, deceleration, and offset latency (labels over the red curve). The GFG then combines the randomly activated AUs to produce a photorealistic facial animation (shown with four snapshots across time). As in Figures 1 and 2, the receiver categorizes the stimulus as socially meaningful (disgust) and rates the intensity of the perceived emotion (strong) when the dynamic pattern correlates with their mental representation of that facial expression. Building a relationship between the dynamic AUs presented on each trial and the receiver's categorical responses produces a mathematical model of each dynamic facial expression of emotion.

2.3 STUDY 2 - Results

Number of diagnostic AUs (amount of information per model): Figure 4 shows the mean number of diagnostic AUs for each emotion per group, or in other words, the number of AUs that correlated significantly with a given emotion response. Fear was associated with the fewest diagnostic AUs, whereas disgust was associated with the greatest number of diagnostic AUs. An ANOVA with emotion and group as factors indicated an effect of emotion (F(5,260) = 16.62, p < 0.001, GG corrected p < 0.001, ηp² = 0.24, BF10 = 2.30E+12), but no effect of group and no interaction (Fs < 1, all BF10s < 0.2).

Figure 4. Mean number of diagnostic AUs per emotion. Error bars are standard deviations (hap = happiness, sur = surprise, fea = fear, dis = disgust, ang = anger, sad = sadness).

Configuration of AUs (homogeneity of models across individuals): The inter-individual variability of spatial configurations provides an index of the homogeneity of the representations within each group (AVGPs vs. NVGPs): smaller distances indicate greater model similarity across the considered individuals.

Dissimilarity matrices between individual emotional models (i.e., one model per emotion and per participant) were computed using Euclidean distances, considering AVGPs and NVGPs separately (Figure 5A). The mean Euclidean distance for each within-emotion model (i.e., the diagonal of the dissimilarity matrix) and each group was compared with an omnibus ANOVA, followed by independent-sample T-tests (Figure 5B). The ANOVA revealed an overall main effect of emotion (F(5,260) = 69.75, p < 0.001, GG corrected p < 0.001, ηp² = 0.57, BF10 = 8.88E+43), reflecting smaller distances for happiness (i.e., more similar models) than for other emotions across all participants. More relevant to our aim, this analysis revealed no main effect of group (F < 1, BF10 = 0.162). A significant group x emotion interaction (F(5,260) = 2.50, p = 0.03, GG corrected p = 0.04, ηp² = 0.57, BF10 = 1.13) was found, which was driven by a pattern of non-significant group differences, with numerically smaller values (i.e., greater homogeneity) in the NVGP group for both sad and fear expressions (respectively p = 0.09, BF10 = 0.92 and p = 0.12, BF10 = 0.76), and numerically, though non-significantly, greater homogeneity in the AVGP group for happiness, anger, disgust, and surprise (all ps > 0.15, all BF10s < 0.64).

Confusions (quality of models): In order to examine the distinctiveness of individual emotional models, we performed a K-means clustering analysis of the Euclidean distances between the emotional models recovered for each participant (Figure 5C), and examined common confusions, that is, the percentage of models that were misclassified as belonging to another emotion's cluster (Figure 5D). An ANOVA of the within-emotion clusters (i.e., the diagonal of the K-means matrix in Figure 5D) with emotion and group as factors revealed a main effect of emotion (F(5,260) = 39.71, p < 0.001, GG corrected p < 0.001, ηp² = 0.43, BF10 = 3.25E+29), reflecting differences in classification accuracy across emotions. The effect of emotion was qualified by a group x emotion interaction (F(5,260) = 2.74, p = 0.02, GG corrected p = 0.05, ηp² = 0.05, BF10 = 2.57). This interaction was driven by AVGPs having better models of surprise (i.e., fewer confusions, p = 0.04, BF10 = 1.71) and a trend toward NVGPs having better models of anger relative to AVGPs (p = 0.09, BF10 = 0.90). No other group difference approached significance (all other ps > 0.16, all BF10s < 0.41). Finally, a chi-square analysis confirmed that the confusion rates did not differ between groups (χ²(5) = 2, p = 0.8, BF10 = 0.003), providing strong evidence in favor of the null hypothesis.

Figure 5. Participants' emotional models, separately for AVGPs and NVGPs. A) Dissimilarity matrices using Euclidean distances applied to the vectors of correlations estimating the relationship between the presence of a specific AU and participants' categorical responses (27 participants by 6 emotions); B) Mean Euclidean distance for each within-emotion model (i.e., mean and SD for the diagonal of the dissimilarity matrix displayed in panel A); C) K-means clustering of individuals' emotion models based on Euclidean distances; D) Confusion matrices based on K-means clustering of Euclidean distances (the color scale here indicates the number of subjects).

2.4 STUDY 2 - Discussion

Study 2 aligns with and extends the results of Study 1, indicating similar facial emotion representations in AVGPs and NVGPs. The absence of group effects raises the question of whether these may have been masked by some of our experimental choices. For example, the present study used photorealistic facial animations as stimuli, whose ecological validity remains unknown. Yet, several studies have shown that emotional expressions from both static (Dyck et al., 2008) and dynamic virtual computer-animated avatars (Faita et al., 2015) are comparable to those of real humans in terms of recognition accuracy and other social judgments, including the ability to replicate impairments found in clinical populations (Dyck, Winbeck, Leiberg, Chen, & Mathiak, 2010). Moreover, this set of stimuli is known to be sufficiently sensitive to reveal the existence of cultural and individual differences in how information from faces is extracted in order to make emotional judgments (Jack & Schyns, 2017). In sum, Study 2 reinforces the lack of group differences documented in Study 1 when it comes to facial emotion representations.

2.5 General Discussion

In a series of two studies, we investigated whether AVGPs process static and dynamic emotional stimuli better, similarly, or worse than NVGPs, using two complementary psychophysical methods that are sensitive to inter-individual differences in emotion perception. In doing so, we contrasted three hypotheses. Less noisy perceptual processing across all domains of perception in AVGPs predicted that AVGPs would be globally more accurate at recognizing facial emotions than NVGPs (Study 1), and accordingly would display more precise mental models or templates of facial emotion (Study 2). Our results did not support either of these predictions. Our results similarly failed to support the hypothesis, from social theories of media violence, that AVGPs may display higher sensitivity to anger due to a learned attention bias for hostile emotions, and/or blunted processing of emotions signaling distress. We found no evidence for such emotion-specific group differences. Instead, our results are more in line with the learning to learn hypothesis, or the proposal that AVGPs should outperform NVGPs due to higher learning abilities when faced with new information. To the extent that facial emotion perception is an overlearned visual function in adults, this view predicted that no group difference would be observed in either study. However, we note that predicted nulls in some ways offer less strong support for a theory than predicted positive results. Thus, future work may wish to directly contrast overlearned emotional expressions with emotional content to which participants are naïve.

The results reported in both studies do not support heightened emotional facial processing in AVGPs relative to NVGPs. Overall, our study contrasts with a growing body of work showing that AVGPs outperform NVGPs on a variety of perceptual, attentional, and cognitive skills (Bediou et al., 2018; but see Sala, Tatlidil, & Gobet, 2018). Regarding visual processing in particular, AVGPs have shown superior performance in a number of perceptual decision-making tasks measuring contrast sensitivity (Li et al., 2009), orientation identification (Bejjanki et al., 2014; Berard et al., 2015), or motion perception (Green et al., 2010; Hutchinson & Stocks, 2013). These broad improvements in low-level aspects of visual perception are thought to arise through an increased capacity to separate relevant (i.e., diagnostic) information from distraction (i.e., noise), allowing perceptual information to accumulate faster in the service of decision-making (Bavelier et al., 2012; Green et al., 2010). It could simply be that the benefits of AVG play, in terms of information processing, do not extend to the high-level perception of visual stimuli such as emotional faces. Another possibility, which is compatible with the learning to learn hypothesis, is that while the benefits of AVG practice generalize to tasks and stimuli that are relatively novel, the expertise reached by adults in processing facial emotions is likely to be the same, independent of video game expertise. In the case of such overlearned stimuli, AVGPs would perform no differently than NVGPs.

The absence of group differences across each and every emotion considered also speaks against the idea of emotion-specific differences, as would be predicted by some authors in the violent video game literature. Unlike Diaz and colleagues (2016), we found no difference in the mental representations of fear and disgust facial expressions between AVGPs and NVGPs. These authors compared the performance of violent video game players (n = 83) and NVGPs (n = 69) in a facial emotion recognition task using static images of real faces. Violent video game players performed better than NVGPs for fear (increased speed and accuracy), whereas they performed worse for disgust (accuracy only). Although Study 1 cannot speak to this issue, as it did not probe fear or disgust, Study 2 did, and provides no support for such a group difference. If any group effects were present for these two emotions, they concerned NVGPs exhibiting more similar representations for fear as a group, and AVGPs showing more similar representations for disgust, again as a group. Yet, in terms of quality of the models, which is best measured by confusions, there was no evidence for any group differences for these two emotions. A worthwhile consideration in future studies is the gender balance across groups. Our study exclusively included males, as very few females are avid players of action video games. While players of violent video games are also predominantly male, many violent media studies do not systematically match for gender, making it difficult to evaluate the role of media use versus gender in the effects reported. For example, the study by Diaz and colleagues discussed above included 69% males in the violent game player group, but only 23% males in the NVGP group. Moreover, the gamer group was also more anxious and more depressed than the non-gamer group. It would seem best practice to tightly control for gender and mental health status when comparing groups selected based on their gaming habits.

Another proposal in the violent media literature is that of a reduced happy face advantage in violent media users. For example, Kirsh et al. (2006) contrasted the speeded recognition of happy and angry facial expressions in participants scoring high (N=58) versus low (N=58) on a violent media usage questionnaire, and reported a reduced happy face advantage in the high-scoring participants. In a follow-up study, Kirsh and Mounts (2007) also reported a reduced happy face advantage after having participants (N=197) play 15 minutes of a violent, as compared to a non-violent, video game. Overall, these studies reported that RTs to happy faces are typically shorter than those to angry faces, a result also observed in Study 1. Of greater interest is the size of this happy face advantage as a function of gaming group. Unlike our modeling approach, which considered RT and accuracy for each emotion separately, the Kirsh et al. studies used a composite score in which the mean RT for recognizing happy faces was subtracted from the mean RT for recognizing angry faces. These authors reported that reaction times to happy faces were more similar to reaction times to angry faces in the group that played violent video games (both cross-sectionally and in their intervention study). The interpretation of this difference score is not straightforward: while the authors argued for a slowing of reaction times to happy faces in the violent gaming group, no pretest was performed, so their result could equally reflect a speeding of reaction times to angry faces in that same group. In our research, we could not detect any group differences in the processing or representations of happy or angry facial expressions as a function of video game group. This is despite our experimental approach replicating the main happy face advantage.
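To make the ambiguity of this composite concrete, the following sketch computes the happy face advantage as Kirsh et al. defined it (mean angry RT minus mean happy RT). The RT values are hypothetical, chosen only for illustration: two very different underlying changes produce the same reduced score.

```python
import statistics

def happy_face_advantage(rt_angry, rt_happy):
    """Composite score used by Kirsh et al.: mean angry RT minus mean happy RT.
    A positive value means happy faces were recognized faster (in ms here)."""
    return statistics.mean(rt_angry) - statistics.mean(rt_happy)

# Hypothetical RTs (ms). A baseline 60 ms advantage...
baseline = happy_face_advantage(rt_angry=[620, 640, 630], rt_happy=[560, 580, 570])

# ...can shrink to 20 ms in two distinct ways:
slower_happy = happy_face_advantage(rt_angry=[620, 640, 630], rt_happy=[600, 620, 610])
faster_angry = happy_face_advantage(rt_angry=[580, 600, 590], rt_happy=[560, 580, 570])

# Without a pretest, the two 20 ms scores are indistinguishable: the reduction
# could reflect slower responses to happy faces OR faster responses to angry faces.
print(baseline, slower_happy, faster_angry)  # → 60.0 20.0 20.0
```

This is why modeling RT and accuracy separately per emotion, as in Study 1, avoids the interpretive problem inherent in the difference score.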

A more recent 25-minute intervention study contrasted the effect of playing a violent (n=23, 8 females) or a neutral video game (n=22, 3 females) on behavioral and neural markers during two search tasks with happy, angry, and neutral faces. The authors anticipated that violent video game play might reduce the attentional manifestation of the happy face advantage (Liu, Lan, Teng, Guo, & Yao, 2017). One task assessed attentional facilitation by emotional faces and required participants to search for an emotional face among neutral faces. A second task, which measured the difficulty of disengaging attention from emotional distractors, required participants to search for a neutral face among emotional ones. In the facilitation task, no group effect was detectable on RTs, accuracy, or the N2pc, an event-related potential indexing attention allocation. In the disengagement task, no group effect was found on RTs or accuracy, and only the neutral video game group displayed an N2pc in response to happy faces, providing weak evidence that violent game play may specifically reduce the happy face advantage (or, alternatively, may potentiate an attentional bias towards angry faces). At the level of behavioral markers, however, these results directly mirror the present study's finding of no difference as a function of video game play. Our results do not contradict previous work showing that chronic violent video game play is linked with reduced P300 responses to negative violent scenes (but not to negative nonviolent scenes), a pattern interpreted as consistent with the hypothesis that chronic exposure to violent scenes may desensitize responses to violence (Bailey et al., 2011; Bartholow et al., 2006).

The overall lack of group differences across all emotions tested in the present study is all the more striking given that we used two robust approaches known to sample exhaustively both the decision-making processes and the representations they engage.

Accordingly, Studies 1 and 2 both demonstrate multiple well-established effects in emotion processing. For instance, our DDM results are in line with reports of a happy-face advantage over the identification of angry or sad faces (Kirita & Endo, 1995; Kirsh et al., 2006; Leppänen, Tenhunen, & Hietanen, 2003): we found greater accumulation rates for happy faces than for angry and sad ones in Study 1. Likewise, the greater homogeneity of mental representations of happy faces, as indicated by their reduced Euclidean distances relative to other emotions in Study 2, is also compatible with the happy-face advantage. Although the drift-diffusion model has not been commonly applied to facial emotion decision making, its usefulness as a tool to diagnose group differences in information processing is increasingly recognized (White et al., 2010), and Study 1 illustrates its sensitivity to such known effects.
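The logic of the drift-diffusion account can be sketched with a toy simulation. The parameters below (drift rates, threshold, noise, non-decision time) are arbitrary illustrative values, not fitted parameters from Study 1: a higher drift (accumulation) rate, assigned here to happy faces, yields both faster and more accurate decisions, which is the joint RT/accuracy signature of the happy-face advantage.

```python
import random

def simulate_ddm_trial(drift, threshold=1.0, noise=1.0, dt=0.001, non_decision=0.3,
                       rng=random):
    """One trial of a basic two-boundary drift-diffusion process: evidence starts
    at 0 and accumulates until it crosses +threshold (correct) or -threshold (error)."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        # Deterministic drift plus Gaussian diffusion noise, scaled by the time step.
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return t + non_decision, evidence > 0  # (RT in seconds, correct?)

random.seed(0)
results = {}
# Illustrative drift rates only: the higher "happy" rate stands in for the
# greater accumulation rates observed for happy faces in Study 1.
for emotion, drift in [("happy", 2.0), ("angry", 1.2)]:
    trials = [simulate_ddm_trial(drift) for _ in range(500)]
    mean_rt = sum(rt for rt, _ in trials) / len(trials)
    accuracy = sum(correct for _, correct in trials) / len(trials)
    results[emotion] = (mean_rt, accuracy)
    print(f"{emotion}: mean RT = {mean_rt:.3f} s, accuracy = {accuracy:.2f}")
```

With these settings the happy condition should come out both faster and more accurate than the angry one, illustrating how a single drift-rate difference accounts jointly for the RT and accuracy effects that a composite difference score conflates.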

Conclusion

In conclusion, our studies show that AVG experience is associated neither with differences in facial emotion decision-making nor with differences in the mental representations of dynamic facial emotions. While this null effect could be attributed to our subject selection, we used the same criteria as past work and confirmed that, in doing so, AVGPs do differ in their response to provocation in a competitive laboratory task. The lack of group differences in emotion processing reported here is thus unlikely to reflect a failure to properly select our samples. Rather, this work suggests that the cognitive benefits documented in AVGPs do not readily extend to the emotional domain, at least when using a task at which participants are already experts, such as facial emotion recognition.

This comes as a surprise in the context of the enhanced perceptual skills documented in AVGPs, as facial emotion identification is, at its core, a perceptual skill. Yet the proposal that AVGPs may not show perceptual benefits from the onset, but rather may learn perceptual tasks faster, as argued in Bavelier et al. (2012) and tested in Bejjanki et al. (2014), is consistent with the current null result. Indeed, group differences, if due to faster learning in AVGPs, are expected to be detectable at intermediate stages of learning, before asymptotic performance or expertise has been achieved. The present results thus invite further studies comparing AVGPs and NVGPs on tasks and stimuli ranging from entirely novel to highly familiar. It is also possible that the domain of emotional processing differs from that of perceptual processing in its sensitivity to AVG play. This is an important avenue for future work, as it would provide boundary conditions that further characterize how AVG play affects information processing.

Acknowledgements

This work was funded by the Swiss National Science Foundation (Ambizione SNF grant N° PZ00P1_148035 to SP, SNF grant N° 100014_172574 to DB). We thank Nuhamin Gebrewold Petros (SNF grant N° 100014_140676) for the help provided during recruitment. We also thank the National Centre of Competence in Research for Affective Sciences, financed by the Swiss National Science Foundation (SNF grant N° 51NF40_104897), which supported the initial stages of the project.

Conflict of interest

Daphne Bavelier declares she is a member of the scientific advisory board of Akili Interactive, Boston, and acts as a scientific advisor to RedBull, Los Angeles. There is no other interest to declare.

References

Anderson, C. A., & Bushman, B. J. (2002). Human aggression. Annual Review of Psychology, 53, 27–51. https://doi.org/10.1146/annurev.psych.53.100901.135231

Bailey, K., & West, R. (2013). The effects of an action video game on visual and affective information processing. Brain Research, 1504, 35–46. https://doi.org/10.1016/j.brainres.2013.02.019

Bailey, K., West, R., & Anderson, C. A. (2011). The association between chronic exposure to video game violence and affective picture processing: an ERP study. Cognitive, Affective, & Behavioral Neuroscience, 11(2), 259–276. https://doi.org/10.3758/s13415-011-0029-y

Bartholow, B. D., Bushman, B. J., & Sestir, M. A. (2006). Chronic violent video game exposure and desensitization to violence: Behavioral and event-related brain potential data. Journal of Experimental Social Psychology, 42(4), 532–539. https://doi.org/10.1016/j.jesp.2005.08.006

Bavelier, D., Green, C. S., Pouget, A., & Schrater, P. (2012). Brain plasticity through the life span: Learning to learn and action video games. Annual Review of Neuroscience, 35(1), 391–416. https://doi.org/10.1146/annurev-neuro-060909-152832

Bediou, B., Adams, D. M., Mayer, R. E., Tipton, E., Green, C. S., & Bavelier, D. (2018). Meta-analysis of action video game impact on perceptual, attentional, and cognitive skills. Psychological Bulletin, 144(1), 77–110. https://doi.org/10.1037/bul0000130

Bejjanki, V. R., Zhang, R., Li, R., Pouget, A., Green, C. S., Lu, Z. L., & Bavelier, D. (2014). Action video game play facilitates the development of better perceptual templates. Proceedings of the National Academy of Sciences of the United States of America, 111(47), 16961–16966. https://doi.org/10.1073/pnas.1417056111

Belchior, P., Marsiske, M., Sisco, S. M., Yam, A., Bavelier, D., Ball, K., & Mann, W. C. (2013). Video game training to improve selective visual attention in older adults. Computers in Human Behavior, 29(4), 1318–1324. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/24003265

Berard, A. V., Cain, M. S., Watanabe, T., Sasaki, Y., Sagi, D., & Pathak, N. (2015). Frequent video game players resist perceptual interference. PLoS ONE, 10(3), e0120011. https://doi.org/10.1371/journal.pone.0120011

Blacker, K. J., & Curby, K. M. (2013). Enhanced visual short-term memory in action video game players. Attention, Perception, & Psychophysics, 75(6), 1128–1136. https://doi.org/10.3758/s13414-013-0487-0

Cain, M. S., Prinzmetal, W., Shimamura, A. P., & Landau, A. N. (2014). Improved control of exogenous attention in action video game players. Frontiers in Psychology, 5, 69. https://doi.org/10.3389/fpsyg.2014.00069

Cardoso-Leite, P., Joessel, A., & Bavelier, D. (2020). Games for enhancing cognitive abilities. In J. L. Plass, R. E. Mayer, & B. D. Homer (Eds.), MIT Handbook of Game-based Learning (in press). Boston: MIT Press. Retrieved from https://mitpress.mit.edu/books/handbook-game-based-learning

Carnagey, N. L., & Anderson, C. A. (2005). The effects of reward and punishment in violent video games on aggressive affect, cognition, and behavior. Psychological Science, 16(11), 882–889. https://doi.org/10.1111/j.1467-9280.2005.01632.x

Chisholm, J. D., Hickey, C., Theeuwes, J., & Kingstone, A. (2010). Reduced attentional capture in action video game players. Attention, Perception, & Psychophysics, 72(3), 667–671. https://doi.org/10.3758/APP.72.3.667

Chisholm, J. D., & Kingstone, A. (2012). Improved top-down control reduces oculomotor capture: The case of action video game players. Attention, Perception, & Psychophysics, 74(2), 257–262. https://doi.org/10.3758/s13414-011-0253-0

Chisholm, J. D., & Kingstone, A. (2015). Action video game players' visual search advantage extends to biologically relevant stimuli. Acta Psychologica, 159, 93–99. https://doi.org/10.1016/j.actpsy.2015.06.001

Dale, G., & Shawn Green, C. (2017). The changing face of video games and video gamers: Future directions in the scientific study of video game play and cognitive performance. Journal of Cognitive Enhancement, 1(3), 280–294. https://doi.org/10.1007/s41465-017-0015-6

Decety, J., Echols, S., & Correll, J. (2010). The blame game: The effect of responsibility and social stigma on empathy for pain. Journal of Cognitive Neuroscience, 22(5), 985–997. https://doi.org/10.1162/jocn.2009.21266

Diaz, R. L., Wong, U., Hodgins, D. C., Chiu, C. G., & Goghari, V. M. (2016). Violent video game players and non-players differ on facial emotion recognition. Aggressive Behavior, 42(1), 16–28. https://doi.org/10.1002/ab.21602

Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.00781

Dyck, M., Winbeck, M., Leiberg, S., Chen, Y., Gur, R. C., & Mathiak, K. (2008). Recognition profile of emotions in natural and virtual faces. PloS One, 3(11), e3628. https://doi.org/10.1371/journal.pone.0003628

Dyck, M., Winbeck, M., Leiberg, S., Chen, Y., & Mathiak, K. (2010). Virtual faces as a tool to study emotion recognition deficits in schizophrenia. Psychiatry Research, 179(3), 247–252. https://doi.org/10.1016/j.psychres.2009.11.004

Dye, M. W. G., Green, C. S., & Bavelier, D. (2009). The development of attention skills in action video game players. Neuropsychologia, 47(8–9), 1780–1789. https://doi.org/10.1016/j.neuropsychologia.2009.02.002

Elson, M. (2017). Competitive Reaction Time Task. http://www.flexiblemeasures.com/crtt/. https://doi.org/10.17605/OSF.IO/4G7FV

Elson, M., Mohseni, M. R., Breuer, J., Scharkow, M., & Quandt, T. (2014). Press CRTT to measure aggressive behavior: the unstandardized use of the competitive reaction time task in aggression research. Psychological Assessment, 26(2), 419–432. https://doi.org/10.1037/a0035569

Engelhardt, C. R., Bartholow, B. D., Kerr, G. T., & Bushman, B. J. (2011). This is your brain on violent video games: Neural desensitization to violence predicts increased aggression following violent video game exposure. Journal of Experimental Social Psychology, 47(5), 1033–1036. https://doi.org/10.1016/j.jesp.2011.03.027

Faita, C., Vanni, F., Lorenzini, C., Carrozzino, M., Tanca, C., & Bergamasco, M. (2015). Perception of basic emotions from facial expressions of dynamic virtual avatars. In L. T. De Paolis & A. Mongelli (Eds.), Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9254, pp. 409–419). Lecce, Italy: Springer. https://doi.org/10.1007/978-3-319-22888-4_30

Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146

Feng, J., & Spence, I. (2018). Playing action video games boosts visual attention. In Video Game Influences on Aggression, Cognition, and Attention (pp. 93–104). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-95495-0_8

Feng, J., Spence, I., & Pratt, J. (2007). Playing an action video game reduces gender differences in spatial cognition. Psychological Science, 18(10), 850–855. https://doi.org/10.1111/j.1467-9280.2007.01990.x

Ferguson, C. J. (2015). Do Angry Birds make for angry children? A meta-analysis of video game influences on children's and adolescents' aggression, mental health, prosocial behavior, and academic performance. Perspectives on Psychological Science, 10(5), 646–666. https://doi.org/10.1177/1745691615592234

Ferguson, C. J., Cruz, A. M., Martinez, D., Rueda, S. M., Ferguson, D. E., & Negy, C. (2008). Personality, parental, and media influences on aggressive personality and violent crime in young adults. Journal of Aggression, Maltreatment & Trauma, 17(4), 395–414. https://doi.org/10.1080/10926770802471522

Ferguson, C. J., & Rueda, S. M. (2009). Examining the validity of the modified Taylor competitive reaction time test of aggression. Journal of Experimental Criminology, 5(2), 121–137. https://doi.org/10.1007/s11292-009-9069-5

Ferguson, C. J., San Miguel, C., Garza, A., & Jerabeck, J. M. (2012). A longitudinal test of video game violence influences on dating and aggression: A 3-year longitudinal study of adolescents. Journal of Psychiatric Research, 46(2), 141–146. https://doi.org/10.1016/j.jpsychires.2011.10.014

Ferguson, C. J., Smith, S., Miller-Stratton, H., Fritz, S., & Heinrich, E. (2008). Aggression in the laboratory: Problems with the validity of the modified Taylor Competitive Reaction Time Test as a measure of aggression in media violence studies. Journal of Aggression, Maltreatment & Trauma, 17(1), 118–132. https://doi.org/10.1080/10926770802250678

Föcker, J., Mortazavi, M., Khoe, W., Hillyard, S. A., & Bavelier, D. (2018). Neural correlates of enhanced visual attentional control in action video game players: An event-related potential study. Journal of Cognitive Neuroscience, 31(3), 377–389. https://doi.org/10.1162/jocn_a_01230

Gentile, D. (2009). Pathological video-game use among youth ages 8 to 18. Psychological Science, 20(5), 594–602. https://doi.org/10.1111/j.1467-9280.2009.02340.x

Giancola, P. R., & Zeichner, A. (1995). Construct validity of a competitive reaction-time aggression paradigm. Aggressive Behavior, 21(3), 199–204. https://doi.org/10.1002/1098-2337(1995)21:3<199::AID-AB2480210303>3.0.CO;2-Q

Gleichgerrcht, E., & Decety, J. (2014). The relationship between different facets of empathy, pain perception and compassion fatigue among physicians. Frontiers in Behavioral Neuroscience, 8, 243. https://doi.org/10.3389/fnbeh.2014.00243

Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423(6939), 534–537. https://doi.org/10.1038/nature01647

Green, C. S., & Bavelier, D. (2006a). Effect of action video games on the spatial distribution of visuospatial attention. Journal of Experimental Psychology: Human Perception and Performance, 32(6), 1465–1478. https://doi.org/10.1037/0096-1523.32.6.1465

Green, C. S., & Bavelier, D. (2006b). Enumeration versus multiple object tracking: the case of action video game players. Cognition, 101(1), 217–245. https://doi.org/10.1016/j.cognition.2005.10.004

Green, C. S., & Bavelier, D. (2007). Action-video-game experience alters the spatial resolution of vision. Psychological Science, 18(1), 88–94. https://doi.org/10.1111/j.1467-9280.2007.01853.x

Green, C. S., Pouget, A., & Bavelier, D. (2010). Improved probabilistic inference as a general learning mechanism with action video games. Current Biology, 20(17), 1573–1579. https://doi.org/10.1016/j.cub.2010.07.040

Greitemeyer, T., Osswald, S., & Brauer, M. (2010). Playing prosocial video games increases empathy and decreases schadenfreude. Emotion, 10(6), 796–802. https://doi.org/10.1037/a0020194

Hilgard, J., Engelhardt, C. R., & Rouder, J. N. (2017). Overstated evidence for short-term effects of violent games on affect and behavior: A reanalysis of Anderson et al. (2010). Psychological Bulletin, 143(7), 757–774. https://doi.org/10.1037/bul0000074

Hubert-Wallander, B., Green, C. S., Sugarman, M., & Bavelier, D. (2011). Changes in search rate but not in the dynamics of exogenous attention in action videogame players. Attention, Perception, & Psychophysics, 73(8), 2399–2412. https://doi.org/10.3758/s13414-011-0194-7

Hutchinson, C. V., & Stocks, R. (2013). Selectively enhanced motion perception in core video gamers. Perception, 42(6), 675–677. https://doi.org/10.1068/p7411

Hyatt, C. S., Chester, D. S., Zeichner, A., & Miller, J. D. (2019). Analytic flexibility in laboratory aggression paradigms: Relations with personality traits vary (slightly) by operationalization of aggression. Aggressive Behavior, 45(4), 377–388. https://doi.org/10.1002/ab.21830

Jack, R. E., Garrod, O. G. B., Yu, H., Caldara, R., & Schyns, P. G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences of the United States of America, 109, 7241–7244. https://doi.org/10.1073/pnas.1200155109

Jack, R. E., & Schyns, P. G. (2017). Toward a social psychophysics of face communication. Annual Review of Psychology, 68(1), 269–297. https://doi.org/10.1146/annurev-psych-010416-044242

Kepes, S., Bushman, B. J., & Anderson, C. A. (2017). Violent video game effects remain a societal concern: Reply to Hilgard, Engelhardt, and Rouder (2017). Psychological Bulletin, 143(7), 775–782. https://doi.org/10.1037/bul0000112

Kirita, T., & Endo, M. (1995). Happy face advantage in recognizing facial expressions. Acta Psychologica, 89(2), 149–163. https://doi.org/10.1016/0001-6918(94)00021-8

Kirsh, S. J., & Mounts, J. R. W. (2007). Violent video game play impacts facial emotion recognition. Aggressive Behavior, 33(4), 353–358. https://doi.org/10.1002/ab.20191

Kirsh, S. J., Mounts, J. R. W., & Olczak, P. V. (2006). Violent media consumption and the recognition of dynamic facial expressions. Journal of Interpersonal Violence, 21(5), 571–584. https://doi.org/10.1177/0886260506286840

Krishnan, L., Kang, A., Sperling, G., & Srinivasan, R. (2013). Neural strategies for selective attention distinguish fast-action video game players. Brain Topography, 26(1), 83–97. https://doi.org/10.1007/s10548-012-0232-3

Kühn, S., Kugler, D. T., Schmalen, K., Weichenberger, M., Witt, C., & Gallinat, J. (2018). Does playing violent video games cause aggression? A longitudinal intervention study. Molecular Psychiatry, 1–15. https://doi.org/10.1038/s41380-018-0031-7

Lamm, C., Batson, C. D., & Decety, J. (2007). The neural substrate of human empathy: Effects of perspective-taking and cognitive appraisal. Journal of Cognitive Neuroscience, 19(1), 42–58. https://doi.org/10.1162/jocn.2007.19.1.42

Latham, A. J., Patston, L. L. M., & Tippett, L. J. (2014). The precision of experienced action video-game players: Line bisection reveals reduced leftward response bias. Attention, Perception, & Psychophysics, 76(8), 2193–2198. https://doi.org/10.3758/s13414-014-0789-x

Leppänen, J. M., Tenhunen, M., & Hietanen, J. K. (2003). Faster choice-reaction times to positive than to negative facial expressions: The role of cognitive and motor processes. Journal of Psychophysiology, 17(3), 113–123. https://doi.org/10.1027//0269-8803.17.3.113

Li, R., Polat, U., Makous, W., & Bavelier, D. (2009). Enhancing the contrast sensitivity function through action video game training. Nature Neuroscience, 12(5), 549–551. https://doi.org/10.1038/nn.2296

Li, R., Polat, U., Scalzo, F., & Bavelier, D. (2010). Reducing backward masking through action game training. Journal of Vision, 10(14), 33. https://doi.org/10.1167/10.14.33

Liu, Y., Lan, H., Teng, Z., Guo, C., & Yao, D. (2017). Facilitation or disengagement? Attention bias in facial affect processing after short-term violent video game exposure. PLoS ONE, 12(3), e0172940. https://doi.org/10.1371/journal.pone.0172940

Lobel, A., Engels, R. C. M. E., Stone, L. L., Burk, W. J., & Granic, I. (2017). Video gaming and children's psychosocial wellbeing: A longitudinal study. Journal of Youth and Adolescence, 46(4), 884–897. https://doi.org/10.1007/s10964-017-0646-z

McCarthy, R. J., & Elson, M. (2018). A conceptual review of lab-based aggression paradigms. Collabra: Psychology, 4(1), 4. https://doi.org/10.1525/collabra.104

Mishra, J., Zinni, M., Bavelier, D., & Hillyard, S. A. (2011). Neural basis of superior performance of action videogame players in an attention-demanding task. Journal of Neuroscience, 31(3), 992–998. https://doi.org/10.1523/JNEUROSCI.4834-10.2011

Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences, 106(37), 15583–15587. https://doi.org/10.1073/pnas.0903620106

Palmer, J., Huk, A. C., & Shadlen, M. N. (2005). The effect of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision, 5(5), 376–404. https://doi.org/10.1167/5.5.1

Pavan, A., Hobaek, M., Blurton, S. P., Contillo, A., Ghin, F., & Greenlee, M. W. (2019). Visual short-term memory for coherent motion in video game players: evidence from a memory-masking paradigm. Scientific Reports, 9(1), 6027. https://doi.org/10.1038/s41598-019-42593-0

Pichon, S., Antico, L., Chanal, J., Singer, T., & Bavelier, D. (submitted). The link between competitive personality, aggressive and altruistic behaviors in action video game players.

Plummer, M. (2003). JAGS: A program for analysis of Bayesian models using Gibbs sampling. Proceedings of the 3rd International Workshop on Distributed Statistical Computing, Vienna, Austria, 20–22.

Pohl, C., Kunde, W., Ganz, T., Conzelmann, A., Pauli, P., & Kiesel, A. (2014). Gaming to see: Action video gaming is associated with enhanced processing of masked stimuli. Frontiers in Psychology, 5, 70. https://doi.org/10.3389/fpsyg.2014.00070

Prescott, A. T., Sargent, J. D., & Hull, J. G. (2018). Metaanalysis of the relationship between violent video game play and physical aggression over time. Proceedings of the National Academy of Sciences of the United States of America, 115(40), 9882–9888. https://doi.org/10.1073/pnas.1611617114

Qiao-Tasserit, E., Quesada, M. G., Antico, L., Bavelier, D., Vuilleumier, P., & Pichon, S. (2017). Transient emotional events and individual affective traits affect emotion recognition in a perceptual decision-making task. PLoS ONE, 12(2), e0171375. https://doi.org/10.1371/journal.pone.0171375

Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20(4), 873–922. https://doi.org/10.1162/neco.2008.12-06-420

Ratcliff, R., & Rouder, J. N. (1998). Modeling response times for two-choice decisions. Psychological Science, 9(5), 347–356. https://doi.org/10.1111/1467-9280.00067

Ratcliff, R., & Tuerlinckx, F. (2002). Estimating parameters of the diffusion model: Approaches to dealing with contaminant reaction times and parameter variability. Psychonomic Bulletin & Review, 9(3), 438–481. https://doi.org/10.3758/BF03196302

Roque, N. A., & Boot, W. R. (2018). Action video games DO NOT promote visual attention. In Video Game Influences on Aggression, Cognition, and Attention (pp. 105–118). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-95495-0_9

Sala, G., Tatlidil, K. S., & Gobet, F. (2018). Video game training does not enhance cognitive ability: A comprehensive meta-analytic investigation. Psychological Bulletin, 144(2), 111–139. https://doi.org/10.1037/bul0000139

Schubert, T., Finke, K., Redel, P., Kluckow, S., Müller, H., & Strobach, T. (2015). Video game experience and its influence on visual attention parameters: An investigation using the framework of the Theory of Visual Attention (TVA). Acta Psychologica, 157, 200–214. https://doi.org/10.1016/j.actpsy.2015.03.005

Sungur, H., & Boduroglu, A. (2012). Action video game players form more detailed representation of objects. Acta Psychologica, 139(2), 327–334. https://doi.org/10.1016/j.actpsy.2011.12.002

Szycik, G. R., Mohammadi, B., Hake, M., Kneer, J., Samii, A., Münte, T. F., & te Wildt, B. T. (2017). Excessive users of violent video games do not show emotional desensitization: an fMRI study. Brain Imaging and Behavior, 11(3), 736–743. https://doi.org/10.1007/s11682-016-9549-y

Szycik, G. R., Mohammadi, B., Münte, T. F., & te Wildt, B. T. (2017). Lack of evidence that neural empathic responses are blunted in excessive users of violent video games: An fMRI study. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.00174

van Ravenzwaaij, D., Boekel, W., Forstmann, B. U., Ratcliff, R., & Wagenmakers, E. J. (2014). Action video games do not improve the speed of information processing in simple perceptual tasks. Journal of Experimental Psychology: General, 143(5), 1794–1805. https://doi.org/10.1037/a0036923

Wabersich, D., & Vandekerckhove, J. (2014). Extending JAGS: A tutorial on adding custom distributions to JAGS (with a diffusion model example). Behavior Research Methods, 46(1), 15–28. https://doi.org/10.3758/s13428-013-0369-3

Whitaker, J. L., & Bushman, B. J. (2012). "Remain calm. Be kind." Effects of relaxing video games on aggressive and prosocial behavior. Social Psychological and Personality Science. https://doi.org/10.1177/1948550611409760

White, C. N., Ratcliff, R., Vasey, M. W., & McKoon, G. (2010). Using diffusion models to understand clinical disorders. Journal of Mathematical Psychology, 54(1), 39–52. https://doi.org/10.1016/j.jmp.2010.01.004

Wiecki, T. V., Sofer, I., & Frank, M. J. (2013). HDDM: Hierarchical Bayesian estimation of the drift-diffusion model in Python. Frontiers in Neuroinformatics, 7. https://doi.org/10.3389/fninf.2013.00014

Wilms, I. L., Petersen, A., & Vangkilde, S. (2013). Intensive video gaming improves encoding speed to visual short-term memory in young male adults. Acta Psychologica, 142(1), 108–118. https://doi.org/10.1016/j.actpsy.2012.11.003

Wu, S., & Spence, I. (2013). Playing shooter and driving videogames improves top-down guidance in visual search. Attention, Perception, & Psychophysics, 75(4), 673–686. https://doi.org/10.3758/s13414-013-0440-2

Yu, H., Garrod, O. G. B., & Schyns, P. G. (2012). Perception-driven facial expression synthesis. Computers and Graphics, 36(3), 152–162. https://doi.org/10.1016/j.cag.2011.12.002