Exploring Social Information Processing of Emotion Content and its Relationship with

Social Outcomes in Children at-risk for Attention-Deficit/Hyperactivity Disorder

A dissertation presented to

the faculty of

the College of Arts and Sciences of Ohio University

In partial fulfillment

of the requirements for the degree

Doctor of Philosophy

Verenea J. Serrano

August 2017

© 2017 Verenea J. Serrano. All Rights Reserved.

This dissertation titled

Exploring Social Information Processing of Emotion Content and its Relationship with

Social Outcomes in Children at-risk for Attention-Deficit/Hyperactivity Disorder

by

VERENEA J. SERRANO

has been approved for

the Department of Psychology

and the College of Arts and Sciences by

Julie Sarno Owens

Professor of Psychology

Robert Frank

Dean, College of Arts and Sciences

ABSTRACT

SERRANO, VERENEA J., Ph.D., August 2017, Psychology

Exploring Social Information Processing of Emotion Content and its Relationship with

Social Outcomes in Children at-risk for Attention-Deficit/Hyperactivity Disorder

Director of Dissertation: Julie Sarno Owens

Children with Attention-Deficit/Hyperactivity Disorder (ADHD) often experience social and emotional impairments. However, there has been limited success in reducing these impairments and increasing social status through behavioral and pharmacological interventions. A fruitful avenue may be identifying the atypical social and emotional information processes that contribute to the impairments and subsequently developing interventions that target impaired, yet malleable processes. Using social information processing (SIP) theory as a guide, indicators of social and emotional processing and the relationship between these processes and social outcomes were examined. Specifically, cue encoding, cue interpretation, and latency to emotion recognition in children with or at-risk for ADHD and children without ADHD were investigated. Participants were 72 children (aged 8-14; 59.7% male; 61.1% Non-Hispanic White), 24 in the ADHD group and 48 in the control group. The SIP tasks included cue encoding, measured via emotion recognition during a face morphing task, and cue encoding and interpretation during a television episode. Significant differences in performance between the ADHD and control groups were not found on any of the SIP tasks. Further, performance on SIP tasks was related to measures of social skill, but not to measures of social impairment.

Implications for future research with children with ADHD are discussed.

DEDICATION

To my parents, family, and friends who provided endless love, support, and

encouragement


ACKNOWLEDGMENTS

Thank you to my advisor, Dr. Julie Owens, for her support and guidance through graduate school, to the research assistants and participants who made my dissertation possible, and to the additional mentors who helped me to grow and succeed. Additionally, thank you to my dissertation committee for their contributions to my dissertation.


TABLE OF CONTENTS

Page

Abstract ...... 3
Dedication ...... 4
Acknowledgments ...... 5
List of Tables ...... 9
Exploring Social Information Processing of Emotion Content and its Relationship with Social Outcomes in Children at-risk for Attention-Deficit/Hyperactivity Disorder ...... 10
Social Information Processing Theory ...... 13
Social Information Processing in Children with ADHD ...... 15
Story Comprehension in Children with ADHD ...... 18
Using SIP Theory to Guide ADHD Research ...... 20
Emotion-Related Cue Encoding ...... 20
Emotion-Related Cue Interpretation ...... 21
Limitations of SIP Research in Children with ADHD ...... 23
Current Study ...... 25
Method ...... 27
Participants ...... 27
Procedure ...... 29
Child Measures ...... 30
Cognitive Ability ...... 30
Depression Symptoms ...... 30
Anxiety Symptoms ...... 31
Parent and Teacher Measures ...... 31
ADHD Symptoms ...... 31
ODD and CD Symptoms ...... 32
Social Outcomes ...... 32
Demographics and Mental Health History ...... 33
SIP Tasks and Emotion Stimuli ...... 33
Faces Task ...... 33
Animals Task ...... 35
Video Task ...... 35

Results ...... 39
Analytic Plan ...... 39
Aim 1: Group Differences in Emotion Recognition and Interpretation in Faces and Videos ...... 48
Aim 2: Group Differences in Latency to Emotion Recognition ...... 50
Aim 3: Relations between SIP Performance and Emotion Knowledge Deficits, Externalizing Behavior, Age, Gender, and Verbal IQ ...... 51
Aim 4: Relations between SIP Performance and Social Outcomes ...... 53
Discussion ...... 55
Cue Encoding ...... 55
Cue Interpretation ...... 57
Latency to Recognize Emotion ...... 59
Additional Factors that Affect SIP ...... 60
Limitations ...... 61
Summary ...... 62
References ...... 64
Appendix A: Suggested Committee Revisions to Introduction Not Included in Manuscript ...... 74
Appendix B: Study Non-Copyright Measures and Consents ...... 75
Appendix B1: ADHD-5 Rating Scale ...... 76
Appendix B2: Disruptive Behavior Disorders (DBD) Rating Scale ...... 78
Appendix B3: Impairment Rating Scale ...... 79
Appendix B4: Strengths and Difficulties Questionnaire ...... 80
Appendix B5: Dishion Social Preference Scale ...... 82
Appendix B6: Demographic Questionnaire ...... 83
Appendix B7: Consent and Assent Forms ...... 88
Appendix B8: Pilot Testing ...... 98
Appendix B9: Defining Growing Pains Events ...... 99
Appendix B10: Interpretation Responses Rating Anchors ...... 100
Appendix B11: Coding Data for the Subset of 10 Events ...... 101
Appendix C: Results of Analyses with All 23 Coded Events ...... 102
Appendix C1: Video Task Performance Descriptives – 23 Events ...... 103
Appendix C2: Hierarchical Linear Regressions for Aim 3a ...... 104
Appendix C3: Linear Regressions for Aim 3b ...... 105

Appendix D: Table of Previous SIP Studies with Children with ADHD ...... 106


LIST OF TABLES

Page

Table 1 Sample Demographics ...... 28
Table 2 Analyses and Variables ...... 40
Table 3 Recognition Accuracy and Interpretation Coding Data ...... 42
Table 4 Latency to Recognize Emotion ...... 44
Table 5 Correlations between Demographic and Social Variables ...... 45
Table 6 Correlations between Parent and Teacher Social Measures and SIP Performance ...... 46
Table 7 Hierarchical Linear Regression Results for Video Interpretation and Video Emotion Accuracy Scores ...... 52
Table 8 Linear Regression Results for Video Interpretation and Video Emotion Accuracy with Externalizing Symptoms, Age, Gender, Verbal Ability ...... 55


EXPLORING SOCIAL INFORMATION PROCESSING OF EMOTION CONTENT

AND ITS RELATIONSHIP WITH SOCIAL OUTCOMES IN CHILDREN AT-RISK

FOR ATTENTION-DEFICIT/HYPERACTIVITY DISORDER

Children with ADHD often experience impairment in social and emotional functioning (Hoza, 2007; Semrud-Clikeman & Shafer, 2000; Uekermann et al., 2010).

Behaviors that contribute to social impairment include interrupting, acting boisterous, being intrusive, and having difficulty following along in games. Behaviors hypothesized to contribute to emotional impairment include difficulty recognizing emotions in self and others and difficulty regulating emotions in social situations (Bunford, Evans, & Wymbs,

2015; Collin, Bindra, Raju, Gillberg, & Minnis, 2013; Serrano, Owens, & Hallowell,

2015). However, the processes that affect such behaviors and impairments in social and emotional functioning in children with ADHD are not well-understood.

Social information processing (SIP) theory has informed research with children with oppositional defiant disorder (ODD), conduct disorder (CD), and aggression, and has identified interpretation and response biases, as well as cue encoding deficits, in these populations (Akhtar & Bradley, 1991; Orobio de Castro, Veerman, Koops, Bosch, &

Monshouwer, 2002). Thus, SIP theory may be able to guide research with children with

ADHD and offer insights into the processes contributing to social and emotional impairments. In children with ADHD, deficits in SIP may be due to missed cues, incorrect inferences, or variability in latency to recognize and respond to cues, rather than biases. Indeed, response variability on cognitive tasks is common in children with ADHD

(Sjöwall, Roth, Lindqvist, & Thorell, 2013; Zeeuw et al., 2008), as is longer latency to emotion recognition (Serrano et al., 2015). Examining the performance of children with

ADHD at each step of SIP may identify which aspects of SIP are disrupted and to what extent these disruptions affect later SIP steps and social functioning.

However, additional research is needed in several areas of the SIP literature to understand whether SIP theory translates to social and emotional impairments in children with ADHD. First, SIP research has largely focused on children’s processing during specific social situations such as peer provocation. These situations elicit SIP errors and biases, but it is important to understand SIP during general social interactions, as these are likely more common. Second, though emotion is an integral part of SIP (Lemerise &

Arsenio, 2000), it is not commonly examined in SIP studies. Examining emotion during

SIP can further our understanding of social and emotional functioning in children with

ADHD. Third, the impact of age and gender on SIP in children with ADHD is unclear.

Some research suggests that adolescent females with ADHD may not exhibit SIP biases after controlling for verbal ability (Mikami, Lee, Hinshaw, & Mullin, 2008), whereas SIP biases have been found in younger, mostly male samples (e.g., Andrade et al., 2012;

Matthys, Cuperus, & Van Engeland, 1999). Lastly, despite evidence linking SIP deficits to social outcomes in aggressive children and to behavioral outcomes in children with

ADHD, few studies have directly linked SIP to social outcomes in children with ADHD.

SIP research has greatly informed the understanding of social functioning in children with ODD/CD and aggression, and it may further our understanding of social and emotional impairments in children with ADHD; however, additional research is needed before SIP theory can inform interventions for children with ADHD in a meaningful way. Accordingly, the goals of the current study are to (a) assess performance on a SIP task among children with or at-risk for ADHD and children without ADHD using general social stimuli and focusing on emotion recognition, (b) compare latency to emotion recognition independently and during a SIP task between children with or at-risk for ADHD and children without ADHD, (c) examine the effect of emotion recognition deficits, externalizing symptoms (i.e., ADHD, ODD, CD), age, gender, and verbal ability on performance on a SIP task, and (d) examine the extent to which performance on a SIP task is related to social outcomes.


SOCIAL INFORMATION PROCESSING THEORY

The classic and most influential theory of SIP is that of Crick and Dodge (1994).

In this seminal article, Crick and Dodge detail six steps that comprise SIP and identify a

“database” of pre-existing factors (e.g., memories, social knowledge) and feedback loops that affect SIP. The six steps are: (1) encoding of cues, (2) interpretation of cues, (3) clarification of goals, (4) response access or construction, (5) response decision, and (6) behavioral enactment of the response. Processing at each step can occur in parallel to the other steps and is influenced by a host of factors including experiences, social knowledge, arousal, and feedback from the environment. This model sheds light on interpretation and response biases that affect the behavior and social functioning of aggressive children. Further, SIP biases are linked to behavioral outcomes (Lansford et al., 2006; Orobio de Castro, Merk, Koops, Veerman, & Bosch, 2005) for children with aggression both concurrently and prospectively.

Though Crick and Dodge (1994) state that “emotions are an integral part of each social information-processing step” (p. 81) and provide examples of how emotions can affect SIP, their model does not include emotion factors. Lemerise and Arsenio (2000) modified the model to explicitly integrate emotion processes into each step and into the database of pre-existing factors (e.g., social knowledge; see Figure 2 in Lemerise &

Arsenio, 2000). Initial testing of the model yielded an acceptable fit to empirical data and incremental utility in predicting aggression (Orobio de Castro et al., 2005) in aggressive and control children. Other more recent models also integrate emotion into social processes (e.g., Beauchamp & Anderson, 2010; McKown, Gumbiner, Russo, & Lipton,

2009).

Together the literature suggests there are identifiable steps to SIP and that emotion is an important aspect of SIP and social competence. However, extending research on emotion and SIP models to additional populations and outcome variables is needed.


SOCIAL INFORMATION PROCESSING IN CHILDREN WITH ADHD

Children with ADHD exhibit problematic social behaviors that are impairing and often long-lasting (de Boo & Prins, 2007; Hoza, 2007). Despite the development of evidence-based treatments for academic, behavioral, and family functioning (Evans, Owens, &

Bunford, 2014), researchers have yet to develop effective interventions for social impairment in children with ADHD. Pharmacological treatment can reduce ADHD symptoms that interfere with social functioning, but reduction in symptoms has minimal impact on changing peer status (Hoza et al., 2005; Mrug, Hoza, Pelham, Gnagy, &

Greiner, 2007). Similarly, behavioral treatments can reduce problematic behavior and encourage prosocial behavior, but even the most intensive treatments have not resulted in improved social status in children with ADHD (Hoza et al., 2005).

Perhaps such interventions have not produced desired social outcomes because they are not targeting social-emotional processing deficits that may be primary contributors to social deficits. Because of the similarities in some of the offending social behaviors demonstrated by children with ADHD and ODD/CD (e.g., jumping to conclusions, over-controlling peer situations), research on SIP in children with ADHD may help to understand the processes that are disrupting social competence in this population. However, ADHD is distinct from ODD/CD; thus, its potential unique association with SIP deficits must be understood.

Surprisingly, the literature on SIP in children with ADHD is limited. Research suggests that children with ADHD exhibit deficits at multiple SIP steps relative to control children (e.g., Andrade et al., 2012; Matthys et al., 1999); however, the extent of these deficits is unclear. Further, there is variability across studies in the extent to which age, gender, ADHD status or symptoms, and ODD/CD status or symptoms contribute to SIP deficits (see Appendix D for a summary of studies).

Several commonalities and contrasts emerge across these studies. First, a hostile attribution bias was not found for interpretation of intent or attributions (King et al.,

2009; Sibley, Evans, & Serpell, 2010), indicating that the ADHD group was not more likely than the control group to ascribe hostile intentions to others when interpreting social events. However, methylphenidate was associated with increased aggressive response generation compared to controls (King et al., 2009). Second, children with

ADHD encoded fewer cues than children without ADHD (Andrade et al., 2012; Matthys et al., 1999). However, Mikami et al. (2008) found no deficits in cue interpretation and response generation among adolescent females with ADHD after controlling for verbal ability and childhood ODD/CD. Third, only two studies linked SIP deficits to social outcomes. Sibley and colleagues (2010) found story comprehension, but not response generation, to predict parent-rated peer impairment. Mikami and colleagues (2008) found that peer rejection at ages 6-11 was not correlated with SIP variables measured at ages

11-18.

Despite the inconsistencies, two key findings can be gleaned from this literature.

First, deficits have been observed for children with ADHD at the SIP steps of cue encoding, cue interpretation, and response generation. Examining the effect of externalizing symptoms (i.e., ADHD, ODD, CD) on SIP may help shed light on which symptoms are associated with SIP deficits among children with ADHD. Second, the pattern of results suggests that examination of age and gender effects is warranted. Findings from the two studies with adolescent, mostly-to-all female ADHD samples differ from those with younger, mostly-to-all male samples, suggesting that age and gender affect whether SIP deficits are present. Given that the ADHD SIP literature is limited, considering the related literature on story comprehension can offer additional information on processes that may be interfering with SIP.


STORY COMPREHENSION IN CHILDREN WITH ADHD

Story comprehension research is similar to SIP research in that it focuses on processing and understanding content that often contains social interactions (e.g., television episodes, stories). However, the two lines of research are different in that SIP research is more specific to steps that comprise how social information is processed, and story comprehension is more specific to how an overall story is structured, its events are understood, and inferences or causal links are generated. Children with ADHD have difficulty understanding causal relations between events, generate fewer inferential and causal links between events when retelling the story, make more implausible and fewer plausible inferences, have difficulty identifying important events, and have less integrated and coherent story recall than control children (Berthiaume, Lorch, & Milich, 2010; Flory et al., 2006; Lorch, Berthiaume, Milich, & Van Den Broek, 2008).

Because social interactions and relationships usually span a longer length of time than the short situations in SIP research, story comprehension research highlights how, in the span of a longer social interaction, comprehension difficulties can negatively affect the social functioning of children with ADHD. A longitudinal study found that, over a

21-month period, children with ADHD failed to demonstrate the same developmental increase in understanding causal relations that was observed in comparison children

(Bailey, Lorch, Milich, & Charnigo, 2009). Children in the study were on average 8 years old at the first time point; thus, difficulties understanding causal relations between events occur early in development, and the gap between children with and without

ADHD may grow over time.

In a study that blends SIP stimuli with story comprehension outcome variables,

Milch-Reich and colleagues (1999) found that children with ADHD exhibit the above-described story comprehension deficits (i.e., fewer inferential/causal links, fewer cues recalled, less integrated story recall and reasoning; Milch-Reich, Campbell, Pelham,

Connelly, & Geva, 1999). Children with ADHD and younger children were also more likely to rely on the last picture in view than on earlier events in the story and to rely on circular logic when determining the appropriateness of the character's behavior. If children with ADHD rely primarily on recent events and do not reference earlier events, even in the context of a short story, this may translate to their social reasoning in real-world social interactions and friendships.

Story comprehension and SIP are distinct but related areas of research, and both have potential to affect social outcomes. For example, the deficits in making inferences or creating a coherent story narrative can affect cue interpretation, goal clarification, or response decision. Given that there may not be an identified bias, such as a hostile attribution bias, in children with ADHD, SIP deficits that are primary in children with

ADHD may be more related to errors that arise from a less developed understanding of social and emotional situations. For example, a child may not understand causal links between events or may incorrectly infer another's intentions, leading to cue misinterpretation or enactment of a response inappropriate to the situation.


USING SIP THEORY TO GUIDE ADHD RESEARCH

Using landmark (Crick & Dodge, 1994) and updated (Lemerise & Arsenio, 2000)

SIP models can help researchers systematically examine social and emotional competence in children with ADHD. Namely, the sequence provides a conceptual organization to identify critical points wherein SIP errors may occur.

Emotion-Related Cue Encoding

The first SIP step is cue encoding, which includes emotion recognition. Emotion knowledge is a broader construct that includes recognizing emotion in self and others, as well as understanding the role of emotion in situations and understanding that the emotions of self and others may differ (Denham, 1998; Saarni, 1999). Emotion knowledge scores show small to moderate correlations with SIP performance

(Bauminger, Edelsztein, & Morash, 2005; Denham et al., 2014; Dodge, Laird, Lochman,

& Zelli, 2002), and greater emotion knowledge is associated with higher SIP performance (Bauminger et al., 2005). The extent to which emotion recognition deficits are present in children with

ADHD is variable, with effect sizes varying both across and within studies from less than small to large magnitude (e.g., Boakes, Chapman, Houghton, & West, 2008; Corbett &

Glidden, 2000; Serrano et al., 2015). Effect sizes vary based on emotion (e.g., group differences are found when recognizing anger, disgust, and fear in faces, but not when recognizing happy in faces), and there is great variability in the study methods. However, overall, children with ADHD appear to demonstrate some degree of emotion recognition deficits, relative to those without ADHD.

In relation to SIP, if not all cues are viewed or encoded, then children with ADHD may be starting the SIP cycle with less information than other children. Serrano and colleagues reported that, overall, children with ADHD spent less time viewing relevant areas of faces and situations when trying to discern emotions. However, the effect sizes varied across emotions and across the tasks of emotion recognition in a face or situation

(Serrano et al., 2015). Further, Marotta et al. (2014) found that children with ADHD reflexively oriented to arrows and peripheral cues in a manner similar to typical children; however, they did not reflexively orient to an eye gaze in the same manner as typical children. Thus, the deficit appears to be unique to following eye gaze rather than following other directors of attention, and the results suggest that children with ADHD may have difficulty following subtle social cues such as eye gaze compared to overt cues such as pointing. Similarly, children with ADHD are less able to understand referential statements (statements in which a reference is made to something; Nilsen, Mangal, &

MacDonald, 2012), which can lead to poor understanding of a social interaction. If children with ADHD are spending less time viewing areas relevant to emotion recognition and are less able to follow social cues (e.g., eye gaze, referential statements), they may be missing relevant information, perhaps leading to a need to "fill the gaps," which in turn can lead to interpretation errors.

Emotion-Related Cue Interpretation

The next step in the SIP model after cue encoding is cue interpretation. Cue interpretation includes attributions regarding intention and causality and is affected by multiple factors including information encoded at the first step, social knowledge, emotionality, and emotion regulation (Lemerise & Arsenio, 2000). In one study, Andrade 22 et al. (2012) found children with ADHD relied more on inferences and non-observable information than observable information compared to children without ADHD when asked whether an action was a nice or bad way to act. Thus, children with ADHD may rely less on the facts (e.g., peer stole a toy) and more on personal judgment (e.g., peer is mean) when interpreting events. Assessing real time cue identification offers one way to examine whether children with ADHD are missing cues during social interactions and to examine on which cues/reasoning they rely on to make interpretations.


LIMITATIONS OF SIP RESEARCH IN CHILDREN WITH ADHD

Though SIP theory provides a theoretical framework for advancing social and emotional research in children with ADHD, overall there are few SIP studies with children with ADHD and there are limitations to the literature. First, the scenarios used to test SIP are usually specific situations (a response to a specific trigger, such as peer provocation). Understanding how children with ADHD process social information during more common social interactions may identify SIP deficits that negatively affect social functioning. Second, typical SIP scenarios are short, have limited cues requiring attention, and do not require much integration of previous information. There is a need for longer scenarios with cues or triggers that vary in saliency, as this moves closer to real world social interactions and allows for a fuller understanding of how SIP affects social functioning.

Third, most SIP studies with children with ADHD do not include emotion- specific questions, even though the importance of inclusion of emotion in SIP has been demonstrated. Fourth, the extent to which SIP deficits are affected by age and gender is unclear. Mikami et al. (2008) and Sibley et al. (2010) are the only SIP studies with adolescents with ADHD, yet their ADHD groups are predominantly or entirely female.

Thus, age and gender have been confounded in previous studies. Examining the effects of age and gender on SIP in youth with ADHD spanning early childhood through early adolescence will add clarity to the literature by indicating whether the biases found in elementary school children persist or wane in early adolescence. Fifth, the design of SIP and story comprehension studies usually involves asking the child SIP and comprehension questions at the end, or at intermittent points during the story if the stimulus is long.

Examining real-time processing of cues can offer insight into where in the SIP process errors are occurring and how children with ADHD differ from control children. Lastly, in order for SIP to be considered a target in future social interventions, it is important to know whether SIP deficits are related to social outcomes in children with ADHD.

Addressing these limitations helps to identify whether variability in SIP studies in children with ADHD is due to methodological differences and whether SIP deficits are a viable target for intervention.


CURRENT STUDY

The aims of the current study were to determine whether: (1) children with or at-risk for ADHD demonstrate cue identification deficits (measured via emotion recognition) and interpretation deficits in the context of dynamic social and emotional stimuli relative to children without ADHD, (2) children with or at-risk for ADHD recognize emotion later than children without ADHD, (3) performance on a SIP task is affected by emotion knowledge deficits, externalizing behavior, age, gender, and verbal ability in children with or at-risk for ADHD and children without ADHD, and (4) performance on a SIP task is related to social outcomes.

It was hypothesized that overall, children with or at-risk for ADHD would detect fewer cues, demonstrate lower accuracy for recognized cues, demonstrate lower SIP response scores, and demonstrate longer latency to recognize emotion than children without ADHD, after accounting for covariates (Andrade et al., 2012; Matthys et al.,

1999; Milch-Reich et al., 1999). It was hypothesized that effects would vary across emotion, with children in the ADHD and control groups performing similarly for some emotions (e.g., happy), but not all (e.g., mad, sad). Females and older children were expected to detect more cues and demonstrate higher accuracy and higher coding scores than males and younger children, and children with higher verbal ability were expected to have higher SIP response scores than those with lower verbal ability (Mikami et al.,

2008). No response latency differences were expected on a control, animal recognition task. Emotion recognition accuracy was hypothesized to account for a small but significant amount of the variance in performance on SIP tasks, and age, gender, and verbal ability were expected to independently and jointly account for variance in performance on SIP tasks (Bauminger et al., 2005; Mikami et al., 2008). ADHD symptoms were expected to be more strongly associated with performance on SIP tasks than ODD/CD symptoms because the nature of the video task was not expected to predispose participants to respond aggressively to the same extent as other SIP tasks, and the nature of ADHD symptoms may more negatively affect performance across the video task than ODD/CD symptoms. Finally, a hypothesis was not made regarding which social measure would be most related to performance on SIP tasks, due to lack of previous research connecting social outcomes and SIP; however, it was hypothesized that performance on SIP tasks would be correlated with at least one social outcome (Sibley et al.,

2010).


METHOD

Participants

Data were collected for 90 participants; five were excluded due to an exclusionary diagnosis, four were excluded because they had a previous diagnosis of ADHD but parent ratings were not in the ADHD range, and nine were excluded due to missing data.

The participants in the analyzed sample were 72 children (n = 24 in the ADHD group, n

= 48 in the control group) ages 8 to 14, with an IQ estimate of 74 or greater. Parents and teachers also participated as informants. Table 1 contains sample demographic data. As expected, the ADHD group had higher average inattention, hyperactivity/impulsivity, and ODD/CD symptom ratings (p < .001) and lower social functioning on all parent measures (p < .01).

Table 1

Sample Demographics

                                       ADHD (n = 24)     Control (n = 48)
Age                                    10.83 (1.79)      10.21 (1.54)
Percent Age 8-11                       66.7%             79.2%
Modal Age(s)                           10 and 13         10
Percent Male                           66.7%             56.3%
WASI Composite Score                   99.88 (13.22)     104.58 (15.58)
WASI Vocabulary t-score                52.96 (12.00)     53.38 (10.92)
Race
  Asian                                4.2%              2.1%
  Biracial/Multiracial                 4.2%              8.3%
  Black/African American               29.2%             29.2%
  White                                62.5%             60.4%
Ethnicity
  Not Hispanic/Latino                  100%              97.9%
Primary Caregiver Education
  Partial HS or HS Diploma             12.5%             31.3%
  Partial College                      33.3%             29.2%
  Associate's Degree                   8.3%              8.3%
  Bachelor's to Doctoral Degree        45.8%             31.2%
Primary Household Annual Income
  Up to $24,999                        25.0%             27.1%
  $25,000 – $49,999                    20.8%             37.5%
  $50,000 - $74,999                    8.3%              12.5%
  $75,000 - $99,999                    12.5%             10.4%
  $100,000 +                           33.3%             12.5%
ADHD-5 Inattention Average*            2.13 (0.53)       0.69 (0.59)
ADHD-5 Hyp/Imp Average*                1.61 (0.67)       0.39 (0.38)
DBD ODD/CD Average*                    0.68 (0.51)       0.20 (0.28)
SSIS Standard Score*                   71.29 (16.12)     95.42 (18.24)
Parent SDQ Peer Problems*              3.17 (1.86)       1.23 (1.45)
Parent SDQ Prosocial*                  6.54 (2.02)       8.50 (1.62)
Parent IRS Peer*a                      2.39 (1.83)       0.38 (0.99)
Parent Dishion Accept Score*           2.08 (2.24)       3.40 (0.98)
Teacher SDQ Peer Problemsb             2.33 (1.71)       1.47 (1.86)
Teacher SDQ Prosocial*b                6.48 (2.66)       7.84 (2.35)
Teacher Dishion Accept Scoreb          2.48 (2.04)       2.70 (1.85)

a One participant in the ADHD group has missing data for the parent IRS.
b Represents a subsample of those with teacher data (ADHD n = 21, control n = 37).

*Denotes significant group differences (p < .05) according to t-tests or chi-square tests.

To be included in the ADHD group, parent ratings on the ADHD Rating Scale-5 had to be greater than or equal to the 93rd percentile for age and gender. Of those in the

ADHD group, 17 (71%) had parent report of a previous ADHD diagnosis. Thus, the

ADHD group comprised individuals at-risk for ADHD and individuals with a previous diagnosis of ADHD, all of whom had parent report of elevated ADHD symptoms. To be included in the control group, parent ratings on the ADHD Rating Scale-5 had to be below the 93rd percentile cut-off. Exclusion criteria were parent report of autism spectrum disorder, intellectual disability, or psychosis, and an IQ estimate of less than 74. Children prescribed stimulant medication were required to complete a 12-hour medication hiatus; five children required the hiatus.

Procedure

Potential participants were identified via an epidemiological study being conducted at the same site (Owens & Evans, 2015), a database of participants who consented to be contacted for future research, flyer distribution in local school and community locations, and a university faculty and staff email listserv. Caregivers of potential participants completed a phone screen, and if no exclusionary diagnoses were reported, a study session was scheduled.

Informed parent consent and child assent procedures were conducted. All child measures were obtained during an individual study session. Parents completed rating scales in a separate room from children, and teacher rating scales were collected via

REDCap online data capture following the study session. For the SIP tasks, project staff read task-specific instructions to the child; the child then completed a training trial, during which they were given corrective feedback to ensure task understanding. The child then completed the test trials. Task order of the Faces and Animals tasks was counterbalanced, and these tasks were completed prior to the Video task.

Child Measures

Cognitive Ability

The Wechsler Abbreviated Scale of Intelligence-Second Edition (WASI-II;

Wechsler, 2011) is a psychometrically reliable tool for assessing cognitive ability in individuals ages 6 to 90. Two subtests (vocabulary and matrix reasoning) were administered to yield a full-scale IQ (FSIQ) estimate. Both subscales and FSIQ scores demonstrate strong internal reliability (r’s ranging from .87 to .96 for child participants;

Wechsler, 2011), and the two subtest FSIQ estimate is highly correlated (r = .85) with the

FSIQ score from the Wechsler Intelligence Scale for Children-Fourth Edition (Wechsler,

2011).

Depression Symptoms

The Children’s Depression Inventory-2 (CDI-2; Kovacs, 2010) was used to assess self-reported depression symptoms for children up to 5th grade. The CDI-2 has good psychometric properties, including high internal consistency (.91 for Total score, .83-.85 for Scales) and 2 - 4 week test-retest reliability (r =.98 for Total score, r = .92-.97 for

Scales). The CDI-2 can differentiate between youth with and without depressive symptoms (Bae, 2012). Cronbach’s alpha for the current sample was .80 for total score.

For young adolescents (grades 6-8), the Reynolds Adolescent Depression Scale-2

(RADS-2; Reynolds, 2002) was used to assess self-reported depression symptoms.

Internal consistency for the total score (.93) and scale scores are good (.80-.87; Reynolds,

2002), and the RADS total score is associated with depression status as determined via diagnostic interview (Reynolds & Mazza, 1998). Cronbach’s alpha for the current sample was .82 for the total score.

Anxiety Symptoms

The Revised Children’s Manifest Anxiety Scale-2 (RCMAS-2; Reynolds &

Richmond, 2008) was used to assess self-reported anxiety symptoms in children up to 5th grade. The RCMAS-2 has good reliability (.92 for Total Score, .75-.86 for scales), and can differentiate between youth with and without an anxiety disorder (Seligman,

Ollendick, Langley, & Baldacci, 2004). Cronbach’s alpha for the current sample was .83 for the total score. For young adolescents in grades 6-8, the Beck Youth Inventories-2nd edition-Anxiety (BYI-2-A; Beck & Steer, 1990) was used as a self-report measure of anxiety symptoms. Internal consistency for the measure is high (>.90 for males and females), and self-report on the BYI is correlated with other self-report measures of anxiety (Bose-Deakins & Floyd, 2004). Cronbach’s alpha for the current sample was .75.

Parent and Teacher Measures

ADHD Symptoms

The ADHD Rating Scale-5 (DuPaul et al., 2015) was completed by parents to assess ADHD symptoms. The measure has good psychometric properties and the factor structure is invariant across older and younger youth (5-17 years) and across genders

(male and female; DuPaul et al., 2015). Cronbach's alpha in the current sample was .96 for the inattentive items and .93 for hyperactive/impulsive items.

ODD and CD Symptoms

The ODD and CD items from the Disruptive Behavior Disorders (DBD) Rating

Scale (Pelham, Gnagy, Greenslade, & Milich, 1992) were completed by parents to assess for ODD and CD symptoms. The DBD rating scale is psychometrically sound and has strong internal consistency (ODD/CD α = .96), and the ODD items have good positive predictive power in predicting ODD. Cronbach’s alpha for the current sample was .94.

Social Outcomes

Four social measures were collected across parent and teacher informants. The first is the peer item from the Impairment Rating Scale (IRS; Fabiano et al., 2006), which was completed by parents to assess the child's level of impairment with peers. Scores range from 0 to 6; higher scores indicate greater impairment. The second is the Social Skills

Improvement System-Rating Scales (SSIS-RS; Gresham & Elliot, 2008), which assesses social competence. Parents completed the social competence domain items. The SSIS-RS demonstrates excellent internal consistency for the parent, teacher, and student versions

(α = .94 - .97 for the total scale scores; Gresham, Elliot, Vance, & Cook, 2011).

Cronbach's alpha for the current sample was .97. Raw scores were transformed to combined gender-based t-scores; higher scores indicate greater social skills.

The third is the Strengths and Difficulties Questionnaire (SDQ), which contains items related to emotional, behavioral, and peer problems, as well as prosocial behavior

(Goodman, 2001). Parents and teachers completed the peer problems and prosocial subscales, where scores on both subscales range from 0 to 10. For the peer problem subscale, higher scores indicate greater peer impairment; for the prosocial subscale, higher scores indicate greater social skills. Cronbach's alpha for the current sample was

.91 for the peer problem scales and .76 for the prosocial scale.

The fourth measure is the Dishion Social Preference Scale (Dishion, 1990), which is a three-item measure in which parents and teachers rated the proportion of peers who accept, reject, or ignore the child. The measure is sensitive to group differences, and is moderately related to peer sociometrics (Dishion, 1990). The acceptance and rejection items (items 1 and 2) were used to calculate a social preference score (acceptance rating – rejection rating).
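The preference score itself is a simple difference; a minimal sketch of the computation is shown below, with a hypothetical function name and hypothetical rating values rather than the study's actual materials.

```python
# Minimal sketch of the Dishion social preference score described above:
# the acceptance rating (item 1) minus the rejection rating (item 2).
def social_preference_score(acceptance_rating, rejection_rating):
    return acceptance_rating - rejection_rating

# Hypothetical ratings: acceptance = 4, rejection = 1 -> preference score of 3.
print(social_preference_score(4, 1))
```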

Demographics and Mental Health History

Parents completed a demographics questionnaire, which included questions regarding ethnic background, parent education level, parent occupation, family income, child age, child sex, and child medication history. Parents also completed a mental health history questionnaire for their child, which inquired about previous diagnoses and treatment for common mental health disorders.

SIP Tasks and Emotion Stimuli

Three types of stimuli were used to answer the research questions: (a) an emotion recognition face morphing task (henceforth Faces task), (b) an animal morphing task

(henceforth, Animals task), and (c) a video task using an episode from the television series Growing Pains (henceforth Video task).

Faces Task

To examine latency to emotion recognition and provide a measure of emotional

(POFA; Ekman & Friesen, 1979) set comprised the morphs, and all test images had an emotion reliability rating of 80% or higher. The morphing sequence was divided into 50 images, representing an approximately 2% change for each image. Participants completed one training trial and three test trials for each of the six emotions for a total of 19 trials.

Each image was displayed for 175ms, and each morph lasted 8.75 seconds total. These specifications were determined via pilot testing with children with and without ADHD who were ages 8-14 (See details in Appendix B8).

Participants were asked to press a button as soon as they could identify the expressed emotion of the morph. A list of the six emotion choices (i.e., happy, mad, sad, scared, surprised, and disgusted) was provided, and a forced-choice response format was used. The time of the first press and the emotion identified were recorded. In addition, the children were allowed to press the button again at any time to change their answer. The time of additional presses and the emotions identified were recorded. At the end of the morph, all children were to identify the emotion to assess emotion recognition when the full emotion was displayed. A computer program designed specifically for this study displayed the images and recorded response time for each press. Faces Accuracy Scores were the percent correct for initial press and final presentation (referred to as Faces

Emotion Accuracy - First Press; Faces Emotion Accuracy - Final Press). Faces Latency was the time stamp of the first press. For each test stimulus for which a participant did not press during the morph, and thus did not have time or emotion response data for a first press, the full image display time (i.e., 8.75 seconds) was entered as the first press time and emotion accuracy was entered as incorrect. This allowed all participants to be included in first press analyses.
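To make these scoring conventions concrete, the sketch below illustrates how one participant's first-press data could be scored under the rules just described. It is illustrative only: the function, trial values, and emotion labels are assumptions, not the program or data actually used in the study.

```python
# Illustrative sketch (not the study's actual scoring program) of the Faces task
# scoring rules described above.
MS_PER_FRAME = 175                                    # each of the 50 morph frames is shown for 175 ms
N_FRAMES = 50
FULL_DISPLAY_S = MS_PER_FRAME * N_FRAMES / 1000.0     # 8.75 s total morph duration

def score_faces_trial(first_press_time_s, first_press_emotion, final_emotion, target_emotion):
    """Return (first-press latency in seconds, first-press correct, final-press correct).

    Trials with no press during the morph receive the full display time (8.75 s)
    as their latency and are scored incorrect for the first press, so every
    participant can be included in first-press analyses.
    """
    if first_press_time_s is None:                    # participant never pressed during the morph
        return FULL_DISPLAY_S, False, final_emotion == target_emotion
    return (first_press_time_s,
            first_press_emotion == target_emotion,
            final_emotion == target_emotion)

# Hypothetical "mad" trials: (first press time, first press choice, final choice, target).
trials = [(4.2, "mad", "mad", "mad"), (None, None, "sad", "mad"), (6.1, "surprised", "mad", "mad")]
scored = [score_faces_trial(*t) for t in trials]
faces_accuracy_first_press = 100.0 * sum(correct for _, correct, _ in scored) / len(scored)
faces_latency = sum(latency for latency, _, _ in scored) / len(scored)
print(faces_accuracy_first_press, faces_latency)
```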

Animals Task

To serve as a control task for the Faces task, allow for examination of response variability uncoupled from emotion-related stimuli, and compare group differences in latency to respond across emotion and non-emotion stimuli, participants completed an animal morphing task. Images changed from a blurry, undecipherable image to a clear picture of an animal. To mimic the six categories of emotions from the POFA image set, six categories of animals (i.e., dog, cat, chicken, bear, horse, and fish) were used.

Participants were asked to press a button as soon as they could identify the animal. At the end of the morph, all children were to identify the animal to assess recognition when the full animal was displayed. Free-response format was used. Display time, number of trials, and data recording procedures were the same as those for the Faces task.

For participants without a first press, the full image display time and accuracy were entered.

Mean time to first press across all animal trials was used as a covariate in Aim 1 analyses.

Video Task

The cue identification (measured via emotion recognition) and cue interpretation steps of SIP were measured via a Video task. Participants watched an episode of Growing

Pains that has previously been used in SIP and story comprehension research in children with ADHD (e.g., Lorch et al., 2004). This video was chosen because the content is appropriate for children and adolescents, it has previously been used with ADHD populations, there are multiple emotions displayed and various factors affecting these emotions, and there is a range in difficulty of identifying and interpreting the emotions.

Participants were instructed to press the space bar to pause the video as soon as a target character displayed an emotion (Video Latency), identify the emotion expressed if possible (Emotion Response), and describe why the character is feeling that specific emotion if possible (Interpretation Response). Participants were then instructed to resume the video and press again when there was a change in emotion or a change in the reason for the emotion. Participants were told that if they did not know whether there was a change, they should simply press when they saw the target character display an emotion. A one-minute training video of a different Growing Pains episode was completed, and the test stimulus was a 20-minute video. The same specially designed computer program displayed the videos, recorded response times, and recorded the timestamp of each press. The video comprised 129 separate events. See Appendix B9 for details regarding defining these

129 events. Emotion Responses were used to assess Video Emotion Accuracy and

Interpretation Responses were used to assess Interpretation Accuracy, Inference, and

Quality; coding for each is described below.

Free-response format was used for the Video task, and Emotion Accuracy was coded based on the categories outlined in Shaver, Schwartz, Kirson, and O’Connor

(1987; love, joy, surprise, anger, sadness, and fear), with the exceptions that love and joy were collapsed into a single category and disgust was extracted from anger to form its own category. This allowed responses to parallel the basic six emotions depicted in the

POFA images. Responses not included in Shaver et al.'s (1987) list were classified into the existing categories based on consensus decision among data coders. Emotion responses were coded for accuracy based on the emotion determined from pilot testing.

Namely, pilot testing procedures revealed 23 events (i.e., emotion changes by the main character) that were identified by at least two-thirds of the college students in the pilot sample and had at least 50% agreement as to the specific emotion displayed. Thus, we first documented the number of the 23 events recognized (the denominator), and then used the child’s response for each event as the numerator to determine the percent of correct answers provided (i.e., Video Emotion Accuracy).

Interpretation Responses were coded on the dimensions of accuracy, inference, and quality. Interpretation Accuracy and Inference were rated on a 0-2 scale (See

Appendix B10 for scale anchors). Interpretation Quality was rated on a 0-1 scale (See

Appendix B10). In the current sample, inter-rater reliability of ratings of Interpretation

Responses for the 23 events was 67.8% agreement. Due to low inter-rater reliability, a subset of events with higher reliability (i.e., > 70% agreement) was identified. This resulted in 10 events with 81.1% agreement. Thus, Interpretation data are reported only for the subset of 10 events. The Interpretation ratings across all 10 events were summed to produce four variables: Video Interpretation Accuracy, Video Interpretation Inference, and Video Interpretation Quality, as well as Video Interpretation Composite, which is the sum of the three dimensions. Of note, similar results were found with all 23 events; those results are presented in Appendix C.

In addition, a fifth variable was produced using the child's individual responses as the denominator. Namely, Video Emotion Accuracy-Individualized represents the percent of emotions accurately reported across all of the events for which the child had a response (i.e., number correct/number recognized). Two additional variables were produced to indicate the number of events recognized during the Video task. Number of

Events Recognized represents the number of events for which the child paused to indicate an emotion was present during the course of the video. Number of Events Recognized –

10 Events represents the number of events recognized among the subset of 10 events.
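The following sketch illustrates how the Video task variables defined above could be computed for a single child. The data structures and names are assumptions made for illustration, and the denominators reflect one straightforward reading of the definitions in the text; this is not the study's actual coding program.

```python
# Illustrative sketch of the Video task variables described above (assumed data layout).
def video_task_scores(child_responses, scored_events, interpretation_subset, interpretation_ratings):
    """child_responses: {event_id: emotion the child named when pausing}
    scored_events: {event_id: consensus emotion from pilot testing (the scored events)}
    interpretation_subset: the 10 event ids retained for Interpretation coding
    interpretation_ratings: {event_id: (accuracy 0-2, inference 0-2, quality 0-1)}
    """
    # Number of Events Recognized: every event at which the child paused.
    n_recognized = len(child_responses)

    recognized_scored = [e for e in child_responses if e in scored_events]
    correct = [e for e in recognized_scored if child_responses[e] == scored_events[e]]

    # Video Emotion Accuracy: percent correct out of the full scored event set.
    emotion_accuracy = 100.0 * len(correct) / len(scored_events)

    # Video Emotion Accuracy - Individualized: number correct / number recognized.
    individualized = (100.0 * len(correct) / len(recognized_scored)
                      if recognized_scored else 0.0)

    # Video Interpretation Composite: sum of the accuracy, inference, and quality
    # ratings across the 10-event subset.
    composite = sum(sum(interpretation_ratings.get(e, (0, 0, 0)))
                    for e in interpretation_subset)

    n_recognized_subset = sum(1 for e in interpretation_subset if e in child_responses)
    return {"n_events_recognized": n_recognized,
            "video_emotion_accuracy": emotion_accuracy,
            "video_emotion_accuracy_individualized": individualized,
            "video_interpretation_composite": composite,
            "n_events_recognized_10": n_recognized_subset}
```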


RESULTS

Analytic Plan

Table 2 contains a summary of analyses and variables for each aim. Power analyses indicated that Aims 1, 3, and 4 were powered to detect a large effect size (80% power, α = .05), whereas Aim 2 was underpowered to detect a large effect size. Tables 3 and 4 contain the means and standard deviations for variables used in Aims 1 and 2. Tables 5 and 6 contain correlation matrices of demographic variables and social outcome measures for Aims 3 and 4.
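For context, this kind of sensitivity statement can be approximated for a simple two-group contrast with standard power routines. The snippet below is a rough illustration using statsmodels and the study's group sizes; it is not the exact procedure used for each aim.

```python
# Rough illustration of a sensitivity calculation for a two-group contrast with
# the study's group sizes (not necessarily the exact procedure used per aim).
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
# Smallest standardized mean difference (Cohen's d) detectable with n1 = 24,
# n2 = 48, alpha = .05 (two-sided), and 80% power.
detectable_d = power_analysis.solve_power(nobs1=24, ratio=48 / 24, alpha=0.05, power=0.80)
print(round(detectable_d, 2))  # falls between Cohen's medium (0.5) and large (0.8) benchmarks
```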

Table 2

Analyses and Variables Independent Variables Covariates Dependent Variables Calculation of Dependent Variables Aim 1 1a. Emotion Group status Age, Gender, Animals First Faces Accuracy - First Percent of correctly identified Recognition Press Mean Time Press for 6 emotions emotions for first press of all 3 trials of Differences (Faces) each emotion of Faces trials

1b. Emotion Group status Age, Gender, Animals First Faces Accuracy – Final Percent of correctly identified Recognition Press Mean Time Press for 6 emotions emotions for final presentation of all 3 Differences (Faces) trials of each emotion of Faces trials

1c. Emotion Group status Age, Gender, Animals First 1.Video Emotion 1. Percent of correctly identified Identification and Press Mean Time Accuracy emotions among the 10 events Recognition 2. Number of Events 2. Count of all events recognized by Differences (Video) Recognized- the child Individualized 1d. Emotion Group status Age, Gender, Animals First 1.Video Emotion 1. Percent of correctly identified Identification and Press Mean Time Accuracy – Individualized emotions among events the child Recognition 2. Number of Events recognized Differences (Video) Recognized- 2. Count of all events recognized by Individualized the child 1e. Emotion Group Status Age, Gender, Animals First 1.Video Interpretation Sum of coding scores for each Interpretation Press Mean Time, WASI Accuracy dimension Differences (Video) Vocab t-score 2. Video Interpretation Inference 3. Video Interpretation Quality 2a. Faces Task First Group Status None Average First Press Time Mean time of first press for each Press Latency for 6 Emotions emotion, averaged across the 3 trials for each emotion


Table 2 Continued

2b. Animals Task First Group Status None Average First Press Time Mean time of first press for each Press Latency for 6 Animals animal, averaged across the 3 trials for each animal 2c. Video Task First Group Status None Average First Press Time Mean time of first press for each Press Latency for 3 Emotions emotion, averaged across the included trials for each emotion 3a. Emotion Faces Accuracy – First Age, Gender, WASI Vocab t- Video Interpretation Sum of the scores of the three coding Recognition Press score Composite dimensions Predicting SIP

3b. Externalizing 1. ADHD Inattention None Video Interpretation Sum of the scores of the three coding Symptoms Mean; 2. ADHD Composite dimensions Predicting SIP Hyperactivity/Impulsivity Mean; 3. ODD/CD Mean 3c. Demographic 1. Age; 2. Gender; 3. None Video Interpretation Sum of the scores of the three coding Factors Predicting WASI Vocab t-score Composite dimensions SIP 4a. Correlation Between 8 parent and teacher social N/A N/A SIP and Social measures, Video Outcomes Interpretation Composite, Video Emotion Accuracy (10 events and Individualized), Faces Emotion Accuracy 4b. Social Outcomes Video Emotion Accuracy- BAI Teacher SDQ Prosocial N/A Predicting SIP Individualized; Video Interpretation Composite 42

Table 3

Recognition Accuracy and Interpretation Coding Data

                                          ADHD                              Control
                          Range    First Press      Final Press      First Press      Final Press
Faces Percent Correct
  Disgusted               0-100    37.50 (26.58)    56.94 (33.30)    38.19 (31.50)    51.39 (33.66)
  Happy                   0-100    93.06 (13.83)    98.61 (6.80)     87.50 (22.41)    95.83 (11.14)
  Mad                     0-100    65.28 (26.88)    91.67 (17.72)    59.72 (31.48)    90.97 (16.47)
  Sad                     0-100    38.89 (32.10)    68.06 (26.88)    49.31 (32.24)    77.08 (24.94)
  Scared                  0-100    38.89 (34.98)*   52.78 (32.48)    22.22 (26.03)    50.00 (35.06)
  Surprised               0-100    69.44 (25.85)    79.17 (25.66)    65.97 (34.03)    84.72 (25.69)
  Total Faces             0-100    57.18 (16.90)    74.54 (11.46)    53.82 (16.09)    75.00 (12.02)
Animals Percent Correct
  Dog                     0-100    97.22 (9.41)     100 (.00)        98.61 (6.73)     100 (.00)
  Cat                     0-100    97.22 (9.41)     100 (.00)        100 (.00)        100 (.00)
  Bear                    0-100    94.44 (12.69)    100 (.00)        98.61 (6.73)     100 (.00)
  Chicken                 0-100    97.22 (9.41)     100 (.00)        96.53 (10.29)    99.31 (4.81)
  Horse                   0-100    100 (.00)        100 (.00)        100 (.00)        100 (.00)
  Fish                    0-100    98.61 (6.80)     100 (.00)        98.61 (6.73)     100 (.00)
  Total Animals           0-100    97.45 (4.63)     100 (.00)        98.73 (2.86)     99.88 (0.80)

                                          Range     ADHD             Control
Video Interpretation Accuracy             0-20      6.38 (3.17)      6.69 (3.63)
Video Interpretation Inference            0-20      2.38 (2.65)      1.90 (1.74)
Video Interpretation Quality              0-10      3.79 (1.79)      3.92 (1.78)
Video Interpretation Composite            0-50      12.54 (6.67)     12.50 (6.21)
Video Emotion Accuracy-Individualized     0-100     69.30 (14.69)    75.86 (18.52)
Video Emotion Accuracy                    0-100     34.17 (20.83)    35.00 (18.01)
Number of Events Recognized               0-129     20.38 (9.69)     17.85 (9.28)
Number of Events Recognized-10 Events     0-10      4.08 (1.91)      4.50 (1.99)

* p < .05

Table 4

Latency to Recognize Emotion

                                                   ADHD            Control
Faces First Press Mean Time
  Disgusted                                        6.04 (1.38)     6.27 (1.47)
  Happy                                            4.66 (1.12)     5.11 (1.50)
  Mad                                              6.67 (1.45)     7.01 (1.49)
  Sad                                              6.92 (1.90)     7.30 (1.45)
  Scared                                           6.37 (1.62)§    7.00 (1.45)
  Surprised                                        6.11 (1.51)     6.43 (1.52)
  Total Faces                                      6.13 (1.32)     6.52 (1.28)
Animals First Press Mean Time
  Dog                                              3.81 (0.88)     3.90 (0.99)
  Cat                                              3.44 (0.66)     3.52 (0.64)
  Bear                                             4.00 (0.98)     4.29 (1.01)
  Chicken                                          3.89 (1.09)     4.08 (1.01)
  Horse                                            3.15 (0.63)     3.18 (0.83)
  Fish                                             3.70 (0.81)     3.70 (0.86)
  Total Animals                                    3.66 (0.72)     3.78 (0.77)
Video Mean Time to Respond for Mad Events          4.46 (4.46)     5.98 (5.39)
Video Mean Time to Respond for Surprised Events    6.10 (5.44)     3.14 (4.22)
Video Mean Time to Respond for Happy Events        2.54 (4.88)     3.08 (4.35)

Note. Times are expressed in seconds. § p < .1

Table 5

Correlations between Demographic and Social Variables 1 2 3 4 5 6 7 8 9 10 11 12 13

1. ADHD status - .18 .10 -.15 -.02 -.55** .50** -.47** .59** -.38* .23§ -.26* -.06

2. Age - .21§ -.24* -.30* -.28* .20 -.23§ -.004 -.12 .26* -.12 -.14

3. Gender - -.25* -.18 -.38* .30* -.38* .21 -.09 .10 -.19 -.08

4. WASI-2 Composite - .77** .36* -.18 .42** -.14 .28* -.08 .30* .14

5. WASI-2 Vocab. T-score - .25* -.10 .24* -.01 .18 -.17 .42* .18
6. SSIS Standard Score - -.49** .77** -.51** .45** -.17 .45** .09

7. Parent SDQ Peer Prob. - .49** .34* -.43** .54** -.42* -.19
8. Parent SDQ Prosocial - -.38* .36* -.25§ .43* .23§

9. Parent IRS Peer - -.52** .18 -.28* -.07

10. Parent Dishion - -.16 .17 .14

11. Teacher SDQ Peer Prob. - -.50** -.56**
12. Teacher SDQ Prosocial - .65**
13. Teacher Dishion -

Note. ADHD status is coded as 0=control, 1=ADHD. Gender is coded as 0=female, 1=male. ** p < .01, * p < .05, § p < .1

Table 6

Correlations Between Parent and Teacher Social Measures and SIP Performance

1 2 3 4 5 6 7 8 9 10 11 12 13

1. Parent SDQ Peer Problems - -.49** -.49** .34* -.43** .54** -.42* -.19 .002 .07 -.03 .09 .17
2. Parent SDQ Prosocial - .77** -.38* .36* -.25§ .43* .23§ -.02 -.06 .18 .07 .01
3. Parent SSIS - -.51** .45** -.17 .45** .09 -.07 -.06 .22§ .15 .12
4. Parent IRS Peer - -.52** .18 -.28* -.07 -.11 -.15 -.15 -.05 -.18
5. Parent Dishion Accept - -.16 .17 .14 -.02 .01 .08 -.04 .05
6. Teacher SDQ Peer Problems - -.50** -.56** .06 .12 -.17 .04 .11
7. Teacher SDQ Prosocial - .65** .35* .25§ .28* .11 -.06
8. Teacher Dishion - .18 .14 .33* -.06 -.20
9. Video Interpretation Composite-Subset - .83** .08 .36* .15
10. Video Emotion Accuracy-Subset - .35* .40* .16
11. Video Emotion Accuracy-Individualized - .17 .09
12. Faces 1st Press Accuracy - .55**
13. Faces Final Accuracy -

Note. ** p < .01, * p < .05, § p < .1.

First, data were examined for group differences in demographic variables and possible covariates. The criterion of a moderate correlation (i.e., |r| ≥ .5) with outcome variables was used to determine inclusion of anxiety, depression, and ODD/CD ratings, as well as learning disorder status, as covariates. For Aim 1, age and gender were determined a priori to be included as covariates because of the known association with emotion recognition and SIP, and Animals task mean time to first press was included as a covariate to account for response time uncoupled from emotion on a similar control task.

Age and gender demonstrated small magnitude correlations (i.e., |r| = .2 - .49) with half of the dependent variables. For analyses with Video Interpretation score, verbal ability was determined a priori to be included as a covariate to account for its effect on free-response interpretation responses, and it was not correlated with interpretation responses.

Covariates were not used in Aim 2. For Aim 3, age, gender, and verbal ability were determined a priori to be included as covariates due to their association with SIP. For

Aim 4, ODD/CD mean score was moderately correlated (i.e., |r| ≥ 0.5) with all five parent social measures, and anxiety scores from the BAI were moderately correlated with teacher SDQ prosocial subscale scores. Thus, these variables were considered for inclusion as covariates in the second step of analyses for Aim 4.
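A minimal sketch of this screening rule is shown below, with hypothetical column names; it simply retains a candidate covariate when its absolute Pearson correlation with any outcome reaches the |r| ≥ .5 threshold.

```python
# Minimal sketch of the covariate-screening rule described above: keep a candidate
# covariate only if |r| >= .5 with at least one outcome variable. Column names are
# hypothetical, not the study's actual variable names.
import pandas as pd

def screen_covariates(data: pd.DataFrame, candidates, outcomes, threshold=0.5):
    corr = data[list(candidates) + list(outcomes)].corr()   # pairwise Pearson r
    return [c for c in candidates
            if corr.loc[c, list(outcomes)].abs().max() >= threshold]

# Example call (hypothetical columns):
# screen_covariates(df, candidates=["anxiety", "depression", "odd_cd"],
#                   outcomes=["video_interpretation_composite", "video_emotion_accuracy"])
```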

To test Aim 1, five multivariate analysis of covariance (MANCOVA) analyses were used to examine group (ADHD vs. Control) differences in performance on the

Faces and Video tasks. The dependent variables for these five MANCOVAs were as follows:

(1) Faces Accuracy - First Press (6 emotions); (2) Faces Accuracy - Final Press (6 emotions); (3) Video Emotion Accuracy and Number of Events Recognized; (4) Video 48

Emotion Accuracy - Individualized and Number of Events Recognized; and (5) Video

Interpretation Accuracy, Inference, and Quality scores. In interpreting results of the

MANCOVAs, univariate results of between-subjects effects were only interpreted for variables with significant multivariate test results. Additionally, effect sizes are reported and interpreted using Cohen’s conventions (Cohen, 1988; i.e., small: 0.2; medium: 0.5; large: 0.8).
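For illustration, one of these MANCOVAs could be approximated with the following sketch, which enters the covariates directly in the model formula (this is not the study's analysis code; the data file and column names are hypothetical):

```python
# Minimal sketch of one Aim 1 MANCOVA using statsmodels; the data file and column names
# (group, age, gender, animals_rt, *_acc) are hypothetical placeholders.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("faces_task_scores.csv")  # hypothetical data file

# Six first-press accuracy scores as dependent variables; group status plus the
# a priori covariates (age, gender, Animals-task first-press time) as predictors.
model = MANOVA.from_formula(
    "happy_acc + sad_acc + mad_acc + scared_acc + surprised_acc + disgusted_acc"
    " ~ group + age + gender + animals_rt",
    data=df,
)
print(model.mv_test())  # multivariate tests (e.g., Wilks' lambda) for each term
```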

To test Aim 2, a series of t-tests with the Holm correction were used to examine group (ADHD vs. Control) differences in response latency across the three tasks (i.e., Faces, Animals, and Video). To examine Aim 3, linear regressions were used to determine which externalizing symptoms and demographic variables predict the Video Interpretation Composite score and Video Emotion Accuracy scores, and forward selection regressions were considered to determine relative prediction among significant variables. To examine Aim 4, correlations were used to determine which social measures were significantly correlated with Video task performance, and forward selection regressions were used to determine the relative strength of the Video Interpretation Composite score and Video Emotion Accuracy scores in predicting social functioning.
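The Aim 2 latency comparisons could be sketched as follows (simulated data and hypothetical names; whether pooled or Welch t-tests were used in the study is an assumption of this sketch):

```python
# Minimal sketch of the Aim 2 latency comparisons: one independent-samples t-test per
# emotion, with the Holm correction applied across the six tests.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
emotions = ["happy", "sad", "mad", "scared", "surprised", "disgusted"]
# latency[emotion] = (ADHD group latencies, control group latencies), in seconds
latency = {e: (rng.normal(2.5, 1.0, 24), rng.normal(2.8, 1.0, 48)) for e in emotions}

pvals = [stats.ttest_ind(*latency[e], equal_var=False).pvalue for e in emotions]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")

for e, p, padj, rej in zip(emotions, pvals, p_adj, reject):
    print(f"{e}: raw p = {p:.3f}, Holm-adjusted p = {padj:.3f}, significant = {rej}")
```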

Aim 1: Group Differences in Emotion Recognition and Interpretation in Faces and Videos

The first MANCOVA examined Faces task emotion recognition, with group as the between-subjects independent variable, Faces Accuracy - First Press for the six emotions as the dependent variables, and age, gender, and Animals first press mean time as covariates. Multivariate tests were significant for age (F(6, 62) = 3.18, p = .01) and gender (F(6, 62) = 2.82, p = .02), with higher age associated with greater accuracy across all six emotions, and female gender associated with greater accuracy across five emotions. For group status, the multivariate test was marginally significant (F(6, 62) = 2.00, p = .08), and the univariate test for Scared was significant (F(4) = 4.08, p = .047), with the ADHD group demonstrating greater accuracy (d = .54). The second MANCOVA used Faces Accuracy - Final Press for the six emotions as the dependent variables, and neither the covariates nor group status were significant (p > .17).

The third MANCOVA examined Video task emotion identification and recognition, with group as the between-subjects independent variable, Video Number of Events Recognized and Video Emotion Accuracy as the dependent variables, and age, gender, and Animals first press mean time as covariates. Age was significant (F(2, 66) = 3.36, p = .04), with higher age associated with a greater number of events recognized and greater accuracy. The multivariate test result for group was non-significant (F(2, 66) = .84, p = .44), and effect sizes were of less than small to small magnitude (0.27 for Number of Events Recognized, -.06 for Video Emotion Accuracy). The fourth MANCOVA used Video Emotion Accuracy-Individualized, and age was again the only significant variable (F(2, 66) = 3.35, p = .04).

The fifth MANCOVA was conducted to examine Video Interpretation scores, with group as the between-subjects independent variable; Video Interpretation Accuracy, Video Interpretation Inference, and Video Inference Quality as the dependent variables; and age, gender, Animals first press mean time, and verbal ability as covariates. Of the covariates, age (F(3, 64) = 3.55, p = .02) and gender (F(3, 64) = 3.45, p = .02) were significant, and WASI vocabulary t-score was marginally significant (F(3, 64) = 2.53, p = .07). The multivariate test result for group was non-significant (F(3, 64) = 1.00, p = .40). Effect sizes were of small magnitude (0.37 for inference, 0.24 for accuracy, 0.20 for quality).

Aim 2: Group Differences in Latency to Emotion Recognition

An independent samples t-test with the Holm correction was conducted for each of the three tasks (i.e., Faces, Animals, Video). See Table 4 for Faces and Animals time data. For the Faces task, t-tests were conducted using the average response time for each emotion as the dependent variables and group status as the independent variable. This was repeated with the average response time for the six animals (dog, cat, horse, bear, fish, chicken). Across both tasks, none of the t-tests were significant, indicating that latency to recognize emerging emotions and animals did not differ between groups. Small effect sizes were observed for all emotions except disgusted (d range: -0.21 to -0.41), and for Bear from the Animals task (d = -0.29), with children with ADHD responding sooner than control children.

The third series of t-tests was completed with time data for the subset of 10 events from the Video task. The 10 events comprised three Mad events, three Surprised events, and four Happy events. Only participants who had responses for an emotion were included in the t-test for that emotion (n = 54 for Mad, n = 65 for Surprised, and n = 51 for Happy). See Table 4 for time data for latency to respond. The Surprised events t-test was marginally significant after correction (p = .02, effect size = 0.61), with children with ADHD exhibiting longer response times than children without ADHD. Thus, overall, the control and ADHD groups did not differ in latency to recognize emotion in simple or complex dynamic formats, and small to medium effect sizes were observed.

Aim 3: Relations between SIP Performance and Emotion Knowledge Deficits, Externalizing Behavior, Age, Gender, and Verbal IQ

First, separate linear regressions were examined with Video Interpretation Composite and Video Emotion Accuracy (10 Events and Individualized) as the criterion variables. See Table 3 for descriptives of these variables and Table 5 for bivariate correlations between variables. In the regressions, age, gender, and verbal ability were entered in the first block, and Faces Accuracy - First Press was entered in the second block. See Table 7 for results. Overall, Faces Accuracy - First Press was significantly associated with Video Emotion Accuracy and marginally significantly associated with Video Interpretation Composite, such that greater accuracy in recognizing emotion during the Faces task was associated with greater accuracy in recognizing emotion in the Video task and higher scores for interpretation response coding.


Table 7

Hierarchical Linear Regression Results for Video Interpretation and Video Emotion Accuracy Scores

Values are given as: Video Interpretation Composite | Video Emotion Accuracy-10 Events | Video Emotion Accuracy-Individualized

Step 1 ΔR2:                           .14* | .09§ | .07
  Gender (β):                         -.12 | -.08 | -.21§
  Age (β):                            .37* | .30* | -.16
  Verbal Ability (β):                 .20§ | .17  | -.05
Step 2 ΔR2:                           .04§ | .09* | .03
  Faces Accuracy - First Press (β):   .34* | .24§ | .21
Total R2:                             .18* | .18* | .11

* p < .05, § p < .1.
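The two-block structure summarized in Table 7 could be reproduced in outline as follows (a hedged sketch rather than the study's analysis code; the data file and column names are hypothetical):

```python
# Minimal sketch of the two-block hierarchical regression: demographics in Step 1,
# Faces Accuracy - First Press added in Step 2, with the R-squared change tested via
# an F test for nested models. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sip_scores.csv")  # hypothetical data file

step1 = smf.ols("video_interp_composite ~ age + gender + verbal_ability", data=df).fit()
step2 = smf.ols(
    "video_interp_composite ~ age + gender + verbal_ability + faces_first_press_acc",
    data=df,
).fit()

f_change, p_change, _ = step2.compare_f_test(step1)
print(f"Step 1 R2 = {step1.rsquared:.3f}, Step 2 R2 = {step2.rsquared:.3f}, "
      f"delta R2 = {step2.rsquared - step1.rsquared:.3f}, F-change p = {p_change:.3f}")
```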

To examine the extent to which externalizing behaviors are associated with Video task performance, separate linear regressions were conducted with ADHD Rating Scale-5 inattention mean, ADHD Rating Scale-5 hyperactivity/impulsivity mean, and DBD ODD/CD mean as predictors and Video Interpretation Composite and Video Emotion Accuracy (10 events and Individualized) as separate criterion variables. See Table 8 for results. None of the regressions were significant; thus, externalizing symptoms were not associated with emotion recognition or cue interpretation for the Video task. Due to the lack of significant results, forward regressions were not conducted.

To examine the extent to which age, gender, and verbal ability are associated with performance on SIP tasks, Video Interpretation Composite and Video Emotion Accuracy (10 events and Individualized) were entered as criterion variables in separate linear regressions. See Table 8 for results. Age was significantly associated with Video Interpretation Composite (β = .28, R2 = .08, p = .02) and Video Emotion Accuracy-10 Events (β = .24, R2 = .06, p = .047), indicating that older participants were more likely to demonstrate greater accuracy than younger participants. Verbal ability was not associated with outcomes, and gender was marginally significant for Video Emotion Accuracy-Individualized (β = -0.23, R2 = .05, p = .06). Because age demonstrated the only significant association, forward regressions were not conducted.

Table 8

Linear Regression Results for Video Interpretation and Video Emotion Accuracy with Externalizing Symptoms, Age, Gender, and Verbal Ability

Values are given as R2, β for: Video Interpretation Composite | Video Emotion Accuracy-10 Events | Video Emotion Accuracy-Individualized

ADHD-5 Inattention Mean:                 .002, .04  | .00, .01   | .02, -.14
ADHD-5 Hyperactivity/Impulsivity Mean:   .001, -.04 | .007, -.09 | .02, -.15
DBD ODD/CD Mean:                         .001, .03  | .002, .05  | .04, -.20
Age:                                     .08*, .28* | .06*, .24* | .03, -.18
Gender:                                  .01, -.08  | .002, -.05 | .05§, -.23§
Verbal Ability:                          .01, .11   | .01, .09   | .002, .04

* p < .05, § p < .1.

Aim 4: Relations between SIP Performance and Social Outcomes

Parent report was provided for five social outcomes (i.e., SDQ peer problems, SDQ prosocial, SSIS, IRS, and Dishion), and teacher report was provided for three social outcomes (i.e., SDQ peer problems, SDQ prosocial, and Dishion). See Table 6 for a correlation matrix. Video Emotion Accuracy-Individualized was significantly correlated with two social measures (i.e., teacher SDQ prosocial: r = .28; teacher Dishion: r = .33). Video Interpretation Composite was significantly correlated with teacher SDQ prosocial (r = .35, p = .01). Thus, SIP performance was correlated with measures of social skill rather than social impairment.

A forward selection hierarchical linear regression was conducted with teacher SDQ prosocial as the criterion variable, ODD/CD mean and BAI score entered in the first block as covariates, and Video Interpretation Composite and Video Emotion Accuracy-Individualized entered via forward selection in the second block, to determine which SIP performance metric was most strongly associated with social skills after accounting for covariates. The first step of the model was non-significant (R2 = .31, p = .19), and no variables were entered in the second step; however, only 12 participants had values for both the BAI and teacher SDQ prosocial, so the analysis was underpowered. The regression was repeated with only ODD/CD mean entered in the first step, because all participants had this value and 59 participants had teacher data (81.9% of the sample). The model was significant at the first step (ODD/CD mean: β = -.29, R2 = .09, p = .03) and at the second step, with only Video Interpretation Composite entered in the model (β = .32, ΔR2 = .10, p < .05 for the F change and the step 2 model). Thus, after accounting for parent report of ODD/CD symptoms, Video task response interpretation was associated with teacher report of prosocial skills.
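The forward-selection step described above could be sketched as follows (hypothetical names; a p < .05 entry criterion on the F change is assumed, and there are only two candidate predictors here):

```python
# Minimal sketch of the Aim 4 forward-selection step: with the ODD/CD covariate entered,
# each candidate SIP predictor is tried and the one producing the largest significant
# increase in R-squared is added. Names and the entry criterion are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sip_scores.csv")  # hypothetical data file
base_formula = "teacher_sdq_prosocial ~ oddcd_mean"
candidates = ["video_interp_composite", "video_emotion_acc_individualized"]

base_fit = smf.ols(base_formula, data=df).fit()
best = None
for var in candidates:
    fit = smf.ols(f"{base_formula} + {var}", data=df).fit()
    _, p_change, _ = fit.compare_f_test(base_fit)
    if p_change < .05 and (best is None or fit.rsquared > best[1].rsquared):
        best = (var, fit)

if best is None:
    print("No candidate met the entry criterion.")
else:
    var, fit = best
    print(f"Entered {var}: delta R2 = {fit.rsquared - base_fit.rsquared:.3f}")
```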


DISCUSSION

Previous SIP research in children with ODD/CD and children with ADHD highlighted SIP as a potential area for research to identify specific processes that contribute to social and emotional impairments in children with ADHD. However, few studies on SIP in children with ADHD, inconsistent findings across the available studies, and only two studies linking SIP to social outcomes indicated that additional research is needed to determine whether SIP is a viable target for social and emotional intervention among children with ADHD. In the current study, limitations of previous studies were addressed and multiple aspects of SIP and their connection to social outcomes were examined among children with or at-risk for ADHD and children without ADHD.

Contrary to hypotheses, the ADHD and control groups did not perform differently on simple or complex dynamic SIP tasks requiring emotion recognition and cue interpretation (see Table 3), nor did the groups differ in latency to recognize emotion (see Table 4). Further, SIP performance was only related to two of eight parent- and teacher-rated social outcomes. These results call into question whether SIP processes, such as emotion recognition and interpretation, are useful intervention targets for addressing social and emotional impairments in children with ADHD. The results and contextual factors, including measurement and sample characteristics, are discussed in relation to existing literature and future directions.

Cue Encoding

Cue encoding is the first step of SIP and includes emotion recognition and identification. Previous research documented emotion recognition deficits in children with ADHD (e.g., Boakes, Chapman, Houghton, & West, 2008; Corbett & Glidden, 2000; Serrano et al., 2015); however, across these studies, detected group differences varied depending on the specific emotion and task design, and effect sizes ranged from less than small to large magnitude. In the current study, no differences were found between the ADHD and control groups for emotion recognition (i.e., accuracy scores) in the face morphing or video tasks. One exception is Faces Accuracy - First Press for Scared, which demonstrated a significant MANCOVA univariate effect alongside a marginally significant multivariate effect. However, if participants did not have a first press response because they were unsure of the emotion and did not press during the morph, a value of incorrect was entered for their first press accuracy. The control group exhibited fewer first presses for Scared than the ADHD group (d = 0.32); as a result, the significant univariate effect may reflect the control group being less willing to guess during the morph and having a value of incorrect entered more often than the ADHD group. The lack of consistent group differences may be because children at-risk for ADHD are capable of detecting emotions accurately in a contrived environment. Future research should test emotion recognition during live interactions with other children or adults.

Alternatively, the findings may be a function of methodology and sample characteristics. For example, the Video task may have been too difficult. Video Emotion Accuracy was 34-35% for the 10 reliably-coded events and less than 30% for all 23 coded events for both groups. However, when emotion accuracy was examined only for the emotions that the child recognized (i.e., Video Emotion Accuracy-Individualized), emotion recognition accuracy increased to 69% for the ADHD group and 75% for the control group (d = -0.39, p > .05), again revealing only minimal group differences.

Together, this suggests that, despite the task difficulty, if emotion recognition deficits are present in children with ADHD, they are likely not consistent and large enough to serve as a focus area for interventions targeting social and emotional impairments.

Based on the SIP literature, it was expected that children at-risk for ADHD would identify fewer emotions in the Video task than children without ADHD (i.e., Number of Events Recognized); however, the number of emotions recognized did not differ between groups. In previous SIP studies, cue encoding was measured via responses to questions such as "How do you know [he did it on purpose]?" (Matthys et al., 1999) or by asking children what happened in a story (Andrade et al., 2012). Cue encoding, as measured via the number of emotions identified in the current study, incorporates emotion into the SIP process, but it may be too different a construct to draw a parallel to other SIP study measures of cue encoding. Thus, if the construct of emotion is to be included in SIP research, additional consideration is needed to determine how to best incorporate it with other measures of SIP.

Cue Interpretation

The second step of SIP is cue interpretation. In the present study, cue interpretation was measured via coding of children's reasoning for a television character's displayed emotion. This is in contrast to interpretation of a specific trigger in previous SIP research (e.g., a peer spilled milk on the child). In the current study, the child's reasoning was coded on three dimensions (inference, accuracy, and quality), and the specific events coded were based on pilot testing with college students. Notably, inter-rater reliability was low for the coded events overall. This led to a reduction of events included in analyses from the initial 23 events to a subset of 10 events that achieved >70% inter-rater reliability, resulting in 81% inter-rater reliability among these 10 events.
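The event-level reliability screen could be illustrated with the following sketch (simulated codes; simple percent agreement between two coders is assumed as the reliability index, which may differ from the index actually used):

```python
# Minimal sketch of a per-event reliability screen: percent agreement between two coders
# is computed for each event, and events with at least 70% agreement are retained.
# Codes are simulated; the agreement metric is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n_children, n_events = 72, 23
coder_a = rng.integers(0, 3, size=(n_children, n_events))  # e.g., 0-2 quality codes
# Coder B agrees with coder A roughly 80% of the time in this simulation.
coder_b = np.where(rng.random((n_children, n_events)) < 0.8, coder_a,
                   rng.integers(0, 3, size=(n_children, n_events)))

agreement = (coder_a == coder_b).mean(axis=0)  # percent agreement per event
kept = np.where(agreement >= 0.70)[0]
print(f"{kept.size} events retained; mean agreement among retained events: "
      f"{agreement[kept].mean():.0%}")
```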

Though the television episode has been used in previous research with children with ADHD, it had not been used in the same manner as in the current study. Thus, development of a coding system across the three dimensions was required and proved to be difficult given the variability of emotions and events in the video.

For example, some events demonstrated variability in the levels of inference or accuracy possible, whereas others were more straightforward and demonstrated less variability in these dimensions. See Appendix B11 for a summary and descriptives of the 10 included events. The possible ranges for the Video Interpretation Accuracy and Inference scores for the 10 events were 0-20, and the average scores were 1.90-2.38 for Inference and 6.38-6.69 for Accuracy across both groups. Additionally, the possible range for the Video Interpretation Quality score was 0-10, and the average scores were 3.79-3.92 across groups. Thus, despite having data from pilot testing that informed development of the coding scale, the coding system may have been inadequate or, as seen with emotion recognition, the task may have been too difficult. Together, the novel use of the television episode, the development of a coding system, and the difficulty of the task may have contributed to the lack of cue interpretation differences observed.

In other studies, cue interpretation has been measured by a forced choice as to whether an action was done on purpose and an explanation of why an event occurred. Thus, though explanation of an event (following a forced-choice question) in previous studies revealed SIP deficits among children with ADHD, explanation of an emotion (following an open-ended question) did not reveal SIP deficits among children at-risk for ADHD in the current study. As discussed above with cue identification, if emotion constructs are to be included in SIP research, additional research is needed to determine which methods of measurement are most meaningful to include.

Latency to Recognize Emotion

The design of previous SIP studies did not allow for measurement of real-time processing of cues; thus, it was unclear whether children with ADHD were delayed in cue identification. Latency to initial emotion recognition in a face morphing task did not significantly differ between groups, which aligns with results from the only other study, to the author's knowledge, to examine performance on a face morphing task among individuals with ADHD (Schwenck et al., 2013). Small effect sizes were observed for all emotions except disgusted, with the ADHD group demonstrating faster latency to recognition than the control group. For the Video task, the ADHD group demonstrated a longer time to recognize Surprised events than the control group (d = 0.61), and no difference was demonstrated for Mad and Happy events. Overall, the results suggest that neither impulsive responding errors nor longer latency to recognize emotion is present among children at-risk for ADHD. Further, the shorter latency to respond was not associated with decreased accuracy for the ADHD group, save for one exception. Inclusion of live social interactions in future research may reveal whether these results (i.e., lack of impulsive responding and shorter latency not associated with decreased accuracy) translate to more complex social demands.

Additional Factors that Affect SIP

In the existing ADHD SIP literature, the extent to which SIP was affected by emotion knowledge deficits, externalizing behavior, age, gender, and verbal ability was unclear. Regression analyses indicated that initial emotion recognition accuracy for the Faces task was not significantly associated with Video task performance. Surprisingly, externalizing symptoms were not associated with Video task performance either. However, this aligns with Serrano and colleagues' (2015) findings that parent and teacher ratings of inattention symptoms were not associated with emotion recognition accuracy. Thus, in addition to moving toward stimuli that more closely mimic real-life interaction, future social and emotional research in ADHD populations may need to investigate other characteristics beyond ADHD symptoms that contribute to social and emotional impairments in children with ADHD.

Age was positively associated with Video Interpretation Composite and Video Emotion Accuracy, indicating that older children demonstrated better performance on SIP tasks than younger children. Lastly, gender was not associated with Video task performance. In previous SIP research, age and gender were confounded, and the current study's results suggest that performance on SIP tasks may be more strongly associated with age than with gender. Additional research with samples comprising a wider age range and a better gender balance is needed to support these results.

Despite the theoretical connection between SIP and social functioning, the limited inclusion of social outcomes in SIP research raised the question of whether performance on SIP tasks is associated with social functioning. Demonstration of this connection would help to identify SIP as a potential target for social and emotional intervention. Eight measures of social problems and skill, as rated by parents and teachers, were collected, and performance on SIP tasks was related to only two teacher measures of social skill and no parent-rated social measures. The SIP Composite score for the 10 included events and Video Emotion Accuracy for the events recognized were predictive of teacher SDQ prosocial and teacher Dishion scores. Thus, in the current sample, the SIP steps of cue interpretation and emotion recognition accuracy were associated with measures of social skill rather than measures of social impairment. The lack of connection between performance on SIP tasks and social impairment measures may be due in part to the difficulty of the task, as previously discussed. Alternatively, it may be that, even though the complex, dynamic nature of the Video task moved closer to real-life interaction compared to the static images of previous SIP research, the task still does not approximate the social and emotional demands of real-life interaction wherein children with ADHD have difficulty.

Limitations

There are limitations to the current study. First, due to recruitment difficulties, the sample size is small and does not have the distribution of children to young adolescents, or of males to females, needed to fully examine the effects of these variables. Reporting and interpreting effect sizes provides information on the nature of the relationships examined; however, a larger sample with more young adolescents and females would allow for greater confidence in the findings. Second, the Video task was a novel integration of emotion and SIP into a task that met the need for longer and more complex SIP stimuli; however, it may have been too difficult, and the coding system may not have equally represented all events.

Additionally, in determining ADHD group classification, only parent ratings of ADHD symptoms were used; impairment ratings and second-informant ratings were not. Thus, results may vary when a more comprehensive assessment is conducted to determine diagnoses. However, correlations between ADHD symptoms and social and emotional outcomes suggest that greater symptom severity was not associated with worse performance.

Summary

SIP theory has helped to identify differences and biases in processing that are associated with behavioral and social outcomes in children with ODD/CD. However, in this study, there did not appear to be SIP deficits in cue identification, cue interpretation, or latency to recognize emotion among children at-risk for ADHD across simple and complex dynamic stimuli. Further, performance on SIP tasks was related to measures of social skill, but not measures of social impairment. Together, these results suggest that children at-risk for ADHD do not exhibit atypical processing of social and emotional information and overall do not respond in an impulsive or delayed manner when completing social and emotional tasks. Additional research is needed to confirm these findings before excluding SIP as a potential target for social and emotional intervention among children with ADHD, and to identify alternative processes that are related to social outcomes among children with ADHD and are potentially malleable; such research may help to identify target areas for social and emotional interventions for children with ADHD.


REFERENCES

Akhtar, N., & Bradley, E. J. (1991). Social information processing deficits of aggressive children: Present findings and implications for social skills training. Clinical Psychology Review, 11(5), 621-644. https://doi.org/10.1016/0272-7358(91)90007-H
Andrade, B. F., Waschbusch, D. A., Doucet, A., King, S., MacKinnon, M., McGrath, P. J., ... & Corkum, P. (2012). Social information processing of positive and negative hypothetical events in children with ADHD and conduct problems and controls. Journal of Attention Disorders, 16(6), 491-504. https://doi.org/10.1177/1087054711401346
Bae, Y. (2012). Test review: Children's Depression Inventory 2 (CDI 2). Journal of Psychoeducational Assessment, 30(3), 304-308. https://doi.org/10.1177/0734282911426407
Bailey, U. L., Lorch, E. P., Milich, R., & Charnigo, R. (2009). Developmental changes in attention and comprehension among children with attention deficit hyperactivity disorder. Child Development, 80(6), 1842-1855. https://doi.org/10.1111/j.1467-8624.2009.01371.x
Bauminger, N., Edelsztein, H. S., & Morash, J. (2005). Social information processing and emotional understanding in children with LD. Journal of Learning Disabilities, 38(1), 45-61. https://doi.org/10.1177/00222194050380010401
Beauchamp, M. H., & Anderson, V. (2010). SOCIAL: An integrative framework for the development of social skills. Psychological Bulletin, 136(1), 39. https://doi.org/10.1037/a0017768
Beck, A. T., & Steer, R. A. (1990). Manual for the Beck Anxiety Inventory. San Antonio, TX: Psychological Corporation.
Berthiaume, K. S., Lorch, E. P., & Milich, R. (2010). Getting clued in: Inferential processing and comprehension monitoring in boys with ADHD. Journal of Attention Disorders, 14(1), 31-42. https://doi.org/10.1177/1087054709347197
Boakes, J., Chapman, E., Houghton, S., & West, J. (2008). Facial affect interpretation in boys with attention deficit/hyperactivity disorder. Child Neuropsychology, 14(1), 82-96. https://doi.org/10.1080/09297040701503327
Bose-Deakins, J. E., & Floyd, R. G. (2004). A review of the Beck Youth Inventories of Emotional and Social Impairment. Journal of School Psychology, 42, 333-340. https://doi.org/10.1016/j.jsp.2004.06.002
Bunford, N., Evans, S. W., & Wymbs, F. (2015). ADHD and emotion dysregulation among children and adolescents. Clinical Child and Family Psychology Review, 18(3), 185-217. https://doi.org/10.1007/s10567-015-0187-5
Collin, L., Bindra, J., Raju, M., Gillberg, C., & Minnis, H. (2013). Facial emotion recognition in child psychiatry: A systematic review. Research in Developmental Disabilities, 34(5), 1505-1520. https://doi.org/10.1016/j.ridd.2013.01.008
Corbett, B., & Glidden, H. (2000). Processing affective stimuli in children with attention-deficit hyperactivity disorder. Child Neuropsychology, 6(2), 144-155. https://doi.org/10.1076/chin.6.2.144.7056
Crick, N. R., & Dodge, K. A. (1994). A review and reformulation of social information-processing mechanisms in children's social adjustment. Psychological Bulletin, 115(1), 74. https://doi.org/10.1037/0033-2909.115.1.74
de Boo, G. M., & Prins, P. J. M. (2007). Social incompetence in children with ADHD: Possible moderators and mediators in social-skills training. Clinical Psychology Review, 27(1), 78-97. https://doi.org/10.1016/j.cpr.2006.03.006
Denham, S. A. (1998). Emotional development in young children. New York: Guilford Press.
Denham, S. A., Bassett, H. H., Way, E., Kalb, S., Warren-Khot, H., & Zinsser, K. (2014). "How would you feel? What would you do?" Development and underpinnings of preschoolers' social information processing. Journal of Research in Childhood Education, 28(2), 182-202. https://doi.org/10.1080/02568543.2014.883558
Dishion, T. (1990). The peer context of troublesome child and adolescent behavior. In P. E. Leone (Ed.), Understanding troubled and troubling youth: Multiple perspectives (pp. 128-153). Thousand Oaks, CA: Sage.
Dodge, K. A., Laird, R., Lochman, J. E., & Zelli, A. (2002). Multidimensional latent-construct analysis of children's social information processing patterns: Correlations with aggressive behavior problems. Psychological Assessment, 14(1), 60-73. https://doi.org/10.1037/1040-3590.14.1.60
DuPaul, G. J., Reid, R., Anastopoulos, A. D., Lambert, M. C., Watkins, M. W., & Power, T. J. (2015). Parent and teacher ratings of attention-deficit/hyperactivity disorder symptoms: Factor structure and normative data. Psychological Assessment, 28(2), 214-225.
Ekman, P., & Friesen, W. V. (1975). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
Evans, S. W., Owens, J. S., & Bunford, N. (2014). Evidence-based psychosocial treatments for children and adolescents with attention-deficit/hyperactivity disorder. Journal of Clinical Child & Adolescent Psychology, 43(4), 527-551. https://doi.org/10.1080/15374416.2013.850700
Fabiano, G. A., Pelham, W. E., Waschbusch, D. A., Gnagy, E. M., Lahey, B. B., Chronis, A. M., ... Burrows-MacLean, L. (2006). A practical measure of impairment: Psychometric properties of the impairment rating scale in samples of children with attention deficit hyperactivity disorder and two school-based samples. Journal of Clinical Child and Adolescent Psychology, 35(3), 369-385. https://doi.org/10.1207/s15374424jccp3503_3
Flory, K., Milich, R., Lorch, E. P., Hayden, A. N., Strange, C., & Welsh, R. (2006). Online story comprehension among children with ADHD: Which core deficits are involved? Journal of Abnormal Child Psychology, 34(6), 850-862. https://doi.org/10.1007/s10802-006-9070-7
Goodman, R. (2001). Psychometric properties of the Strengths and Difficulties Questionnaire. Journal of the American Academy of Child & Adolescent Psychiatry, 40(11), 1337-1345. https://doi.org/10.1097/00004583-200111000-00015
Gresham, F. M., & Elliott, S. N. (2008). Social Skills Improvement System: Rating scales. Bloomington, MN: Pearson Assessments.
Gresham, F. M., Elliott, S. N., Vance, M. J., & Cook, C. R. (2011). Comparability of the Social Skills Rating System to the Social Skills Improvement System: Content and psychometric comparisons across elementary and secondary age levels. School Psychology Quarterly, 26(1), 27. https://doi.org/10.1037/a0022662
Hoza, B. (2007). Peer functioning in children with ADHD. Journal of Pediatric Psychology, 32(6), 655-663. https://doi.org/10.1093/jpepsy/jsm024
Hoza, B., Gerdes, A. C., Mrug, S., Hinshaw, S. P., Bukowski, W. M., Gold, J. A., ... & Wigal, T. (2005). Peer-assessed outcomes in the multimodal treatment study of children with attention deficit hyperactivity disorder. Journal of Clinical Child and Adolescent Psychology, 34(1), 74-86. https://doi.org/10.1207/s15374424jccp3401_7
King, S., Waschbusch, D. A., Pelham Jr, W. E., Frankland, B. W., Andrade, B. F., Jacques, S., & Corkum, P. V. (2009). Social information processing in elementary-school aged children with ADHD: Medication effects and comparisons with typical children. Journal of Abnormal Child Psychology, 37(4), 579-589. https://doi.org/10.1007/s10802-008-9294-9
Kovacs, M. (2010). The Children's Depression Inventory 2nd ed.: CDI 2 manual. North Tonawanda, NJ: Multi-Health Systems.
Lansford, J. E., Malone, P. S., Dodge, K. A., Crozier, J. C., Pettit, G. S., & Bates, J. E. (2006). A 12-year prospective study of patterns of social information processing problems and externalizing behaviors. Journal of Abnormal Child Psychology, 34(5), 709-718. https://doi.org/10.1007/s10802-006-9057-4
Lemerise, E. A., & Arsenio, W. F. (2000). An integrated model of emotion processes and cognition in social information processing. Child Development, 71(1), 107-118. https://doi.org/10.1111/1467-8624.00124
Lorch, E., Berthiaume, K., Milich, R., & Van Den Broek, P. (2008). Story comprehension impairments in children with attention-deficit/hyperactivity disorder. In K. Cain & J. Oakhill (Eds.), Children's comprehension problems in oral and written language: A cognitive perspective. New York, NY: Guilford Press.
Marotta, A., Casagrande, M., Rosa, C., Maccari, L., Berloco, B., & Pasini, A. (2014). Impaired reflexive orienting to social cues in attention deficit hyperactivity disorder. European Child & Adolescent Psychiatry, 23(8), 649-657. https://doi.org/10.1007/s00787-013-0505-8
Matthys, W., Cuperus, J. M., & Van Engeland, H. (1999). Deficient social problem-solving in boys with ODD/CD, with ADHD, and with both disorders. Journal of the American Academy of Child & Adolescent Psychiatry, 38(3), 311-321. https://doi.org/10.1097/00004583-199903000-00019
McKown, C., Gumbiner, L. M., Russo, N. M., & Lipton, M. (2009). Social-emotional learning skill, self-regulation, and social competence in typically developing and clinic-referred children. Journal of Clinical Child & Adolescent Psychology, 38(6), 858-871. https://doi.org/10.1080/15374410903258934
Mikami, A. Y., Lee, S. S., Hinshaw, S. P., & Mullin, B. C. (2008). Relationships between social information processing and aggression among adolescent girls with and without ADHD. Journal of Youth and Adolescence, 37(7), 761-771. https://doi.org/10.1007/s10964-007-9237-8
Milch-Reich, S., Campbell, S. B., Pelham Jr, W. E., Connelly, L. M., & Geva, D. (1999). Developmental and individual differences in children's on-line representations of dynamic social events. Child Development, 70(2), 413-431. https://doi.org/10.1111/1467-8624.00030
Mrug, S., Hoza, B., Pelham, W. E., Gnagy, E. M., & Greiner, A. R. (2007). Behavior and peer status in children with ADHD. Journal of Attention Disorders, 10(4), 359-371. https://doi.org/10.1177/1087054706288117
Murphy, D. A., Pelham, W. E., & Lang, A. R. (1992). Aggression in boys with attention deficit-hyperactivity disorder: Methylphenidate effects on naturalistically observed aggression, response to provocation, and social information processing. Journal of Abnormal Child Psychology, 20(5), 451-466. https://doi.org/10.1007/BF00916809
Nilsen, E. S., Mangal, L., & MacDonald, K. (2013). Referential communication in children with ADHD: Challenges in the role of a listener. Journal of Speech, Language, and Hearing Research, 56(2), 590-603. https://doi.org/10.1044/1092-4388(2012/12-0013)
Orobio de Castro, B. O., Merk, W., Koops, W., Veerman, J. W., & Bosch, J. D. (2005). Emotions in social information processing and their relations with reactive and proactive aggression in referred aggressive boys. Journal of Clinical Child and Adolescent Psychology, 34(1), 105-116. https://doi.org/10.1207/s15374424jccp3401_10
Orobio de Castro, B. O., Veerman, J. W., Koops, W., Bosch, J. D., & Monshouwer, H. J. (2002). Hostile attribution of intent and aggressive behavior: A meta-analysis. Child Development, 73(3), 916-934. https://doi.org/10.1111/1467-8624.00447
Owens, J. S., & Evans, S. W. (2010). Project to Learn About Youth - Mental Health. Grant funded by the Centers for Disease Control via the Disability Research and Dissemination Center.
Pelham, W., Fabiano, G., & Massetti, G. (2005). Evidence-based assessment of attention deficit hyperactivity disorder in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 449-476. https://doi.org/10.1207/s15374424jccp3403_5
Pelham, W. E., Gnagy, E. M., Greenslade, K. E., & Milich, R. (1992). Teacher ratings of DSM-III-R symptoms for the disruptive behavior disorders. Journal of the American Academy of Child & Adolescent Psychiatry, 31(2), 210-218. https://doi.org/10.1097/00004583-199203000-00006
Reynolds, W. M. (2002). RADS-2: Reynolds Adolescent Depression Scale: Professional manual. Lutz, FL: Psychological Assessment Resources.
Reynolds, W. M., & Mazza, J. J. (1998). Reliability and validity of the Reynolds Adolescent Depression Scale with young adolescents. Journal of School Psychology, 36(3), 295-312. https://doi.org/10.1016/S0022-4405(98)00010-7
Reynolds, C. R., & Richmond, B. O. (2008). The Revised Children's Manifest Anxiety Scale, Second Edition (RCMAS-2). Los Angeles, CA: Western Psychological Services.
Saarni, C. (1999). The development of emotional competence. New York, NY: Guilford Press.
Schwenck, C., Schneider, T., Schreckenbach, J., Zenglein, Y., Gensthaler, A., Taurines, R., ... & Romanos, M. (2013). Emotion recognition in children and adolescents with attention-deficit/hyperactivity disorder (ADHD). ADHD Attention Deficit and Hyperactivity Disorders, 5(3), 295-302. https://doi.org/10.1007/s12402-013-0104-z
Seligman, L. D., Ollendick, T. H., Langley, A. K., & Baldacci, H. B. (2004). The utility of measures of child and adolescent anxiety: A meta-analytic review of the Revised Children's Anxiety Scale, the State-Trait Anxiety Inventory for Children, and the Child Behavior Checklist. Journal of Clinical Child and Adolescent Psychology, 33(3), 557-565. https://doi.org/10.1207/s15374424jccp3303_13
Semrud-Clikeman, M., & Schafer, V. (2000). Social and emotional competence in children with ADHD and/or learning disabilities. Journal of Psychotherapy in Independent Practice, 1(4), 3-19. https://doi.org/10.1300/J288v01n04_02
Serrano, V. J., Owens, J. S., & Hallowell, B. (2015). Where children with ADHD direct visual attention during emotion knowledge tasks: Relationships to accuracy, response time, and ADHD symptoms. Journal of Attention Disorders. https://doi.org/10.1177/1087054715593632
Shaver, P., Schwartz, J., Kirson, D., & O'Connor, C. (1987). Emotion knowledge: Further exploration of a prototype approach. Journal of Personality and Social Psychology, 52(6), 1061-1086. https://doi.org/10.1037/0022-3514.52.6.1061
Sibley, M. H., Evans, S. W., & Serpell, Z. N. (2010). Social cognition and interpersonal impairment in young adolescents with ADHD. Journal of Psychopathology and Behavioral Assessment, 32(2), 193-202. https://doi.org/10.1007/s10862-009-9152-2
Sjöwall, D., Roth, L., Lindqvist, S., & Thorell, L. B. (2013). Multiple deficits in ADHD: Executive dysfunction, delay aversion, reaction time variability, and emotional deficits. Journal of Child Psychology and Psychiatry, 54(6), 619-627. https://doi.org/10.1111/jcpp.12006
Uekermann, J., Kraemer, M., Abdel-Hamid, M., Schimmelmann, B. G., Hebebrand, J., Daum, I., ... & Kis, B. (2010). Social cognition in attention-deficit hyperactivity disorder (ADHD). Neuroscience & Biobehavioral Reviews, 34(5), 734-743. https://doi.org/10.1016/j.neubiorev.2009.10.009
Wechsler, D. (2011). Wechsler Abbreviated Scale of Intelligence - Second Edition (WASI-II). San Antonio, TX: Pearson.
Zeeuw, P., Aarnoudse-Moens, C., Bijlhout, J., König, C., Uiterweer, A. P., Papanikolau, A., ... & Oosterlaan, J. (2008). Inhibitory performance, response speed, intraindividual variability, and response accuracy in ADHD. Journal of the American Academy of Child & Adolescent Psychiatry, 47(7), 808-816. https://doi.org/10.1097/CHI.0b013e318172eee9

APPENDIX A: SUGGESTED COMMITTEE REVISIONS TO INTRODUCTION NOT INCLUDED IN MANUSCRIPT

Not applicable

APPENDIX B: STUDY NON-COPYRIGHT MEASURES AND CONSENTS

ADHD-5 Rating Scale 76
Disruptive Behavior Disorders (DBD) Rating Scale 78
Impairment Rating Scale 79
Strengths and Difficulties Questionnaire 80
Dishion Social Preference Scale 82
Demographic Questionnaire 83
Parent Consent – Athens 88
Child Assent – Athens 92
Parent Consent – PLAY participant 93
Child Assent – PLAY participant 97
Pilot Testing 98
Defining Growing Pains Events 99
Interpretation Response Coding Anchors 100
Coding Data for the Subset of 10 Events 101

APPENDIX B1: ADHD-5 RATING SCALE

Please circle the number that best describes your child’s behavior over the past 6 months.

How often does your child display this behavior?    Never or Rarely    Sometimes    Often    Very Often

Fails to give close attention to details or makes careless mistakes in schoolwork or during other activities    0 1 2 3

Has difficulty sustaining attention in tasks or play activities    0 1 2 3

Does not seem to listen when spoken to directly    0 1 2 3

Does not follow through on instructions and fails to finish schoolwork or chores    0 1 2 3

Has difficulty organizing tasks and activities    0 1 2 3

Avoids, dislikes, or is reluctant to engage in tasks that require sustained mental effort (e.g., schoolwork or homework)    0 1 2 3

Loses things necessary for tasks or activities (e.g., school materials, pencils, books, eyeglasses)    0 1 2 3

Easily distracted    0 1 2 3

Forgetful in daily activities (e.g., doing chores)    0 1 2 3

How often does your child display this behavior?    Never or Rarely    Sometimes    Often    Very Often

Fidgets with or taps hands or feet or squirms in seat    0 1 2 3

Leaves seat in situations when remaining seated is expected    0 1 2 3

Runs about or climbs in situations where it is inappropriate    0 1 2 3

Unable to play or engage in leisure activities quietly    0 1 2 3

"On the go," acts as if "driven by a motor"    0 1 2 3

Talks excessively    0 1 2 3

Blurts out an answer before a question has been completed    0 1 2 3

Has difficulty waiting his or her turn (e.g., while waiting in line)    0 1 2 3

Interrupts or intrudes on others    0 1 2 3

Reprinted with permission from the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, (Copyright 2013). American Psychiatric Association.


APPENDIX B2: DISRUPTIVE BEHAVIOR DISORDERS (DBD) RATING SCALE

Check the column that best describes your/this child. Please write DK next to any items for which you don't know the answer.

Response options: Not at All, Just a Little, Pretty Much, Very Much

1. has run away from home overnight at least twice while living in parental or parental surrogate home (or once without returning for a lengthy period)
2. often argues with adults
3. often lies to obtain goods or favors or to avoid obligations (i.e., "cons" others)
4. often initiates physical fights with other members of his or her household
5. has been physically cruel to people
6. has stolen items of nontrivial value without confronting a victim (e.g., shoplifting, but without breaking and entering; forgery)
7. often truant from school, beginning before age 13 years
8. is often spiteful or vindictive
9. often swears or uses obscene language
10. often blames others for his or her mistakes or misbehavior
11. has deliberately destroyed others' property (other than by fire setting)
12. often actively defies or refuses to comply with adults' requests or rules
13. often initiates physical fights with others who do not live in his or her household (e.g., peers at school or in the neighborhood)
14. is often angry and resentful
15. is often touchy or easily annoyed by others
16. often loses temper
17. has forced someone into sexual activity
18. often bullies, threatens, or intimidates others
19. has been physically cruel to animals
20. often stays out at night despite parental prohibitions, beginning before age 13 years
21. often deliberately annoys people
22. has stolen while confronting a victim (e.g., mugging, purse snatching, extortion, armed robbery)
23. has deliberately engaged in fire setting with the intention of causing serious damage
24. has broken into someone else's house, building, or car
25. has used a weapon that can cause serious physical harm to others (e.g., a bat, brick, broken bottle, knife, gun)


APPENDIX B3: IMPAIRMENT RATING SCALE

Please mark an “X” on the lines at the point that you believe reflects the severity of your child’s problems in this area and whether he or she needs treatment or special services for the problems.

Please consider behavior during the last month when making your ratings. Please do not leave any of the items blank.

(1) How your child’s problems affect his or her relationship with other children


APPENDIX B4: STRENGTHS AND DIFFICULTIES QUESTIONNAIRE

Parent and Teacher Version for Children Ages 4-10


Parent and Teacher Version for Children Ages 11-17

APPENDIX B5: DISHION SOCIAL PREFERENCE SCALE

Response options: 1 = Very Few (< 25%), 2 = Some (25%-49%), 3 = About Half (50%), 4 = Many (51%-75%), 5 = Almost All (> 75%)

1. What proportion of other students in this class would they say like this student    1 2 3 4 5
2. What proportion of other students in this class would they say dislike this student    1 2 3 4 5
3. What proportion of other students in this class would they say ignore this student    1 2 3 4 5


APPENDIX B6: DEMOGRAPHIC QUESTIONNAIRE


APPENDIX B7: CONSENT AND ASSENT FORMS

Ohio University Parental Consent Form

Title of Research: Social and Emotional Information Processing in Children

Researchers: Verenea Serrano, M.S. & Julie Sarno Owens, Ph.D.

You are being asked permission for your child to participate in research. For you to be able to decide whether you want your child to participate in this project, you should understand what the project is about, as well as the possible risks and benefits in order to make an informed decision. This process is known as informed consent. This form describes the purpose, procedures, possible benefits, and risks. It also explains how your child’s personal information will be used and protected. Once you have read this form and your questions about the study are answered, you will be asked to sign it. This will allow your child’s participation in this study. You should receive a copy of this document to take with you.

Explanation of Study

The main purpose of this study is to measure children’s performance on tasks of emotion recognition and social information processing (SIP) to better understand how children process social and emotional information.

If you agree to allow your child to participate:

• Your child will:
  o Watch and respond to two types of videos with emotional and social content that are appropriate for children. The first type of video will show a neutral facial expression changing to a certain emotion expression. The second type of video is an episode of the television show Growing Pains, a family-friendly sitcom.
    § Your child's responses will be audio recorded.
  o Watch and respond to a third type of video that contains images of animals
  o Wear a heart rate monitor while watching the videos
  o Complete a measure of cognitive ability if such a test has not been completed in the context of a different Center for Intervention Research in Schools (CIRS) project within the past year
    § If a measure of cognitive ability has been completed during a CIRS project within the past year, the cognitive ability score will be shared with research staff for the present project.
  o Complete self-report measures of depression and anxiety symptoms


• You will be asked to complete the following measures:
  o A clinical diagnostic interview
  o Rating scales related to your child's behavior, functioning, and social skills
  o A demographics form

• Your child’s teacher will be asked to: o Complete a brief rating scale about your child’s behavior and social functioning

Your child should not participate in this study if your child is younger than 8 years old or older than 14 years old, or has intellectual disability, autism spectrum disorder, psychosis, an anxiety disorder, or a depressive disorder.

You and your child’s participation in the study will last approximately 1.5 hours.

Risks and Discomforts

There are no anticipated risks to you or your child. Some children may view the task as challenging or tiring. Some individuals may find the questions asked uncomfortable. Your child may experience discomfort while wearing the heart rate monitor. Research staff will be available if any concerns or problems arise.

Benefits

If you agree to participate in this research study, you and your child may receive direct benefit. Participants will receive a free evaluation summary report upon request. Furthermore, after participating in practice sessions for this project, children reported that the activities were fun. The information learned from this study may benefit other children. The results may help guide development of social and emotional interventions for children with inattentive and hyperactive/impulsive symptoms.

Compensation

You will receive $30 for completion of the study procedures. Partial completion of the study procedures will be compensated with $10.

Confidentiality and Records

The information that we collect from you and your child will be kept confidential. Children's names and parents' names will not appear on these materials; instead, each child will be assigned a code number that will identify the materials. You agree that scientific data not identifiable with the children involved in the project may be presented at meetings and published so that the information from the project can be useful to others.

If for some reason you choose to discontinue with the research project, de-identified assessment data will be maintained in the project database, unless you explicitly request otherwise. The master list (the list of participant codes and identifying information) of all program participants will be stored in a password-protected file on a secured drive at Ohio University. The master list will be destroyed in August of 2017 (approximately one year after the completion of the project).

While not anticipated, project investigators are required to break confidentiality if one of the following situations occurs:
• If you, as legal guardian, give written permission to release the information
• If you or your child reveals information that, in program staff's judgment, indicates that your child intends to harm self or someone else (See A below)
• If you or your child reveals information that indicates the existence of past or present child abuse or neglect, as required by The Ohio Child Abuse and Neglect Law (See B below)
• If an appropriate court order is received by a member of our program staff (See C below)

Additionally, while every effort will be made to keep your study-related information confidential, there may be circumstances where this information must be shared with:
* A: A licensed clinician who is part of the research staff may be consulted for further action if you reveal harm to yourself or others, or if you have revealed information that indicates past or present child abuse.
* B: The Child Protective Services in the county in which you reside.
* C: The court from which a court order was received.
* D: In a medical emergency, medical personnel who would need information to care for your child, including emergency vehicle service personnel and hospital emergency department staff.
* Federal agencies, for example the Office of Human Research Protections, whose responsibility is to protect human subjects in research;
* Representatives of Ohio University (OU), including the Institutional Review Board, a committee that oversees the research at OU.
Finally, there may be circumstances that are out of our control in which your confidentiality is compromised.

Contact Information

If you have any questions regarding this study, please contact the investigator [Verenea Serrano, [email protected], 740-593-3236] or the advisor [Dr. Julie Owens, [email protected], and 740-593-3236].

If you have any questions regarding your child’s rights as a research participant, please contact Dr. Chris Hayhow, Director of Research Compliance, Ohio University, (740)593- 0664 or [email protected]. 91

By signing below, you are agreeing that:
• you have read this consent form (or it has been read to you) and have been given the opportunity to ask questions and have them answered;
• you have been informed of potential risks to your child and they have been explained to your satisfaction;
• you understand Ohio University has no funds set aside for any injuries your child might receive as a result of participating in this study;
• you are 18 years of age or older;
• your child's participation in this research is completely voluntary;
• your child may leave the study at any time; if your child decides to stop participating in the study, there will be no penalty to your child and he/she will not lose any benefits to which he/she is otherwise entitled;
• research staff can contact your child's teacher to obtain teacher rating scales of your child's behavior and social functioning. I understand that the consent to exchange information form will be emailed or faxed to my child's teacher.
a. Current teacher's name and school: ______

___Check here if (a) you are currently or have recently participated in another project within the Center for Intervention Research in Schools, and (b) you are willing to allow your de-identified data from one project (e.g., scores on a given rating scale) to be used in the data analyses associated with another project within the Center.

Parent Signature Date

Printed Name

Child’s Name

Version Date: 09/04/2015

Ohio University Child Assent Form

Title of Research: Social and Emotional Information Processing in Children

Researchers: Verenea Serrano, M.S. & Julie Sarno Owens, Ph.D.

We are asking you to help with a project. To help, you will watch videos of people’s facial expressions, animals, and a television show. You will be asked questions about the videos. You will also be asked to complete forms that ask about your thoughts and feelings.

You will be asked to wear a band across your chest that measures your heart beat.

This study will not hurt you in any way and your answers will not be shared with your friends or your teachers. Some kids may find the activity difficult or feel uncomfortable about answering some of the questions. Other kids find the activities to be fun. If you have questions or you do not feel comfortable answering any questions, please let me know and we can skip those questions.

You will not put your name on any papers. Instead, we will use a secret code so that all of your information is kept private. All of your answers will be kept in a locked file cabinet in a locked room at Ohio University.

You do not have to be in this study if you do not want to. If you decide to participate in the study, you can also stop at any time. Your parents have given their permission for you to take part in this study.

If you have any questions at any time, please ask one of the researchers.

If you print your name on this form, it means that you have decided to participate and have read everything that is on the form. You and your parents will be given a copy of this form to keep.

(Participant name printed) (Researcher witness)

Version Date: 09/04/2015

Ohio University Parental Consent Form PLAY Participant

Title of Research: Social and Emotional Information Processing in Children

Researchers: Verenea Serrano, M.S. & Julie Sarno Owens, Ph.D.

You are being asked permission for your child to participate in research. For you to be able to decide whether you want your child to participate in this project, you should understand what the project is about, as well as the possible risks and benefits in order to make an informed decision. This process is known as informed consent. This form describes the purpose, procedures, possible benefits, and risks. It also explains how your child’s personal information will be used and protected. Once you have read this form and your questions about the study are answered, you will be asked to sign it. This will allow your child’s participation in this study. You should receive a copy of this document to take with you.

Explanation of Study

The main purpose of this study is to measure children’s performance on tasks of emotion recognition and social information processing (SIP) to better understand how children process social and emotional information.

If you agree to allow your child to participate:

• Your child will:
  o Watch and respond to two types of videos with emotional and social content that are appropriate for children. The first type of video will show a neutral facial expression changing to a certain emotion expression. The second type of video is an episode of the television show Growing Pains, a family-friendly sitcom.
    § Your child’s responses will be audio recorded.
  o Watch and respond to a third type of video that contains images of animals.
  o Wear a heart rate monitor while watching the videos.

• You will be asked to allow the project researchers to access data obtained on your child in the Project to Learn About Youth (PLAY) study conducted by the Center for Intervention Research in Schools (CIRS). The data to be collected from these records include:
  o Child’s age, gender, grade, cognitive ability
  o Parent and teacher ratings of social, emotional, and behavioral functioning
  o Child self-report data on thoughts, behaviors, and feelings
  o Clinical interview data
  o Diagnostic evaluation results
  o Demographic form data

Your child should not participate in this study if your child is younger than 8 years old or older than 14 years old, or has intellectual disability, autism spectrum disorder, psychosis, an anxiety disorder, or a depressive disorder.

Your child’s participation in the study will last approximately 60 minutes.

Risks and Discomforts

There are no anticipated risks to you or your child. Some children may view the task as challenging or tiring. Some individuals may find the questions asked uncomfortable. Your child may experience discomfort while wearing the heart rate monitor. Research staff will be available if any concerns or problems arise.

Benefits

Your child may not benefit personally by participating in this study. However, the information learned from this study will help guide development of social and emotional interventions for children with inattentive and hyperactive/impulsive symptoms.

Compensation

You will receive $15 as compensation for completing the study procedures. Partial completion of the study procedures will be compensated with $5.

Confidentiality and Records

The information that we collect from you and your child will be kept confidential. Children’s names and parents’ names will not appear on these materials; instead, each child will be assigned a code number that will identify the materials. You agree that scientific data not identifiable with the children involved in the project may be presented at meetings and published so that the information from the project can be useful to others. If, for some reason, you choose to discontinue participation in the research project, de-identified assessment data will be maintained in the project database unless you explicitly request otherwise. The master list (the list of participant codes and identifying information) of all program participants will be stored in a password-protected file on a secured drive at Ohio University. The master list will be destroyed in August of 2016 (approximately one year after the completion of the project).

While not anticipated, project investigators are required to break confidentiality if one of the following situations occurs:
• If you, as legal guardian, give written permission to release the information
• If you or your child reveals information that, in program staff’s judgment, indicates that your child intends to harm self or someone else (See A below)
• If you or your child reveals information that indicates the existence of past or present child abuse or neglect, as required by The Ohio Child Abuse and Neglect Law (See B below)
• If an appropriate court order is received by a member of our program staff (See C below)

Additionally, while every effort will be made to keep your study-related information confidential, there may be circumstances where this information must be shared with:
* A: A licensed clinician who is part of the research staff, who may be consulted for further action if you reveal harm to yourself or others, or if you have revealed information that indicates past or present child abuse.
* B: The Child Protective Services in the county in which you reside.
* C: The court from which a court order was received.
* D: In a medical emergency, medical personnel who would need information to care for your child, including emergency vehicle service personnel and hospital emergency department staff.
* Federal agencies, for example the Office for Human Research Protections, whose responsibility is to protect human subjects in research.
* Representatives of Ohio University (OU), including the Institutional Review Board, a committee that oversees the research at OU.
Finally, there may be circumstances that are out of our control in which your confidentiality is compromised.

Contact Information

If you have any questions regarding this study, please contact the investigator [Verenea Serrano, [email protected], 740-593-3236] or the advisor [Dr. Julie Owens, [email protected], 740-593-3236].

If you have any questions regarding your child’s rights as a research participant, please contact Dr. Chris Hayhow, Director of Research Compliance, Ohio University, (740) 593-0664 or [email protected].

By signing below, you are agreeing that:
• you have read this consent form (or it has been read to you) and have been given the opportunity to ask questions and have them answered;
• you have been informed of potential risks to your child and they have been explained to your satisfaction;
• you understand Ohio University has no funds set aside for any injuries your child might receive as a result of participating in this study;
• you are 18 years of age or older;
• your child’s participation in this research is completely voluntary;
• your child may leave the study at any time; if your child decides to stop participating in the study, there will be no penalty to your child and he/she will not lose any benefits to which he/she is otherwise entitled.

___Check here if (a) you are currently or have recently participated in another project within the Center for Intervention Research in Schools, and (b) you are willing to allow your de-identified data from one project (e.g., scores on a given rating scale) to be used in the data analyses associated with another project within the Center.

Parent Signature Date

Printed Name

Child’s Name

Version Date: 09/04/2015 97

Ohio University Child Assent Form PLAY Participant

Title of Research: Social and Emotional Information Processing in Children

Researchers: Verenea Serrano, M.S. & Julie Sarno Owens, Ph.D.

We are asking you to help with a project. To help, you will watch videos of people’s facial expressions, animals, and a television show. You will be asked questions about the videos.

You will be asked to wear a band across your chest that measures your heart beat.

This study will not hurt you in any way and your answers will not be shared with your friends or your teachers. Some kids may find the activities difficult or feel uncomfortable answering some of the questions. Other kids find the activities to be fun. If you have questions or you do not feel comfortable answering any questions, please let me know and we can skip those questions.

You will not put your name on any papers. Instead, we will use a secret code so that all of your information is kept private. All of your answers will be kept in a locked file cabinet in a locked room at Ohio University.

You do not have to be in this study if you do not want to. If you decide to participate in the study, you can also stop at any time. Your parents have given their permission for you to take part in this study.

If you have any questions at any time, please ask one of the researchers.

If you print your name on this form, it means that you have decided to participate and have read everything that is on the form. You and your parents will be given a copy of this form to keep.

(Participant name printed) (Researcher witness)

Version Date: 09/04/2015 98

APPENDIX B8: PILOT TESTING

All three stimuli sets were pilot tested a priori. The Faces and Animals Tasks were piloted with children (N = 8, ages 8 to 13) to evaluate whether children understood and were able to complete the tasks, and to identify the most appropriate display time. Display times between 100 ms and 200 ms were tested, and 175 ms was chosen because it represented a compromise between a natural-looking morph and sufficient time to respond before the morph terminated. The same sample of children also completed the Video Task, and their interpretation responses informed adjustments to the coding scale and the addition of an on-screen press counter to the computer program. The coding scale adjustments involved changing the Inference scale from 0-1 to 0-2 and the Quality scale from 0-2 to 0-1. The on-screen press counter increased the ease of data collection.

For the Growing Pains Video Task, discrete emotion labels for the episode had not previously been developed. As such, the emotion labels and time stamps for the emotions were obtained through pilot testing with undergraduate students (N = 18), whose responses served as the accuracy and timing metric against which the child participants’ responses were gauged. Discrete emotion changes that were identified by 66% or more of the pilot participants, and for which a specific emotion label was identified by 50% or more of those who identified the emotion, were selected as events to code; this yielded 23 events. For each of these events, the emotion label chosen by 50% or more of the participants who recognized the event was designated the correct answer, and the time stamp for each discrete emotion was obtained by averaging the response times for that specific emotion.
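To make the selection rules concrete, the following is a minimal, purely illustrative sketch of how the 66% detection threshold, the 50% label-agreement threshold, and the averaged time stamps described above could be computed. It is not the scoring code used in the study, and the data structure (pilot_responses) and all names are hypothetical.

    # Illustrative sketch only (not the study's actual code).
    from collections import Counter

    N_PILOT = 18                 # undergraduate pilot raters
    DETECTION_THRESHOLD = 0.66   # event must be detected by >= 66% of raters
    LABEL_THRESHOLD = 0.50       # one label must be chosen by >= 50% of detectors

    def select_events(pilot_responses):
        """pilot_responses: dict mapping event_id -> list of (emotion_label, response_time_s),
        one entry per pilot rater who detected that event."""
        coded_events = {}
        for event_id, detections in pilot_responses.items():
            # Rule 1: enough raters detected a discrete emotion change at this event.
            if len(detections) / N_PILOT < DETECTION_THRESHOLD:
                continue
            # Rule 2: a specific emotion label reached majority agreement among detectors.
            labels = Counter(label for label, _ in detections)
            best_label, count = labels.most_common(1)[0]
            if count / len(detections) < LABEL_THRESHOLD:
                continue
            # Time stamp = mean response time of the raters who reported that emotion.
            times = [t for label, t in detections if label == best_label]
            coded_events[event_id] = {
                "correct_label": best_label,
                "time_stamp": sum(times) / len(times),
            }
        return coded_events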


APPENDIX B9: DEFINING GROWING PAINS EVENTS

The first research group to use the Growing Pains episode (Lorch et al., 2004) identified 593 events, which represent segments of the show. Following the college pilot testing, all pilot participants’ qualitative responses were reviewed, and events were grouped together based on the shared precipitating occurrence to which each response was made. For events for which there were no college participant responses, the first author used subjective judgment to divide the events. This process resulted in 133 events.

During child participant data collection, examination of the children’s response patterns led the primary author and two coders to agree that specific pairs of events could be merged into a single event because the precipitating occurrence was the same across both events in a pair. Thus, 4 pairs of events (i.e., 8 individual events) were merged into 4 events, resulting in 129 events total for the video (133 − 8 + 4 = 129).

APPENDIX B10: INTERPRETATION RESPONSES RATING ANCHORS

Inference in Reasoning
  Score 0: No inference made. No reference to information not explicitly stated. Refers to observable events with no inference (e.g., stating it’s a birthday party after “Happy Birthday” is said).
  Score 1: Simple inference based on observable events only, with no reference to the thoughts, desires, or intentions of a person that are not stated (e.g., stating it’s a birthday before “Happy Birthday” is said).
  Score 2: Complex inference that includes reference to thoughts, desires, or intentions of a person that are not explicitly stated, or that extends or interprets stated or observed content.

Correct/Plausible
  Score 0: No part of response is correct or plausible.
  Score 1: Response is partially correct/plausible; contains some, but not the essential, parts that comprise a 2-point response.
  Score 2: Response provides a full explanation of the emotion, based on pilot testing. It provides a fuller explanation of the emotion than a 1-point response and references multiple aspects contributing to the emotion (when applicable).

Quality/Coherence
  Score 0: Response is poorly organized, would be difficult to understand without knowledge of the episode, is not a complete explanation, or is general or non-specific (e.g., “sister is making him mad”).
  Score 1: Basic organization in response; references specific details to explain the emotion.
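For readers who want to see how the three codes combine across events, here is a minimal, purely illustrative sketch (not the study’s scoring code). That the interpretation composite is the sum of the Inference, Accuracy, and Quality codes across coded events is an assumption made for illustration, though it is consistent with the 0-46, 0-46, 0-23, and 0-115 possible ranges reported in Appendix C1 (23 events × 5 points = 115).

    # Illustrative only: per-event interpretation codes and an assumed summed composite.
    from dataclasses import dataclass

    @dataclass
    class EventCode:
        inference: int   # 0-2, per the Inference in Reasoning anchors
        accuracy: int    # 0-2, per the Correct/Plausible anchors
        quality: int     # 0-1, per the Quality/Coherence anchors

        def __post_init__(self):
            assert 0 <= self.inference <= 2
            assert 0 <= self.accuracy <= 2
            assert 0 <= self.quality <= 1

    def interpretation_composite(event_codes: list[EventCode]) -> int:
        # Assumed rule: sum all three codes across every coded event a child responded to.
        return sum(c.inference + c.accuracy + c.quality for c in event_codes)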

APPENDIX B11: CODING DATA FOR THE SUBSET OF 10 EVENTS

Brief Description of the Event to be Interpreted | Number of Respondents Who Detected the Event | Accuracy of the Inference M (SD) | Inference About the Event M (SD) | Quality of the Inference M (SD)

Mike’s dad pulled out the headphones | 40 | 1.83 (.50) | .32 (.69) | .95 (.22)
Mike returns from the arcade after his friend does not show up | 20 | 1.65 (.75) | .30 (.66) | .90 (.31)
Mike is practicing his speech | 34 | 1.15 (.70) | .59 (.70) | .79 (.41)
Mike’s mom is surprised to find the form and that her tooth feels better | 28 | 1.39 (.79) | .68 (.72) | .82 (.39)
Mike’s dad says he can go to his friend’s house | 16 | 1.56 (.73) | .25 (.68) | .94 (.25)
Mike smiles and says his room is “Perfect” | 9 | 1.44 (.73) | 1.11 (.93) | .89 (.33)
Mike finds his room clean after his brother used the magic rock | 53 | 1.58 (.63) | .64 (.56) | .92 (.27)
Mike’s brother adds chores onto the deal for the magic rock | 21 | 1.43 (.75) | 1.43 (.81) | .90 (.30)
Mike’s friend returns with the money from Mike’s family | 31 | 1.42 (.85) | .52 (.77) | .90 (.30)
Mike’s friend tells Mike he is selling the rock for $200 | 59 | 1.59 (.65) | .29 (.59) | .92 (.28)

Note. Possible range for Accuracy and Inference is 0-2; possible range for Quality is 0-1.

APPENDIX C: RESULTS OF ANALYSES WITH ALL 23 CODED EVENTS

Video Task Performance Descriptives – 23 Events ...... 103
Hierarchical Linear Regressions for Aim 3a ...... 104
Linear Regressions for Aim 3b ...... 105


APPENDIX C1: VIDEO TASK PERFORMANCE DESCRIPTIVES – 23 EVENTS

Measure | Range | ADHD M (SD) | Control M (SD)
Video Interpretation Inference - 23 Events | 0-46 | 5.92 (5.11) | 4.35 (3.19)
Video Interpretation Accuracy - 23 Events | 0-46 | 13.42 (6.94) | 11.81 (6.29)
Video Interpretation Quality - 23 Events | 0-23 | 8.13 (3.91) | 7.38 (3.77)
Video Interpretation Composite - 23 Events | 0-115 | 27.46 (14.56) | 23.54 (12.21)
Video Emotion Accuracy - 23 Events | 0-100 | 27.36 (15.78) | 26.63 (14.20)
Number of Events Recognized - 23 Events | 0-23 | 8.71 (4.27) | 8.23 (4.16)


APPENDIX C2: HIERARCHICAL LINEAR REGRESSIONS FOR AIM 3A

Predictor | Video Interpretation Composite - 23 Events | Video Emotion Accuracy - 23 Events
Step 1 (ΔR²) | .22* | .13*
  Gender (β) | -.09 | -.08
  Age (β) | .48* | .37*
  WASI Vocab (β) | .26* | .21§
Step 2 (ΔR²) | .05* | .07*
  Faces Accuracy – First Press (β) | .25* | .31*
  Total (β) | .27* | .21*
* p < .05, § p < .10.
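As a point of reference for how ΔR² values like those above are obtained in a two-step hierarchical regression, here is a minimal sketch; it is not the analysis code used in the study, and the data frame and column names are hypothetical placeholders.

    # Illustrative sketch only (not the study's analysis code).
    import pandas as pd
    import statsmodels.formula.api as smf

    def hierarchical_delta_r2(df: pd.DataFrame) -> dict:
        # Step 1: covariate block (gender, age, verbal ability).
        step1 = smf.ols("video_composite ~ gender + age + wasi_vocab", data=df).fit()
        # Step 2: same model plus the Faces Task predictor of interest.
        step2 = smf.ols(
            "video_composite ~ gender + age + wasi_vocab + faces_accuracy_first_press",
            data=df,
        ).fit()
        return {
            "step1_R2": step1.rsquared,
            "delta_R2": step2.rsquared - step1.rsquared,  # incremental variance explained by Step 2
            # Note: the table reports standardized betas; obtaining those from this model
            # would require z-scoring the outcome and predictors before fitting.
            "step2_coefficients": step2.params,
        }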

APPENDIX C3: LINEAR REGRESSIONS FOR AIM 3B

Predictor | Video Interpretation Composite-Total | Video Emotion Accuracy-Total
ADHD-5 Inattention Mean | R² = .03, β = .18 | R² = .003, β = .06
ADHD-5 Hyperactivity/Impulsivity Mean | R² = .00, β = .02 | R² = .01, β = -.08
DBD ODD/CD Mean | R² = .01, β = .12 | R² = .001, β = .03
Age | R² = .15*, β = .39* | R² = .08*, β = .29*
Gender | R² = .001, β = -.03 | R² = .002, β = -.04
Verbal Ability | R² = .02, β = .13 | R² = .01, β = .12
* p < .05

APPENDIX D: TABLE OF PREVIOUS SIP STUDIES WITH CHILDREN WITH ADHD

Article | Sample Characteristics | SIP Variables | Results

Andrade et al. (2012) | N = 64, 71.9% male; ADHD (n = 39), control (n = 25); age range: 6-12 | Cue encoding via story recall; Interpretation/attributions; Response generation | Controlling for ODD/CD, children with ADHD encoded fewer cues, encoded a lower proportion of cues, made more intent attributions, generated fewer positive responses to negative and ambiguous vignettes, and generated more negative responses to negative vignettes compared to control children.

King et al. (2009) | N = 75, 75% male; ADHD (n = 41; 31 with ODD/CD), control (n = 34); ADHD placebo (n = 20), ADHD MPH (n = 21); age range: 6-12 | Interpretation/attributions; Response generation | No hostility bias in interpreting peer intention. Children in the ADHD MPH group generated more hostile responses than control children.

Matthys et al. (1999) | N = 164, 100% male; ADHD only (n = 27), ODD/CD only (n = 48), ADHD + ODD/CD (n = 29), normal controls (n = 37), psychiatric controls (n = 23); age range: 7-12 | Cue encoding via explaining an event; Interpretation/attributions; Response generation | ADHD only, ADHD/ODD/CD, and ODD/CD only groups encoded fewer cues and generated fewer responses than controls. ADHD/ODD/CD and ODD/CD generated more aggressive responses and were more confident in executing aggressive responses than controls.

Mikami et al. (2008) | N = 228, 100% female; ADHD (n = 140) and controls (n = 88); baseline age range: 6-12; follow-up age range: 11-18 | Interpretation/attributions; Response generation | After controlling for Verbal IQ, there were no group differences for efficacy of generated responses. After controlling for Verbal IQ and childhood ODD/CD, neither ADHD nor current ODD/CD was associated with any SIP variable.

Murphy et al. (1992) | N = 26, 100% male; ADHD high aggression (n = 14) and ADHD low aggression (n = 12); age range: 6-11 | Cue encoding via cue recall; Interpretation/attributions; Behavioral response | No aggressive group effects for SIP variables. MPH was associated with increased cue recall. Limited correlation between SIP and behavioral outcomes.

Sibley et al. (2010) | N = 45, 53% male; ADHD (n = 27), control (n = 18); age range not reported; ADHD group mean age = 12.36, control group mean age = 12.22 | Interpretation/attributions; Response generation | No hostile attribution bias. Of story comprehension and response generation, the former significantly predicted parent-rated peer impairment.
