Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Social Sciences 126

World of faces, words and actions

Observations and neural linkages in early life

ANDREA HANDL

ACTA UNIVERSITATIS UPSALIENSIS
UPPSALA 2016
ISSN 1652-9030
ISBN 978-91-554-9535-0
urn:nbn:se:uu:diva-281242

Dissertation presented at Uppsala University to be publicly examined in Auditorium Minus, Museum Gustavianum, Akademigatan 3, Uppsala, Wednesday, 18 May 2016 at 10:15 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Professor Mikael Heimann (Linköping University, Avd för psykologi).

Abstract

Handl, A. 2016. World of faces, words and actions. Observations and neural linkages in early life. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Social Sciences 126. 85 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9535-0.

From the start of their lives, infants and young children are surrounded by a tremendous amount of multimodal social information. One intriguing question in the study of early social cognition is how vital social information is detected and processed, and how and when young infants begin to make sense of what they see and hear and learn to understand other people’s behavior. The overall aim of this thesis was to provide new insights into this field. Investigating behavior and/or neural mechanisms in early life, the three studies included in this thesis strive to increase our understanding of the perception and processing of social information. Study I used eye-tracking to examine infants’ observations of gaze in a third-party context. The results showed that 9-, 16- and 24-month-old infants differentiate between the body orientations of two individuals on the basis of static visual information. More particularly, they shift their gaze more often between them when the social partners face each other than when they are turned away from each other. Using the ERP technique, Study II demonstrated that infants at the age of 4 to 5 months show signs of integrating visual and auditory information at a neural level. Further, direct gaze in combination with backwards-spoken words leads to earlier or enhanced neural processing in comparison to other gaze-word combinations. Study III, also an EEG investigation, found that children between 18 and 30 months of age show a desynchronization of the mu rhythm during both the observation and execution of object-directed actions. Also, the results suggest motor system activation when young children observe others’ mimed actions. To summarize, the findings reported in this thesis strengthen the idea that infants are sensitive to others’ gaze and that this may extend to third-party contexts. Also, gaze is processed together with other information, for instance words, even before infants are able to understand others’ vocabulary. Furthermore, the motor system in young children is active during both the observation and execution of another person’s goal-directed actions. This is in line with findings in infants, children and adults, indicating that these processes are linked at the neural level.

Keywords: Infant development, Eye-tracking, EEG, third-party interactions, crossmodal integration, gaze processing, speech processing, mu desynchronization, action perception, imitation

Andrea Handl, Department of Psychology, Box 1225, Uppsala University, SE-75142 Uppsala, Sweden.

© Andrea Handl 2016

ISSN 1652-9030 ISBN 978-91-554-9535-0 urn:nbn:se:uu:diva-281242 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-281242)

To András

List of Papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I Handl, A., Mahlberg, T., Norling, S., & Gredebäck, G. (2013). Facing still faces: What visual cues affect infants’ observations of others? Infant Behavior and Development, 36, 583–586.

II Parise, E., Handl, A., Palumbo, L., & Friederici, A. D. (2011). Influence of eye gaze on spoken word processing: An ERP study with infants. Child Development, 82(3), 842–853.

III Warreyn, P., Ruysschaert, L., Wiersema, J. R., Handl, A., Pattyn, G., & Roeyers, H. (2013). Infants’ mu suppression during the observation of real and mimicked goal-directed actions. Developmental Science, 16(2), 173–185.

Reprints were made with permission from the respective publishers.

Contents

Introduction ...... 11
Looking at others ...... 12
Others’ faces ...... 12
Others’ eyes and gaze ...... 13
The observation and processing of others’ gaze ...... 13
Gaze and its different manifestations in social interactions ...... 14
The ability to follow others’ gaze ...... 15
Joint attention ...... 16
Learning about infants’ gaze: Eye-tracking technology ...... 17
The gaze between others: Observations of social interactions ...... 18
Neural processing of social signals ...... 20
Measuring brain activity with EEG ...... 21
Event-Related Potentials ...... 22
Neural linkages between action and observation ...... 25
The mirror neuron system, direct matching and the quest for answers ...... 25
Using the EEG to study action observation and execution ...... 26
Aims of the thesis ...... 30
Methods ...... 31
Participants ...... 31
Stimuli ...... 32
Apparatus ...... 32
Procedure ...... 33
Data Analysis ...... 34
Study I – Third-party observations in infants ...... 37
Design ...... 38
Results ...... 40
Discussion ...... 40
Study II – Neural processing of gaze and words ...... 43
Design ...... 44
Results ...... 45
Discussion ...... 48

Study III – The mu rhythm in toddlers: Seeing and performing actions ...... 50
Design ...... 51
Results ...... 53
Discussion ...... 54
General discussion ...... 57
Gaze – a key player in infant development ...... 58
Gaze and intentions in a third-party context ...... 59
Neural processing of gaze and its integration with other social cues ...... 61
Integration of gaze with other social cues ...... 62
Infants’ perception of others’ actions and the mu rhythm ...... 63
Mimed actions and the mu rhythm ...... 63
When does the infant mu rhythm desynchronize? ...... 65
What role does experience play? ...... 66
Future directions ...... 67
Conclusions ...... 69
Acknowledgements ...... 71
References ...... 74

Abbreviations

AOI Area of Interest
EEG Electroencephalography
EOG Electrooculogram
ERP Event-Related Potentials
FFT Fast Fourier Transform
fMRI Functional Magnetic Resonance Imaging
MNS Mirror Neuron System
Nc Negative central Component
NIRS Near-infrared Spectroscopy
PET Positron Emission Tomography
ROI Region of Interest
SMA Supplementary Motor Area
STS Superior Temporal Sulcus
SW Slow Wave

Introduction

Entering this world as a human being holds the imminent challenge to constantly adapt to a complex and ever-changing environment. It marks the beginning of a life-long process of learning how to relate to and make sense of the world around us. To propose that our social encounters play a key role in learning seems an understatement. Rather, our dependence on others and our interactions with them seem woven into our very nature and crucial for our survival. In order to be able to relate to others, it is paramount to understand their behavior and intentions, which, among other things, requires the skill to decode others’ non-verbal communicative signals and to make sense of the actions they perform. These abilities are part of early social competence. When referring to “social competence” or “social cognition”, a clarification of the terminology is necessary. In his book, Philippe Rochat offers the following definition for social cognition: “In general, social cognition can be construed as the process by which individuals develop the ability to monitor, control, and predict the behavior of others” (Rochat, 2001, p. 128). Taking this definition as a point of departure, immediate questions that can be drawn from it are what this process looks like and when these abilities emerge. It does perhaps not come as a surprise that short, concise and simple answers cannot be provided. In making sense of their world and learning to predict, understand and learn from others’ behavior, infants face a complex endeavor. Decades of research have supplied pieces to the puzzle, suggesting that a surprising range of capabilities are already present at birth or emerge throughout the first year of life. The landscape of insights stretches from findings on early face perception (e.g. Hoehl & Peykarjou, 2012; Simion & Giorgio, 2015; Taylor, Batty, & Itier, 2004) and (eye) gaze processing (Batki, Baron-Cohen, Wheelwright, Connellan, & Ahluwalia, 2000; Farroni, Johnson, & Csibra, 2004a; Hoehl et al., 2009), gaze following and joint attention (Gredebäck, Fikke, & Melinder, 2010a; Striano, Chen, Cleveland, & Bradshaw, 2006a; Striano & Stahl, 2005) to explorations of infants’ perception and understanding of others’ actions (Biro & Leslie, 2007; Daum, Attig, Gunawan, Prinz, & Gredebäck, 2012; Falck-Ytter, Gredebäck, & von Hofsten, 2006; Gredebäck & Daum, 2015) as well as the imitation of observed actions (Bekkering, Wohlschläger, & Gattis, 2000; Marshall & Meltzoff, 2014). To study such a wide range of topics, different methodological approaches and techniques have been applied.

This thesis follows this diversity by incorporating the summary of three studies that target different aspects of social cognition in early life, making an empirical enquiry at the behavioral and/or neural level. The specific questions that were followed in each study were derived from two general questions. First, under what circumstances and in what ways do infants derive meaning from others’ gaze? And second, what kind of neural mechanisms may be related to their observation and imitation of others’ actions? The first of the studies focuses on infants’ observations of others, using eye-tracking technology to measure infants’ gaze (Study I). The two other studies employ electroencephalography to measure brain activity and explore the neural processing of others’ gaze and words (Study II) as well as the link between observing and imitating others’ actions (Study III). Before describing these individual studies, the following sections aim to explain why it is meaningful to ask these questions and to provide an overview of important empirical findings. Furthermore, both eye-tracking and electroencephalography (EEG) will be introduced as methods of empirical enquiry in developmental research. Thereby, the emphasis will be on the EEG technique and the data analyses applied to answer the specific research questions of the studies reported in this thesis.

Looking at others

In order to be able to monitor others’ behavior, infants need to be able to detect and attend to it. Interestingly, infants show certain preferences to orient to particularly important visual information in their environment. One example of this tendency is their preference to look at human faces.

Others’ faces

Intriguingly, when newborn babies are presented with schematic patterns that are like or unlike a face, they prefer to orient to the face-like pattern (Johnson, Dziurawiec, Ellis, & Morton, 1991; Valenza, Simion, Cassia, & Umiltà, 1996). This early face preference is important because it draws infants’ attention to the source of valuable information. Faces of others provide a wealth of socially relevant information, for instance about someone’s age, gender, emotional state and focus of attention (Frank, Vul, & Johnson, 2009). Therefore, the ability to detect a face is an important adaptive skill (Rochat, 2001), the first step towards an interaction with another person (Hoehl et al., 2009) and considered a fundamental aspect of human social cognition (Burton & Bindemann, 2009). Previous research in adults suggests that faces capture attention (Langton, Law, Burton, & Schweinberger, 2008) and can be rapidly detected in an array of other, non-face stimuli (Hershler & Hochstein, 2005). Infants’ capabilities for “reading” faces extend during the first year of life. They increasingly look at faces (Frank, Amso, & Johnson, 2014; Frank et al., 2009), improve in their ability to detect faces in complex visual displays (Di Giorgio, Turati, Altoè, & Simion, 2012) and start to differentiate between emotional expressions (Nelson, Morse, & Leavitt, 1979).

Others’ eyes and gaze

Apart from the early preference for looking at faces, it has been proposed that newborn babies are sensitive to a particular feature in the face – the eyes. More particularly, infants seem to be equipped with some capability for differentiating between gaze directions from birth. Newborn infants prefer to look at faces with the gaze directed at them in comparison to images of faces in which the gaze is directed away from them (Farroni, Csibra, Simion, & Johnson, 2002). Although the ability to discriminate between direct and averted gaze is suggested to be reliable from the age of 4 months (Vecera & Johnson, 1995), it is remarkable that it is present – at least in some form – at birth. These findings raise the question of what may facilitate such an early skill. It appears that the morphology of the human eye plays a key role in the successful detection of eyes and gaze direction in particular. Human eyes contain a particularly large area of white sclera, which stands in contrast with the darker iris (Kobayashi & Kohshima, 1997, 2001), making it easy to determine the position of the iris within the eye. Thus, the human eye appears to be particularly adept at conveying information by displaying salient visual cues, and the visual system is adapted to detecting this information from the very start of life. Such an adaptation is important because the eyes provide vital insights about other people, for instance their intentions (Baron-Cohen, 1995) and willingness to communicate (Goffman, 1963). Furthermore, they play an important part in revealing another person’s focus of attention, or in other words, another’s gaze.

The observation and processing of others’ gaze

Being able to detect and follow the direction of others’ gaze is of decisive importance for the development of social cognition. In the course of the first year of life, the gaze of others becomes a beacon to infants, guiding their attention to particular locations in a stimulus-heavy visual environment. By doing so, it highlights what is important information, making learning possible and more efficient. Also, it becomes a place of communication between the infant and another person. The idea that social interactions, in which gaze plays a key role, are important for learning is underlined by the theoretical account of natural pedagogy, which suggests that it is by means of communication that social learning takes place and cultural knowledge can be transmitted (Gergely & Csibra, 2013). It proposes that ostensive communication lies at the heart of social learning (Csibra & Gergely, 2006, 2009), incorporating specific social signals that are generated by the knowledgeable social partner and addressed to the infant. These so-called ostensive cues are interpreted as an indicator of the communicative intention of the social partner and are (most likely) innately specified (Csibra, 2010). Although, as will be pointed out at a later stage, learning does not always occur in relation or response to ostensive cues, the theory of natural pedagogy points out the fundamental contribution of certain cues as well as the key role of learning through social interactions.

Gaze and its different manifestations in social interactions

It may come as no surprise that one of these ostensive cues is gaze, more particularly a certain type of gaze. When two people look into each other’s faces, their gaze meets, a situation described as mutual gaze. In face-to-face interactions between caregiver and infant, also described as dyadic interactions, mutual gaze plays a key role (Hoehl et al., 2009) because, through looking at each other’s faces, both can attend to and monitor each other’s behavior. Likewise, eye contact is typically used to describe a situation in which two people gaze directly into each other’s eyes, which can be differentiated from the previously mentioned mutual gaze, the look into each other’s faces (Harper, Wiens, & Matarazzo, 1978; Kleinke, 1986). Generally speaking, the term gaze is used in the literature to describe someone’s point of regard or attentional focus and, in the case of mutual gaze, to indicate an alignment between the gaze of social partners. Furthermore, in studies related to gaze perception, two terms are used to indicate the direction of gaze in reference to an observer: Direct gaze typically refers to a situation in which a face is presented (for instance from head to shoulders) in frontal view, with the eyes looking straight ahead at the observer. By contrast, averted gaze stands for a situation in which the eye gaze is directed to the left or right side, so that the iris is positioned in the left or right corner of the eye. In this case, the direction of the eye gaze determines whether the gaze is direct or averted. In the current thesis, the term gaze is used to refer to where someone is directing her attention (i.e. in relation to describing gaze following or joint attention). Additionally, several terms are used to refer to particular bodily features that express the direction of someone’s gaze: eye gaze direction, head orientation and body orientation.
This kind of differentiation draws on the idea that where someone is directing her attention is revealed by an interplay between these bodily features and that the information these features provide is integrated during visual perception (Hietanen, 2002; Langton, 2000; Moors, Germeys, Pomianowska, & Verfaillie, 2015). Making a distinction between gaze and eye gaze may seem unnecessary to some, given that gaze refers first and foremost to the eyes of a person as windows for visual perception. However, the term eye gaze is present in the literature and seems particularly prominent in studies investigating neural mechanisms of gaze processing. For instance, the term eye gaze (direction) is commonly used to express that the position of the iris in the eye varies in the presented stimuli and that, in these cases, visual processing of gaze relies solely on information from the eyes.

The ability to follow others’ gaze

By the time infants are 2 months old, they increasingly look at the eye region in a still face (Maurer & Salapatek, 1976) and in the face of a talking person (Haith, Bergman, & Moore, 1977). Furthermore, at 3 months, infants shift their gaze more rapidly towards a target on a display if the location of the target has previously been indicated by the eye gaze direction of an adult (Hood, Willen, & Driver, 1998). More recent evidence shows that this so-called gaze cueing is even possible at birth (Farroni, Massaccesi, Pividori, & Johnson, 2004b). This ability to direct attention to a certain location by shifting one’s own gaze (overt attention) in response to gaze cues is at the basis of gaze following. This phenomenon of following someone else’s gaze has also been described as looking where someone else is looking (Butterworth & Jarrett, 1991). In reference to earlier work (Baron-Cohen, 1995), Brooks and Meltzoff (2005) state that “gaze following is important for developmental theory because it can be seen as a ‘front end’ ability that contributes to understanding what another is thinking, feeling and intending to do” (p. 535). When investigating gaze following, infants’ tendency to follow the gaze shifts of an adult model is measured (Gredebäck et al., 2010a) in a situation in which an infant faces an adult (experimenter or caregiver) who establishes eye contact and then shifts her gaze to the side towards a target object, either by moving her head, her eyes or both. Since 1975, when the first explorations of gaze following were reported (Scaife & Bruner, 1975), a considerable body of work has accumulated, providing various interpretations of the findings and ideas about what kind of cognitive processes gaze following abilities might reflect (Moore, Angelopoulos, & Bennett, 1997).
Differences in paradigms, task complexities and outcomes have led to various propositions on when gaze following abilities emerge in infancy, ranging from the days after birth (Farroni et al., 2004b) to 3 months (D’Entremont, 2000), 6 months (Butterworth & Jarrett, 1991) and 8 to 15 months (Corkum & Moore, 1998; Morissette, Ricard, & Décarie, 1995). One suggestion to explain this large age range is that infants are capable of following gaze – at least in some tasks – early on, but without having an understanding of others’ attention (Corkum & Moore, 1998).

Joint attention

The early emerging sensitivity to gaze direction and the ability to follow gaze are important for the emergence of another social skill: joint attention. An episode of joint attention occurs when one person observes the direction of gaze of another and follows it, so that both have a common visual regard (Moore et al., 1997). It is joint attention that enables infants to incorporate another element (object or person) into their dyadic interaction (Carpenter, Nagell, Tomasello, Butterworth, & Moore, 1998), so that the interaction becomes triadic. The distinction between gaze following behavior and engaging in episodes of joint attention can be subtle (Bedford et al., 2012). However, gaze following is often measured as a mere orienting of attention, whereas joint attention involves the shifting back and forth of attention between the social partner and an object (Carpenter et al., 1998). The social-cognitive model of joint attention underlines this distinction by suggesting that engaging in joint attention requires the ability to understand others as intentional agents and suggests that joint attention behavior emerges towards the end of the first year of life (Tomasello, 1995; Tomasello, Carpenter, Call, Behne, & Moll, 2005). For instance, Tomasello (1995) proposes that between 9 and 12 months of age, infants begin to follow into and direct others’ attention, but it is not until the age of 12 to 14 months that infants start to engage in joint attention with others, a time when they begin to understand others’ intentions. When it comes to joint attention abilities, some authors underline the distinction between responding to (RJA) and initiating joint attention behaviors (IJA) and suggest that these may follow different developmental trajectories (Mundy et al., 2007).
This differentiation allows for determining instances in which infants respond to others’ gaze in different settings and for interpreting these as rudimentary forms of RJA – such as gaze cueing in 3-month-old infants (Hood et al., 1998). This approach represents an alternative view on joint attention, which sees social cognition as a product of joint attention (Mundy & Jarrold, 2010) rather than its prerequisite (Tomasello, 1995). Joint attention has been considered to play an important role in language learning, because children primarily acquire language in the context of social interactions (Tomasello & Farrar, 1986). For instance, Bruner and colleagues proposed that linking words to objects is a fundamental task in language acquisition and that this process of mapping is facilitated by joint attention between parent and child (Bruner & Watson, 1983; Scaife & Bruner, 1975). This idea has received support in previous research, for instance from findings indicating a positive effect of joint attention episodes between mother and child on vocabulary development (Akhtar, Dunham, & Dunham, 1991; Tomasello & Farrar, 1986). Taken together, it is beyond doubt that the face and gaze of another person provide valuable information to young infants. Not only do they reveal information about others’ attention and intentions, they also help infants to direct their attention in a way that facilitates learning about the world. Although the developmental trajectory of early social behaviors – such as joint attention – and ways of measuring and conceptualizing them remain a matter of debate, their importance for infant development remains unquestioned. Just as the gaze of another person is informative to infants, so is the gaze of infants informative to others, revealing insights about what infants look at and what they may understand from what they see.

Learning about infants’ gaze: Eye-tracking technology

Before infants are capable of the craftsmanship of words, measuring their gaze is a promising route to gaining insights into their perception of, attention to and understanding of their natural environment. Therefore, studying infant looking behavior in relation to a specific task opens up a window into potential cognitive processes at play in a given situation and provides insight into the development of social cognition. Although infants’ gaze has been studied for decades by means of coding infant gaze behavior during live situations or from video recordings of experiments, recent advances in technology provide a more precise way of studying gaze: eye-tracking technology. Where video recordings and human judgment of infants’ gaze behavior lack precision, the eye-tracking technique offers more accurate temporal and spatial information (Aslin, 2012). Originally developed in the 1960s (Haith, 1969; Salapatek & Kessen, 1966), one methodological approach to eye-tracking uses light reflections on the cornea relative to the center of the pupil to estimate the direction of gaze. Here, the cornea, iris, sclera and retina reflect light stemming from an infrared light source integrated in the eye-tracker. These reflections are captured by a camera and the resulting images are fed into image processing algorithms, which calculate the parameters of the gaze. Given a stable location of the infrared light source, this reflection remains approximately at the same location when the eyeball (and pupil) is moving (Duchowski, 2007). In order to determine the gaze (or point of regard), the eye-tracker has to be calibrated for each measurement. Once the participant is in a stable position and at a stable distance from the eye-tracker, attention is cued to a number of positions in space.
The images taken of the eye when the gaze is directed towards these positions are then used to map the gaze position onto the display (Aslin, 2012). Once the data are collected, different analyses can be utilized to investigate specific questions. In order to study overt visual attention, two types of eye movements are of interest. To see the environment clearly, images need to fall on the area of the retina that is responsible for sharp central vision, the fovea. The eye movements that make this possible are called saccades. They are rapid in nature and have a duration of between 10 and 100 ms (Duchowski, 2007). If a stationary object becomes the focus of attention, its image falls onto the fovea (or an area very close to it) for a certain amount of time. Stabilizing the image on the retina is achieved through fixations. Although one could assume that fixations, as opposed to saccades, reflect the absence of eye movement, they actually consist of miniature eye movements (Duchowski, 2007). The characteristics of saccades and fixations can be examined by choosing different measures, depending on the presented stimuli and experimental paradigm (e.g. fixation duration or frequency, saccade latency). The eye-tracking technique has become widely used across very different research questions and paradigms. This also includes investigations of infants’ observation of others’ social interactions, the topic of the following section.
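The fixation/saccade distinction described above is often operationalized algorithmically. As an illustration only (not the procedure used in the studies of this thesis), the following sketch implements a common dispersion-threshold approach: samples that stay within a small spatial spread for long enough are grouped into a fixation, while large jumps between samples are treated as saccadic. All thresholds, the coordinate units and the synthetic gaze data are hypothetical.

```python
def dispersion(window):
    """Spatial spread of a group of gaze samples: (max-min in x) + (max-min in y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_duration=5):
    """Classify a sequence of (x, y) gaze samples into fixations.

    max_dispersion: largest allowed spread for samples to count as one fixation.
    min_duration:   minimum number of consecutive samples in a fixation.
    Returns a list of (start_index, end_index) pairs, end index exclusive.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i + min_duration
        if j > n:
            break
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1  # likely mid-saccade: slide one sample forward

    return fixations

# Synthetic example: a fixation, a rapid saccade, then a second fixation.
gaze = [(100, 100), (102, 101), (101, 99), (103, 100), (100, 102),
        (200, 180), (400, 310),            # large jumps: a saccade
        (500, 400), (502, 401), (499, 399), (501, 402), (500, 400)]
print(detect_fixations(gaze))  # → [(0, 5), (7, 12)]
```

From the resulting fixation windows, measures such as fixation duration (window length divided by sampling rate) or fixation frequency per area of interest can then be derived.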

The gaze between others: Observations of social interactions

From early on, infants are interested in and draw important information from others’ faces and gaze in the context of dyadic and triadic interactions. Previous research has therefore emphasized the importance of infants’ direct involvement in social interactions (Csibra & Gergely, 2009). But does this imply that in early life, learning about others and their intentions is restricted to a dyadic and triadic context? Over recent years, empirical evidence has been accumulating that suggests otherwise. These investigations demonstrate that infants can draw information from social interactions in which they are not involved but merely observe. A term that was introduced to describe such a situation is third-party observations. Similarly, another term that has been used is third-party perspective, which particularly refers to the infant’s position and perspective in this setting. These terms need to be distinguished from triangular (McHale, Fivaz-Depeursinge, Dickstein, Robertson, & Daley, 2008) or triadic interactions (Barton & Tomasello, 1991), which both refer to situations in which the infants are involved in a social engagement with two other people. The study of infants’ observation, understanding and learning from others’ social engagements has been dedicated to a number of particular questions. These include investigations of infants’ ability to derive information from others’ gestures (Gräfenhain, Behne, Carpenter, & Tomasello, 2009), of their sensitivity to emotional information (Repacholi, 2009; Repacholi & Meltzoff, 2007; Repacholi, Meltzoff, & Olsen, 2008), of gesture anticipation (Thorgrimsson, Fawcett, & Liszkowski, 2014) and imitation of novel actions (Herold & Akhtar, 2008; Matheson, Moore, & Akhtar, 2013), as well as of the anticipation of goals in others’ irrational (Gredebäck & Melinder, 2010) and collaborative actions (Fawcett & Gredebäck, 2013).
Some empirical work on third-party conversations looked at questions related to language acquisition (Gampe, Liebal, & Tomasello, 2012; Shneidman, Buresh, Shimpi, Knight-Schwarz, & Woodward, 2009), whereas other studies investigated aspects of what infants observe and know about others’ conversations (Augusti, Melinder, & Gredebäck, 2010; Bakker, Kochukhova, & von Hofsten, 2011; Beier & Spelke, 2012; Keitel, Prinz, Friederici, von Hofsten, & Daum, 2013; von Hofsten, Uhlig, Adell, & Kochukhova, 2009).

Learning from others’ words

Very plausible demonstrations of infant learning in third-party contexts come from studies focusing on word learning. It has been shown that young children at the age of 18 months are capable of learning words in non-ostensive contexts, when they observe another person labeling objects without directly being looked at or addressed (Tomasello & Barton, 1994). Furthermore, it also appears that learning new words is possible in situations where young children overhear others. For instance, 16- to 20-month-old infants are capable of learning a new word not only in a situation in which they are directly addressed but also when they overhear an experimenter saying a novel word to another person (Floor & Akhtar, 2006). Also, 30-month-old children are capable of learning new words from others’ interactions, both when seeing a live interaction and when watching it on video (O’Doherty et al., 2011). Furthermore, recent research suggests that by the age of 12 months, infants observe others’ communicative actions and understand that they use speech as a means to communicate (Martin, Onishi, & Vouloumanos, 2012; Vouloumanos, Onishi, & Pogue, 2012). Therefore, if infants observe and learn in non-ostensive contexts, merely from observing the social interactions of others, it is of interest to find out how they observe these interactions and what factors may influence their attention to what they see.

Third-party gaze: Observing the gaze between others

For the study of infants’ understanding of others’ social interactions, one promising route of investigation is to explore which kinds of available social cues may guide and affect their observation. For instance, whether and how interactive partners align their gaze is a pre-condition for and a feature of their engagement (Goffman, 1963) but may also be meaningful to observers. Although the number of investigations on third-party observations of interactions is steadily increasing, very few of them have been dedicated to specifically studying how variations of body orientation, head orientation and eye gaze direction affect infants’ observations of social scenes. In their seminal study, Augusti and colleagues (2010) presented 4-, 6- and 11-month-old infants with a video clip of a conversation between two women who could be seen in profile, from head to waist. In one video, the women were facing each other (face-to-face condition), whereas in the other, they stood with their backs turned to each other (back-to-back condition). The results suggest that 6- and 11-month-olds were sensitive to the body orientation of the actors because they followed the turns of a conversation more often in the face-to-face than in the back-to-back condition.

In another recent study, Beier and Spelke (2012) applied a habituation-of-looking-time procedure and simultaneously presented 9- and 10-month-old infants with two video clips of a third-party interaction. Two individuals (head and shoulders visible) were facing the infant initially, before turning their heads towards or away from each other, leading to a situation of mutual or averted gaze. The authors found that 10- but not 9-month-olds discriminated between third-party averted and mutual gaze. In comparison, the latter study indicates that infants are sensitive to gaze (in a third-party context) later than reported by Augusti and colleagues (2010). These differences could be due to the paradigms and measures used in the two studies. While Beier and Spelke (2012) used looking time (and a habituation paradigm), Augusti and co-workers (2010) measured turn-taking-related gaze shifts. The latter approach provides more specific information on where infants look in relation to the events of a conversation. Further, different stimuli were used, varying with regard to the events in the interaction (e.g. conversational turns versus an initial greeting) and the way the social partners were displayed (e.g. variation in body and head orientation). These differences between the studies are perhaps not surprising, given that social interactions are per definition rich in events and constellations of social cues. The nature of social engagements explains a certain pluralism that comes with portraying them and measuring how they are observed. The tremendous amount of non-verbal and verbal information present in others’ interactions highlights how vital it is that infants learn to make sense of this information. Most likely this learning occurs first in the context of their own interactions with others (dyadic/triadic situations) before they begin to derive meaning from others’ social engagements (third-party situations).
From detecting others’ faces and eyes to following their gaze and engaging in acts of joint attention, the previous sections have shown that infants become more versatile in navigating their social world and learning about others and their environment in dyadic and triadic settings. In this context, the gaze of others has been identified as a key source of information. Also, it has been pointed out that measuring infants’ gaze (e.g. via eye-tracking) is an important means of gaining insights into early cognitive development. Given these insights, it is of high interest to investigate how the developing brain deals with processing the socially relevant cues available to infants.

Neural processing of social signals
When aiming to find out how social signals are processed in the human brain, it appears straightforward to investigate the adult brain. This has to do with the fact that several neuroimaging techniques can be applied in adults that are not, or only in a very limited manner, possible for use in infants. Insights from functional magnetic resonance imaging (fMRI) have identified several brain areas that are involved in processing a number of

social cues. One of the cortical brain areas that has received a lot of attention is the superior temporal sulcus (STS), particularly its posterior part. For instance, a previous fMRI study showed that the right posterior STS in adults shows greater activation in response to seeing a person shifting their gaze towards the viewer than when the gaze is averted (Pelphrey, Viola, & McCarthy, 2004). Interestingly, the STS is thought to be involved not only in processing others’ gaze, but also biological motion, facial expression and human action (Allison, Puce, & McCarthy, 2000). Another area that has been mentioned in relation to gaze processing is the medial prefrontal cortex, which is activated in adults during viewing of direct but not averted gaze (Kampe, Frith, & Frith, 2003; Schilbach et al., 2006). These findings are examples from brain imaging work in adults, suggesting that certain cortical areas in the human brain are concerned with processing information about others’ gaze. Whether and to what extent these areas are also involved in processing social cues in infancy is an unresolved matter, due to methodological constraints on using brain imaging techniques in infants. However, there are ways of exploring the neural mechanisms involved in detecting and processing social cues in infancy, and the next section will be devoted to introducing EEG as a promising candidate. After a general introduction of the method, a section on event-related potentials (ERPs) will feature a technique for determining changes in the EEG signal related to specific events. This will be followed by a summary of literature specifically targeted at the processing of others’ eye gaze direction in infancy.

Measuring brain activity with EEG
One of the main challenges of neuroscience is to understand how networks of neurons are orchestrated together to create a well-composed symphony of cognitive functions. Oscillatory brain activity has been considered one of the candidate mechanisms that can provide insight into this puzzle and has therefore attracted much interest. Electroencephalography (EEG) provides a fruitful approach to measuring oscillatory brain activity. Because it is non-invasive, it is suitable for studying infant populations and has therefore frequently been utilized to reveal insights on brain functioning in babies. One advantage of the technique is that it provides data with an excellent temporal resolution that allows for the investigation of fast neural signals (Banaschewski & Brandeis, 2007).
For an EEG recording, multiple sensors (electrodes) are used to register an electrophysiological signal from several scalp locations simultaneously. The signal has its origin in the activity of cortical pyramidal cells. The release of neurotransmitters and the resulting current flow creates a dipole, and if the dipoles of many neurons summate, the resulting voltage can be recorded at the scalp (Luck, 2005). The EEG signal is composed of a series of sine waves at different frequency ranges. The main characteristics of a sine wave

are its frequency, its amplitude and its phase. The frequency describes how many oscillations occur within one second. The amplitude specifies the height of the wave peak relative to zero. The phase refers to where specific time points are in the cycle of the sine wave (Roach & Mathalon, 2008). Typically, the frequency bandwidth of the EEG signal ranges from 0.5 to 100 Hz, with amplitudes between 10 and 300 µV (Thakor & Tong, 2004). This wide range of frequencies can be further classified into smaller frequency bands, namely the delta (0.5-4 Hz), theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz) and gamma (30-70 Hz) bands (e.g. Başar et al., 2001; Urigüen & Garcia-Zapirain, 2015). The oscillations in these bands can be related to arousal (Banaschewski & Brandeis, 2007).
The recorded signal is investigated with respect to several characteristics, depending on the research question and the specific data analysis. Typically, in order to draw conclusions from the recorded data, the signal captured during a period of one type of sensory stimulation is compared to a baseline, a period that lacks this particular type of stimulation or includes it only partially. Importantly, the data can be plotted across time (the time domain) or frequency (the frequency domain). Both domains contain the same information but are merely views from different perspectives.
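The equivalence of the two domains can be illustrated with a short simulation. The Python sketch below (with a hypothetical sampling rate and amplitudes chosen purely for illustration, not values from any study) composes a signal from a theta and an alpha sine wave and recovers their frequencies and amplitudes with a Fourier transform:

```python
import numpy as np

# Synthetic 1-second "EEG-like" signal sampled at 250 Hz: a 6 Hz (theta)
# and a 10 Hz (alpha) sine wave with different amplitudes (in microvolts).
fs = 250
t = np.arange(fs) / fs
signal = 30 * np.sin(2 * np.pi * 6 * t) + 15 * np.sin(2 * np.pi * 10 * t)

# Fourier transform: the same information viewed in the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
amplitudes = 2 * np.abs(spectrum) / len(signal)

# The spectral peaks recover the frequency and amplitude of each component.
print(round(amplitudes[freqs == 6.0][0]))   # 30 (theta component)
print(round(amplitudes[freqs == 10.0][0]))  # 15 (alpha component)
```

Because both components complete a whole number of cycles within the one-second window, the transform recovers their amplitudes exactly; real EEG data would additionally require windowing and averaging across segments.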

Event-Related Potentials
For investigating the EEG in the time domain, event-related potentials (ERPs) have become a state-of-the-art technique. These potentials are small fluctuations of voltage that are time- as well as phase-locked to a specific stimulus or event (Blackwood & Muir, 1990; Luck, 2005). Repeated exposure to the event in question provides multiple event-related data segments. By averaging these segments, spontaneous brain activity and (certain) noise is averaged out (Banaschewski & Brandeis, 2007), leaving behind waveforms that provide insights on the activity directly connected to the events. In order for the data not to be distorted, existing artifacts are removed by discarding data segments in which the voltage exceeds a previously set threshold (Picton et al., 2000). The ERP shows sequences of so-called components that are characterized by their amplitude, polarity, latency, location and function (Banaschewski & Brandeis, 2007). The components are either named using acronyms (e.g. Nc for Negative central) or are labeled with a single letter and a number. The letter refers to the negative (N) or positive (P) polarity of the component and the number refers to the time when it occurs, which can either be the latency in milliseconds (e.g. 170 ms for the N170) or the relative position in the waveform (N1 for the first negative peak). Exogenous components, which occur close in time to the stimulus that elicited them (latency up to 250 ms), are determined by the occurrence of the stimulus and its properties, whereas later, endogenous components are thought to reflect cognitive processes (Brandeis & Lehmann, 1986). The components

are quantified in terms of the latency and amplitude of a peak (relative to a pre-defined baseline), but in cases of a broader deflection without a clear peak, mean amplitudes can be computed as a means of investigation. The morphology and also the associated functions of the components are subject to changes across ontogeny (De Haan, 2012), which makes it important to study ERP components in designated age groups. One example of a neural correlate that undergoes a transition from infancy to adulthood is the face-sensitive component N170 (e.g. Taylor & Baldeweg, 2002). When it comes to visual ERPs, the most studied component in infancy is the Negative central component (Nc). Because the Nc plays an important role for Study II, the following section is dedicated to providing an overview of relevant findings in infants.
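The rejection-averaging-quantification logic described above can be sketched in a few lines. The following Python simulation (all numbers are arbitrary illustrative choices, not values from any of the cited studies) generates noisy epochs containing a negative deflection, discards epochs that exceed a voltage threshold, averages the remainder, and quantifies the resulting component both by its peak and by a mean amplitude over a window:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_trials = 250, 60          # sampling rate (Hz), number of trials
t = np.arange(fs) / fs          # 1-second epochs, stimulus onset at t = 0

# Simulated event-locked response: a negative deflection peaking at
# ~500 ms (Nc-like), identical on every trial, buried in noise.
component = -15 * np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))
epochs = component + rng.normal(0, 15, size=(n_trials, fs))
epochs[:3] += 200               # contaminate three epochs with an artifact

# Artifact rejection: discard epochs whose voltage exceeds a preset threshold.
threshold_uv = 80
clean = epochs[np.abs(epochs).max(axis=1) < threshold_uv]

# Averaging: time- and phase-locked activity survives while spontaneous
# activity averages out, leaving the ERP waveform.
erp = clean.mean(axis=0)

# Quantification: peak latency and amplitude, or a mean amplitude over a
# window when there is no clear peak.
peak_latency_ms = 1000 * t[erp.argmin()]
mean_amplitude = erp[(t >= 0.4) & (t <= 0.6)].mean()
```

With these settings the recovered peak falls near 500 ms and the mean amplitude over the 400-600 ms window is clearly negative, while the three contaminated epochs are excluded before averaging.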

The Nc component in the infant ERP
The Nc is a negative deflection in the infant ERP that is usually most prominent over fronto-central electrode sites (Ackles & Cook, 1998; Webb & Nelson, 2001; Wiebe et al., 2006). Its peak latency diminishes with age, from 1000-1200 ms at birth (Nelson, 1997) to 400-500 ms at 1 to 3 years of life (Goldman, Shapiro, & Nelson, 2004; Nelson & deRegnier, 1992; Parker, Nelson, & The Bucharest Early Intervention Project Core Group, 2005). The amplitude of its peak increases over the first year of life (Richards, 2003; Webb, Long, & Nelson, 2005) and decreases in the third (Parker et al., 2005).
Typically, modulations of the Nc concern its amplitude. For instance, the Nc amplitude has been reported to be more negative in response to unexpected events or infrequently presented stimuli, so-called ‘oddballs’ (Courchesne, Ganz, & Norcia, 1981; Karrer & Ackles, 1987; Nikkel & Karrer, 1994). These findings support the idea that the Nc is related to the detection of a novel stimulus (Reynolds & Richards, 2005). Another possibility is that the Nc is associated with a more general attentional orienting response (Nelson & Collins, 1992) and is insensitive to stimulus probability and novelty (Richards, 2003). More generally, the Nc is thought to reflect attention to an item that is salient, but what exactly is salient to infants may change across age (De Haan, 2012). It has also been suggested that the Nc reflects the strength of a memory trace, with infrequent stimuli resulting in a weaker memory trace, which is expressed in a larger Nc amplitude (Ackles & Cook, 2007; Courchesne et al., 1981). The Nc has also been found to respond to a mismatch of information across modalities, showing a larger amplitude in response to a mismatch between visual and auditory emotional information (Grossmann, Striano, & Friederici, 2006).
In summary, although the Nc can be detected in most visual ERPs in infants, it remains a matter of debate whether it mainly reflects attentional processing or also indicates memory function. Further, it is unclear whether attention is

allocated automatically (Richards, 2003) or in a more controlled manner (Ackles & Cook, 1998).

The Nc and processing others’ gaze
Recent ERP studies report interesting findings on Nc modulations in relation to processing others’ gaze. In one previous ERP study, four-month-old infants were presented with a face (frontal view) with the eye gaze directed towards the left or right (Hoehl, Reid, Mooney, & Striano, 2008). Next to the face, a small object was displayed. In combination with the gaze cue, this led to conditions in which eye gaze direction and object location were either congruent (object-directed gaze) or incongruent (non-object-directed gaze). Data analyses showed that the amplitude of the Nc was more negative for the non-object-directed gaze, but the Nc peak occurred at an earlier latency when the gaze was object-directed. Hoehl and collaborators (2008) interpreted their findings as suggesting that the face with the gaze looking away from the target was a more ambiguous and less expected stimulus, so that more attention was allocated to it, which was expressed in the larger peak amplitude.
Another example of an Nc modulation comes from a study of 9-month-old infants. In a live joint attention situation, an experimenter looked at the observing infant and then turned to look to the side, at a monitor displaying an object (Striano, Reid, & Hoehl, 2006b). In a control condition, the experimenter did not look at the infant before looking at the monitor. Interestingly, a more negative Nc amplitude occurred in response to the first condition. Therefore, in the context of this study, there was no ambiguity (e.g. a face looking away from a target) to which the Nc peak was enhanced, but rather an attention-capturing situation with socially relevant cues (i.e. direct gaze, head turn towards object). These two examples indicate that the modulation of the Nc component is directly connected to the kind of paradigm that is applied.
What remains fairly constant is the tendency to interpret a larger peak as reflecting increased attention to a certain stimulus and/or as requiring more processing resources.
Taken together, the ERP technique has previously been applied to studying infants’ processing of others’ gaze. The Nc component has been reported and discussed in studies on young infants’ processing of gaze and objects as one ERP component that can undergo modulation. However, as will be discussed later on (in the General discussion), not all studies on gaze processing in infants have detected Nc modulations, and those that do may, as previously exemplified, differ significantly in their approach. In order for a clearer picture of the neural mechanisms of infant gaze processing and its developmental trajectory to emerge, more investigations are necessary.

Neural linkages between action and observation
As has been elaborated upon, infants attend to others’ faces and are sensitive to the direction of gaze from very early on in life. Furthermore, they chiefly rely on others’ gaze to learn about their environment. But learning does not only concern gaze; it also takes place through observing and imitating others’ actions. Imitation refers to the process of producing an action after having seen someone else performing it (Bekkering et al., 2000). It has been described as an important social learning mechanism that is more efficient for the acquisition of new skills than trial-and-error learning (Meltzoff, Kuhl, Movellan, & Sejnowski, 2009). This also means that infants are able to use their observations to generate acts that match what they have seen (Marshall & Meltzoff, 2011). But what constitutes such a link between action perception and its execution in early life? The reasoning about possible answers to this question has been heavily influenced by very particular findings made over 20 years ago.

The mirror neurons, direct matching and the quest for answers
In the early 1990s, an intriguing discovery in the macaque monkey was made, suggesting the existence of a certain population of neurons that fire when monkeys both observe and execute the same action (di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992; Gallese, Fadiga, Fogassi, & Rizzolatti, 1996; Rizzolatti, Fadiga, Gallese, & Fogassi, 1996). In the following years, investigations using neuroimaging techniques such as positron emission tomography (PET) and fMRI confirmed that certain areas in the human adult brain are active during both action observation and execution (Decety et al., 1997; Grafton, Arbib, Fadiga, & Rizzolatti, 1996; Iacoboni, 1999).
This work suggests that the areas in the human brain that functionally correspond to the areas with mirror neurons in monkeys are the premotor cortex (F5 in monkeys) and the inferior parietal lobe (PF in monkeys). Depending on the particular task and modality of the stimulation, more brain areas can be active and considered part of the network (Molenberghs, Cunnington, & Mattingley, 2012). Some researchers only count those areas that have classically been considered mirror neuron areas, whereas others also take non-classical mirror neuron areas into account (Cook, 2014). Direct evidence on mirror neurons in humans comes from only one study in adults that used single-cell recordings, suggesting that some cells in the supplementary motor area (SMA) and in different areas of the medial temporal lobe (e.g. hippocampus) respond to both the observation and execution of actions (Mukamel, Ekstrom, Kaplan, Iacoboni, & Fried, 2010). Thus, this finding indicates that some populations of neurons have mirroring properties, even

beyond the premotor cortex and inferior parietal lobe as previously suggested by neuroimaging work.
Following the discovery of the mirror neurons in the monkey brain, a particular hypothesis was brought forward that builds on the principle of motor simulation. Built on the idea of the mirror neurons, it gains plausibility by pinpointing a concrete neuronal substrate in the human brain that makes simulation possible. The direct matching hypothesis proposes that observed actions are directly and automatically mapped onto the observer’s own motor system (Rizzolatti, Fogassi, & Gallese, 2001). While seeing someone else performing an action, the neurons that represent this particular action are activated in the motor system of the observer. The motor representation activated during action observation thereby matches the representation that would be active if the observer performed the same action. Interestingly, this matching process has been suggested to be at work when the outcomes of others’ actions are predicted (Ambrosini, Costantini, & Sinigaglia, 2011; Flanagan & Johansson, 2003). Flanagan and Johansson (2003) found that during both the observation and performance of an action, adults look at the action goal ahead in time. The measurement of these so-called predictive eye movements has also been applied in infant eye-tracking work (Gredebäck & Falck-Ytter, 2015). Evidence showing predictive eye movements in infants (Falck-Ytter et al., 2006; Rosander & von Hofsten, 2011) has therefore been seen as support for the direct matching hypothesis. Several findings in infants also suggest that their motor experience plays a role in their ability to predict actions (Cannon, Woodward, Gredebäck, von Hofsten, & Turek, 2012; Falck-Ytter et al., 2006; Kanakogi & Itakura, 2011), which makes sense in the light of a direct matching idea that would require a matching of an observed action to an already existing motor program in the observer.
Following the discovery of the mirror neurons and neuroimaging work in adults, a growing body of work in infants and young children has been dedicated to exploring possible neural links between action observation and performance.

Using the EEG to study action observation and execution
The idea that EEG measurements can be utilized to study the perception of others’ actions is based on intriguing findings from the early 1950s. When Gastaut and collaborators presented film sequences to adult participants, they found a systematic modulation of the EEG signal in a frequency range close to the alpha rhythm, which is between 8 and 13 Hz (Gastaut & Bert, 1954). They referred to the rhythmic activity that was modified as the rolandic rhythm “en arceau”. More particularly, during the participants’ own body movements or during the observation of others’ actions, in this case a movie clip of a boxing match, the rolandic rhythm disappeared. However, it reappeared when the camera in the film shifted away from the fighting boxers to

the spectators in the hall, but vanished again when the camera shifted back to the boxers. This empirical report was the first to indicate a modification of an EEG rhythm that is nowadays referred to as the mu rhythm or sensorimotor alpha rhythm. The pattern of blocking, now often described as ‘suppression’, ‘attenuation’ or ‘desynchronization’, was confirmed by EEG investigations in the late 1990s (Cochin, Barthelemy, Lejeune, Roux, & Martineau, 1998; Cochin, Barthelemy, Roux, & Martineau, 1999).
Measuring the mu rhythm and its patterns of desynchronization provides a non-invasive alternative for studying brain activity linked to action observation and execution. In infants, the mu rhythm typically occurs, similar to that of adults, at central sites, although recent work suggests that it may also be recorded at frontal and/or parietal sites (Marshall & Meltzoff, 2011). It has been detected in infants as young as 4 months of age (Smith, 1939, 1941). Its frequency lies, depending on age, between 6 and 9 Hz, which overlaps with the frequency band of the (occipital) alpha rhythm (Marshall, Bar-Haim, & Fox, 2002; Stroganova, Orekhova, & Posikera, 1999). The alpha rhythm is the most dominant frequency in the infant EEG (Bell & Cuevas, 2012) and is measured at occipital sites, which is why it is also referred to as the occipital alpha rhythm. The distinctive feature of the occipital alpha rhythm lies in its sensitivity to visual input: It is strongest when the visual field is homogeneous (Lehtonen & Lehtinen, 1972), for instance when the eyes are closed, but decreases during visual stimulation (Compston, 2010). Although the infant mu rhythm overlaps in frequency with the occipital alpha, it has been proposed that it differs in function and may be an equivalent of the adult mu rhythm (Stroganova et al., 1999).
In order to investigate the mu rhythm, or in fact any frequency band, the EEG data have to be analyzed in a way that yields information on changes of the signal within a given frequency range during experimental tasks. This is achieved by using a frequency analysis.

Frequency analysis
In order to investigate how the EEG signal changes in response to different experimental conditions, it needs to be decomposed into its frequency bands (Saby & Marshall, 2012). This is achieved by conducting a Fourier transform, which converts the data from the time domain into the frequency domain. The strength of the signal in each frequency bin is expressed as spectral power, a measure thought to reflect neuronal excitability (Bell & Cuevas, 2012). Fluctuations in spectral power have been related to increases or decreases in the synchronization of underlying neuronal populations (Pfurtscheller, 1992; Pfurtscheller & Lopes da Silva, 1999) and to changes in the interactions between neurons and interneurons that control the frequency components of the EEG signal (Pfurtscheller & Lopes da Silva, 1999). A common practice for quantifying mu rhythm desynchronization is to calculate it as a percentage. This is achieved by comparing the power in the

chosen frequency band in a baseline condition to that in an experimental condition (mu desynchronization % = (condition - baseline) / baseline * 100). Negative values indicate mu rhythm desynchronization, whereas positive values reflect synchronization; values of zero represent no change in power (Cuevas, Cannon, Yoo, & Fox, 2014).
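As a concrete illustration, the formula can be written out as a small function. The power values below are hypothetical (arbitrary units), chosen only to demonstrate the sign convention:

```python
def mu_desynchronization_percent(condition_power, baseline_power):
    """Percentage change in mu-band power relative to baseline:
    negative = desynchronization, positive = synchronization."""
    return (condition_power - baseline_power) / baseline_power * 100

# Hypothetical mu-band power values (arbitrary units):
print(mu_desynchronization_percent(6.0, 8.0))   # -25.0 -> desynchronization
print(mu_desynchronization_percent(10.0, 8.0))  #  25.0 -> synchronization
print(mu_desynchronization_percent(8.0, 8.0))   #   0.0 -> no change
```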

Findings on mu rhythm modulations in infancy
Recent years have shown a substantial increase in investigations of the mu rhythm in early life (Marshall & Meltzoff, 2014). Although the specific questions and methodological approaches differ between studies, two common themes can be identified (Cuevas et al., 2014). The first major line of enquiry focuses on the study of mu desynchronization in response to the observation of goal- and non-goal-directed actions. Similar to children (Lepage & Théoret, 2006) and adults (Muthukumaraswamy, Johnson, & McNair, 2004; Nyström, 2008), infants at 8 months of age (Nyström, Ljunghammar, Rosander, & von Hofsten, 2011) show greater mu desynchronization in response to goal-directed than non-goal-directed actions. The same pattern appears in 9-month-old infants when the outcome of a potentially goal-directed action is occluded (Southgate, Johnson, El Karoui, & Csibra, 2010). At 6 months, however, infants do not yet show a significant difference in mu power in response to observing goal-directed and non-goal-directed actions (Nyström, 2008). Importantly, infants at the age of 9 months show mu desynchronization not only during action observation but also during action execution tasks (Southgate et al., 2010).
Furthermore, another prominent theme in empirical explorations targets the relation between experience and mu desynchronization. For instance, in a previous study, 14- to 16-month-old infants, who all had more experience with crawling than with walking, showed stronger desynchronization of the mu and beta rhythms when observing videos of crawling rather than walking locomotion (van Elk, van Schie, Hunnius, Vesper, & Bekkering, 2008).
Consistent with these findings are reports from another study on 14-month-old infants, indicating that mu desynchronization is stronger in response to the execution of actions that infants are capable of performing than to actions that are not within their motor repertoire (Reid, Striano, & Iacoboni, 2011). In the same vein, at the beginning of their second year of life, infants’ observation of others’ actions and their expectations of outcomes with regard to object weight might be influenced by their own action experience (Marshall, Saby, & Meltzoff, 2013).
Another study focused on a particular aspect of infants’ familiarity with others’ actions and their outcomes. Stapel and colleagues showed 12-month-old infants movie clips of a female actor performing actions that had either an expected or unexpected outcome (Stapel, Hunnius, van Elk, & Bekkering, 2010). The observation of actions with an unexpected outcome resulted in greater mu desynchronization than those with an expected outcome. As an interpretation, the authors proposed that infants have formed expectations about action goals (either before or during the experiment) and that perceiving an action goal being violated results in greater mu desynchronization than when the goal matches previous expectations.
Interestingly, mu rhythm modulation does not only occur in response to observed actions: A recent study showed that 8-month-old infants show stronger mu desynchronization upon hearing the sound usually connected to an action effect (a rattle sound) than when hearing other sounds (Paulus, Hunnius, van Elk, & Bekkering, 2012). These findings not only provide insights into the functionality of the mu rhythm itself but also indicate that infants are capable of learning an association between an action and an effect (i.e. shaking a rattle results in/is accompanied by a particular sound).
As these examples show, mu rhythm desynchronization has been studied with different tasks, focusing on various aspects of the perception and imitation of actions, demonstrating mu desynchronization in a variety of circumstances from the age of 8 months onwards and suggesting a link between action and perception early in life.

Aims of the thesis

This thesis incorporates a compilation of three empirical studies that aim to advance our understanding of particular aspects of social cognition in early life: how infants and young children perceive and process important social information. The studies apply different methodological approaches, investigating behavior, neural mechanisms or both.
Study I used eye-tracking to explore how infants observe others’ social engagements. Current knowledge on third-party observations is limited, particularly when it comes to the question of what role different visual cues play in how infants look at social scenes. Therefore, Study I aimed to provide insights on this matter. In order to do so, it tested whether infants would be able to discriminate between different social scenarios based on the visual information available in images. In particular, the study focused on the role of third-party gaze (as expressed in variations of body orientation) and the status of the eyes of the social partners.
Although visual information provides important social cues, for instance about the direction of another person’s gaze, Study II concentrated on the fact that young infants naturally face information from multiple sensory channels at the same time. To date, it is largely unknown how the developing brain processes sensory input from more than one channel and at what age infants are able to integrate information from simultaneously occurring visual and auditory cues. To aid in filling this gap in knowledge, Study II used EEG with the aim of exploring neural processing of gaze and speech in young infants.
While the focus of Study I was on measuring infants’ gaze and Study II explored neural mechanisms, Study III incorporated explorations at both the behavioral and neural level. The main objective was to investigate a possible neural link between action observation and imitation in toddlers, which appears to exist in infancy, childhood and adulthood.
In particular, it was of interest whether seeing and performing complex goal-directed actions would lead to mu desynchronization. Furthermore, the study also explored mu desynchronization in response to mimed actions, a tendency previously demonstrated in adults but not infants.

Methods

Participants
For Study I, participants were recruited through a pool of interested families who had been contacted on the basis of birth records of the population living in the area of Uppsala, Sweden. Three different age groups were tested for the study. The final sample consisted of 15 infants at the age of 9 months (M = 279 days, SD = 5 days, 5 male and 10 female), 15 at the age of 16 months (M = 506 days, SD = 9 days, 8 male and 7 female) and 16 at the age of 24 months (M = 736 days, SD = 10 days, 5 male and 11 female). In addition, one infant each at the ages of 9 and 16 months, as well as two 24-month-old participants, were excluded because of a lack of attention. Furthermore, 9 infants in the youngest age group were excluded due to an error in the stimulus presentation.
For Study II, interested families were recruited at the maternity wards of several hospitals in Leipzig, Germany, and were contacted and invited to participate if their child met the age criteria of the study and was born full term (38 to 42 weeks). The study incorporated two experiments. The final sample of Experiment I included 15 4-month-old infants (M = 4 months 22 days, SD = 8.82 days, 7 male and 8 female). A further 22 infants were tested but excluded because of crying (n = 12), lack of attention (n = 1), technical problems (n = 1) or an insufficient amount of artifact-free data (n = 8). The final sample of Experiment II contained 15 infants at the age of 4 months (M = 4 months and 23 days, SD = 10.97 days, 7 male and 8 female). Another 23 infants were tested but excluded because of crying (n = 10) or a lack of artifact-free data (n = 13).
Participants for Study III were recruited through Flemish day-care centers as well as advertisements in magazines and on websites. 17 healthy children between the ages of 18 and 30 months (M = 24.54 months, SD = 3.96 months, 9 male and 8 female) were included in the final sample.
A further 19 children were tested but excluded due to a general lack of cooperation (n = 2), failure to complete the imitation task (n = 2), technical malfunctioning (n = 2) or a lack of artifact-free data (n = 11). Additionally, the data of two participants were identified as outliers and were also excluded.

Stimuli

In Study I, the stimuli were colorful images of two adults in profile view against a white background. More specifically, the images showed a close-up of the models, displaying only their heads down to the shoulders. The adults were either turned towards or away from each other and had open or closed eyes. The stimuli were created in the software GIMP and based on individual portraits of six university students (3 women, 3 men).

Study II presented colorful images of a female face with a happy facial expression (frontal view). The direction of her gaze was either towards the infant or to the side, away from the infant (Experiments I and II). In Experiment II of this study, an object was displayed beside the face. The background of each picture was black. The images were taken from a previous study for which they had originally been created (Grossmann et al., 2006). In addition to an image, an audio recording was presented. Each recording contained either a forward- or backward-spoken German verb.

Study III incorporated several live situations in which a child was presented with a display of an object (object observation condition), followed by instances with an experimenter showing an action with (action observation condition) or without (hand movement condition) the previously seen object. In an additional scenario, the children were encouraged to imitate the action. Different colorful objects were used in the experiment: a hippopotamus soft-toy, an egg-cup, a puppet, a car and a magnifying glass that looked like the head of a frog. In the object observation condition, each object was presented hanging on a string from behind a white blind, swinging slightly back and forth. In the hand movement/action observation conditions, the experimenter sat behind a table, which was empty except for a white box that stood in the middle of the table.
Facing the child, the experimenter moved her left or right hand with or without an object and incorporated the white box into the actions, for instance as a platform onto which the object was lifted.

Apparatus

In Study I, a Tobii T120 corneal-reflection eye tracker (precision 1°, accuracy 0.5°, sampling rate 50 Hz) was used for data acquisition. The stimuli were presented on a 17-inch monitor. A standard 5-point calibration was used, consisting of the display of an image of an object in the center and four corners of the screen in combination with an attention-capturing sound (Gredebäck, Johnson, & von Hofsten, 2010b).

Studies II and III employed the EEG technique for data acquisition. In Study II, the EEG was recorded using an infant-sized EEG cap, conductive gel and Ag-AgCl electrodes at 23 scalp locations of the 10-20 system (Jasper, 1958). In addition, the electrooculogram (EOG) was recorded with two electrodes at the outer canthi for horizontal eye movements and above and below the right eye for vertical eye movements. The signal was amplified via a Twente Medical Systems 32-channel REFA amplifier (Twente Medical Systems International, Enschede, The Netherlands). The vertex (Cz) was used as an online reference. The sampling rate for data acquisition was 250 Hz. The data were recorded while the infants sat in a shielded cabin. Stimuli were presented on a 70 Hz 17-inch monitor. A video camera above the screen recorded the infants' behavior during the experiment.

In Study III, EEG data were registered with 28 Ag-AgCl active electrodes at locations of the 10-20 system, mounted on an infant-sized EEG cap. The signal was amplified (QuickAmp, Brain Products GmbH, Munich, Germany) and referenced to the average of all electrodes. The sampling rate was set to 500 Hz. The EOG was recorded by two electrodes at the outer canthi for horizontal eye movements and calculated offline for vertical eye movements. Two video cameras were used to record the experiment, one angled at the infant and the other at the experimenter. A response box was integrated with the EEG recording software for sending markers at several points in time during the live procedure. More specifically, before the start of each condition, the experimenter pressed the button of the response box, which sent a marker to the EEG device and, at the same time, activated an LED lamp that was visible on the video recordings. These markers were used for the synchronization of the video data with the EEG data and for data segmentation.
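The marker-based alignment of video and EEG data described above amounts to a simple offset calculation: the response-box press appears in both recordings, as a marker (sample index) in the EEG and as an LED onset (timestamp) in the video. The following sketch is illustrative only; the function name and all example values are assumptions, not values from the study.

```python
# Sketch of the marker-based video/EEG synchronization described above.
# A shared marker anchors the two timelines, so any video-coded event
# can be mapped onto an EEG sample index.

EEG_SRATE = 500  # Hz, the Study III sampling rate

def video_time_to_eeg_sample(event_time_s, marker_video_time_s, marker_eeg_sample):
    """Map a video-coded event time (in seconds) to an EEG sample index,
    using a marker that is visible in both recordings."""
    offset_s = event_time_s - marker_video_time_s
    return marker_eeg_sample + round(offset_s * EEG_SRATE)

# Example: LED onset at 12.0 s in the video corresponds to EEG sample 6000;
# a segment coded at 14.5 s in the video then starts 2.5 s (1250 samples) later.
print(video_time_to_eeg_sample(14.5, 12.0, 6000))  # 7250
```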

Procedure

All studies were conducted in accordance with the 1964 Declaration of Helsinki and approved by the respective ethical committees. At the beginning of their visit, each caregiver was informed about the procedure and provided written consent. At the end of their visit, the participating families received either a gift card for a local book/toy store (Studies I & III) or a small gift (Study II) as a token of appreciation.

Study I was conducted in the Uppsala Child and Baby Lab at Uppsala University, Sweden. At the beginning of the experiment, the infants were brought into position in front of the screen of the eye tracker at a distance of 60 cm. In the youngest age group (9-month-olds), the participants were seated in a safety car seat, whereas the older children sat on their parent's lap on a chair (16-month-olds) or on a chair on their own (24-month-olds). Before the presentation of the stimuli started, the eye tracker was calibrated for each individual infant. In total, the experiment took approximately 30 minutes.

Study II took place at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany. Two experimenters were present during the experimental session. After familiarizing the infant with the environment, the infant's head circumference was measured to choose an EEG cap of the right size and to attach the ring electrodes in their positions. While one experimenter distracted the infant, the second experimenter placed the EEG cap onto the infant's head and added a conductive, non-abrasive gel into the ring of each electrode. The electrodes for the EOG measurements were attached to their positions using skin-friendly adhesive stickers. A hairnet and a chest strap were used to maintain a good position of the cap on the infant's head throughout the experimental procedure. With the EEG cap in place, the infants were seated on the caregiver's lap in a dimly lit, electrically shielded and sound-attenuated cabin. The viewing distance between monitor and infant was approximately 60 cm. During the EEG recording phase, the experimenters monitored the infants' attention via the online camera recording (camera above the stimulus presentation monitor). When the infant stopped looking at the screen, an animated spiral with an attention-capturing tone was presented to regain the infant's attention. When the infants stopped being attentive, the session was terminated.

Study III was conducted in a laboratory room of the Department of Psychology at Ghent University, Belgium. As in Study II, two experimenters were present throughout the experimental procedure. The preparations for the EEG measurement were similar to those reported for Study II. However, in this study, the children watched an age-appropriate cartoon while the EEG cap was put into position. During the experiment, the children were seated on their caregiver's lap in front of a table. On the other side of the table sat an experimenter who was facing the child throughout the entire procedure. At certain points in the experiment, the experimenter and the table needed to be hidden from the view of the infant and were therefore occluded by a white blind.
The blind moved up at the beginning of each experimental trial. The complete experiment lasted for approximately 15 to 20 minutes. After the experiment, the parents were asked to fill in a Dutch version of the MacArthur-Bates Communicative Development Inventories (Fenson et al., 1993; Zink & Lejaegere, 2002).

Data Analysis

In Study I, the data were exported from the Tobii Studio software (Tobii Technology, Stockholm, Sweden) as gaze replay videos (time-locked at 50 Hz) and analyzed in a frame-by-frame coding analysis in VirtualDub (www.virtualdub.org). The Tobii fixation filter with a radius of 35 pixels was used for the visualization of fixations. Two Areas of Interest (AOIs) were defined for the images, whereby each rectangular AOI covered one of the faces. The size and position of the AOIs remained the same for all experimental conditions. In the coding analysis, it was determined how many gaze shifts each infant made from one face to the other for each presented picture. In order to be counted as a valid gaze shift, a saccade had to be preceded by a fixation in one AOI and followed by a fixation in the other AOI. For each infant, the valid gaze shifts per picture were averaged across all pictures for each condition.

In Study II, the EEG data were filtered offline (band-pass, 0.3-20 Hz), re-referenced to the algebraic mean of the right and left mastoids and segmented into epochs of 1600 ms. Each epoch included 200 ms of blank screen, 400 ms of face presentation and 1000 ms of face and sound presentation. The data were edited and inspected for artifacts in two steps: first, by excluding all epochs in which the infants were not looking at the screen (using the video recording for reference); second, by excluding epochs with values exceeding a threshold of 50 µV (80 µV for EOG electrodes). Participants with fewer than eight artifact-free trials per condition were excluded from further analysis. ERPs were calculated and the first 400 ms of the trial, when only the face was presented, were used as a baseline. Based on visual inspection of the grand average (the waveforms averaged across all participants), the channels forming the regions of interest (ROIs) for each waveform were chosen for further analyses. Depending on the component morphology, a peak (latency and amplitude) or an averaged-wave (amplitude) analysis was conducted.

For Study III, the video recordings of each participant were coded prior to the EEG data analysis. For the offline coding of the videos, the software Observer XT 9.0 (Noldus Information Technology, 2009) was used. Data of all experimental conditions were coded for body movements and vocalizations. The three conditions in which the infants were observing were coded for attentiveness (non-attentive, attentive).
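Returning briefly to Study I, the criterion for a valid gaze shift described above (a fixation in one AOI followed, via a saccade, by a fixation in the other AOI) can be illustrated with a minimal sketch; the AOI labels and the example fixation sequence are hypothetical.

```python
# Minimal sketch of the Study I gaze-shift criterion: consecutive
# fixations in the two different AOIs count as one valid shift; the
# intervening saccade is implicit in the fixation sequence.

def count_valid_gaze_shifts(fixation_aois):
    """Count valid gaze shifts in a chronological sequence of fixation
    locations, where each entry is 'left', 'right', or None (a fixation
    outside both AOIs)."""
    shifts = 0
    for prev, curr in zip(fixation_aois, fixation_aois[1:]):
        if prev in ('left', 'right') and curr in ('left', 'right') and prev != curr:
            shifts += 1
    return shifts

# Two valid shifts: left -> right at the start and right -> left at the
# end; pairs involving the fixation outside the AOIs (None) do not count.
print(count_valid_gaze_shifts(['left', 'right', None, 'right', 'left']))  # 2
```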
Data from the action imitation condition were coded for whether the infants imitated the previously demonstrated action and whether they were able to imitate all the movements of the action (score of 0 to 3). Following the video coding, the EEG data were analyzed. The raw data were filtered using a high-pass filter of 0.1 Hz, a low-pass filter of 30 Hz and a 50 Hz notch filter. The Gratton and Coles algorithm (Gratton, Coles, & Donchin, 1983) was used to correct for horizontal and vertical eye movements. The vertical eye movements were calculated from the electrode Fp2 (above the right eye) and the common reference. In the EEG data analysis, the electrodes F3, F4, C3, C4, P3 and P4 were investigated. Additionally, an analysis of the data from electrode Oz was carried out to determine desynchronization in the alpha rhythm.

The data for each condition were segmented into 2-second epochs. For the data rejection procedures, the markers from the video coding were imported and, via the markers sent during the experimental procedures, synchronized with the data epochs. All segments in which the children were not attentive (three observation conditions), did not imitate (imitation condition), were vocalizing or showed excessive movements (all conditions) were discarded. Furthermore, segments exceeding a maximal voltage step of 100 µV per sampling point and/or a 400 µV difference between two data points were also excluded. For the subsequent analyses, only data from participants who had at least 20 artifact-free segments per condition were included. Following artifact rejection, the data were transformed using a Fast Fourier Transform (FFT) with a Hanning window of 10% and the data segments were averaged for each condition. The mu frequency band of each individual was determined and the mu wave desynchronization calculated. To do so, the data of the object observation condition were used as the baseline condition. Mu desynchronization was then calculated as a ratio of mu wave power between the baseline and the other experimental conditions (Marshall, Young, & Meltzoff, 2011).

In all three studies, statistical analyses were conducted with Statistica (StatSoft, Inc., Tulsa, Oklahoma), using general linear models. Study I was the only study using a mixed design, incorporating both within- and between-subject factors.
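The mu desynchronization measure for Study III can be sketched as follows, assuming 2-second epochs at 500 Hz and the object observation condition as baseline. Two simplifications are labeled in the comments: a fixed 6-9 Hz band stands in for the individually determined mu band, and a full Hann taper replaces the study's 10% Hanning window.

```python
# Sketch of the mu desynchronization ratio described above. Assumptions:
# fixed 6-9 Hz band (the study used individually determined mu bands)
# and a full Hann taper (the study used a 10% Hanning window).
import numpy as np

def band_power(segment, srate, f_lo, f_hi):
    """Mean FFT power of one epoch within [f_lo, f_hi] Hz."""
    tapered = segment * np.hanning(len(segment))
    power = np.abs(np.fft.rfft(tapered)) ** 2
    freqs = np.fft.rfftfreq(len(segment), 1.0 / srate)
    return power[(freqs >= f_lo) & (freqs <= f_hi)].mean()

def mu_desync_ratio(condition_segs, baseline_segs, srate=500, f_lo=6.0, f_hi=9.0):
    """Mu power in an experimental condition relative to the baseline
    (object observation) condition; values below 1 indicate
    desynchronization."""
    cond = np.mean([band_power(s, srate, f_lo, f_hi) for s in condition_segs])
    base = np.mean([band_power(s, srate, f_lo, f_hi) for s in baseline_segs])
    return cond / base
```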

Study I – Third-party observations in infants

The investigation of infants' observations of others' interactions and their sensitivity to third-party gaze is, as mentioned earlier, a rather new and largely unexplored topic in infancy research. The current study therefore lent itself to exploring this topic further and to adding to the variety of methodological approaches by following in the footsteps of Augusti and colleagues (2010) in some respects, but also taking a novel route concerning the applied stimuli and the chosen study design. To be concrete, the study maintained the operationalization of gaze and constellations of body orientation by presenting face-to-face and back-to-back displays. On the other hand, the format of the display of the social scene included static visual information only. This property highlights the specific purpose of the study: the exploration of infants' sensitivity to body orientation in the absence of any audiovisual events, using a minimalistic approach by presenting images instead of video clips. While such a strategy has the advantage of highlighting the role of relevant social cues in isolation, it automatically elicits the question of whether static images can be sufficient for infants to discriminate between different scenarios. This is particularly important in a situation that falls, at least in comparison, short of providing attention-capturing information, such as motion or ostensive cues (e.g. direct gaze). One previous investigation, however, indicates that images alone can be informative in this context and even influence infant behavior. In this study, 18-month-old infants were presented with images depicting two dolls beside an object (Over & Carpenter, 2009). The specific research question was whether varying the orientation of the dolls would impact the infants' subsequent helping behavior.

The authors found that infants helped an experimenter more frequently after having seen images in which the two dolls were facing each other than when they were turned away from each other. These findings suggest that body orientation can be meaningful in some contexts, even when presented in the form of static visual information. However, it is unclear how task-specific this finding is and whether the same would apply to infants younger than 18 months of age. In the current study, three different age groups were tested: 9-, 16- and 24-month-old infants. The reason for this choice of age groups was, on the one hand, to include a group that had been reported as being sensitive to body orientation before. Given that the results of Augusti and colleagues (2010) imply just that for infants at 6 and 10 months of age, 9-month-old infants were chosen as the youngest age group for the current study. However, because it

was unclear whether images would be sufficient for such a sensitivity to emerge, 16- and 24-month-old infants were included, that is, infants both younger and older than those tested by Over and Carpenter (2009). Choosing a broad range of age groups was also important for another enquiry that was part of this study. Having drawn inspiration from earlier gaze following studies (Brooks & Meltzoff, 2002, 2005; Caron, Caron, Roberts, & Brooks, 1997; Tomasello, Hare, Lehmann, & Call, 2007), a second aim of this study was to examine a potential effect of another visual cue on infants' gaze shifts: whether the eyes of the social partners are open or closed. According to work by Brooks and Meltzoff (2005), infants from the age of 10 months follow an experimenter's head turn more often when her eyes are open than when they are closed. Whether someone's eyes are open or closed during a social interaction is not a marginal matter. If the direction of gaze provides insights into another's focus of attention and intention, then closed eyes should prevent that information from being available to a recipient. Given the saliency of eyes in a face and reports of infants' sensitivity to them (Jessen & Grossmann, 2014), it is valuable to find out whether closed or open eyes would make any difference for how frequently infants shifted their gaze between the displayed individuals.

To summarize, one aim of the study was to find out whether infants would discriminate between face-to-face and back-to-back displays and, more particularly, whether gaze shifts would occur more frequently in a face-to-face than in a back-to-back context, a pattern reported by Augusti and collaborators (2010) and recently referred to as the face-to-face effect (Wilkinson, Metta, & Gredebäck, 2011). The second objective was to explore whether open or closed eyes would have an effect on the infants' gaze shifts and, if that was the case, whether this effect would be dependent on variations in body orientation.

Design

All images that were created for the purpose of the study contained two individuals, showing the entire head down to the shoulders. The two individuals displayed in the images are, for practical reasons, frequently referred to as faces in the context of the current study, although more than their faces are visible. Furthermore, the visibility of the shoulders indicates that the whole body (or at least the torso) was turned to the side and not towards the infant. Therefore, although the entire body is not visible in the images, body orientation rather than head orientation is used as the term to describe the orientation of the social partners. Furthermore, body orientation is discussed as one possible manifestation of body posture through which mutual or averted gaze can be achieved between the partners. In the images, the faces were turned towards (face-to-face) or away from each other (back-to-back) and had either open (eyes open) or closed (eyes closed) eyes. The experimental conditions were therefore face-to-face/eyes open, face-to-face/eyes closed, back-to-back/eyes open and back-to-back/eyes closed (see Figure 1). In total, 30 different images were created for each condition, including all possible variations of face pairs (3 male, 3 female faces) and their position in the image (left or right side). From the resulting 120 images, two sets were created, so that half of the participants of each age group saw one set of 60 images and the remaining half was presented with the other set. The experiment therefore contained 60 trials in total (15 trials per condition), which were divided into two blocks of 30 trials (3.5 min per block), separated by a short break. Each experimental trial incorporated the presentation of a centrally located, animated attention grabber on a white background (2 seconds), followed by the display of the image (5 seconds). In order to vary the order of the images between participants, eight different versions of the experiment were created, each with a different order of images; in addition, the Latin square function of Tobii Studio was used for the presentation.

Statistical analyses were performed using a general linear model with a mixed factorial 3 x 2 x 2 design, with Age (9, 16, 24 months) as the between-subject factor and Body orientation (face-to-face, back-to-back) and Eye status (open, closed eyes) as within-subject factors.
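As a sanity check on the stimulus counts, the 30 images per condition follow from the 15 unordered pairs that can be formed from the six faces, each shown in both left/right arrangements; crossed with the four conditions, this yields the 120 images. The sketch below uses placeholder face labels; how the 120 images were split into the two sets of 60 is not specified here.

```python
# Sketch of the Study I stimulus enumeration: 15 face pairs x 2
# arrangements x 4 conditions = 120 images. Face labels are placeholders.
from itertools import combinations

FACES = ['f1', 'f2', 'f3', 'm1', 'm2', 'm3']  # 3 female, 3 male models
ORIENTATIONS = ['face-to-face', 'back-to-back']
EYES = ['open', 'closed']

def build_stimulus_set():
    """Enumerate every image as (left face, right face, orientation, eyes)."""
    arrangements = [p for pair in combinations(FACES, 2) for p in (pair, pair[::-1])]
    return [(left, right, orient, eyes)
            for orient in ORIENTATIONS
            for eyes in EYES
            for left, right in arrangements]

images = build_stimulus_set()
print(len(images))  # 120 images in total, 30 per condition
```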

Figure 1. Display of the four experimental conditions. Areas of Interest (AOIs) used for all conditions are indicated with black rectangles in the upper left image.

Results

The analysis resulted in a main effect of Age (F(2,43) = 3.74, p = .032, ηp2 = .148) and a main effect of Body orientation (F(1,43) = 7.95, p = .007, ηp2 = .156). There was no significant effect of Eye status (F(1,43) = 1.09, p = .032, ηp2 = .025) and no significant interactions between any of the factors. Regarding the main effect of Age, post-hoc comparisons showed a significant difference between the oldest and the youngest age group in the number of gaze shifts across conditions. More precisely, the 24-month-old participants made significantly (p = .004) more gaze shifts between the AOIs (M = 2.041, SE = 0.152) than did the 9-month-old infants (M = 1.461, SE = 0.157). With regard to the main effect of Body orientation, post-hoc comparisons showed that infants made more gaze shifts between the two faces in the face-to-face (M = 1.799, SE = 0.105) than in the back-to-back (M = 1.617, SE = 0.086) condition.
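As a consistency check, the reported partial eta squared values follow directly from the F values and their degrees of freedom via ηp2 = F·df_effect / (F·df_effect + df_error); a minimal sketch:

```python
# Partial eta squared recovered from an F statistic and its degrees of
# freedom: eta_p^2 = F * df_effect / (F * df_effect + df_error).

def partial_eta_squared(f_value, df_effect, df_error):
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Recovering the effect sizes reported above:
print(round(partial_eta_squared(3.74, 2, 43), 3))  # Age: 0.148
print(round(partial_eta_squared(7.95, 1, 43), 3))  # Body orientation: 0.156
print(round(partial_eta_squared(1.09, 1, 43), 3))  # Eye status: 0.025
```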

Figure 2. Mean gaze shifts in response to face-to-face and back-to-back images for each age group. The y-axis displays the mean number of gaze shifts during the 5 s of stimulus presentation. For each age group, face-to-face and back-to-back images are contrasted (x-axis). Error bars of the columns indicate standard error values.

Discussion

The results of the study demonstrate that, across ages, infants respond differently to face-to-face and back-to-back images in the way they shift their gaze between the faces. Taking these findings at face value, they answer the initial question of whether infants are able to discriminate between body orientations in static images. Furthermore, the results suggest that when infants look at face-to-face images, they respond with more gaze shifts than to back-to-back images, a pattern previously described as the face-to-face effect (Wilkinson et al., 2011). In the current study, this appears to have been the case for the majority of children in all three age groups (Figure 3).

Figure 3. Individual difference scores of gaze shifts for body orientation. Positive values represent more gaze shifts in the face-to-face than the back-to-back condition.

In the light of the results, the main question that remains is what kind of (cognitive) mechanisms this effect is based on. For instance, Wilkinson and colleagues (2011) applied computational modeling in order to follow up on this question, using the images presented in the current study to create a saliency model. This investigation suggested that eye movements are attracted to the facial regions because of their high saliency. A shift from one face to the other in the face-to-face images is easier to maintain because the space between both areas is empty, which is not the case in the back-to-back images. Here, the features of the body and the topology of the images are one factor causing the face-to-face effect. But it may not be the only one. Alternative explanations for the face-to-face effect have previously been brought up by Augusti and colleagues (2010): in the context of their findings on infants' observation of the turn-taking events of a conversation, they propose that either gaze following or some kind of understanding of the interaction may be at the heart of the effect. The discussion of these alternative explanations will be resumed in the general discussion.

When considering the overall number of gaze shifts the infants made between the faces, it becomes apparent that their number, although increasing with age, remains relatively small. This outcome is perhaps not too surprising, given that there was less information available that could have motivated gaze shifts: there were no events. The heads did not move, there was no sound (and no audiovisual speech) and no information on the contingency and temporal structure of potential events. This is important because motion and temporal change have been suggested as strong predictors of eye movements (Itti, 2005). Ultimately, their absence would also mean fewer eye movements (current study) than their presence (Augusti et al., 2010). Thus, given that there were no turn-taking events included in the stimuli, the infants had less reason to shift their gaze between the two faces. Still, it could be assumed that upon the presentation of an image, infants would enter each AOI at least once while visually exploring the image. Such a proposition would lead to the expectation of at least one gaze shift per image (preceded and followed by one fixation). This was the case for all age groups and conditions. In fact, from 9 months to 24 months of age, the gaze shifts per image increased from an average of 1.46 to 2.04. This change could have to do with the older infants being better at disengaging their attention from one face in order to shift to the other face. Returning to a suggestion made earlier on, it could also reflect a growing understanding of others and their communicative intentions, which could increase the interest in third-party engagements, leading to a slight but consistent increase in gaze shifts, even in response to images. One indication of infants' growing ability to monitor others' social engagement is their improvement in predicting the onset of conversational turns between 1 and 3 years of age, which most likely goes hand in hand with a growing understanding of language (von Hofsten et al., 2009).

Interestingly, the current results did not yield significant findings regarding the eyes. As an explanation, it could be argued that the lack of differentiation between open and closed eyes is grounded in the reduced visibility of the eyes. In the present study, the faces were displayed from the side with only one eye of each model visible, containing less high-contrast information about the eye than a frontal view would hold. Given this reduction of salient information, it is possible that the infants either did not detect the eyes or, despite detecting them, the difference between open and closed eyes was of no significant consequence for their gaze shifts, for instance because body orientation became the more salient cue under the circumstances.

To summarize, the core finding of the current study is the following: infants discriminated between the images on the basis of static visual information about body orientation. Furthermore, they shifted their gaze more frequently between the two faces when the social partners were facing each other than when they did not. This pattern of gaze shifts, previously referred to as the face-to-face effect (Wilkinson et al., 2011), is therefore not dependent on the presence of audiovisual events (e.g. vocalizations, turn-taking events).

Study II – Neural processing of gaze and words

During their first months of life, infants become an increasingly active part of dyadic social interactions and experience everyday situations in which they are both looked at and spoken to. How infants are able to combine the available visual and auditory information and what this process may look like at the neural level is to date largely unknown. Despite previous literature suggesting an ability to match face-voice pairs from the age of 2 months (Patterson & Werker, 2003) and crossmodal neural representation of audiovisual speech in 5-month-old infants (Kushnerenko, Teinonen, Volein, & Csibra, 2008), current and future research has yet to map out the role of the direction of others' gaze in the processing of their speech. This is by no means a marginal issue, given its importance for social interactions and learning (e.g. Csibra, 2010). The objective of the current study was to contribute information to this unsolved puzzle, using neurophysiological measurements. In two ERP experiments, it aimed to inform about how 4- to 5-month-olds process eye gaze information in combination with spoken words. While the first experiment focused on eye gaze directed either towards or away from the infant, the second experiment manipulated referential gaze, in which the eye gaze of the face was directed towards or away from an object.

The age of 4 to 5 months was chosen because 5-month-old infants show signs of processing and integrating speech-related audiovisual information (Kushnerenko et al., 2008) and are sensitive to gaze in a triadic context (Striano & Stahl, 2005). Furthermore, 4-month-old infants' processing of object-directed and non-object-directed gaze elicits modulations in the Nc component (Hoehl et al., 2008), suggesting that this component could also be modulated in the current design. As described in the introduction, the Nc is associated with attention and recognition memory in the processing of novel and familiar stimuli (Courchesne et al., 1981; Reynolds, Courage, & Richards, 2010; Reynolds & Richards, 2005). Because the design contrasted forward- with backward-spoken words, it was of particular interest whether the Nc amplitude or latency would differ between the familiar and the unfamiliar speech type and as a function of eye gaze direction. Independent of which component might be important for the processing mechanisms, it can be reasoned that if direct gaze is an ostensive cue that expresses communicative intent, then its combination with forward- or backward-spoken words should result in differential activation, one condition being in line with previous experience (forward-spoken words), whereas the other (backward-spoken words) is not.

Design

In all experimental trials, an image of a female face with a friendly facial expression was presented on a computer screen and combined with an auditory stimulus. Three different images were used as visual stimuli, which varied with respect to the woman's eye gaze direction (see Figure 4). In Experiment I, she looked either towards the infant (direct gaze) or towards the left or right side of the screen (averted eye gaze, left and right side counterbalanced across trials). In Experiment II, the eye gaze was always averted and directed either towards (referential) or away from (averted) an object displayed beside the face.

Figure 4. Stimuli presented in Experiment I and II. In Experiment I, the female face looked at the infant (A) or to the side of the screen (B). In Experiment II, her gaze was directed towards an object (C) or away from it (D).

The auditory speech stimulus was one of 74 German forward- or backward-spoken verbs, spoken by a female voice with a happy prosody. Both experiments consisted of one block of 296 trials, with 74 trials for each of the four conditions. For Experiment I, these were direct gaze/forward-spoken word, direct gaze/backward-spoken word, averted gaze/forward-spoken word and averted gaze/backward-spoken word. The software ERTS (BeriSoft Corporation, Frankfurt, Germany) was used for presenting the stimuli. Trials were presented in a pseudo-randomized order, and the same condition was not repeated more than twice in a row. The number of trials per condition was counterbalanced with five trials per condition in every 20 trials, so that a similar number of trials per condition could be guaranteed when the experiment was terminated after the infant lost interest in the presentation. The face was presented for the entire length of each trial (1400 ms). The sound onset was 400 ms after the onset of the image (sound duration 1000 ms). The inter-trial interval had a random duration between 800 and 1200 ms. For both experiments, repeated-measures ANOVAs were conducted with the within-subject factors Gaze (Experiment I: direct, averted; Experiment II: referential, averted), Speech type (forward, backward) and Location (left, central, right at frontal-central sites; left, right at parietal-occipital sites). The Duncan test was applied for all post hoc comparisons. The analyses included the Nc component (minimum peak amplitude and latency, 0–250 ms from sound onset) at left (F3, FC3, C3), central (Fz, Cz) and right (F4, FC4, C4) frontal-central channels and a negative slow wave (SW) at left (P3, O1) and right (P4, O2) posterior channels (mean amplitude analysis, 650–1000 ms from sound onset).
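As a rough illustration of how the two ERP measures described above are typically extracted, the sketch below computes a minimum-peak Nc amplitude and latency (0–250 ms window) and an SW mean amplitude (650–1000 ms window) from a trial-averaged waveform. This is not the study's actual analysis code: the sampling rate, data shape, channel index and the random data are invented for the example.

```python
import numpy as np

# Hypothetical data: `epochs` is (trials x channels x samples) of
# baseline-corrected EEG (µV), time-locked to sound onset, at `fs` Hz.
fs = 500  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 5.0, size=(74, 9, int(1.0 * fs)))  # 1 s per trial

def nc_peak(avg, fs, t_start=0.0, t_end=0.25):
    """Minimum (most negative) peak amplitude and latency, 0-250 ms."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    window = avg[i0:i1]
    idx = int(np.argmin(window))                 # Nc is a negative deflection
    return float(window[idx]), (i0 + idx) / fs * 1000.0  # µV, ms

def sw_mean(avg, fs, t_start=0.65, t_end=1.0):
    """Mean amplitude of the slow wave, 650-1000 ms from sound onset."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    return float(np.mean(avg[i0:i1]))

# Average over trials for one channel (index 0 standing in for e.g. Fz)
avg_fz = epochs[:, 0, :].mean(axis=0)
amp, lat_ms = nc_peak(avg_fz, fs)
sw = sw_mean(avg_fz, fs)
```

Per-condition values obtained this way would then feed into the repeated-measures ANOVAs described in the text.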

Results

Experiment I

The grand average of Experiment I is displayed in Figure 5, showing the Nc at a frontal location and the SW at a posterior (right) site.

Figure 5. Grand average of Experiment I (n = 15). The Nc peak (Fz) and SW (P4) are indicated with arrows in the two enlarged frames to the left. Negative amplitudes are displayed above, positive below 0 on the y-axis. The x-axis indicates time in seconds. Sound onset occurred at 0, the image onset at -0.4 s (indicated by a vertical line). Direct gaze is indicated with solid lines, averted gaze with dotted lines. Forward-spoken words are represented by red lines, backward-spoken words by blue lines.

Nc peak analysis

The Nc peak analysis resulted in a significant interaction of the factors Gaze and Speech (F(1,14) = 9.68, p = .008), displayed in Figure 6. Post hoc comparisons revealed that the Nc peaked significantly earlier in the conditions with direct gaze/backward-spoken word (M = 106 ms, SE = 8) and averted gaze/forward-spoken word (M = 111 ms, SE = 14) than in the condition with direct gaze/forward-spoken word (M = 143 ms, SE = 14).

Figure 6. The Nc latency interaction (fronto-central channels) between Gaze and Speech for Experiment 1. The y-axis shows the peak latency in milliseconds, lower values indicate an earlier peak. FS-Word = forward-spoken word; BS-Word = backward-spoken word.

SW mean amplitude analysis

The mean amplitude analysis for the slow wave revealed a significant main effect of Speech (F(1,14) = 5.34, p = .04), with a larger amplitude for the backward- (M = -6.00 µV, SE = 1.80) than the forward-spoken word (M = -2.58 µV, SE = 2.05). There was also an interaction between Gaze and Speech (F(1,14) = 13.23, p = .003), with a more negative amplitude for direct gaze/backward-spoken word (M = -8.13 µV, SE = 2.16) than in all other conditions (direct gaze/forward-spoken word: M = -1.70 µV, SE = 2.32; averted gaze/forward-spoken word: M = -3.46 µV, SE = 2.05; averted gaze/backward-spoken word: M = -3.87 µV, SE = 1.65).

Experiment II

The grand average of Experiment II is displayed in Figure 7, showing the Nc at a frontal location and the SW at a posterior (right) site.


Figure 7. Grand average of Experiment II (n = 15). The Nc peak (Fz) and SW (P4) are indicated with arrows in the two enlarged frames to the left. Negative amplitudes are displayed above, positive below 0 on the y-axis. The x-axis indicates time in seconds. Sound onset occurred at 0, the image onset at -0.4 s (indicated by a vertical line). Referential gaze is indicated with solid lines, averted gaze with dotted lines. Forward-spoken words are represented by red lines, backward-spoken words by blue lines.

Nc peak analysis

No significant interactions or main effects were found regarding Gaze and Speech in the amplitude or latency of the Nc.

SW mean amplitude analysis

The SW analysis revealed a significant main effect of Speech (F(1,14) = 7.92, p = .01), and post hoc comparisons showed that the amplitude was more negative in response to the backward-spoken word (M = -6.52 µV, SE = 1.65) than to the forward-spoken word (M = -2.59 µV, SE = 1.42). Furthermore, an interaction between Gaze and Speech (F(1,14) = 7.75, p = .01) appeared. The condition with referential gaze and a backward-spoken word resulted in a more negative mean amplitude (M = -8.64 µV, SE = 2.22) than the other conditions (referential/forward-spoken word: M = -1.39 µV, SE = 1.91; averted/forward-spoken word: M = -3.80 µV, SE = 1.40; averted/backward-spoken word: M = -4.41 µV, SE = 1.59).

Discussion

In summary, the results of the study showed that images in which the gaze was directed towards the infant (Experiment I) or the object (Experiment II) were processed differently in the context of a backward-spoken versus a forward-spoken word. In Experiment I, an interaction effect appeared for the latency of the Nc. One possible interpretation of this is that the unfamiliar backward-spoken word was particularly attention-capturing when the eye gaze was directed at the infant. This is in line with the idea that direct gaze is an important cue, signaling communicative intent. When it is combined with an unfamiliar and unusual cue (i.e. the backward-spoken word), it violates communicative expectancy, leading to a modulation of the Nc, in this case its latency. Interestingly, the Nc also peaked significantly earlier in the condition with averted gaze and a forward-spoken word than in the condition with direct gaze and a forward-spoken word. This case could also be interpreted under the umbrella of expectation violation: the eye gaze of the presented face is directed sideways and is coupled with a forward-spoken word that is not directed at the infant but also does not seem to refer to anything else (since there was no object displayed in Experiment I). Interestingly, the Nc interaction appeared in the first but not in the second experiment, in which the eyes of the face never looked directly at the infant. Possibly, the stronger the contrast between the cues (in terms of their saliency) in the different experimental conditions, the greater the probability for an Nc modulation to occur. Direct gaze and a backward-spoken word are both salient information, even though for different reasons, which might have contributed to an Nc modulation in Experiment I; in the absence of direct gaze, the modulation did not reappear in Experiment II.
It is also interesting to note that combinations of infant- and adult-directed speech and direct or averted gaze do not necessarily lead to an Nc modulation in 5-month-old infants (Parise & Csibra, 2013). It is possible that the resulting combinations of gaze and speech are more or less informative for young infants, but these combinations are perhaps not as surprising and attention-capturing as the combinations used in the current study, particularly in Experiment I.

The grand averages of both experiments showed a slow negative deflection over parietal channels. Similar slow waves following the Nc, here referred to as the SW, have been reported in many infant ERP studies and are discussed as being involved in attentional processes and memory. More particularly, a negative SW has been suggested to reflect the detection of a novel item, whereas a positive SW has been discussed as an expression of memory updating (de Haan & Nelson, 1999; Nelson, 1997). In Study II, the SW is negative, which, in the light of previous research, would suggest that it indicates the detection of a novel stimulus. This matches well with the enhanced amplitude to the backward-spoken word, although here it appears to be the combination of this novel stimulus with an

informative social cue (direct gaze in Experiment I, referential gaze in Experiment II) that modifies attention, in the sense that more attentional resources are used to process this stimulus combination. Taken together, the current study suggests that infants at the age of 4 to 5 months are capable of integrating visual and auditory information at a neural level. Furthermore, it appears as though direct gaze in combination with a backward-spoken word is particularly salient, possibly because it indicates communicative intention and violates communicative expectations at the same time.

Study III – The mu rhythm in toddlers: Seeing and performing actions

Recent EEG work has brought forth insights into the infant sensorimotor mu rhythm and the circumstances of its desynchronization (Marshall & Meltzoff, 2014). As new evidence accumulates, more questions are formulated, for instance about its developmental trajectory, function, and connection to motor and cognitive abilities. The current study selected several interesting questions, aiming to provide empirical data that would assist in resolving some of them. Previous investigations suggest sensorimotor mu desynchronization during action observation and execution tasks in infants from the age of 8 months (Nyström et al., 2011) up to 16 months (van Elk et al., 2008) as well as in children from the age of 3 years onwards (Fecteau et al., 2004; Lepage & Théoret, 2006; Meyer, Hunnius, van Elk, van Ede, & Bekkering, 2011). The resulting age gap between these studies was the target of the current study. Measuring brain activity in children between 18 and 30 months of age (here referred to as toddlers), the first question was whether central mu desynchronization would co-occur with both the observation and imitation of goal-directed actions. Furthermore, findings in 14-month-old infants indicate that desynchronization can also occur over frontal and parietal sites during the observation but not the execution of actions (Marshall et al., 2011). To determine whether a similar pattern would appear in toddlers, frontal and parietal locations were included in the data analysis. Moving from this topographical aspect to a more functional one, another interesting question is what characteristics an observed action needs to possess in order for desynchronization to occur. Could, for instance, a mimed action (a hand movement mimicking an object-directed action) result in mu desynchronization?
Interestingly, desynchronization is not present during the observation of a mimed grasp in 9-month-old infants (Southgate et al., 2010), but does occur in adults, although to a lesser degree than during the observation of a hand grasping an object (Muthukumaraswamy et al., 2004). Possibly, mu desynchronization in response to mimed actions emerges in childhood, raising the question of whether it may be present in toddlerhood. To address this issue, this study included mimed equivalents of the presented object-directed actions. Finally, based on the suggestion that the infant mu rhythm can provide insights into action perception and production linkages early in ontogeny (Marshall & Meltzoff, 2011), it is of interest to examine a potential connection between mu desynchronization and motor and cognitive abilities. Therefore, the last, rather explorative question was whether mu desynchronization (in the different conditions) would be related to language ability, imitation quality and age.

Design

The experiment contained four different conditions, all of which were live scenarios. In Condition 1, the object observation condition, infants were presented with an object hanging on a string in front of a white blind. The object was moving slightly back and forth. In Condition 2, the action observation condition, infants faced an experimenter who was sitting behind a table opposite the infant. The experimenter reached out to grasp an object and moved it in a playful, goal-directed manner from one side of a white box to the other. In Condition 3, the action execution condition, the infants were asked to imitate the goal-directed action they had seen. In Condition 4, the hand movement condition, infants saw the same demonstration without the object being present. Each of these conditions contained five trials: five different objects were presented to the infants (Condition 1), they saw five different goal-directed actions (Condition 2), were asked to imitate these actions (Condition 3) and observed five hand movements (Condition 4), which were the mimed versions of the actions with the objects. Thus, with five trials in four conditions, the experiment contained a total of 20 trials. The duration of the imitation trials was adapted to the child’s own speed of imitation, which was usually no more than 40 seconds. All other trials lasted for 30 seconds.

The experiment began by presenting the five object presentation trials (Condition 1) in one block. This strategy was chosen to make sure that all the infants had seen all objects before the action observation and execution trials. The presented objects were a puppet, a car, a hippopotamus soft toy, a magnifying glass that looked like the head of a frog and an eggcup. The experiment then continued with the remaining 15 trials.
These trials were split into clusters of three (totaling five clusters), each containing one trial from the action observation (Condition 2), action execution (Condition 3) and hand movement (Condition 4) conditions. The order of trials in the clusters was counterbalanced between participants with respect to when the hand movement was shown: it was presented either before or after the infants observed and imitated the action. For all participants and in all clusters, the action execution trial always followed the action observation trial.

In the action observation condition (Condition 2), the infants watched the experimenter move the object on the table in a playful manner. In all cases, a white box was placed in the middle of the table and the objects were moved in various ways in relation to the box (e.g. moving from one side of the box

to the other by moving around it or over it). Each object was moved from its start to a goal position in a different manner. The rationale behind this choice was to make the action as interesting as possible in order to maintain the infants’ attention. In each of the action observation trials, the demonstrated action was performed six times, three times with the left and three times with the right hand. The starting hand was counterbalanced between objects. Before the experimenter started to move the object, she made eye contact with the child and asked for her attention (‘Name child, look!’). Furthermore, in the action execution condition (Condition 3), infants were encouraged to imitate the observed action and were given the object. Finally, in the hand movement condition (Condition 4), the experimenter did not look at or address the child prior to the movement. Thus, with regard to ostensive cues, this condition was different from Condition 3. Each hand movement was presented six times. None of the presented actions and hand movements were accompanied by vocalizations from the experimenter or particular sounds of the objects (other than unspecific noise from moving the object). For the observation trials, the white blind was pulled up at the beginning and lowered at the end of the trial.

The statistical analysis included several steps. First, data were analyzed per condition and across scalp locations. For that purpose, a repeated-measures ANOVA was conducted for each condition (action observation, action execution, hand movement) with Region (frontal, central, parietal) as the within-subject factor. The aim of this analysis was to find out whether central mu desynchronization was present in Conditions 2–4 and to investigate desynchronization at frontal and parietal locations.
Second, an analysis across all conditions was conducted for the central channels (C3, C4) only, with Condition (action observation, action execution, hand movement) as a within-subject factor. Here, the purpose was to look at differences in mu power between conditions. Third, for each condition, paired-sample t-tests were used to compare mu desynchronization at central sites (C3, C4) with alpha desynchronization at the occipital location (Oz). Fourth, at central locations, the relationships between mu desynchronization in each experimental condition (except baseline) and child characteristics (age, comprehensive and expressive language level, imitation score) were analyzed using Pearson’s correlations. For the interpretation of the results it is important to know that negative values indicate mu desynchronization, whereas positive values represent mu intensification relative to Condition 1, the baseline condition.
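The sign convention in the last paragraph follows the common practice of expressing event-related (de)synchronization as a log power ratio relative to baseline. The sketch below illustrates this idea only; it is not the study's pipeline, and the sampling rate, band limits and toy signals are all assumptions made for the example.

```python
import numpy as np

fs = 250           # assumed sampling rate (Hz)
band = (6.0, 9.0)  # assumed mu band for toddlers (Hz)

def band_power(signal, fs, band):
    """Mean spectral power in a frequency band via an FFT periodogram."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

def mu_index(condition_signal, baseline_signal, fs, band):
    """log(power_condition / power_baseline): negative = desynchronization."""
    return float(np.log(band_power(condition_signal, fs, band) /
                        band_power(baseline_signal, fs, band)))

# Toy example: a 7.5 Hz 'mu' oscillation that is attenuated in the
# experimental condition relative to baseline should give a negative index.
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)
baseline = np.sin(2 * np.pi * 7.5 * t) + 0.1 * rng.normal(size=t.size)
condition = 0.5 * np.sin(2 * np.pi * 7.5 * t) + 0.1 * rng.normal(size=t.size)
idx = mu_index(condition, baseline, fs, band)   # expected to be negative
```

Per-condition, per-region indices of this kind would then enter the ANOVAs, t-tests and correlations described above.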

Results

Regional effects of mu desynchronization per condition

The repeated-measures ANOVA revealed an effect of Region in Condition 3, the action execution condition (F(2,15) = 17.006, p < .001), and Condition 4, the hand movement condition (F(2,15) = 9.145, p < .01). This was not the case in Condition 2, the action observation condition (F(2,17) = 1.144, p = ns). In Condition 3, follow-up contrasts showed that mu desynchronization was significantly stronger over central (M = -.41, SD = .29) than over frontal (M = -.13, SD = .36) or parietal (M = -.17, SD = .22) sites. A similar pattern appeared in Condition 4, with significantly stronger desynchronization over central (M = -.26, SD = .20) than frontal (M = -.08, SD = .19) and parietal (M = -.11, SD = .15) sites. In all three experimental conditions, mu desynchronization differed significantly from zero over central and parietal, but not frontal, sites.

Differences in central mu desynchronization

The repeated-measures ANOVA with Condition as a within-subject factor yielded a significant effect of Condition (F(1.25,20.04) = 10.84, p < .01). More concretely, the desynchronization in Condition 3 (M = -.41, SD = .268) was stronger than in Condition 2 (M = -.13, SD = .20) and Condition 4 (M = -.26, SD = .20). Likewise, there was more desynchronization in Condition 4 than in Condition 2 (see Figure 8).

Figure 8. Mean mu desynchronization for the action observation, action execution and hand movement conditions over central sites. Significant differences from zero are indicated (* p < .05; ** p < .01; *** p < .001). Error bars show standard errors.

Central mu desynchronization versus occipital alpha desynchronization

The paired-sample t-tests showed that the desynchronization at central channels was significantly stronger than the desynchronization at the occipital channel in Condition 3 (t(16) = -3.315, p < .01) as well as in Condition 4 (t(16) = -2.161, p < .05). There was no significant difference between desynchronization at central and occipital sites in Condition 2 (t(16) = 1.689, p = ns).

Central mu desynchronization and child characteristics

Pearson’s correlations indicated that age, language level and imitation quality were intercorrelated (all values except one r > .550, p < .05). Furthermore, imitation quality correlated positively with central mu desynchronization in Condition 4 (r = .483, p < .05) and Condition 3 (r = .586, p < .05) but not in Condition 2 (r = .285, p = ns). Additionally, mu desynchronization in Condition 4 was positively correlated with desynchronization in Condition 2 (r = .516, p < .05) and Condition 3 (r = .751, p < .01). However, the latter two conditions were not correlated (r = .126, p = ns).

Discussion

This study is the first to demonstrate significant mu desynchronization during the observation and imitation of goal-directed actions in children between 18 and 30 months of age. The current findings fill the gap between earlier studies and support previous reports on mu desynchronization in infancy and childhood. With regard to the spatial distribution of mu desynchronization, the analyses showed that it was strongest over central sites but also occurred over parietal sites in all three experimental conditions. This finding suggests that between 18 and 30 months, mu desynchronization might not be restricted to central sites, a notion supported by other recent findings (Marshall & Meltzoff, 2011; Nyström et al., 2011; Saby & Marshall, 2012).

When discussing mu desynchronization at different locations, another aspect is noteworthy. As previously mentioned, the alpha frequency range of the EEG signal accommodates the functionally different mu and occipital alpha rhythms. Consequently, it is meaningful to differentiate between the central and occipital alpha rhythms before drawing conclusions about the meaning of power changes in one of them. It was therefore tested whether the desynchronization over central sites differed significantly from the desynchronization over occipital sites. The results showed greater desynchronization at central sites than at the occipital location during the hand movement and action execution

but not during the action observation condition. Therefore, both the hand movement and the action execution conditions can clearly be distinguished from the occipital alpha rhythm, indicating that what was measured at central sites was indeed the mu rhythm.

One of the strengths of the current study is that it employed a live paradigm, which has recently been discussed as being preferable to video presentations (Cuevas et al., 2014; Ruysschaert, Warreyn, Wiersema, Metin, & Roeyers, 2013). Furthermore, two aspects make this study unique. The first concerns the kind of actions that were demonstrated. Instead of simple grasp actions, as have often been used previously with infants (Nyström et al., 2011; Southgate et al., 2010), complex actions were demonstrated. These object-related actions contained several steps: for instance, reaching for a car (step 1), lifting it onto one side of the box on the table (step 2), and moving it across the box to the opposite edge (step 3). This kind of action implementation was chosen to make the actions interesting enough to be observed and imitated. In particular, it was important that the children would be able to maintain attention throughout the experiment, given four conditions and several trials for each type of action.

Furthermore, a compelling feature of the current design was the inclusion of mimed actions, a decision inspired by previous work in adults (Muthukumaraswamy et al., 2004). To be more specific, Muthukumaraswamy and colleagues (2004) demonstrated that when adults observe a target-directed grasp of a human hand, they show greater mu desynchronization than when seeing an empty grip, leading the authors to conclude that “(…) we can infer that observations of effector–object interactions are more effective in engaging the underlying neuronal populations than are observations of motorically equivalent, but nonobject-directed movements” (p. 199).
However, this was not the case in the current study, since the desynchronization was greater in the hand movement than in the action observation condition. The compelling question is why this was the case. First of all, it is helpful to know that a recent study in 18- to 36-month-old children used the same action demonstrations as the current study as well as identical hand movements (Ruysschaert et al., 2013). As in the current study, the hand movement condition resulted in greater desynchronization than the action observation condition when part of a live presentation. Thus, it appears that hand movements that are not directed at and performed with an object can result in mu desynchronization in early life. Interestingly, a previous study in infants showed the opposite outcome: Southgate and colleagues (2010) reported mu desynchronization in 9-month-old infants in response to a (goal-directed) grasping action with an occluded outcome but synchronization in response to a mimed grasp (without an occluded outcome). These differences in outcomes most likely correspond to particularities of the study designs and might also be related to developmental changes between the first and second year of life.

This particular finding and possible interpretations will be further discussed in the general discussion of this thesis.

Finally, the results showed a positive correlation between the quality of imitation and mu desynchronization in the action execution and hand movement conditions. In other words, a better imitation of the observed action was related to a weaker desynchronization. Why this would be the case is unclear; in fact, the opposite pattern would rather be expected if a stronger mu desynchronization indicated more activation in the motor system, which is likely to occur during successful imitation. To find out whether this is merely a singular finding, particular to the current study (e.g. related to the measurement of imitation and its quality, movement artifacts or other aspects), future studies will have to investigate this aspect further.

The current study had several limitations. For instance, the age range of the participants was rather large, particularly in relation to the sample size. This was due to the fact that there was no available pool of interested families, which made recruitment more difficult and limited the sample size. Also, measuring EEG in toddlers is a very challenging endeavor, resulting in a considerable dropout rate. Furthermore, the differences in the experimental procedure between the action observation and hand movement conditions (i.e. whether or not looking at and addressing the child was included) were not ideal and should be better controlled in future studies.

Taken together, the study provided valuable insights into mu desynchronization in toddlerhood. First, the mu rhythm desynchronizes in children between 18 and 30 months of age in response to observing and imitating goal-directed actions. Second, desynchronization also occurs in response to mimed actions at this age, a pattern resembling previous findings in adults but not infants.

General discussion

Enveloping three different empirical studies, this thesis aimed to add further knowledge to the understanding of how young infants and toddlers observe and process information vital to their social cognitive development: others’ gaze, words and actions. These social cues have been featured and investigated in a number of ways, focusing either on behavior (Study I), neural processes (Study II) or both (Study III). In the first two studies presented in this thesis, an exploration of gaze as a social cue was the main objective. Focusing on gaze in a third-party context, Study I indicated that 9-, 16- and 24-month-old infants are sensitive to whether or not social partners face each other and shift their gaze more frequently between them when this is the case. Furthermore, a differentiation between face-to-face and back-to-back scenarios was found on the basis of static visual information alone, without the need for dynamic visual or auditory information. Study II examined gaze as a social cue in the context of speech and, based on infants’ early sensitivity to gaze and speech, pursued the question of whether 4- to 5-month-old infants show indications of integrating these multimodal cues at a neural level. The results showed that this is indeed the case: infants process speech differently depending on whether it is combined with direct or averted gaze. Study III was dedicated to exploring the neural link between action observation and execution in toddlerhood by studying modulations of the infant mu rhythm. This study showed that mu desynchronization in 18- to 30-month-old children occurred not only during the observation of complex actions, but also when the children imitated these actions and when they saw mimed versions of the original object-directed actions. These findings indicate that mu desynchronization in toddlers occurs – as in adults (Muthukumaraswamy et al., 2004) – during the perception and performance of actions, even in situations where an action is not targeted at a visible object. With the aim of discussing these findings in the context of other empirical studies on the development of social cognition, the following sections feature and elaborate on different aspects related to the importance of others’ gaze and actions.

Gaze – a key player in infant development

Perhaps it cannot be stressed enough how important gaze is as a social cue for navigating the complex, stimulus-laden sensory environment, understanding others’ intentions and learning from others and about the world. The start of it all can be seen in newborn infants’ preference for faces (Valenza et al., 1996) and sensitivity to gaze (Farroni et al., 2002). On the basis of this preference for very important social cues, and within the margins of mutual gaze, infants begin their lifelong journey of learning about and making sense of others’ behavior. In gaze following, in its earliest forms reported at 3 (D’Entremont, 2000) or 6 months (Butterworth & Jarrett, 1991), another person’s gaze draws the infant’s attention and gaze to locations in the environment. This is the basis for the ability to jointly attend to and share attention towards an object of interest, which develops in the second half of the first year and in the second year of life (Mundy & Newell, 2007; Tomasello, 1995). Thus, the world of dyadic interactions, with mutual gaze as the foundation on which communication between the infant and another person occurs, opens up and starts to become triadic. One of the reasons why joint attention abilities are so important is that it is in episodes of joint attention that infants start to learn that certain labels are used to refer to the objects involved in the interaction, before learning to use these labels themselves (Tomasello & Farrar, 1986). Not only gaze but also other joint attention behaviors, such as pointing, play an important role in this process (Butterworth & Morissette, 1996; Goodwyn, Acredolo, & Brown, 2000). Here, the link between gaze and language acquisition becomes clear, one that is the target of Study II.
As Study II has shown, infants show signs of integrating information from gaze and speech very early on, before they are capable of understanding the actual meaning of the words they process, from around 9 months (Bergelson & Swingley, 2015). It could be argued that the infant brain, as early as 4 to 5 months, is – at least in some ways – prepared for dealing with the kind of information that it will face during the process of language acquisition. This interpretation fits well with the idea that infants have a bias for speech prior to language acquisition (Vouloumanos & Werker, 2007), which is supported by findings in newborns that suggest differences in the processing of backward and forward speech (Peña et al., 2003). Once infants, towards the beginning of their second year of life, start to produce their own words (Kuhl, 2004), they also seem to become more flexible in learning from others without relying on mutual gaze as a scaffold. Previous studies have shown that at 24 months of age or earlier, infants can extract information from others’ social interactions and learn words by overhearing their conversations (Tomasello & Barton, 1994). By this time, infants have formed expectations about how other people communicate with each other and how they make use of speech in order to do that. For

instance, 24-month-old infants shift their gaze faster from one person (A) to another (B) when person A produces speech than when she utters a non-speech sound (Thorgrimsson, Fawcett, & Liszkowski, 2014). Interestingly, this is only the case in a face-to-face but not in a back-to-back context. These findings are coherent with the face-to-face effect found in Study I.

Gaze and intentions in a third-party context

As has been indicated, others’ gaze plays a vital role in early social cognition, in dyadic, triadic and third-party contexts. Particularly in the latter situation, the alignment of gaze between two social partners indicates their communicative intent to an observer. Of course, gaze is just one possible form of non-verbal behavior; it naturally co-occurs with gestures, facial expressions, body posture and body movements. Study I, in line with previous studies (Augusti, Melinder, & Gredebäck, 2010; Thorgrimsson et al., 2014), indicates that the coordination of others’ gaze in social interactions is something that infants are sensitive to. The expression of this sensitivity has been suggested to lie in the gaze shifts infants make in response to images or video clips. Either the frequency of the gaze shifts (e.g. Augusti, Melinder, & Gredebäck, 2010; Study I) or their latency (e.g. Thorgrimsson, Fawcett, & Liszkowski, 2015; von Hofsten, Uhlig, Adell, & Kochukhova, 2009) is used as a measure to pinpoint this sensitivity, or the extent to which infants have expectations about the occurrence of events (e.g. who is speaking next and when) in social settings. Three different interpretations have been brought forward as explanatory approaches to findings showing that infants make more frequent or more rapid gaze shifts in face-to-face than in back-to-back scenarios. One of them is the suggestion that the particular features of the visual scene play a role in how often infants shift their gaze back and forth between social partners and facilitate more gaze shifts in face-to-face than in back-to-back scenarios (Wilkinson et al., 2011). This idea is associated with a computational model, which is based on an analysis of the images used in Study I.
This interpretation exemplifies a feature-based, bottom-up mechanism that does not involve an understanding of or sensitivity to others’ communicative intentions. Furthermore, Augusti and colleagues (2010) put forward two explanatory approaches for the finding of more gaze shifts in infants’ observations of face-to-face than of back-to-back conversations. First, infants use the gaze direction of the conversational partners as a directive cue to be able to shift their gaze back and forth between them. In the face-to-face situation, this will lead to more “successful” gaze shifts, because following the gaze direction of one person will always lead to the other person. This mechanism requires the detection of gaze prior to following its direction, but it does not require infants to attend to the ongoing conversation. Second, the social cognitive explanation relies on the idea that infants have formed expectations about how others’ social interactions are performed. As a consequence, they follow the events of a conversation to a larger degree when it is in line with these expectations. As in the explanation based on gaze following, the detection of gaze is also necessary for the social cognitive explanation. In their paper, Augusti and colleagues (2010) do not specifically discuss an understanding of others’ communicative intentions, but it is possible that it would be connected to infants’ expectations about others’ communicative acts.

At this point in time, possibly because only a limited number of studies have been conducted, a theoretical framework concerning the observation and understanding of others’ social interactions has not yet been put forward. This lack of theories on how infants look at, perceive and understand others’ social interactions makes it difficult to interpret the findings that have so far emerged and to decide whether and under what circumstances any of the three explanations may apply.

Returning to the thoughts on others’ communicative intentions, it appears that what we can conclude about infants’ gaze shifts and their understanding of others’ gaze might largely depend on what kind of intentions are displayed in a certain situation and by what means these are expressed. For instance, when seeing one person asking a question and another person answering it, we could see the first person as having the intention to receive information from the other person by asking the question, and the second person as answering the question with the intention to provide this information. When we observe one person holding an open box of chocolate and stretching her arm forward towards another person, we may infer that she is doing so with the intention to offer a piece of chocolate. In both scenarios, intentions are expressed in a particular way.
For instance, in the first case, speech is the means by which the goal of an action is achieved, whereas in the second, it is non-verbal behavior, a gesture of offering that is operationalized via arm and hand movements. In both scenarios, mutual gaze indicates a communicative intent, as a prerequisite for an interaction, which might be seen as a more general intention, but it is most likely combined with other, more specific intentions (getting information, offering chocolate). At what level these latter intentions are understood might depend on age or on certain acquired abilities. For instance, the use of speech for that purpose might require the understanding that speech is a means to communicate, which seems to be present at 12 months of age (Thorgrimsson et al., 2015), or even an understanding of the meaning of the words, which is still evolving at this age.

I would like to argue that in a situation where only static visual information is available, as was the case in Study I, the only intention that infants could associate with the faces displayed on the screen is the general intention or willingness to engage with another person (face-to-face images) or the lack of it (back-to-back images). It is possible that in order to connect such an intention with the faces on the screen, infants associate the images with real-life situations, which would presuppose that they are, at least to some degree, familiar with third-party interactions. When infants or young children see two people look at each other (face-to-face images), they might identify their intention to communicate with each other. In this situation, the lack of any communicative events or additional information that would indicate what this social engagement is about (i.e. more specific intentions) could violate their expectation (to return to the previous examples, this could be saying a sentence or offering a piece of chocolate). Such a violation of expectation would not occur when they see back-to-back images. This interpretation would be in line with the findings in the 24-month-old infants, showing expectations about the use of speech in face-to-face but not in back-to-back scenarios (Thorgrimsson et al., 2015). However, at this stage, it is difficult to rule out alternative explanations for the face-to-face effect and to determine the role of bottom-up mechanisms of visual processing, which could for instance be influenced by the topography of the images (Wilkinson et al., 2011).

To conclude, whatever scenario is used to study infants’ understanding of gaze in a social context, the intentions of social partners in a given situation are particular to this situation, and so are perhaps the processes at play for forming an understanding of these intentions. Furthermore, these processes are likely to be influenced by the type of information available (e.g. static versus dynamic information), its salience (e.g. visibility of the eye region), the context (e.g. gaze in the context of speech, gestures or actions) and the extent to which the available information might compete for attention, as well as the setting of the experiment (live versus screen-based).
Thus, when examining the role of gaze for infants’ understanding of others’ intentions, it appears to be important to know what kind of intentions underlie behavior, how they are expressed in behavior and to what extent gaze can reveal this information, given the situation and experimental set-up.
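The gaze-shift frequency measure discussed above can be illustrated with a minimal sketch. The AOI labels, the fixation-sequence format and the function name are hypothetical simplifications for illustration, not the actual analysis pipeline of Study I:

```python
def count_gaze_shifts(fixation_aois):
    """Count alternations between the two social partners' AOIs.

    `fixation_aois` is the sequence of area-of-interest labels of
    successive fixations. Fixations outside both person AOIs are
    ignored, so a shift is any change from one partner to the other,
    possibly via the background.
    """
    persons = [a for a in fixation_aois if a in ("person_a", "person_b")]
    return sum(1 for prev, cur in zip(persons, persons[1:]) if prev != cur)

# Example: A -> B -> (background) -> A counts as two shifts.
shifts = count_gaze_shifts(["person_a", "person_b", "background", "person_a"])
print(shifts)  # 2
```

A face-to-face effect would then show up as a higher shift count, per trial or per unit of looking time, for face-to-face than for back-to-back images.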

Neural processing of gaze and its integration with other social cues

The hypothesis of the social brain network suggests that specific areas in the brain are concerned with processing social information, the so-called social brain (Adolphs, 1999, 2001, 2003; Pelphrey & Carter, 2008). This hypothesis leads to further questions: Can it be assumed that infants are equipped with this type of specialization at birth (the role of nature)? How does the social brain emerge or specialize during development as a result of experience (the role of nurture)? One possibility for how the brain could adapt throughout development to become specialized in processing social information is the hypothesis of interactive specialization (Johnson, 2001). In short, the idea is that, initially, cortical regions have rather broad functions, but become increasingly specified throughout development. Over time, areas that specialize in processing social information become part of the social brain network. This also implies that neural mechanisms at the very beginning of life may differ from those at a later stage in development in terms of the brain structures that are involved in processing social information. For instance, the detection of direct gaze at birth might be based on activation in subcortical structures, but over time, these structures may start to interact with cortical areas such as the STS, which is activated in response to eye gaze in mid-childhood and adulthood (Allison et al., 2000). Findings from ERP work indicate that gaze and face processing changes across development (Taylor, Edmonds, McCarthy, & Allison, 2001), which also involves a transition from infancy to adulthood in the ERP components that index face and gaze processing (Grossmann, 2015; Hoehl et al., 2009).

Integration of gaze with other social cues

If gaze processing undergoes developmental changes, the question of how gaze is processed in combination with other social cues becomes even more exciting – and an answer definitely harder to obtain. Study II showed that infants integrate information from gaze and speech very early on, at the age of 4 to 5 months. This is in line with recent findings in adults, suggesting that whether a person is turned towards and looks at an observer has an effect on the processing of that person’s speech and gestures (Straube, Green, Jansen, Chatterjee, & Kircher, 2010). In general, the study of multisensory integration in infancy is still in its early stages, and the number of studies looking at gaze and speech perception is as yet limited.

One of the insights drawn from the existing studies is that when looking at infants’ processing of gaze in comparison to speech, the chosen paradigm and stimuli may play an important role for the outcomes and the conclusions drawn from them. A recent ERP study presented 5-month-old infants with images of a face with direct or averted gaze and pseudowords, spoken in infant- or adult-directed speech (Parise & Csibra, 2013). In the first experiment, auditory and visual stimuli were presented in separate trials, but they were combined in the second experiment. This strategy was a smart way of disentangling how each stimulus was processed separately before looking at their combined effect. Interestingly, adding the visual and auditory information did not seem to lead to an additive effect. In other words, ostensive cues in combination (i.e. direct gaze, infant-directed speech) did not lead to “more” activation than a single ostensive cue. However, reports of a recent near-infrared spectroscopy (NIRS) study in 6-month-old infants indicate that this conclusion might be specific to the applied paradigm (Lloyd-Fox, Széplaki-Köllőd, Yin, & Csibra, 2015).
In comparison, in a live paradigm, Lloyd-Fox and colleagues (2015) showed that the combination of (direct/averted) gaze and (infant-/adult-directed) speech affects processing. To be more specific, the combination of two ostensive signals (direct gaze and infant-directed speech) enhanced brain activation in some areas in comparison to only one ostensive cue. This example demonstrates that, for studying the processing of multimodal cues in infancy, the choice of paradigm (e.g. live versus screen-based) is not a marginal matter.
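Condition contrasts of this kind – comparing neural responses across gaze–speech combinations – are typically quantified in ERP work by baseline-correcting epochs time-locked to stimulus onset, averaging them per condition, and comparing the mean amplitude within a component time window. The following sketch shows the general idea; the time grid, baseline interval and component window are illustrative values, not the parameters of Study II:

```python
import numpy as np

def mean_component_amplitude(epochs, times, baseline=(-0.2, 0.0),
                             window=(0.4, 0.8)):
    """Mean amplitude of a condition-average ERP in a time window.

    epochs : array of shape (n_trials, n_samples), one channel/condition
    times  : array of shape (n_samples,) in seconds, 0 = stimulus onset
    Each trial is first corrected by subtracting its own baseline mean.
    """
    base = (times >= baseline[0]) & (times < baseline[1])
    corrected = epochs - epochs[:, base].mean(axis=1, keepdims=True)
    erp = corrected.mean(axis=0)  # average across trials
    win = (times >= window[0]) & (times < window[1])
    return erp[win].mean()
```

An earlier or enhanced component for one gaze–word combination would then appear as a difference in this value, or in the latency of the component peak, between conditions.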

Infants’ perception of others’ actions and the mu rhythm

Study III provides evidence on changes in neural activation in response to young children’s observation and execution of manual actions. It appeared that children between the ages of 18 and 30 months show mu desynchronization in response to seeing complex actions as well as to imitating those actions. What makes this study particularly exciting is the finding that the mu rhythm desynchronizes even when children observe a manual action without an object (hand movement condition), which is also confirmed by more recent investigations (Ruysschaert et al., 2013; Ruysschaert, Warreyn, Wiersema, Oostra, & Roeyers, 2014). Interestingly, this also seems to be the case in adulthood. Several studies in adults show mu desynchronization in response to movement and non-object-related actions (Babiloni et al., 2002; Pfurtscheller, Neuper, Andrew, & Edlinger, 1997). For instance, adults show mu desynchronization to an empty grasp, even though it is, unlike in Study III, greater when the grasp is performed on an actual object (Muthukumaraswamy, Johnson, & McNair, 2004). Likewise, observations of actions that are related to a target (e.g. pointing at a mark on a table) result in a stronger desynchronization than gestures that lack a target (e.g. clenching a fist), but desynchronization also occurs when the target is missing (Avanzini et al., 2012). Thus, in toddlers as well as adults, mu desynchronization can be found for actions that do not involve a target. To date, only one study in infants has included non-object-directed actions in the study of mu desynchronization. Interestingly, 9-month-old infants do not show mu desynchronization when observing non-object-related actions, also referred to as mimed actions (Southgate et al., 2010). Can it be concluded that there is a developmental shift between infancy and toddlerhood with regard to how the mu rhythm responds to different types of actions? There are several reasons for considering such a conclusion premature. The following section elaborates on why this is the case.

Mimed actions and the mu rhythm

First of all, it is not very straightforward to find the best possible term for the actions employed in the hand movement condition of Study III. One possibility is to call these actions “mimed”, as previous work in infants has done (Southgate et al., 2010). This term captures some of the ambivalence of the situation. On one hand, it is possible to argue that an action without an object is not object-directed. That is true in the sense that there is no object present. On the other hand, the hand movements are performed in a manner that – at least to some extent – implies that the action is meant to be targeted at an object. Therefore, it could be argued that these actions are object-directed. However, describing them as mimed actions gets around this problem by referring to the fact that the performed action is in some sense a copy of an object-directed action. In order to make sense of a mimed action and to interpret it as goal-directed, it appears necessary to infer the missing object. Doing so might be based on associating the mimed action with an object-directed action. However, it is not clear what kind of cognitive processes are involved when young children observe these actions, which might be familiar in some ways (i.e. the movement) but unfamiliar in others (i.e. the lack of an object). Do they try to identify the action, recruiting their own memory of previously seen actions? What role does the context of the action play? Is it important how closely the mimed version of a grasping action resembles the actual grasping action? Due to a lack of studies focusing on these questions in early life, no clear answer can yet be given.

Taken together, the children in Study III could have inferred an object when seeing the mimed actions, which might have helped them to interpret the actions as goal-directed. In this argumentation, mu desynchronization could be seen as a signature of goal-directedness. This is exactly the reasoning of Southgate and colleagues (2010) when interpreting the finding that 9-month-old infants show mu desynchronization in response to seeing a hand grasping behind an occluder.
Although the infants do not see whether an object is behind the occluder, mu desynchronization is interpreted as an indicator of the infants’ assumption that there is an object hidden behind the occluder. To reverse the argument, if that were not the case, the grasp would not be interpreted as goal-directed and the mu rhythm would not desynchronize. The problem with this argumentation is that it is unclear whether mu rhythm desynchronization is so strongly tied to goal-directedness. Despite their reasoning, I would argue that the study design of Southgate and colleagues (2010) does not allow for a conclusion on whether or not the infants assumed that an object was behind the occluder. A previous study in macaque monkeys recorded neuronal activity in the premotor cortex while the monkeys watched grasping actions that were either fully visible or partly occluded (Umiltà et al., 2001). The study included trials with and without an object hidden behind the occluder. At the beginning of each trial, the occluder was moved aside so the monkeys could see whether an object was present or not. This way, the authors could test whether seeing the complete action was necessary to activate the monkeys’ motor system or whether knowing about an occluded object would suffice. Interestingly, the neurons fired when the monkeys knew that the object was present and could thus infer a goal for the grasping action. This was not the case when there was no object behind the occluder. Here, the design of the study allowed the authors to identify under what circumstances the monkeys inferred the object as the goal of an action. Lacking a similar control condition, the study design of Southgate and colleagues (2010) cannot convincingly establish that mu desynchronization stands for inferring an object behind the occluder. As an alternative explanation, the infants could have interpreted the occluder itself as the goal of the grasping action, as if the hand were going to take it away. Although the authors do not discuss such an option, this could also explain the mu desynchronization. The discussion of these findings in infants and monkeys is important because it demonstrates the impact of the study design on the argumentation about and interpretation of potential cognitive processes (i.e. inferring an object) and their relations to neural mechanisms (i.e. mu desynchronization). It is also meant to demonstrate that it is difficult to determine the cognitive processes at work during action observation in infants or toddlers when this is attempted with a measure (i.e. mu desynchronization) whose functionality and modulation in response to action observation are not yet well understood.

In general, unless more insight is gained into how mu desynchronization in response to occluded actions compares to that for fully visible actions including an object and for mimed versions of the same actions, it is difficult to come to conclusions about what the mu rhythm stands for and whether the desynchronization is in any way influenced by the actions chosen for demonstration in a study. Still, it can be concluded that the mu rhythm desynchronizes under different circumstances, in response to partly occluded (but not necessarily mimed) actions in infants and to mimed actions in toddlers (Ruysschaert et al., 2013, 2014). However, we are not yet able to come to any conclusion about a developmental trajectory concerning mimed actions.
Furthermore, it appears as though the scientific community has only just started to explore the neural and cognitive processes involved when young children observe such actions.

When does the infant mu rhythm desynchronize?

Given the current state of knowledge and its limitations, what can we conclude about the mu rhythm in infancy and toddlerhood? Importantly, it appears that desynchronization in response to action observation and imitation is a recurring pattern, which is also supported by the results of Study III. Beyond that, there are several important questions, which I will briefly pose and answer in the light of the current findings.

First of all, is the desynchronization restricted to object-directed actions? This does not seem to be the case, because desynchronization of the mu rhythm also occurs when an action is not directed at an object, both in infants (Nyström et al., 2011) and in toddlers (Study III). When comparing the desynchronization for object-directed actions with that for non-object-directed actions, results in infants indicate a stronger desynchronization in response to actions directed at an object (Nyström et al., 2011), whereas the findings of Study III indicate that this may not necessarily be the case in toddlers. Also, desynchronization occurs not only in response to object-directed actions but also in response to intransitive actions, such as crawling (van Elk, van Schie, Hunnius, Vesper, & Bekkering, 2008), to partly occluded actions (Southgate et al., 2010) and to situations that suggest impending hand actions (display of hand and object) but do not actually contain the performance of these actions (Southgate & Begus, 2013). Further, seeing an object that is associated with an action seems sufficient for a desynchronization to appear (Paulus et al., 2012). These findings demonstrate that mu desynchronization occurs in response to a broad range of situations. They also suggest that the occurrence of mu desynchronization and motor activation does not necessitate a visible goal or the sight of an object-directed action.
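Mu desynchronization itself is commonly quantified as event-related desynchronization (ERD): the percentage drop in band power during, say, action observation relative to a baseline interval. The following sketch shows the general idea; the sampling rate, the 6–9 Hz band (a common choice for the infant mu rhythm) and the function names are illustrative assumptions, not the analysis parameters of Study III:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` within a frequency band (via FFT)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return power[mask].mean()

def erd_percent(baseline, event, fs, low=6.0, high=9.0):
    """Event-related desynchronization in percent.

    Positive values mean that mu-band power dropped during the event
    (e.g. action observation) relative to the baseline interval.
    """
    p_base = band_power(baseline, fs, low, high)
    p_event = band_power(event, fs, low, high)
    return 100.0 * (p_base - p_event) / p_base
```

A desynchronization during both observation and execution of object-directed actions, as reported in Study III, would then correspond to positive ERD values in both conditions.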

What role does experience play?

Another important question is whether motor experience is a prerequisite for mu desynchronization during action observation. Very recent findings in 9-month-old infants indicate an association between mu desynchronization and action competence in reaching tasks (Cannon et al., 2016). However, another recent study did not find such a relation between 9-month-olds’ observation and performance of an action with a tool (Yoo, Cannon, Thorpe, & Fox, 2015). Interestingly, the authors also found that a relation between the desynchronization in response to action observation and performance appeared at the age of 12 months. As Yoo and colleagues (2015) suggest, it is possible that differences in the tasks and analyses of these two studies account for this discrepancy.

Findings on intransitive actions such as walking and crawling (van Elk et al., 2008) as well as jumping or dancing (Reid et al., 2011) suggest that motor competence plays a role in motor system activation during action observation. Taking a glance beyond the work on the mu rhythm, the importance of motor experience is highlighted by behavioral work. Intriguingly, at the age of 3 months, before infants are typically able to reach, the experience of performing a reaching action (with the help of sticky mittens) alters infants’ action perception, in particular their sensitivity to the action goal (Sommerville, Woodward, & Needham, 2005). The results from recent infant ERP studies also support the idea that motor experience with an action, rather than visual experience, alters the neural processing of goal-directed actions (Bakker, Sommerville, & Gredebäck, 2016). Also, reaching competence seems to play a role in infants’ encoding of the relationship between a hand and a goal (Bakker, Daum, Handl, & Gredebäck, 2015).

Taken together, insights from work on the mu rhythm and beyond indicate that motor experience plays an important role in how infants and young children perceive others’ actions, although not all studies find a clear indication of this. This may be the result of differences between studies, but it could also mean that experience, be it of a motor or visual nature, might be more or less crucial for action observation, goal attribution and action understanding, depending on the situation and the action that is observed.

Future directions

With regard to Study I, there are key questions that need to be followed up in future research. For instance, how frequently do infants observe others’ interactions (in comparison to being involved in them)? Under what circumstances do infants and children listen in on or look at others’ conversations? More specifically, what could be the cues that trigger them to orient their attention to them? For instance, are there certain signals that cause an automatic orienting to an ongoing conversation (e.g. hearing one’s own name being mentioned) and, if so, what other factors might play a role here (e.g. age or the extent to which a child is currently engaged in another activity)? Furthermore, once the attention of a young child is drawn towards a third-party interaction, what may cause them to keep on looking at it and follow the flow of turn-taking events? Although in recent years empirical studies have yielded initial insights that relate to some of these questions, the currently available evidence is insufficient to provide answers.

One possible route of investigation would be to follow in the footsteps of Study I and use eye-tracking and on-screen stimulus presentation as a means to investigate infants’ observations of third-party interactions. Although this approach has its drawbacks, screen-based presentations of stimuli have the advantage of being well controlled and provide the opportunity to find out more about the role of particular visual (and also audio-visual) cues. For instance, it could be very interesting to further explore infants’ sensitivity to others’ eyes in a third-party context. In Study I, infants did not show sensitivity to whether the eyes of the faces were open or closed. Because 10-month-old infants are sensitive to an experimenter’s eye status in a live gaze-following situation (Brooks & Meltzoff, 2005), it is possible that at some point in the second year of life, infants would start to differentiate between open and closed eyes in a third-party context. In order to find out whether this is the case, the visibility of the eye region (i.e. so that both eyes are visible) could be increased in a future study. This might help to find out under what circumstances infants or young children pay attention to and derive meaning from others’ eyes in a third-party context.

More generally speaking, in potential follow-up studies of Study I, it would be beneficial to include a broader range of measures to investigate infants’ observation of images of social interactions, such as fixation durations, changes in fixation durations over time or saccade latencies. Which of those measures would be included is of course dependent on the design and particular question of the study.

As Study II has shown, young infants are already capable of integrating information from gaze cues and speech. However, as mentioned earlier on, neural markers for such integration, for instance the Nc, may undergo modulations in certain paradigms (Study II) but not in others (Parise & Csibra, 2013). This may depend on how and in what context gaze and speech information are presented (e.g. live versus screen-based). It would be very valuable to map out trends concerning what kind of gaze/speech combinations result in modulations of ERP components such as the Nc. Once this is clearer for infants at around 5 months of age, it would be intriguing to explore if and to what extent these patterns continue to occur over the second half of the first year of life and the second year of life, when infants start to form their own words.

When it comes to the investigation of the mu rhythm in infancy, the future looks promising. There are plenty of possibilities for advancing this field of research. Studying mu desynchronization in connection with mimed actions marks just one of those opportunities. The role of others’ gaze for action perception in early life remains unclear, due to the fact that in action perception studies on manual actions, the head of the experimenter is often occluded in order not to constitute a confound. Thus, it would be intriguing to present action observation and execution tasks with and without a full view of the person performing the actions, to find out whether, for instance, the availability of gaze information is of any consequence for the mu desynchronization.
In this context, it would also be very interesting to see whether this might affect desynchronization in the occipital alpha rhythm, which has recently been suggested to be sensitive to eye contact between infant and experimenter in triadic settings (Hoehl, Michel, Reid, Parise, & Striano, 2014).

Considering that we still know very little about the mu rhythm and how it relates to cognitive processes during the observation (and imitation) of actions, it might be beneficial to combine EEG measurements and eye-tracking. For instance, it would be very interesting to measure predictive gaze shifts during action observation and co-register them with EEG measurements (for an example see Stapel, Hunnius, van Elk, & Bekkering, 2010). Also, when investigating the perception of mimed actions in early life, it might be valuable to use motion tracking to find out if the kinematics of the mimed actions differ from their object-directed counterparts (Umiltà et al., 2001) and whether this affects mu desynchronization.

Conclusions

From the very beginning of life, infants encounter the multi-faceted world of others’ faces, words and actions, one that they start to make sense of with the help of important social cues such as others’ gaze. This thesis has pulled together three different topics from interrelated areas in the field of early social cognition, focusing particularly on the perception of gaze and actions and their neural underpinnings.

Beginning with the findings of the youngest age group, the results of Study II showed that young infants process direct or referential gaze differently in the context of backward-spoken and forward-spoken words. The unfamiliar backward-spoken words were particularly attention-capturing when gaze was directed at the infants. This suggests that gaze is an important cue, signaling communicative intent. When it is combined with an unfamiliar and unusual cue (i.e. the backward-spoken words), it challenges communicative expectancy. These findings propose that already in the first half of the first year of life, young infants are capable of integrating social information across sensory channels. In comparison, Study I indicates that later on, towards the end of the first and in the second year of life, infants attend to information that is not directed at them. More specifically, Study I demonstrated that whether two social partners are turned towards or away from each other affects how frequently infants alternate their gaze between them. Importantly, this differentiation does not depend on audio-visual information or communicative events.

From questions related to gaze perception, as pursued by the first two studies, the focus turned to the perception of actions in Study III. Addressing ages that had previously not been investigated, Study III demonstrated that between the middle of the second and the end of the third year of life, the mu rhythm undergoes modulations similar to those that have been reported in infants and adults. First, the mu rhythm desynchronized in response to the observation of complex actions and during the imitation of these actions. This finding strengthens the idea that action perception and action imitation are linked at the neural level, not only in infancy, as shown before, but also in toddlerhood. Secondly, desynchronization also occurred in response to mimed actions at this age, a pattern resembling previous findings in adults but not infants. This might be a result of acquired motor skills and increased experience with performing manual actions during the second and third years of life, which may make it possible to identify goal-directed aspects of a mimed action, such as an inferred object, leading to motor activation.

In combination, the studies described in this thesis cover an age range from as early as 4 up to 30 months of age, a period that incorporates impressive changes in infants’ cognitive development, which are most likely linked to processes of maturation and specialization at the neural level. The complexity of the social information available in real-life situations demands that future research increase its focus on studying how multiple social cues are processed by the developing brain, in dyadic, triadic and third-party contexts. Doing so will enhance our understanding of the processes that enable infants to “(…) monitor, control, and predict the behavior of others” (Rochat, 2001).

Acknowledgements

This is the end (or the beginning?). And there would be no end (and no beginning) without the impact of a lot of important people, most likely many more than the following pages will reveal.

First of all, I would like to thank Gustaf Gredebäck and Claes von Hofsten, my supervisors, for their guidance, inspiration and support, for giving me the opportunity to start off on this step in my career as well as to bring it to completion. Gustaf, thank you for sharing your insights and providing advice on countless occasions! For knowing when help meant carrying boxes and when it meant patiently listening to my (confused) elaborations on the countless riddles I found myself entrapped in, looking under every empirical stone and being clearly very annoying. Thank you for face-to-face-ing all these things and giving me the opportunity to grow professionally! Claes, I would like to thank you very much for believing in my capabilities and encouraging me to keep on walking the path, and for all the scaffolding that helped my eye gaze to orient to the most crucial of things, particularly in the countdown phase of my doctoral studies.

Kerstin, thank you for the many occasions on which you have encouraged and reassured me, for sharing your thoughts and for providing me with useful and interesting literature! If I accumulate thoughts and keep on writing about them, it will, to a considerable extent, be due to all your positive feedback!

Furthermore, I would very much like to thank Ann-Margret Rydell and Christine Fawcett for all their contributions to and questions on this thesis, and Terje Falck-Ytter and Nazar Akrami for their comments during the halvtidsseminarium!

I would also like to acknowledge all my collaborators: Eugenio Parise, Letizia Palumbo, Angela D. Friederici, Petra Warreyn, Lieselott Ruysschaert, Roeljan Wiersema, Griet Pattyn, Herbert Roeyers, Sara Norling and Therese Mahlberg.
I am very happy I had the opportunity to work with you, learn from you and have the chance to explore exciting topics together! Specifically, I would like to thank Petra Warreyn for her insight, persistence and encouragement, and Eugenio Parise for sharing his technical know-how and, most of all, for getting me started on the exciting journey of scientific investigations! Sara & Therese, you were the best students one could have wished for. Thank you very much for your enthusiasm and work! In particular, I would like to thank all the families who, with their interest and study participation, helped to make all this happen!

Furthermore, I would like to express my gratitude to Knut Berner, who not only saw academic potential in me many years ago but has continued to be a great support during my time at Uppsala University. Thank you for speaking the right words at the right time, Knut!

Dear colleagues and friends of the Uppsala Child and Baby Lab, I am most grateful for your company, your expertise and for being great at what you do! For sharing ideas, experience, insights and for bravely turning your face to the lens of my camera! Although much could be said and acknowledged when it comes to the many colleagues I have had the pleasure to meet in the past six years, let me limit my elaborations to a few comments and particular colleagues, both from the Child and Baby Lab and beyond.

Elisabeth, I am so glad for the joke you made, which we both forgot but still laugh about. All hail to decay! Johan, thank you for all the conversations and fun kids' activities, I am so glad you have been a part of this journey! Janny, I very much treasure the conversation marathons we have had – with you, time flies! Your knowledge, critical thinking, your honesty and striving to watch out for others are inspiring! Cloudy, no more ventures into the North without you! Thank you for all your kindness, the sharing of thoughts and support, as well as the second Mojito in Barcelona! Und deine Sprüche...echt krasse Wurst! Janna, thank you for all the conversations and the calendar – I think we made it! Benjamin, you rock and it's been great to share an office with you on various occasions throughout the years! Mehr Schokolade? Martyna, I wish we had had more office overlap, but I am very glad about the one we had! Emilia, thank you for all the train rides, conversations and the ice-skating experience (or whatever I was trying to do on that ice)!
Carin, I am very thankful that you have been part of my professional journey at several points and in several ways! Thank you for your support and honesty! Matilda and Andreas, conversations with you are as enjoyable as they are inspiring, thank you for all of them (hopefully to be continued)! Pär, thanks for all the technical support, the countless encouragements, big smiles and the lovely music experience! And Ben & Terje, thanks for joining in! Lars-Gunnar, thank you for all your patience with my Svenska and all the practical help! Christine, you not only saved my dinner in Budapest but have also helped me to keep my sanity on many occasions, and your thoughtful comments were indispensable! Thank you! Hanna, thanks for the café time in Stockholm and many lovely conversations! You define style! Laura, you shine and you are such an amazing person! Can I keep you? Kristin, I have very much enjoyed listening to your insightful comments on various occasions and am so glad we actually made it to Fotografiska! Mona, your sharpness of mind continues to impress me and I have so much enjoyed our conversations! Keep them coming!

To all the RobotDoc people – it was a pleasure to meet you, learn from you and laugh with you! Our meetings remain unforgettable!

I would also like to thank my family and friends for their understanding and various ways of support! Mama, Papa, thanks for all the thoughts, phone calls, wood piling, fence repairing, prayers and care packages! Susanna, your words moved mountains and your friendship has meant a lot to me! Måns, you were the miracle man! From the Christmas tree to the endless occasions of support calls, drives, cooking, food delivery and what not! And I know that this journey would not have been the same without you! Louise, Sandra, Therese and Sara, you were (and are) inspiring, sweet and very important, and I consider myself very lucky to know you!

András. You have been there from the Alpha to the Omega of this journey, which, without you, would not even have existed. We have taken this winding road across all depths and heights together, and I am deeply grateful for your insisting, your belief in me, for your understanding and love. The countless occasions on which your technical skills saved the day are not forgotten! Needless to say, without your life-saving and delicious cooking, the world would probably have ended! Szeretlek!

Albert and Emma, you continue to inspire and amaze me! With your own developmental trajectories, you have shaped mine and helped me to grow beyond what I thought possible! You make life vibrant, busy, incredible, muddy, chaotic, funny and sweet! Ich hab' euch ganz sehr lieb!

References

Ackles, P. K., & Cook, K. G. (1998). Stimulus probability and event-related potentials of the brain in 6-month-old human infants: a parametric study. International Journal of Psychophysiology: Official Journal of the International Organization of Psychophysiology, 29(2), 115–143.
Ackles, P. K., & Cook, K. G. (2007). Attention or memory? Effects of familiarity and novelty on the Nc component of event-related brain potentials in six-month-old infants. The International Journal of Neuroscience, 117(6), 837–867.
Adolphs, R. (1999). Social cognition and the human brain. Trends in Cognitive Sciences, 3(12), 469–479.
Adolphs, R. (2001). The neurobiology of social cognition. Current Opinion in Neurobiology, 11(2), 231–239.
Adolphs, R. (2003). Cognitive neuroscience of human social behaviour. Nature Reviews Neuroscience, 4(3), 165–178.
Akhtar, N., Dunham, F., & Dunham, P. J. (1991). Directive interactions and early vocabulary development: the role of joint attentional focus. Journal of Child Language, 18(1), 41–49.
Allison, T., Puce, A., & McCarthy, G. (2000). Social perception from visual cues: role of the STS region. Trends in Cognitive Sciences, 4(7), 267–278.
Ambrosini, E., Costantini, M., & Sinigaglia, C. (2011). Grasping with the eyes. Journal of Neurophysiology, 106(3), 1437–1442.
Aslin, R. N. (2012). Infant Eyes: A Window on Cognitive Development. Infancy, 17(1), 126–140.
Augusti, E.-M., Melinder, A., & Gredebäck, G. (2010). Look Who's Talking: Pre-Verbal Infants' Perception of Face-to-Face and Back-to-Back Social Interactions. Frontiers in Psychology, 1, 161.
Avanzini, P., Fabbri-Destro, M., Dalla Volta, R., Daprati, E., Rizzolatti, G., & Cantalupo, G. (2012). The dynamics of sensorimotor cortical oscillations during the observation of hand movements: an EEG study. PloS One, 7(5), e37534.
Babiloni, C., Babiloni, F., Carducci, F., Cincotti, F., Cocozza, G., Del Percio, C., … Rossini, P. M. (2002). Human cortical electroencephalography (EEG) rhythms during the observation of simple aimless movements: a high-resolution EEG study. NeuroImage, 17(2), 559–572.
Bache, C., Kopp, F., Springer, A., Stadler, W., Lindenberger, U., & Werkle-Bergner, M. (2015). Rhythmic neural activity indicates the contribution of attention and memory to the processing of occluded movements in 10-month-old infants. International Journal of Psychophysiology: Official Journal of the International Organization of Psychophysiology, 98(2), 201–212.
Bakker, M., Daum, M. M., Handl, A., & Gredebäck, G. (2015). Neural correlates of action perception at the onset of functional grasping. Social Cognitive and Affective Neuroscience, 10(6), 769–776.

Bakker, M., Kochukhova, O., & von Hofsten, C. (2011). Development of social perception: a conversation study of 6-, 12- and 36-month-old children. Infant Behavior & Development, 34(2), 363–370.
Bakker, M., Sommerville, J. A., & Gredebäck, G. (2016). Enhanced Neural Processing of Goal-directed Actions After Active Training in 4-Month-Old Infants. Journal of Cognitive Neuroscience, 28(3), 472–482.
Banaschewski, T., & Brandeis, D. (2007). Annotation: What electrical brain activity tells us about brain function that other techniques cannot tell us – a child psychiatric perspective. Journal of Child Psychology and Psychiatry, 48(5), 415–435.
Baron-Cohen, S. (1995). Mindblindness: an essay on autism and theory of mind. Cambridge, Mass: MIT Press.
Barton, M. E., & Tomasello, M. (1991). Joint Attention and Conversation in Mother-Infant-Sibling Triads. Child Development, 62(3), 517–529.
Başar, E., Başar-Eroglu, C., Karakaş, S., & Schürmann, M. (2001). Gamma, alpha, delta, and theta oscillations govern cognitive processes. International Journal of Psychophysiology, 39(2–3), 241–248.
Batki, A., Baron-Cohen, S., Wheelwright, S., Connellan, J., & Ahluwalia, J. (2000). Is there an innate gaze module? Evidence from human neonates. Infant Behavior and Development, 23(2), 223–229.
Bedford, R., Elsabbagh, M., Gliga, T., Pickles, A., Senju, A., Charman, T., … the BASIS team. (2012). Precursors to Social and Communication Difficulties in Infants At-Risk for Autism: Gaze Following and Attentional Engagement. Journal of Autism and Developmental Disorders, 42(10), 2208–2218.
Beier, J. S., & Spelke, E. S. (2012). Infants' Developing Understanding of Social Gaze. Child Development, 83(2), 486–496.
Bekkering, H., Wohlschläger, A., & Gattis, M. (2000). Imitation of gestures in children is goal-directed. The Quarterly Journal of Experimental Psychology. A, Human Experimental Psychology, 53(1), 153–164.
Bell, M. A., & Cuevas, K. (2012). Using EEG to Study Cognitive Development: Issues and Practices. Journal of Cognition and Development: Official Journal of the Cognitive Development Society, 13(3), 281–294.
Bergelson, E., & Swingley, D. (2015). Early Word Comprehension in Infants: Replication and Extension. Language Learning and Development, 11(4), 369–380.
Biro, S., & Leslie, A. M. (2007). Infants' perception of goal-directed actions: development through cue-based bootstrapping. Developmental Science, 10(3), 379–398.
Blackwood, D. H., & Muir, W. J. (1990). Cognitive brain potentials and their application. The British Journal of Psychiatry. Supplement, (9), 96–101.
Brandeis, D., & Lehmann, D. (1986). Event-related potentials of the brain and cognitive processes: approaches and applications. Neuropsychologia, 24(1), 151–168.
Brooks, R., & Meltzoff, A. N. (2002). The Importance of Eyes: How Infants Interpret Adult Looking Behavior. Developmental Psychology, 38(6), 958–966.
Brooks, R., & Meltzoff, A. N. (2005). The development of gaze following and its relation to language. Developmental Science, 8(6), 535–543.
Bruner, J. S., & Watson, R. (1983). Child's talk: learning to use language (1st ed). New York: W.W. Norton.
Burton, A. M., & Bindemann, M. (2009). The role of view in human face detection. Vision Research, 49(15), 2026–2036.

Butterworth, G., & Jarrett, N. (1991). What minds have in common is space: Spatial mechanisms serving joint visual attention in infancy. British Journal of Developmental Psychology, 9(1), 55–72.
Butterworth, G., & Morissette, P. (1996). Onset of pointing and the acquisition of language in infancy. Journal of Reproductive and Infant Psychology, 14(3), 219–231.
Cannon, E. N., Simpson, E. A., Fox, N. A., Vanderwert, R. E., Woodward, A. L., & Ferrari, P. F. (2016). Relations between infants' emerging reach-grasp competence and event-related desynchronization in EEG. Developmental Science, 19(1), 50–62.
Cannon, E. N., Woodward, A. L., Gredebäck, G., von Hofsten, C., & Turek, C. (2012). Action production influences 12-month-old infants' attention to others' actions. Developmental Science, 15(1), 35–42.
Caron, A. J., Caron, R., Roberts, J., & Brooks, R. (1997). Infant sensitivity to deviations in dynamic facial-vocal displays: the role of eye regard. Developmental Psychology, 33(5), 802–813.
Carpenter, M., Nagell, K., Tomasello, M., Butterworth, G., & Moore, C. (1998). Social Cognition, Joint Attention, and Communicative Competence from 9 to 15 Months of Age. Monographs of the Society for Research in Child Development, 63(4), i–174.
Cochin, S., Barthelemy, C., Lejeune, B., Roux, S., & Martineau, J. (1998). Perception of motion and qEEG activity in human adults. Electroencephalography and Clinical Neurophysiology, 107(4), 287–295.
Cochin, S., Barthelemy, C., Roux, S., & Martineau, J. (1999). Observation and execution of movement: similarities demonstrated by quantified electroencephalography. The European Journal of Neuroscience, 11(5), 1839–1842.
Compston, A. (2010). The Berger rhythm: potential changes from the occipital lobes in man, by E.D. Adrian and B.H.C. Matthews (From the Physiological Laboratory, Cambridge). Brain 1934: 57; 355–385. Brain, 133(1), 3–6.
Cook, J. L. (2014). Task-relevance dependent gradients in medial prefrontal and temporoparietal cortices suggest solutions to paradoxes concerning self/other control. Neuroscience and Biobehavioral Reviews, 42, 298–302.
Corkum, V., & Moore, C. (1998). The origins of joint visual attention in infants. Developmental Psychology, 34(1), 28–38.
Courchesne, E., Ganz, L., & Norcia, A. M. (1981). Event-Related Brain Potentials to Human Faces in Infants. Child Development, 52(3), 804–811.
Csibra, G. (2010). Recognizing Communicative Intentions in Infancy. Mind & Language, 25(2), 141–168.
Csibra, G., & Gergely, G. (2006). Social learning and social cognition: The case for pedagogy. In Y. Munakata & M. H. Johnson (Eds.), Processes of change in brain and cognitive development (pp. 249–274). Oxford University Press.
Csibra, G., & Gergely, G. (2009). Natural pedagogy. Trends in Cognitive Sciences, 13(4), 148–153.
Cuevas, K., Cannon, E. N., Yoo, K., & Fox, N. A. (2014). The infant EEG mu rhythm: Methodological considerations and best practices. Developmental Review, 34(1), 26–43.
Daum, M. M., Attig, M., Gunawan, R., Prinz, W., & Gredebäck, G. (2012). Actions Seen through Babies' Eyes: A Dissociation between Looking Time and Predictive Gaze. Frontiers in Psychology, 3, 370.
Decety, J., Grèzes, J., Costes, N., Perani, D., Jeannerod, M., Procyk, E., … Fazio, F. (1997). Brain activity during observation of actions. Influence of action content and subject's strategy. Brain: A Journal of Neurology, 120(Pt 10), 1763–1777.

De Haan, M. (2012). Infant EEG and event-related potentials. Hove: Psychology Press. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=570524
de Haan, M., & Nelson, C. A. (1999). Brain activity differentiates face and object processing in 6-month-old infants. Developmental Psychology, 35(4), 1113–1121.
D'Entremont, B. (2000). A perceptual–attentional explanation of gaze following in 3- and 6-month-olds. Developmental Science, 3(3), 302–311.
Di Giorgio, E., Turati, C., Altoè, G., & Simion, F. (2012). Face detection in complex visual displays: An eye-tracking study with 3- and 6-month-old infants and adults. Journal of Experimental Child Psychology, 113(1), 66–77.
di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992). Understanding motor events: a neurophysiological study. Experimental Brain Research, 91(1), 176–180.
Duchowski, A. T. (2007). Eye tracking methodology: theory and practice (2nd ed). London: Springer.
Falck-Ytter, T., Gredebäck, G., & von Hofsten, C. (2006). Infants predict other people's action goals. Nature Neuroscience, 9(7), 878–879.
Farroni, T., Csibra, G., Simion, F., & Johnson, M. H. (2002). Eye contact detection in humans from birth. Proceedings of the National Academy of Sciences of the United States of America, 99(14), 9602–9605.
Farroni, T., Johnson, M. H., & Csibra, G. (2004a). Mechanisms of eye gaze perception during infancy. Journal of Cognitive Neuroscience, 16(8), 1320–1326.
Farroni, T., Massaccesi, S., Pividori, D., & Johnson, M. H. (2004b). Gaze Following in Newborns. Infancy, 5(1), 39–60.
Fawcett, C., & Gredebäck, G. (2013). Infants use social context to bind actions into a collaborative sequence. Developmental Science, 16(6), 841–849.
Fecteau, S., Carmant, L., Tremblay, C., Robert, M., Bouthillier, A., & Théoret, H. (2004). A motor resonance mechanism in children? Evidence from subdural electrodes in a 36-month-old child. Neuroreport, 15(17), 2625–2627.
Fenson, L., Dale, P. S., Reznick, J. S., Thal, D., Bates, E., Hartung, J. P., … Reilly, J. S. (1993). The MacArthur Communicative Development Inventories: User's guide and technical manual. San Diego, CA: Singular Publishing Group.
Flanagan, J. R., & Johansson, R. S. (2003). Action plans used in action observation. Nature, 424(6950), 769–771.
Floor, P., & Akhtar, N. (2006). Can 18-Month-Old Infants Learn Words by Listening In on Conversations? Infancy, 9(3), 327–339.
Frank, M. C., Amso, D., & Johnson, S. P. (2014). Visual search and attention to faces during early infancy. Journal of Experimental Child Psychology, 118, 13–26.
Frank, M. C., Vul, E., & Johnson, S. P. (2009). Development of infants' attention to faces during the first year. Cognition, 110(2), 160–170.
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain, 119(2), 593–609.
Gampe, A., Liebal, K., & Tomasello, M. (2012). Eighteen-month-olds learn novel words through overhearing. First Language, 32(3), 385–397.
Gastaut, H. J., & Bert, J. (1954). EEG changes during cinematographic presentation; moving picture activation of the EEG. Electroencephalography and Clinical Neurophysiology, 6(3), 433–444.

Gergely, G., & Csibra, G. (2013). Natural Pedagogy. In M. R. Banaji & S. A. Gelman (Eds.), Navigating the Social World (pp. 127–132). Oxford University Press.
Goffman, E. (1963). Behavior in public places: notes on the social organization of gatherings. New York: Free Press of Glencoe.
Goldman, D. Z., Shapiro, E. G., & Nelson, C. A. (2004). Measurement of vigilance in 2-year-old children. Developmental Neuropsychology, 25(3), 227–250.
Goodwyn, S. W., Acredolo, L. P., & Brown, C. A. (2000). Impact of Symbolic Gesturing on Early Language Development. Journal of Nonverbal Behavior, 24(2), 81–103.
Gräfenhain, M., Behne, T., Carpenter, M., & Tomasello, M. (2009). One-year-olds' understanding of nonverbal gestures directed to a third person. Cognitive Development, 24(1), 23–33.
Grafton, S. T., Arbib, M. A., Fadiga, L., & Rizzolatti, G. (1996). Localization of grasp representations in humans by positron emission tomography. 2. Observation compared with imagination. Experimental Brain Research, 112(1), 103–111.
Gratton, G., Coles, M. G., & Donchin, E. (1983). A new method for off-line removal of ocular artifact. Electroencephalography and Clinical Neurophysiology, 55(4), 468–484.
Gredebäck, G., & Daum, M. M. (2015). The Microstructure of Action Perception in Infancy: Decomposing the Temporal Structure of Social Information Processing. Child Development Perspectives, 9(2), 79–83.
Gredebäck, G., & Falck-Ytter, T. (2015). Eye Movements During Action Observation. Perspectives on Psychological Science, 10(5), 591–598.
Gredebäck, G., Fikke, L., & Melinder, A. (2010a). The development of joint visual attention: a longitudinal study of gaze following during interactions with mothers and strangers. Developmental Science, 13(6), 839–848.
Gredebäck, G., Johnson, S., & von Hofsten, C. (2010b). Eye tracking in infancy research. Developmental Neuropsychology, 35(1), 1–19.
Gredebäck, G., & Melinder, A. (2010). Infants' understanding of everyday social interactions: a dual process account. Cognition, 114(2), 197–206.
Grossmann, T. (2015). The development of social brain functions in infancy. Psychological Bulletin, 141(6), 1266–1287.
Grossmann, T., Striano, T., & Friederici, A. D. (2006). Crossmodal integration of emotional information from face and voice in the infant brain. Developmental Science, 9(3), 309–315.
Haith, M. M. (1969). Infrared television recording and measurement of ocular behavior in the human infant. The American Psychologist, 24(3), 279–283.
Haith, M. M., Bergman, T., & Moore, M. J. (1977). Eye contact and face scanning in early infancy. Science (New York, N.Y.), 198(4319), 853–855.
Harper, R. G., Wiens, A. N., & Matarazzo, J. D. (1978). Nonverbal communication: the state of the art. New York: Wiley.
Herold, K. H., & Akhtar, N. (2008). Imitative learning from a third-party interaction: relations with self-recognition and perspective taking. Journal of Experimental Child Psychology, 101(2), 114–123.
Hershler, O., & Hochstein, S. (2005). At first sight: A high-level pop out effect for faces. Vision Research, 45(13), 1707–1724.
Hietanen, J. K. (2002). Social attention orienting integrates visual information from head and body orientation. Psychological Research, 66(3), 174–179.

Hoehl, S., Michel, C., Reid, V. M., Parise, E., & Striano, T. (2014). Eye contact during live social interaction modulates infants' oscillatory brain activity. Social Neuroscience, 9(3), 300–308.
Hoehl, S., & Peykarjou, S. (2012). The early development of face processing — What makes faces special? Neuroscience Bulletin, 28(6), 765–788.
Hoehl, S., Reid, V., Mooney, J., & Striano, T. (2008). What are you looking at? Infants' neural processing of an adult's object-directed eye gaze. Developmental Science, 11(1), 10–16.
Hoehl, S., Reid, V. M., Parise, E., Handl, A., Palumbo, L., & Striano, T. (2009). Looking at Eye Gaze Processing and Its Neural Correlates in Infancy—Implications for Social Development and Disorder. Child Development, 80(4), 968–985.
Hood, B. M., Willen, J. D., & Driver, J. (1998). Adult's Eyes Trigger Shifts of Visual Attention in Human Infants. Psychological Science, 9(2), 131–134.
Iacoboni, M. (1999). Cortical Mechanisms of Human Imitation. Science, 286(5449), 2526–2528.
Itti, L. (2005). Quantifying the contribution of low-level saliency to human eye movements in dynamic scenes. Visual Cognition, 12(6), 1093–1123.
Jasper, H. H. (1958). The ten-twenty electrode system of the International Federation. Electroencephalography and Clinical Neurophysiology, 10, 371–375.
Jessen, S., & Grossmann, T. (2014). Unconscious discrimination of social cues from eye whites in infants. Proceedings of the National Academy of Sciences, 111(45), 16208–16213.
Johnson, M. H. (2001). Functional brain development in humans. Nature Reviews Neuroscience, 2(7), 475–483.
Johnson, M. H., Dziurawiec, S., Ellis, H., & Morton, J. (1991). Newborns' preferential tracking of face-like stimuli and its subsequent decline. Cognition, 40(1-2), 1–19.
Kampe, K. K. W., Frith, C. D., & Frith, U. (2003). "Hey John": signals conveying communicative intention toward the self activate brain regions associated with "mentalizing," regardless of modality. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 23(12), 5258–5263.
Kanakogi, Y., & Itakura, S. (2011). Developmental correspondence between action prediction and motor ability in early infancy. Nature Communications, 2, 341.
Karrer, R., & Ackles, P. K. (1987). Visual event-related potentials of infants during a modified oddball procedure. Electroencephalography and Clinical Neurophysiology. Supplement, 40, 603–608.
Keitel, A., Prinz, W., Friederici, A. D., von Hofsten, C., & Daum, M. M. (2013). Perception of conversations: The importance of semantics and intonation in children's development. Journal of Experimental Child Psychology, 116(2), 264–277.
Kleinke, C. L. (1986). Gaze and eye contact: A research review. Psychological Bulletin, 100(1), 78–100.
Kobayashi, H., & Kohshima, S. (1997). Unique morphology of the human eye. Nature, 387(6635), 767–768.
Kobayashi, H., & Kohshima, S. (2001). Unique morphology of the human eye and its adaptive meaning: comparative studies on external morphology of the primate eye. Journal of Human Evolution, 40(5), 419–435.
Kuhl, P. K. (2004). Early language acquisition: cracking the speech code. Nature Reviews Neuroscience, 5(11), 831–843.

Kushnerenko, E., Teinonen, T., Volein, A., & Csibra, G. (2008). Electrophysiological evidence of illusory audiovisual speech percept in human infants. Proceedings of the National Academy of Sciences, 105(32), 11442–11445.
Langton, S. R. H. (2000). The mutual influence of gaze and head orientation in the analysis of social attention direction. The Quarterly Journal of Experimental Psychology Section A, 53(3), 825–845.
Langton, S. R. H., Law, A. S., Burton, A. M., & Schweinberger, S. R. (2008). Attention capture by faces. Cognition, 107(1), 330–342.
Lehtonen, J. B., & Lehtinen, I. (1972). Alpha rhythm and uniform visual field in man. Electroencephalography and Clinical Neurophysiology, 32(2), 139–147.
Lepage, J.-F., & Théoret, H. (2006). EEG evidence for the presence of an action observation-execution matching system in children. The European Journal of Neuroscience, 23(9), 2505–2510.
Lloyd-Fox, S., Széplaki-Köllőd, B., Yin, J., & Csibra, G. (2015). Are you talking to me? Neural activations in 6-month-old infants in response to being addressed during natural interactions. Cortex, 70, 35–48.
Luck, S. J. (2005). An introduction to the event-related potential technique. Cambridge, Mass: MIT Press.
Marshall, P. J., Bar-Haim, Y., & Fox, N. A. (2002). Development of the EEG from 5 months to 4 years of age. Clinical Neurophysiology: Official Journal of the International Federation of Clinical Neurophysiology, 113(8), 1199–1208.
Marshall, P. J., & Meltzoff, A. N. (2011). Neural mirroring systems: Exploring the EEG mu rhythm in human infancy. Developmental Cognitive Neuroscience, 1(2), 110–123.
Marshall, P. J., & Meltzoff, A. N. (2014). Neural mirroring mechanisms and imitation in human infants. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1644). doi:10.1098/rstb.2013.0620
Marshall, P. J., Saby, J. N., & Meltzoff, A. N. (2013). Infant Brain Responses to Object Weight: Exploring Goal-Directed Actions and Self-Experience. Infancy, 18(6), 942–960.
Marshall, P. J., Young, T., & Meltzoff, A. N. (2011). Neural correlates of action observation and execution in 14-month-old infants: an event-related EEG desynchronization study. Developmental Science, 14(3), 474–480.
Martin, A., Onishi, K. H., & Vouloumanos, A. (2012). Understanding the abstract role of speech in communication at 12 months. Cognition, 123(1), 50–60.
Matheson, H., Moore, C., & Akhtar, N. (2013). The development of social learning in interactive and observational contexts. Journal of Experimental Child Psychology, 114(2), 161–172.
Maurer, D., & Salapatek, P. (1976). Developmental changes in the scanning of faces by young infants. Child Development, 47(2), 523–527.
McHale, J., Fivaz-Depeursinge, E., Dickstein, S., Robertson, J., & Daley, M. (2008). New evidence for the social embeddedness of infants' early triangular capacities. Family Process, 47(4), 445–463.
Meltzoff, A. N., Kuhl, P. K., Movellan, J., & Sejnowski, T. J. (2009). Foundations for a New Science of Learning. Science, 325(5938), 284–288.
Meyer, M., Hunnius, S., van Elk, M., van Ede, F., & Bekkering, H. (2011). Joint action modulates motor system involvement during action observation in 3-year-olds. Experimental Brain Research, 211(3-4), 581–592.
Molenberghs, P., Cunnington, R., & Mattingley, J. B. (2012). Brain regions with mirror properties: a meta-analysis of 125 human fMRI studies. Neuroscience and Biobehavioral Reviews, 36(1), 341–349.

Moore, C., Angelopoulos, M., & Bennett, P. (1997). The role of movement in the development of joint visual attention. Infant Behavior and Development, 20(1), 83–92.
Moors, P., Germeys, F., Pomianowska, I., & Verfaillie, K. (2015). Perceiving where another person is looking: the integration of head and body information in estimating another person's gaze. Frontiers in Psychology, 6, 909.
Morissette, P., Ricard, M., & Décarie, T. G. (1995). Joint visual attention and pointing in infancy: A longitudinal study of comprehension. British Journal of Developmental Psychology, 13(2), 163–175.
Mukamel, R., Ekstrom, A. D., Kaplan, J., Iacoboni, M., & Fried, I. (2010). Single-neuron responses in humans during execution and observation of actions. Current Biology: CB, 20(8), 750–756.
Mundy, P., Block, J., Delgado, C., Pomares, Y., Van Hecke, A. V., & Parlade, M. V. (2007). Individual Differences and the Development of Joint Attention in Infancy. Child Development, 78(3), 938–954.
Mundy, P., & Jarrold, W. (2010). Infant joint attention, neural networks and social cognition. Neural Networks: The Official Journal of the International Neural Network Society, 23(8-9), 985–997.
Mundy, P., & Newell, L. (2007). Attention, Joint Attention, and Social Cognition. Current Directions in Psychological Science, 16(5), 269–274.
Muthukumaraswamy, S. D., Johnson, B. W., & McNair, N. A. (2004). Mu rhythm modulation during observation of an object-directed grasp. Cognitive Brain Research, 19(2), 195–201.
Nelson, C. A. (1997). Electrophysiological Correlates of Memory Development in the First Year of Life. In H. Reese & M. Franzen (Eds.), Biological and neuropsychological mechanisms: Life span developmental psychology (pp. 95–131). Hillsdale, NJ: Lawrence Erlbaum Associates Inc.
Nelson, C. A., & Collins, P. F. (1992). Neural and behavioral correlates of visual recognition memory in 4- and 8-month-old infants. Brain and Cognition, 19(1), 105–121.
Nelson, C. A., & deRegnier, R.-A. (1992). Neural correlates of attention and memory in the first year of life. Developmental Neuropsychology, 8(2-3), 119–134.
Nelson, C. A., Morse, P. A., & Leavitt, L. A. (1979). Recognition of facial expressions by seven-month-old infants. Child Development, 50(4), 1239–1242.
Nikkel, L., & Karrer, R. (1994). Differential effects of experience on the ERP and behavior of 6-month-old infants: Trends during repeated stimulus presentations. Developmental Neuropsychology, 10(1), 1–11.
Nyström, P. (2008). The infant mirror neuron system studied with high density EEG. Social Neuroscience, 3(3-4), 334–347.
Nyström, P., Ljunghammar, T., Rosander, K., & von Hofsten, C. (2011). Using mu rhythm desynchronization to measure mirror neuron activity in infants. Developmental Science, 14(2), 327–335.
O'Doherty, K., Troseth, G. L., Shimpi, P. M., Goldenberg, E., Akhtar, N., & Saylor, M. M. (2011). Third-Party Social Interaction and Word Learning from Video. Child Development, 82(3), 902–915.
Over, H., & Carpenter, M. (2009). Eighteen-Month-Old Infants Show Increased Helping Following Priming with Affiliation. Psychological Science, 20(10), 1189–1193.
Parise, E., & Csibra, G. (2013). Neural Responses to Multimodal Ostensive Signals in 5-Month-Old Infants. PLoS ONE, 8(8). doi:10.1371/journal.pone.0072360

81 Parker, S. W., Nelson, C. A., & The Bucharest Early Intervention Project Core Group. (2005). The Impact of Early Institutional Rearing on the Ability to Dis- criminate Facial Expressions of Emotion: An Event-Related Potential Study. Child Development, 76(1), 54–72. Patterson, M. L., & Werker, J. F. (2003). Two-month-old infants match phonetic information in lips and voice. Developmental Science, 6(2), 191–196. Paulus, M., Hunnius, S., van Elk, M., & Bekkering, H. (2012). How learning to shake a rattle affects 8-month-old infants’ perception of the rattle’s sound: elec- trophysiological evidence for action-effect binding in infancy. Developmental Cognitive Neuroscience, 2(1), 90–96. Pelphrey, K. A., & Carter, E. J. (2008). Brain mechanisms for social perception: Lessons from autism and typical development. Annals of the New York Academy of Sciences, 1145, 283–299. Pelphrey, K. A., Viola, R. J., & McCarthy, G. (2004). When strangers pass: pro- cessing of mutual and averted social gaze in the superior temporal sulcus. Psy- chological Science, 15(9), 598–603. Peña, M., Maki, A., Kovacić, D., Dehaene-Lambertz, G., Koizumi, H., Bouquet, F., & Mehler, J. (2003). Sounds and silence: an optical topography study of lan- guage recognition at birth. Proceedings of the National Academy of Sciences of the United States of America, 100(20), 11702–11705. Pfurtscheller, G. (1992). Event-related synchronization (ERS): an electrophysiologi- cal correlate of cortical areas at rest. Electroencephalography and Clinical Neu- rophysiology, 83(1), 62–69. Pfurtscheller, G., & Lopes da Silva, F. H. (1999). Event-related EEG/MEG synchro- nization and desynchronization: basic principles. Clinical Neurophysiology, 110(11), 1842–1857. Pfurtscheller, G., Neuper, C., Andrew, C., & Edlinger, G. (1997). Foot and hand area mu rhythms. International Journal of Psychophysiology, 26(1–3), 121–135. Picton, T. W., Bentin, S., Berg, P., Donchin, E., Hillyard, S. A., Johnson, R., … Taylor, M. J. (2000). 
Guidelines for using human event-related potentials to study cognition: Recording standards and publication criteria. Psychophysiology, 37(2), 127–152.
Reid, V. M., Striano, T., & Iacoboni, M. (2011). Neural correlates of dyadic interaction during infancy. Developmental Cognitive Neuroscience, 1(2), 124–130.
Repacholi, B. M. (2009). Linking actions and emotions: Evidence from 15- and 18-month-old infants. The British Journal of Developmental Psychology, 27(Pt 3), 649–667.
Repacholi, B. M., & Meltzoff, A. N. (2007). Emotional eavesdropping: Infants selectively respond to indirect emotional signals. Child Development, 78(2), 503–521.
Repacholi, B. M., Meltzoff, A. N., & Olsen, B. (2008). Infants’ understanding of the link between visual perception and emotion: “If she can’t see me doing it, she won’t get angry.” Developmental Psychology, 44(2), 561–574.
Reynolds, G. D., Courage, M. L., & Richards, J. E. (2010). Infant attention and visual preferences: Converging evidence from behavior, event-related potentials, and cortical source localization. Developmental Psychology, 46(4), 886–904.
Reynolds, G. D., & Richards, J. E. (2005). Familiarization, attention, and recognition memory in infancy: An event-related potential and cortical source localization study. Developmental Psychology, 41(4), 598–615.
Richards, J. E. (2003). Attention affects the recognition of briefly presented visual stimuli in infants: An ERP study. Developmental Science, 6(3), 312–328.

Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3(2), 131–141.
Rizzolatti, G., Fogassi, L., & Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2(9), 661–670.
Roach, B. J., & Mathalon, D. H. (2008). Event-related EEG time-frequency analysis: An overview of measures and an analysis of early gamma band phase locking in schizophrenia. Schizophrenia Bulletin, 34(5), 907–926.
Rochat, P. (2001). The infant’s world. Cambridge, MA: Harvard University Press.
Rosander, K., & von Hofsten, C. (2011). Predictive gaze shifts elicited during observed and performed actions in 10-month-old infants and adults. Neuropsychologia, 49(10), 2911–2917.
Ruysschaert, L., Warreyn, P., Wiersema, J. R., Metin, B., & Roeyers, H. (2013). Neural mirroring during the observation of live and video actions in infants. Clinical Neurophysiology, 124(9), 1765–1770.
Ruysschaert, L., Warreyn, P., Wiersema, J. R., Oostra, A., & Roeyers, H. (2014). Exploring the role of neural mirroring in children with autism spectrum disorder. Autism Research, 7(2), 197–206.
Saby, J. N., & Marshall, P. J. (2012). The utility of EEG band power analysis in the study of infancy and early childhood. Developmental Neuropsychology, 37(3), 253–273.
Salapatek, P., & Kessen, W. (1966). Visual scanning of triangles by the human newborn. Journal of Experimental Child Psychology, 3(2), 155–167.
Scaife, M., & Bruner, J. S. (1975). The capacity for joint visual attention in the infant. Nature, 253(5489), 265–266.
Schilbach, L., Wohlschlaeger, A. M., Kraemer, N. C., Newen, A., Shah, N. J., Fink, G. R., & Vogeley, K. (2006). Being with virtual others: Neural correlates of social interaction.
Neuropsychologia, 44(5), 718–730.
Shneidman, L. A., Buresh, J. S., Shimpi, P. M., Knight-Schwarz, J., & Woodward, A. L. (2009). Social experience, social attention and word learning in an overhearing paradigm. Language Learning and Development, 5(4), 266–281.
Simion, F., & Di Giorgio, E. (2015). Face perception and processing in early infancy: Inborn predispositions and developmental changes. Frontiers in Psychology, 6, 969.
Smith, J. R. (1939). The “occipital” and “pre-central” alpha rhythms during the first two years. The Journal of Psychology, 7(2), 223–226.
Smith, J. R. (1941). The frequency growth of the human alpha rhythms during normal infancy and childhood. The Journal of Psychology, 11(1), 177–198.
Sommerville, J. A., Woodward, A. L., & Needham, A. (2005). Action experience alters 3-month-old infants’ perception of others’ actions. Cognition, 96(1), B1–B11.
Southgate, V., & Begus, K. (2013). Motor activation during the prediction of nonexecutable actions in infants. Psychological Science, 24(6), 828–835.
Southgate, V., Johnson, M. H., El Karoui, I., & Csibra, G. (2010). Motor system activation reveals infants’ on-line prediction of others’ goals. Psychological Science, 21(3), 355–359.
Southgate, V., Johnson, M. H., Osborne, T., & Csibra, G. (2009). Predictive motor activation during action observation in human infants. Biology Letters, 5(6), 769–772.

Stapel, J. C., Hunnius, S., van Elk, M., & Bekkering, H. (2010). Motor activation during observation of unusual versus ordinary actions in infancy. Social Neuroscience, 5(5–6), 451–460.
Straube, B., Green, A., Jansen, A., Chatterjee, A., & Kircher, T. (2010). Social cues, mentalizing and the neural processing of speech accompanied by gestures. Neuropsychologia, 48(2), 382–393.
Striano, T., Chen, X., Cleveland, A., & Bradshaw, S. (2006a). Joint attention social cues influence infant learning. European Journal of Developmental Psychology, 3(3), 289–299.
Striano, T., Reid, V. M., & Hoehl, S. (2006b). Neural mechanisms of joint attention in infancy. The European Journal of Neuroscience, 23(10), 2819–2823.
Striano, T., & Stahl, D. (2005). Sensitivity to triadic attention in early infancy. Developmental Science, 8(4), 333–343.
Stroganova, T. A., Orekhova, E. V., & Posikera, I. N. (1999). EEG alpha rhythm in infants. Clinical Neurophysiology, 110(6), 997–1012.
Taylor, M., Batty, M., & Itier, R. (2004). The faces of development: A review of early face processing over childhood. Journal of Cognitive Neuroscience, 16(8), 1426–1442.
Taylor, M. J., & Baldeweg, T. (2002). Application of EEG, ERP and intracranial recordings to the investigation of cognitive functions in children. Developmental Science, 5(3), 318–334.
Taylor, M. J., Edmonds, G. E., McCarthy, G., & Allison, T. (2001). Eyes first! Eye processing develops before face processing in children. Neuroreport, 12(8), 1671–1676.
Thakor, N. V., & Tong, S. (2004). Advances in quantitative electroencephalogram analysis methods. Annual Review of Biomedical Engineering, 6(1), 453–495.
Thorgrimsson, G. B., Fawcett, C., & Liszkowski, U. (2014). Infants’ expectations about gestures and actions in third-party interactions. Frontiers in Psychology, 5, 321.
Thorgrimsson, G. B., Fawcett, C., & Liszkowski, U. (2015).
1- and 2-year-olds’ expectations about third-party communicative actions. Infant Behavior & Development, 39, 53–66.
Tomasello, M. (1995). Joint attention as social cognition. In C. Moore & P. Dunham (Eds.), Joint attention: Its origins and role in development (pp. 103–130). Hillsdale, NJ: Lawrence Erlbaum Associates.
Tomasello, M., & Barton, M. E. (1994). Learning words in nonostensive contexts. Developmental Psychology, 30(5), 639–650.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. The Behavioral and Brain Sciences, 28(5), 675–691; discussion 691–735.
Tomasello, M., & Farrar, M. J. (1986). Joint attention and early language. Child Development, 57(6), 1454–1463.
Tomasello, M., Hare, B., Lehmann, H., & Call, J. (2007). Reliance on head versus eyes in the gaze following of great apes and human infants: The cooperative eye hypothesis. Journal of Human Evolution, 52(3), 314–320.
Umiltà, M. A., Kohler, E., Gallese, V., Fogassi, L., Fadiga, L., Keysers, C., & Rizzolatti, G. (2001). I know what you are doing: A neurophysiological study. Neuron, 31(1), 155–165.
Urigüen, J. A., & Garcia-Zapirain, B. (2015). EEG artifact removal—state-of-the-art and guidelines. Journal of Neural Engineering, 12(3), 031001.

Valenza, E., Simion, F., Cassia, V. M., & Umiltà, C. (1996). Face preference at birth. Journal of Experimental Psychology: Human Perception and Performance, 22(4), 892–903.
van Elk, M., van Schie, H. T., Hunnius, S., Vesper, C., & Bekkering, H. (2008). You’ll never crawl alone: Neurophysiological evidence for experience-dependent motor resonance in infancy. NeuroImage, 43(4), 808–814.
Vecera, S. P., & Johnson, M. H. (1995). Gaze detection and the cortical processing of faces: Evidence from infants and adults. Visual Cognition, 2(1), 59–87.
von Hofsten, C., Uhlig, H., Adell, M., & Kochukhova, O. (2009). How children with autism look at events. Research in Autism Spectrum Disorders, 3(2), 556–569.
Vouloumanos, A., Onishi, K. H., & Pogue, A. (2012). Twelve-month-old infants recognize that speech can communicate unobservable intentions. Proceedings of the National Academy of Sciences of the United States of America, 109(32), 12933–12937.
Vouloumanos, A., & Werker, J. F. (2007). Listening to language at birth: Evidence for a bias for speech in neonates. Developmental Science, 10(2), 159–164.
Webb, S. J., Long, J. D., & Nelson, C. A. (2005). A longitudinal investigation of visual event-related potentials in the first year of life. Developmental Science, 8(6), 605–616.
Webb, S. J., & Nelson, C. A. (2001). Perceptual priming for upright and inverted faces in infants and adults. Journal of Experimental Child Psychology, 79(1), 1–22.
Wiebe, S. A., Cheatham, C. L., Lukowski, A. F., Haight, J. C., Muehleck, A. J., & Bauer, P. J. (2006). Infants’ ERP responses to novel and familiar stimuli change over time: Implications for novelty detection and memory. Infancy, 9(1), 21–44.
Wilkinson, N., Metta, G., & Gredeback, G. (2011). Modelling the face-to-face effect: Sensory population dynamics and active vision can contribute to perception of social context. In 2011 IEEE International Conference on Development and Learning (ICDL) (Vol. 2, pp. 1–6).
Yoo, K. H., Cannon, E.
N., Thorpe, S. G., & Fox, N. A. (2015). Desynchronization in EEG during perception of means-end actions and relations with infants’ grasping skill. British Journal of Developmental Psychology. Advance online publication.
Zink, I., & Lejaegere, M. (2002). N-CDIs: Lijsten voor communicatieve ontwikkeling [N-CDIs: Lists for communicative development]. Leuven: Acco.

Acta Universitatis Upsaliensis
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Social Sciences 126
Editor: The Dean of the Faculty of Social Sciences

A doctoral dissertation from the Faculty of Social Sciences, Uppsala University, is usually a summary of a number of papers. A few copies of the complete dissertation are kept at major Swedish research libraries, while the summary alone is distributed internationally through the series Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Social Sciences. (Prior to January 2005, the series was published under the title “Comprehensive Summaries of Uppsala Dissertations from the Faculty of Social Sciences”.)

ACTA UNIVERSITATIS UPSALIENSIS
Distribution: publications.uu.se
urn:nbn:se:uu:diva-281242
UPPSALA 2016