MODALITY-SPECIFIC SEMANTIC PROCESSING OF ACTION CONCEPTS: AN EYETRACKING INVESTIGATION

By

CHING-I HUNG

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2015

© 2015 Ching-I Hung

To my family

ACKNOWLEDGMENTS

I thank my chair and doctoral supervisor Dr. Jamie Reilly for his guidance. He has given me many opportunities to grow independently. It is through him that I realized what a research scientist is and learned to push my limits. I am very grateful to my committee members for their patience and feedback, to Dr. Lori Altmann for her advice during my doctoral program, and to the participants for their scheduling flexibility during the study.

I wish to acknowledge my colleagues, members of the Cognition and Language Lab at the University of Florida, and members of the Memory, Concepts, Cognition Lab at Temple University. Life at UF would have been tough without my doctoral colleagues Hyejin, Audrey, Jonathan, Supraja, Nour, Lisa and Jimena. I appreciate the valuable time we spent together; it motivates me to be better both academically and personally. I also appreciate members at Temple University for their warm welcome as I relocated to a new environment. I would not have gone this far without their support.

Above all, I thank my family for their unconditional love and my mom for reminding me to stay faithful. I especially thank my best friend, Hao-yun, for the incredible support when the tide was high in this journey.


TABLE OF CONTENTS

page

ACKNOWLEDGMENTS ...... 4

LIST OF TABLES ...... 7

LIST OF FIGURES ...... 8

ABSTRACT ...... 9

CHAPTER

1 INTRODUCTORY OVERVIEW ...... 11

2 LITERATURE REVIEW ...... 15

Semantic Processing – Access to Meanings ...... 15
Action Concepts ...... 22
Actions vs. Objects ...... 23
Action Understanding ...... 26
Verbal and Visual Access to Action Concepts ...... 29
Working Hypotheses ...... 32

3 STIMULUS STANDARDIZATION ...... 33

Overview ...... 33
Single Word Norming ...... 33
Participants ...... 33
Single Word Lexical Norming ...... 34
Single Word Visual Norming ...... 36
Summary of Single Word Norming ...... 37
Action Pair Selection ...... 38
Participants ...... 38
Pair Selection ...... 38

4 ACTION SEMANTIC JUDGMENT TASK ...... 41

Overview ...... 41
Participants ...... 41
General Procedure ...... 42
Semantic Task and Procedure ...... 42
Motor Familiarization Task and Semantic-Motor Control Task ...... 44
Motor Task and Procedure ...... 45
Eyetracking Recording and Apparatus ...... 45
Data Design and Analysis ...... 46
Behavioral Analysis ...... 46


Eyetracking Analysis ...... 47
Research Questions Remarks and Predictions ...... 48

5 RESULTS ...... 50

Data Trimming ...... 50
Behavioral Results ...... 50
Response Accuracy ...... 51
Reaction Time ...... 52
Motor Rates ...... 53
Behavioral Summary ...... 54
Interim Discussion of the Behavioral Results ...... 56
Eyetracking Results ...... 59
Fixation Count ...... 59
Revisits ...... 59
Fixation Duration ...... 60
Eyetracking Summary ...... 61
Interim Discussion of the Eyetracking Results ...... 63
Secondary Analysis ...... 64
Discussion of the Experimental Findings ...... 65

6 GENERAL DISCUSSION ...... 81

Direct vs. Indirect Semantic Processing ...... 81
Motor Experience during Semantic Processing ...... 83
Action Semantic Processing – A Summary ...... 85
Limitations and Future Directions ...... 87

APPENDIX

A INSTRUCTIONS FOR PSYCHOLINGUISTIC RATINGS ...... 91

B ACTION STIMULI IN SETS 1 AND 2 ...... 93

C NON-ACTION STIMULI ...... 95

LIST OF REFERENCES ...... 96

BIOGRAPHICAL SKETCH ...... 106


LIST OF TABLES

Table page

3-1 Mean characteristics of action stimuli ...... 40

3-2 Mean and standard deviation of the 7-point Likert scale for the pair-relation survey ...... 40

5-1 Summary of F statistics for the ANOVA results examining response accuracy in action and object concepts via subject (F1) and item (F2) analysis ...... 68

5-2 Mean and standard deviation of response accuracy ...... 68

5-3 Summary of F statistics for the ANOVA results examining reaction time in action and object concepts via subject (F1) and item (F2) analysis ...... 69

5-4 Mean and standard deviation of reaction time ...... 69

5-5 Mean, standard deviation, and percent rate change of the motor tapping rate ...... 70

5-6 Summary of F statistics for the ANOVA results examining fixation count in the target region in action and object concepts via subject (F1) and item (F2) analysis ...... 70

5-7 Mean and standard deviation of the fixation count in the target AOI ...... 70

5-8 Summary of F statistics for the ANOVA results examining revisits in the target region in action and object concepts via subject (F1) and item (F2) analysis ...... 71

5-9 Mean and standard deviation of revisits in the target AOI ...... 71

5-10 Summary of F statistics for the ANOVA results examining fixation duration in the target region in action and object concepts via subject (F1) and item (F2) analysis ...... 72

5-11 Mean and standard deviation of fixation duration in the target AOI ...... 72

B-1 Experimental stimuli in the semantic judgment task ...... 93

C-1 Non-action pairs in the semantic judgment task ...... 95


LIST OF FIGURES

Figure page

4-1 Example of the trial procedure ...... 49

5-1 Main effects of Task Condition (single, dual) and Modality (picture, word) for accuracy for action concepts ...... 73

5-2 Main effect of Relation (association, similarity) for accuracy for object concepts ...... 74

5-3 Main effect of Task Condition (single, dual) and Modality (picture, word) for reaction time for action concepts ...... 75

5-4 Main effect of Task Condition (single, dual) for reaction time for object concepts ...... 76

5-5 Finger tapping rate by baseline control and experimental trials (Domain x Modality) .....77

5-6 Main effects of Domain (action, object) and Modality (picture, word) for fixation count ...... 78

5-7 Main effect of Modality (picture, word) for revisits ...... 79

5-8 Main effect of Domain (action, object) for fixation duration ...... 80

6-1 Schematic depiction of action semantic processing ...... 90


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

MODALITY-SPECIFIC SEMANTIC PROCESSING OF ACTION CONCEPTS: AN EYETRACKING INVESTIGATION

By

Ching-I Hung

August 2015

Chair: Jamie Reilly
Major: Communication Sciences and Disorders

One of the key issues in cognitive neuroscience is the partially distinct semantic representation and processing of actions and objects. While conceptual flexibility has been well documented for objects, little is known about how semantic access to action concepts is moderated by input modality (i.e., pictures vs. words) and motor activation. One prominent hypothesis holds that pictures offer faster or ‘privileged’ semantic retrieval relative to words, resulting in a picture superiority effect for object concepts. Another prominent hypothesis, known as embodied cognition, emphasizes the contribution of semantic-motor interaction to action representation through active enactment of our bodily experience. We tested these two hypotheses in an action semantic judgment task with and without a concurrent motor tapping task. The semantic judgment task employed pictorial and orthographic presentation of the same items, allowing contrasts of performance across modalities. Furthermore, we manipulated the type of action concept (i.e., physical vs. non-physical) and the task condition (i.e., semantic vs. semantic + motor task composed of sequential finger tapping on a number pad) to examine the effect of embodied experience at the category and task levels.

Both behavioral and eyetracking measures revealed a general processing difficulty for actions vs. objects (i.e., actions are ‘harder’ to process than objects). Moreover, participants showed a reversal of the picture superiority effect for actions, whereas objects elicited comparable patterns of performance across modalities. These results challenge the privileged access hypothesis and further demonstrate the distinctiveness of the action-object processing dichotomy. We did not find evidence of motor interference, in that there was no processing difference between physical and non-physical actions. Furthermore, the concurrent motor task induced a dual task effect regardless of conceptual domain and action category. Although there was a greater processing delay and lower response accuracy for action concepts in the semantic + motor dual task condition, this difference did not attain statistical significance. These results suggest that the picture superiority effect reverses for action verbs (i.e., words > pictures) and that a concurrent motor task does not necessarily interfere with action semantic categorization abilities.


CHAPTER 1 INTRODUCTORY OVERVIEW

Meaning constitutes a very significant part of our everyday functioning. Semantic memory, the frequently used term for meaning, refers to various forms of world knowledge: words or concepts from our experience, beliefs, or learned facts. It provides the foundation for thinking and reasoning; most importantly, semantic memory is also multifaceted. It is not a passive store of information but an active and dynamic system that allows us to retrieve and access knowledge consistently and adaptively in response to different contexts for effective interaction with the world (Thompson-Schill, D'Esposito, Aguirre, & Farah, 1997).

Dominant theories of object semantic memory emphasize a multimodal representation in the brain in which modality-specific information may be weighted differently during processing (McCarthy & Warrington, 2015; Yee, Chrysikou, & Thompson-Schill, 2013). For example, to determine whether an apple and an orange taste the same, one is unlikely to rely on knowledge of color or shape to make the decision. Thus, only part of one's semantic knowledge is actively retrieved, while other parts are not required in this case. Furthermore, studies have also shown that different object categories weight semantic features differently (Martin & Chao, 2001). Living categories such as fruits and animals are more salient in their perceptual features (e.g., color, shape), whereas non-living categories like tools are defined more by their functions. Therefore, while there may be many kinds of semantic knowledge about one concept, some knowledge is weighted more heavily than others and may be more accessible depending on task demands (Amsel, Urbach, & Kutas, 2013).

Semantic memory is also accessible via a number of routes, each varying in its degree of symbolic representation of semantic information. For example, pictures and words are used rather frequently to represent semantic content. One of the fundamental questions in human cognition is how information processing is affected by stimulus modality. Asymmetries in the processing of pictures and words have been frequently reported in the literature with regard to object concepts (Nelson, Reed, & McEvoy, 1977; Paivio, 1991; Potter & Faulconer, 1975). It is commonly suggested that pictures provide privileged access to the semantic system relative to words, which must first engage symbolic lexical representations.

While many characteristics of semantic memory have been elaborated extensively for object concepts, fewer studies have systematically examined how we process action concepts. Action concepts constitute a significant portion of human semantic memory. Processing an action concept may involve recognizing a reference (e.g., an object) and making appropriate relational inferences.

Many studies implicate some form of motor representation in the brain for action concepts, although the degree of motor activation and the question of whether this activation is necessary remain debatable (Chatterjee, 2010). Following current theories of semantic memory for objects and actions, I examined whether action concepts also carry such conceptual flexibility in response to input modality and motor simulation.

My first aim was to evaluate semantic access as a function of input modality for action concepts. The typical view is that pictures hold a processing advantage over words because pictures gain direct access to the semantic system. However, Hung, Edmonds, & Reilly (submitted) found that this picture advantage in fact reverses for action pictures. We examined two standardized neuropsychological assessments of semantic associative relationships for objects (i.e., the Pyramids and Palm Trees Test) and actions (i.e., the Kissing and Dancing Test). Participants took longer to process action images than action words. Therefore, we questioned whether the picture superiority effect applies only to objects and hypothesized a reversed picture superiority effect for action concepts due to their complex surface form and semantic representation.

My second aim was to investigate the processing demand of action-related knowledge as a function of first-person, embodied experience. The embodied cognition approach predicts a processing advantage conferred by related sensorimotor experience during action-related semantic processing, due to motor enactment. It has been demonstrated that neural activation and behavioral performance can be modulated by the degree of an object's sensorimotor salience. For example, we might readily imagine the gesture and grasp characteristics associated with a verb such as cutting, whereas other verbs such as melting are not subject to motor simulation. Therefore, it was hypothesized that physical actions (i.e., actions humans can perform) would show a processing advantage in a semantic judgment task, whereas non-physical actions (i.e., actions expressing changes of an object) would not. An additional manipulation included a concurrent motor task composed of sequential finger tapping on a number pad. I reasoned that if both the action semantic judgment task and the motor task utilized the same resources in the brain (e.g., motor enactment), performing both tasks at the same time would introduce a resource competition effect, especially for physical actions.

There were two experimental conditions: a single task condition with the action semantic judgment task alone, and a dual task condition with the semantic judgment task and a concurrent motor task. Before the experiment, I controlled for a range of lexical and visual characteristics of the action words to ensure that physical and non-physical actions were comparable on most psycholinguistic measures.

As predicted, input modality impacted action concepts differently than static objects. Participants took longer to judge semantic relatedness for actions presented as pictures than as words. Moreover, participants were in general slower to respond and showed lower response accuracy for action vs. object concepts. These reversed processing differences were also evident in eyetracking measures, such as an increased number of fixations and prolonged fixation durations in the target area. I suggest that the processing disadvantage for action pictures resulted from both their visual form and their semantic content. These two levels of processing challenge thus led to a reversal of the picture superiority effect for action concepts.

I did not find strong evidence for an influence of embodied experience on action concept processing. There were no processing differences between physical and non-physical actions. Instead, the concurrent motor task induced an overall dual task effect in both conceptual domains and both action categories. However, a general trend showed that it was more effortful to process action concepts. Behaviorally, there was a greater processing delay and lower response accuracy for action concepts in the dual task condition. Although the motor tapping rate did not differ significantly among baseline, objects, and actions, a trend toward a greater tapping rate was found in the action picture modality.

In the current study, I found comparable results for object pictures vs. object words. Moreover, actions did not elicit a picture superiority effect; instead, there was a reversal of the picture superiority effect for action concepts. Although the current findings are not sufficient to support definitive claims about the semantic-motor interaction observed in previous studies, I discuss alternative avenues for investigating the dynamic, task-dependent motor processing of action concepts.


CHAPTER 2 LITERATURE REVIEW

Semantic Processing – Access to Meanings

Semantic memory is accessible through a variety of routes (e.g., static or dynamic visual input, printed or spoken words), among which static pictures and words are frequently used. Pictures and words differ in many ways with regard to the kind of linguistic or semantic information they contact initially (Dell & O'Seaghdha, 1992; Huttenlocher & Lui, 1979; Levelt, 2001). When seeing a picture of a cat, sensory perception of its color, shape, or texture maps consistently onto the mental representation of stored conceptual knowledge, which activates the meaning associated with cat. In contrast, visual input of the word cat typically triggers a series of orthographic and phonological analyses of the letters, followed by activation of the mental lexicon before meaning is actually retrieved. Comparisons between picture naming and word reading usually demonstrate faster naming with words (i.e., oral reading) than with pictures. The surface structures of words make contact with the phonological system, which has only arbitrary, symbolic links to the semantic system; picture naming, in contrast, requires additional sub-steps to access the output lexicon (e.g., Potter & Faulconer, 1975; Glaser & Glaser, 1989; Theios & Amrhein, 1989).

Pictures typically show advantages over words in processing speed and depth of encoding. When asked to determine whether a target word or black-and-white line drawing was congruent with a preceding category name (e.g., living vs. nonliving), healthy participants were faster at categorizing drawings than words (N = 16; Potter & Faulconer, 1975). Seifert (1997) also demonstrated that the picture superiority effect was readily observed in a categorization task after matching the perceptual size of the picture and word stimuli. Additionally, in his third experiment, when participants were asked to determine whether items were related, response times were still faster for categorically related pictures than for words, but the superiority effect diminished when judging non-categorically related items (e.g., car-street) in both picture and word conditions. These behavioral studies find converging support from neurophysiological studies. Azizian, Watson, Parvaz, and Squires (2006), for example, compared picture and word processing in a dual-target forced-choice task using event-related potentials (ERPs). Pictures and their corresponding names were presented simultaneously, and participants were asked to classify all stimuli into target (i.e., low probability) and nontarget (i.e., high probability) categories. The authors found earlier activation and greater peak amplitude of the P300, an ERP component enhanced by anomalous (e.g., name mismatches) or low-frequency events, indicating that participants were faster to detect and categorize odd events in the picture condition. Thus, the neurophysiological evidence also points to privileged access to semantic memory, especially for categories, for pictures relative to words (see also Amsel, Urbach, & Kutas, 2013 for more recent evidence).

Theories underlying semantic processing. Many have attempted to explain the mechanisms underlying the processing of verbal and visual information and their interaction with the semantic system. One of the most influential approaches emphasized dual semantic systems responsible for visual and verbal input. The dual coding account posits two separate semantic systems, imaginal and verbal, in which pictures have the advantage of accessing both while words access only the verbal system. It is this dual access that contributes to the picture processing advantage (e.g., Paivio, 1991). Therefore, rather than one semantic system that stores and retrieves information, dual coding theory argues for at least two domain-specific systems that process different kinds of information.


Opponents of dual coding and other similar “multiple semantics” approaches have argued that semantic memory is supported by a unitary system (Bright, Moss, & Tyler, 2004). Caramazza (1996) suggested one unitary semantic system accessed directly from all forms of input modality (e.g., vision, spoken and written language), with an emphasis on direct access to the semantic system from visual input. Glaser (1989, 1992), on the other hand, proposed a model, similar in spirit to the dual coding framework, in which pictures have privileged access to the unitary semantic system but words require translational processing to their mental lexicon entries before accessing the semantic system (Seifert, 1997). Therefore, on the unitary semantic system account, the differential processing may result from the different physical characteristics that allow pictures more direct access to semantic information than words, rather than from two orthogonal subsystems.

Both the unitary and multiple semantic system arguments have been invoked to account for the idiosyncratic performance of brain-injured adults (e.g., Warrington, 1975; Warrington & Shallice, 1984). Dissociations of picture-word impairment have been reported in patients with neurological conditions who demonstrated better or preserved comprehension of pictures compared with words. McCarthy and Warrington (1988) reported a patient (T.O.B.) with progressive deterioration in his language use and comprehension. When asked to define a given object or spoken word, T.O.B. could retrieve much information from pictures but not from their verbal labels. The patient demonstrated a fairly consistent dissociation over time, especially for living items (pictures 94% vs. words 33%). Saffran, Coslett, Martin, and Boronat (2003) reported patient BA, with progressive fluent aphasia, who also had severe verbal impairment as measured by both auditory and written lexical decision tasks but relatively preserved access to knowledge from pictures. Both cases were discussed as evidence of orthogonal semantic subsystems in which pictures access information stored in a visual format, whereas words access verbal knowledge. However, in their connectionist model, Lambon Ralph and Howard (2000) replicated the same word-picture dissociation within one unitary semantic system, which explained the asymmetric performance of IW, their patient with semantic dementia. Coccia, Bartolini, Luzzi, Provinciali, and Lambon Ralph (2004) presented further converging evidence for Lambon Ralph's model in a four-year longitudinal study of two patients with semantic dementia. The patients' semantic knowledge declined in a parallel fashion across verbal and nonverbal tasks, despite differences in absolute performance between the two. Notably, the patients' nonverbal performance, although relatively spared, was not completely preserved. Furthermore, given appropriate cues (e.g., priming with a semantically related picture), patients could retrieve semantic information about words they had trouble with in isolation (e.g., patient IW). Therefore, the amodal view stresses that words bear an arbitrary phonology-to-semantics relationship to the entities they denote, whereas visual/pictorial representations are relatively more systematic in their mapping between structure and meaning. It is thus the more arbitrary representation of words that contributes to the seeming dissociation between verbal and nonverbal semantics.

On the other hand, one computational model attempted to provide a compromise approach to the debate between unitary vs. multiple semantic systems, especially for the case of optic aphasia (Plaut, 2000). Patients with optic aphasia demonstrate impairment in naming visually presented objects, without the object recognition deficits seen in visual agnosia or the general naming deficits seen in anomia. Patients may still be able to recognize the objects they cannot name, gesturing their use appropriately, and they retain the ability to name the same objects from other input modalities (e.g., auditory or tactile). In his connectionist model, Plaut (2002) argued for a unitary semantic system common to all input modalities that shows selective activation in response to different input modalities. He suggested that instead of debating the two ends of the argument, a graded, middle-ground perspective may be more suitable.

Experience-based semantic access. One popular line of research on object concepts has viewed the semantic system in terms of how people interact with and experience objects in the world. This approach, known as embodied cognition, is premised on a distributed architecture in which concepts are closely associated with modality-specific conceptual features and their corresponding brain regions. The embodied approach in its contemporary form is also considered flexible, so that semantic processing is tailored to concurrent, task-based constraints (Amsel et al., 2013).

Under the experience-based account, activation of conceptual information can vary flexibly as a function of verbal or nonverbal context. Kiefer (2001) demonstrated this with respect to category-specific effects in healthy participants via ERPs. In these experiments, healthy participants were asked to determine whether a given superordinate category name (experiment 1) or a category probe presented pictorially in the form of an object pair (experiment 2) was appropriate to a target object or word. In both word and picture probe conditions, picture targets were categorized faster than words, reflecting overall faster perceptual access to semantic memory. Specifically, natural items were verified faster than artifacts in the picture condition. The results not only supplemented the picture superiority effect but also showed that natural items carry more visual information, perceptually and semantically, than artifacts, which was reflected in the picture condition.


The experience account also indicates that pictures and words can bias knowledge access, due to the different semantic information contacted initially by each (Saffran, Coslett, & Keener, 2003). When healthy adults were asked to generate a word associated with a given object, they produced more verbal associates (e.g., bride – groom) in response to words. In contrast, participants produced more perceptual attributes (e.g., lion – roar) in response to pictures, even though these attributes were not depicted in the picture. Specifically, Saffran et al. found that verbs were more likely to be elicited by pictures of manipulable and inanimate objects than by non-manipulable objects. Thus, they demonstrated not only that pictures and words can affect how people respond to a task but also, again, that the semantic information of concepts is differentially weighted.

In a series of experimental manipulations designed to elucidate differential access to action-related knowledge from pictures and words, Chainay and Humphreys (2002) asked healthy participants to 1) name objects and words, 2) decide whether an action is appropriate for a given object (i.e., action decision), and 3) determine whether an object is typically found in a kitchen (i.e., contextual/semantic decision). The authors argued that if there is privileged access to action knowledge from objects, facilitation should be observed for object pictures vs. words in tasks requiring access to action knowledge. Consistent with previous findings, word naming was faster than picture naming. When asked to make judgments based on semantic information, participants were faster at making semantic judgments for object pictures, not only for contextual information but also for related action knowledge. It is generally suggested that the relationship between the surface form or shape of an object depicted in a picture and its manipulation knowledge affords rapid access, resulting in preferential access compared with word stimuli (Coccia et al., 2004; Thompson-Schill, Kan, & Oliver, 2001).


The experience-based approach also considers the joint contribution of experiential (i.e., nonverbal) and linguistic (i.e., verbal) information in shaping conceptual representation (e.g., the Language and Situated Simulation (LASS) framework; Simmons, Hamann, Harenski, Hu, & Barsalou, 2008). Both linguistic and experiential information contribute to semantic representation, and the contribution of each is not all-or-none but a mixed effect depending on stimuli and task conditions (Andrews, Vigliocco, & Vinson, 2009; Simmons et al., 2008). Therefore, following the notion of such conceptual flexibility, semantic processing should differ as a function of input modality. Non-linguistic experiential cues (e.g., pictures or videos) might draw more on experiential information before linguistic information. In a similar vein, Vigliocco and colleagues proposed a computational model evaluating the plausibility of experiential-only, linguistic-only, and combined conditions for semantic representation based on speaker-generated data. They concluded that semantic representation is highly statistical, reflecting the shared contribution of experiential and linguistic information (Andrews, Vigliocco, & Vinson, 2009).

Conceptual flexibility also stresses the necessity of evaluating semantic memory through more than one input modality. If semantic features of concepts are differentially weighted and differentially accessible under various task conditions, it becomes important to elucidate the mechanisms underlying semantic processing, such as the timing of semantic activation (when), the type of semantic information being activated (what), the brain area(s) associated with the activation (where), and how the process works.

Although theories of the differences in the processing of visual and verbal information are still debated, there are two general findings. First, a processing superiority of object pictures over words is commonly reported, suggesting that pictures have faster and more direct access to meaning, whereas words require translational activity at the representational level before accessing the semantic system. Second, the semantic processing of verbal and nonverbal information has been associated with multiple facets of semantic information (e.g., sensorimotor, verbal), reflecting active and dynamic interaction. Finally, we need to consider conceptual flexibility when assessing semantic processing.

Action Concepts

Like objects, actions label concepts. The ability to recognize an action concept requires a further step beyond recognizing a reference (to objects): making appropriate inferences. Typically, nouns are considered the prototypical labels for objects, whereas verbs are the prototypical labels for actions (Kemmerer, 2014).

It is unclear whether the mechanisms underlying the processing of object nouns pertain uniformly to action verbs. Actions are inherently motor-based and involve more abstract, relational information than objects. Recent studies have attempted to elucidate the neural substrates of action concepts (e.g., Watson, Cardillo, Ianni & Chatterjee, 2013) as well as the broader principles of representation in an action semantic system (e.g., Vigliocco, Vinson, Lewis, & Garrett, 2004). However, we have little understanding of whether conceptual flexibility exists for action concepts. It is unclear how accessible action concepts are through different input channels, as no previous studies have, to our knowledge, directly contrasted actions and objects across input modalities. Furthermore, the role of motor simulation in action semantic processing remains both unclear and highly controversial. While some studies have observed rapid activation within somatotopically demarcated motor cortices (Hauk, Shtyrov, & Pulvermüller, 2008), others have questioned the necessity of this involvement (Chatterjee, 2010). Therefore, it is important to include this line of research for a comprehensive investigation of human semantic cognition. In the following sections, I first describe how actions differ from objects and then the current frameworks underlying action semantic processing.

Actions vs. Objects

Actions and objects qualitatively differ in many ways. Early brain injury studies showed that patients with nonfluent aphasia demonstrate a selective impairment in action production, whereas patients with fluent aphasia have particular difficulty naming common objects (see Matzig, Druks, Masterson, & Vigliocco, 2009 for review). Since then, researchers have attempted to explain this apparent double dissociation (i.e., nonfluent aphasia seemed to show better object than action production, whereas fluent aphasia tended to show better action than object production). However, the dissociation is not clear-cut; that is, there was not always an action naming advantage in Wernicke's or anomic aphasia, nor were there consistent reports of an object naming advantage in Broca's aphasia. In one recent review, Matzig et al. (2009) found that aphasic patients tended to show more verb deficits than noun deficits overall; however, verb deficits were not invariably associated with a specific lesion location or aphasia type. Furthermore, a recent meta-analysis of fMRI/PET neuroimaging studies of objects and actions, using a hierarchical clustering algorithm, found that both objects and actions recruit distributed brain regions across frontal, parietal, and temporal cortex during processing, rejecting the common tenet that action processing is based primarily in frontal regions and object processing resides mainly in temporal regions (Crepaldi et al., 2013).

Despite the inconsistent reports in previous case studies, the action-object processing difference has been attributed to a number of linguistic and cognitive factors, which generally lead to the conclusion that action concepts are more complex than object concepts (e.g., Black & Chiat, 2003; Druks, 2002). The first explanation focuses on the linguistic referents of objects and actions, that is, nouns and verbs. Verbs are usually rich in grammatical marking and in the thematic roles they assign in a sentence. An action word alone or in a sentence can generate multiple expectations regarding an action event, since it can be associated with different doers, receivers, goals, instruments, or locations (McRae, Ferretti, & Amyote, 1997). Furthermore, recent work has demonstrated that the linguistic input of objects and actions carries probabilistic orthographic and phonological cues that allow people to make grammatical category judgments based on statistical regularities in surface form (de Zubicaray, Arciuli, & McMahon, 2013). For example, the word ending -eve is a morphophonological marker of verbs, whereas the ending -age more commonly predicts noun status. Although researchers hold slightly different views on the timing and degree of involvement of grammatical class during word retrieval (see Vigliocco, Vinson, Druks, Barber, & Cappa, 2011 for review), it has been suggested that this apparent grammatical distinction modulates lexical semantic representation.

The other explanation operates at the level of the different meanings conveyed by nouns and verbs. Bird, Howard, and Franklin (2000) emphasized the importance of imageability and indicated that the dissociation can be eliminated by controlling for the imageability of nouns and verbs. Studies have also shown that imageability led to differential noun-verb disruption in some patients with aphasia (Luzzatti et al., 2002). Aside from the lower imageability of verbs, the semantic organization of verb knowledge is also less straightforward. With regard to the semantic representation of action concepts, the traditional relational approach indicates that actions are represented with a shallower, matrix-like structure (compared to the hierarchical organization of objects; e.g., Collins & Quillian, 1969); that is, members of the same action category may share fewer semantic features and be less hierarchically organized (Huttenlocher & Lui, 1979; Miller & Fellbaum, 1991). In their semantic network model, WordNet, Miller and Fellbaum (1991) developed a large dataset of English words, including nouns, adjectives, and verbs, organized by the relations among them (e.g., how two words are related). While nouns were represented in a hierarchical, closely related structure (e.g., a dog is an animal), the primary relation among verbs was troponymy; that is, an action contains a manner elaboration of another action (e.g., to jog is to walk in some manner; Fellbaum & Miller, 1990). Moreover, it has been noted that while objects depict entities and often demonstrate a “tighter fit” between their surface structure and meaning, actions show a “looser fit” due to their temporal and dynamic relations (Black & Chiat, 2003).

Despite the linguistic differences between objects and actions, Vigliocco and colleagues attempted to incorporate action concepts into current representations of the semantic system. They proposed that actions and objects share the same semantic system, in which concepts are represented based on semantic features (i.e., the Featural and Unitary Semantic Space (FUSS); Vigliocco et al., 2004; Vinson & Vigliocco, 2002). The Vigliocco group collected a set of feature norms from 280 participants for 287 actions via a verbal feature generation task. In their model, the features of both objects and actions were divided into five knowledge types: visual, other perceptual (any other sensory modality), functional (what it is used for / the goal of an action), motoric (how a thing is used / how it moves), and other features. The model has been tested for semantic graded effects and semantic interference effects (Vinson & Vigliocco, 2002, 2008), demonstrating the plausibility of a feature-based representation for action concepts. Although the Vigliocco group also noted dissimilarities in the principles and content of semantic organization between objects and actions (such that object concepts are more tightly packed, whereas the semantic space for action concepts has less dense neighborhoods), their model did not intend to fully explain all conceptual knowledge. It indicates that actions and objects can be represented in the brain in a similar feature-based fashion, grounding knowledge in human experience of the world.

In general, compared to object concepts, these semantic and linguistic differences are likely to increase the complexity of, and perhaps the cognitive resources allocated during, action processing. Unlike objects, entities with a rather direct visual-semantic relationship, actions are typically discussed as events that involve additional conceptual information, being temporally and spatially dynamic, relation-based, and so on (Chatterjee, Southwood, & Basilico, 1999). Moreover, although the internal structure of action concepts is not fully understood, recent attempts have provided a plausible, neuroanatomically compatible avenue to action concepts based on our experience of the world, rather than focusing only on abstractions of their content and processing.

Action Understanding

Many recent investigations of experience-based conceptual representation and the underlying neural substrates of object and action concepts have heavily invoked embodied or grounded cognition viewpoints (Barsalou, 2008). Embodied cognition emphasizes that meaning arises from humans' bodily interaction with the external environment. Following a long debate regarding amodal vs. modality-specific semantic systems, the embodied framework argues that conceptual knowledge is not symbolic in format but anchored in distributed, modality-specific brain regions (Allport, 1985). The processing of meaning is held to involve re-enactments of previous experience in the corresponding sensorimotor systems, along with the affective states associated with that experience (Barsalou, 1999, 2009; Damasio, Tranel, Grabowski, Adolphs, & Damasio, 2004; Gallese & Lakoff, 2005; Kemmerer, Miller, Macpherson, Huber, & Tranel, 2013). Therefore, object concepts are represented in the brain regions associated with their semantic features, such as shape, color, sound, taste, and/or manipulability (e.g., Martin & Chao, 2001). The processing of action concepts, on the other hand, involves the brain regions, or nearby structures, that underlie the simulation or execution of those actions (Gallese & Lakoff, 2005).

In the domain of action concepts, it has been taken as direct evidence for the embodied framework that processing of body-part-specific actions in various input formats (e.g., overt observation/execution, static depiction, or verbal form) activates body-part-specific regions in the left motor cortex (Hauk, Johnsrude, & Pulvermuller, 2004; Hauk & Pulvermuller, 2004). Brain activation in motor, premotor, and/or visual motion areas has also frequently been observed during comprehension and production of action pictures and words (Aziz-Zadeh, Wilson, Rizzolatti, & Iacoboni, 2006; Berlingeri et al., 2008; Hauk et al., 2004; Liljestrom et al., 2008; Noppeney, Josephs, Kiebel, Friston, & Price, 2005). Such activation has been reported when listening to action words or sentences (Pulvermuller, Hauk, Nikulin, & Ilmoniemi, 2005; Raposo, Moss, Stamatakis, & Tyler, 2009); during lexical decision for face-, arm-, and leg-related verbs or action-related nouns (Hauk & Pulvermuller, 2004; Pulvermuller, Harle, & Hummel, 2001; Vigliocco & Kita, 2006); during semantic similarity tasks (Kable, Kan, Wilson, Thompson-Schill, & Chatterjee, 2005; Kable, Lease-Spellmeyer, & Chatterjee, 2002; Kemmerer, Castillo, Talavage, Patterson, & Wiley, 2008); and when generating action or action-related words (Saccuman et al., 2006).

An alternative perspective to embodied cognition holds that sensorimotor activation reflects post-processing imagery or ‘resonance’ spreading from conceptual processing to sensorimotor regions. This disembodied view rejects the claim that concepts are mediated by automatic and necessary activation of the sensorimotor systems whenever a compatible action or action-related concept is processed. Instead, while sensorimotor experience still plays a role in conceptual representation, some researchers have shifted to investigating when and how sensorimotor experience engages in conceptual processing (Chatterjee, 2010). On this view, conceptual activation of related sensorimotor experience is highly selective, representing a more graded grounding position. Sensorimotor information may therefore be neither necessary during processing nor provide any experience-related facilitation when thinking more abstractly; at the other end, there could be more experience-related facilitation when motor experience engages strongly during conceptual processing and formation.

Experience-dependent action understanding. It has been demonstrated that our own first-person, subjective experience with motor simulation impacts how we understand actions. For example, Calvo-Merino, Glaser, Grezes, Passingham, and Haggard (2005) investigated brain processes during action observation as a function of the observer's expertise (e.g., dancing). When observers were experts in a dance style (e.g., classical ballet, capoeira), greater activation of the brain network was found during observation of their own dance style. Similarly, the effect of experience has been demonstrated during action language comprehension. An fMRI study evaluated sentence comprehension in hockey players and non-skilled normal controls (Beilock, Lyons, Mattarella-Micke, Nusbaum, & Small, 2008). Both groups were presented with sentences containing everyday actions and hockey-related actions, each followed by a picture of an individual, and were asked to determine whether the individual was mentioned in the preceding sentence. Skilled hockey players not only showed greater premotor cortical activation but also demonstrated better understanding of hockey-related sentences than novices; however, this effect was not observed for everyday actions (also see Lyons et al., 2010).

Similar effects of motor-related expertise are also apparent for more symbolic action processing. Researchers have tested whether motor activation differs as a function of the first-person (i.e., I) or third-person (i.e., s/he) perspective of action phrases (Saxe, Jamal, & Powell, 2006). For example, Papeo, Vallesi, Isaja, and Rumiati (2009) examined, using TMS, whether motor activation was greater for self-referential or other-referential action verb meanings. Healthy Italian participants read Italian action words silently and determined whether the subject was first person (e.g., afferr-o) or third person (e.g., afferr-a), as cued by the Italian word suffix. Both hand-action verbs (e.g., to stir) and non-action verbs (e.g., to wonder) were included in the study. Greater activation in the motor cortex was found for action words (vs. non-action words) only when they were presented in the first-person perspective. The authors suggested that the involvement of the motor system in conceptual processing may depend on the action or motor component of the word as well as the agent of the implied action. This leads to the suggestion that motor simulation is not an all-or-none effect but is largely based on the degree to which one finds information personally relevant. As a result, such relevance could augment performance both neurophysiologically and behaviorally.

Verbal and Visual Access to Action Concepts

While neurophysiological studies have provided abundant opportunities to understand the neural substrates of action concepts, direct comparisons of verbal and visual access to action knowledge remain scant, both functionally and behaviorally.

Dynamic and static action presentation – the ‘unnatural’ view. Action images are by nature more visually complex than object images due to the amount of information depicted. Aside from this perceptual complexity, a static action picture may represent only one fractional moment of an entire process, which may underlie the increased processing load or difficulty observed in clinical populations (d'Honincthun & Pillon, 2008; Fung et al., 2001). Some have found that patients' action comprehension or production can be improved with dynamic action stimuli (e.g., videos) (d'Honincthun & Pillon, 2008; Pashek & Tompkins, 2002) or real gestural performances (Druks & Shallice, 2000).

However, the findings remain controversial, as no improvement or difference was found in other studies (e.g., Berndt, Mitchum, Haendiges, & Sandson, 1997). For example, Jensen (2000) reported a patient with a verb retrieval problem who did not demonstrate better naming performance for videos than for pictured actions. In another anomic case, Berndt et al. (1997) also reported no difference between naming picture and video stimuli, suggesting an underlying deficit of action concepts regardless of whether their dynamic nature was demonstrated. Moreover, a recent neuroimaging study examined the visual processing of dynamic versus static actions in both healthy adults and individuals with brain damage using the Dynamic Action Naming test and the Static Action Naming task (Tranel, Manzel, Asp, & Kemmerer, 2008). While dynamic and static actions shared an overlapping neural network, brain-damaged patients' performance on the dynamic and static naming tasks was highly correlated (N = 71, r = .91), demonstrating high commonality of behavioral performance across dynamic (video) and static (picture) naming. Therefore, while some may argue that static pictures stand in the same impoverished relation to action concepts as feature-poor pictures do to object concepts, it is also evident that static pictures can, in some cases, sufficiently convey action concepts.

Action picture vs. action word. Direct comparisons of pictures and words in action semantic processing are rare, functionally and behaviorally (Chatterjee, 2010; Watson et al., 2013). In a conceptual matching task similar to the Pyramids and Palm Trees Test (Howard & Patterson, 1992), Kable et al. (2002) conducted an fMRI study examining brain activation for action event knowledge using pictures and words. They demonstrated overlapping brain regions in the left hemisphere for both types of stimuli (also see Watson et al., 2013 for a meta-analytic review). Moreover, although their focus was on the brain regions involved during action semantic processing, the behavioral results from their small group of participants (N = 6) showed a trend toward lower accuracy and longer reaction times for pictures (picture accuracy 76 ±4% and RT 1720 ±80 ms; word accuracy 91 ±3% and RT 1469 ±65 ms). However, no further investigation was carried out to evaluate the latent factors of fine-grained action knowledge underlying this behavioral outcome.

Our previous preliminary study (Hung, Reilly, & Edmonds, submitted) compared semantic processing of pictures and words for objects and actions to determine whether access to semantic representations differs across input modality (e.g., word vs. picture) and conceptual domain (e.g., object vs. action), using two common neuropsychological measures: the Pyramids and Palm Trees Test (Howard & Patterson, 1992) for objects and the Kissing and Dancing Test (Bak & Hodges, 2003) for actions. Participants were asked to match triads of pictures or words for semantic associative relatedness. The results showed that action semantic processing required more time than object processing; moreover, healthy participants allocated more attention when judging action pictures than action words, as measured by the number of fixations on the target choice. We suggested that this processing disadvantage may result from the complex and more abstract nature of action concepts relative to objects.

We questioned the hypothesis that pictures enjoy privileged access to semantic memory and instead suggest that these modality advantages are a graded function of word class. Concrete nouns may elicit a picture advantage because of their inherent sensorimotor salience, whereas verbs behave more like abstract nouns (Paivio, Yuille, & Madigan, 1968). That is, it is more effortful to convey the multitude of spatiotemporal and thematic information associated with actions within the context of static photographs.


To sum up, current theories of action understanding typically emphasize the role of sensorimotor experience. While many researchers focus intensively on the neural substrates underlying different modes of action understanding or articulation, it is unclear how action semantic processing differs as a function of input modality. Contradictory evidence has been reported, and the supportive evidence has not been investigated systematically enough to draw further conclusions. Therefore, a more systematic design will likely shed light on these processing differences.

Working Hypotheses

In the introduction, I discussed the multifaceted semantic system in which information processing is dynamic and flexible. For action concepts, many questions remain exploratory. This study had two aims. The first aim tested modality-specific processing of action concepts, directly comparing direct vs. indirect action semantic access. When we process the meanings of static objects, pictures show a robust advantage over words in the speed and accuracy of access to the underlying representations (i.e., the picture superiority effect). In contrast, I hypothesized that action concepts are subject to a reversal of this effect, characterized by an advantage for words over pictures.

The second aim was to examine the effect of motor experience during semantic processing. Given that previous studies have emphasized motor enactment during action processing, it was hypothesized that processing physical action concepts (i.e., self-initiated, self-executed actions) simulates our motor representations, and thus that this type of action concept is more likely than non-physical actions to activate motor experience during semantic retrieval. Additionally, compared to non-physical actions, physical actions would be more likely to interact with the finger tapping task, since both processes require similar resources in the brain.


CHAPTER 3 STIMULUS STANDARDIZATION

Overview

Before implementing the main experiments, I conducted a series of norming studies to match the experimental stimuli on a range of psycholinguistic variables. All action words used in the experimental tasks were entered into orthogonal norming surveys to evaluate their lexical and visual characteristics. The purpose of the lexical and visual norming of single and paired action words/images was to control stimulus characteristics so that the two action categories pitted against each other in the experiment differed only on the variable of experimental interest (i.e., embodied, first-person salience). The single word norming included an action category rating, a visual motion rating, a subjective familiarity rating, a subjective concreteness rating, lexical frequency, word length, naming agreement, and objective visual complexity. After single word norming was completed, I conducted a normative study of action pair relations to determine pair similarity, pair association, and visual similarity.

Single Word Norming

Participants

Thirty-six young adults (5 males; mean age = 20.1 years, SD = 1.31) from the University of Florida participated in the stimulus standardization study. All participants were right-handed native speakers of English with no reported history of neurological or developmental impairment. Participants were randomly assigned to several word norming questionnaires (action category, visual movement, familiarity, and concreteness), depending on how many they could finish within one hour. Approval from the Institutional Review Board at the University of Florida was obtained, and all participants signed an informed consent form prior to the initiation of study procedures.


Single Word Lexical Norming

Action word selection. A list of 183 actions was first drawn up following the criteria of the linguistic tradition (i.e., the English verb lexicon categorized by verb classes) that distinguishes between action verbs referring to physical actions that humans can perform (N = 111) and non-physical actions (N = 72) that express changes of an object (Kemmerer et al., 2008; Levin, 1993). Non-physical actions typically include change-of-state actions, cooking actions, or actions that only animals perform.

All words were normed on several psycholinguistic characteristics, but only the words actually used in the main experimental tasks (N = 142) are reported in the norming results below. Means and standard deviations of the psycholinguistic characteristics of single stimuli can be found in Table 3-1, and all norming instructions appear in Appendix A.

Action category norming. Twenty-five participants completed an action category norming survey on the list of actions. On a scale of 1 to 7, participants were instructed to determine whether a given action is performed more commonly by humans (7) or is more commonly used to express changes of an object (1). The mean rating of action verbs referring to physical actions was 6.10 (SD = .59) and the mean rating of non-physical actions was 3.68 (SD = 1.3). The two groups differed significantly in category rating [mean difference = 2.42, t(138) = 14.1, p < .001].
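The group contrast above is a standard independent-samples t-test over item-level mean ratings. As a minimal illustration (not the actual analysis script), the sketch below assumes two hypothetical arrays of per-item mean ratings; with 70 items per group, it reproduces the degrees of freedom (138) reported here:

    # Minimal sketch: comparing physical vs. non-physical actions on a normed
    # rating. The rating arrays are hypothetical placeholders for item-level means.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    physical = rng.normal(6.10, 0.59, size=70)      # per-item mean category ratings
    non_physical = rng.normal(3.68, 1.30, size=70)

    # Independent-samples t-test across items (df = n1 + n2 - 2 = 138)
    t, p = stats.ttest_ind(physical, non_physical)
    print(f"mean difference = {physical.mean() - non_physical.mean():.2f}, "
          f"t({physical.size + non_physical.size - 2}) = {t:.1f}, p = {p:.3g}")

The same comparison applies, with the appropriate rating arrays substituted, to each of the norming measures reported in this section.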

Visual movement rating. Participants were asked to rate the degree of visual movement they could imagine when reading an action word. The questionnaire was adapted from a previous study comparing action verbs varying in degree of motion magnitude (Bedny, Caramazza, Grossman, Pascual-Leone, & Saxe, 2008). I reasoned that visual motion is one important factor that could contribute to people's judgments of action concepts. Similar to imageability, if one can picture the movement in mind more easily, the action also tends to be highly imageable. The mean rating of action verbs referring to physical actions was 5.05 (SD = 1.03) and the mean rating of non-physical actions was 4.49 (SD = .99). Although the two groups were statistically different [mean difference = .56, t(138) = 3.28, p = .001], the effect was small, within one point on the 7-point Likert scale. Most importantly, both categories fell toward the high end of the visual movement scale.

Familiarity rating. Subjective familiarity ratings were collected by asking participants to rate how usual or unusual each action word was in their realm of experience on a 7-point Likert scale. The mean rating of the action verbs referring to physical actions was 6.54 (SD = .41) and the mean rating of non-physical actions was 6.37 (SD = .58). The two groups differed marginally [mean difference = .17, t(138) = 2.01, p = .05]. In general, both physical and non-physical action words were highly familiar.

Concreteness rating. Subjective concreteness ratings were also collected by asking participants to rate how well they were able to see, touch, or manipulate what the word represented. The mean rating of the action verbs referring to physical actions was 6.20 on a 7-point scale (SD = .48) and the mean rating of non-physical actions was 5.77 (SD = .55). Although the two groups were statistically different from each other [mean difference = .47, t(138) = 5.38, p < .001], the effect was small, within one point on the 7-point Likert scale. In general, both physical and non-physical actions were rated as highly concrete.

Lexical frequency and syllable length. Lexical frequency and word length were calculated using the SUBTLEX psycholinguistic database (Brysbaert & New, 2009). The mean frequency of the action verbs referring to physical actions was 57.2 (SD = 89.5) and the mean frequency of non-physical actions was 43.7 (SD = 71.8). Action words referring to physical actions had higher word frequency than non-physical actions, but the difference was not significant [mean difference = 13.5, t(140) = .99, n.s.]. The average syllable length for physical actions (M = 4.5, SD = 1.03) was only slightly shorter than that for non-physical actions (M = 4.7, SD = 1.18) [mean difference = -.20, t(140) = -1.10, n.s.].

Single Word Visual Norming

Aside from the original 183 action words, 26 additional words were included to supplement action words that had low image agreement. Line drawings of actions were taken from a variety of sources (Akinina et al., 2014; Druks & Masterson, 2000; Schwitter, Boyer, Meot, Bonin, & Laganaro, 2004; Szekely et al., 2004) and were also created by tracing images obtained online. Naming agreement and visual complexity were examined for the final 142 sets of action images. All images were sized and standardized to 500 by 500 pixels.

Naming agreement. Twelve additional participants were recruited for the naming agreement exercise. Participants were seated at a Dell laptop running E-Prime software, which displayed an action picture for 3 seconds, preceded by a fixation cross for 400 ms and followed by a scrambled image for 500 ms. Participants were asked to name the action image aloud as quickly and accurately as possible during the 3-second presentation. Items were counted as correct if norming participants gave the label the picture was originally designed to depict. If most participants consistently gave a different label (e.g., naming 'skidding' as 'swerving'), I kept the picture but replaced the label. Synonyms, however, were coded as incorrect (e.g., 'taking off' for 'launching'). The mean naming agreement was .86 (SD = .16) for physical actions and .84 (SD = .23) for non-physical actions. Naming agreement for the two categories was not significantly different.

Visual complexity. Following the International Picture Naming Project (IPNP) criteria (Szekely & Bates, 2000), objective visual complexity was obtained using the raw size of the image files in bytes. The two action categories did not differ significantly with regard to visual complexity [physical = 29.6 KB vs. non-physical = 28.1 KB, t(140) = .95, n.s.].
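Because this measure is simply the stored file size, it can be recomputed directly from the image files. A minimal sketch, assuming a hypothetical stimuli/ directory layout (the actual stimuli are listed in Appendix B):

    # Objective visual complexity as raw image file size (IPNP convention).
    import glob
    import os
    from scipy import stats

    def file_size_kb(path):
        return os.path.getsize(path) / 1024.0  # stored size in kilobytes

    # Hypothetical paths; substitute the actual stimulus directories.
    physical_kb = [file_size_kb(p) for p in glob.glob("stimuli/physical/*.png")]
    non_physical_kb = [file_size_kb(p) for p in glob.glob("stimuli/non_physical/*.png")]

    t, p = stats.ttest_ind(physical_kb, non_physical_kb)
    df = len(physical_kb) + len(non_physical_kb) - 2
    print(f"t({df}) = {t:.2f}, p = {p:.3f}")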

Summary of Single Word Norming

In the single word lexical and visual norming tasks, the following factors were evaluated: 1) action category rating (i.e., relative embodiment), 2) visual movement rating, 3) subjective familiarity rating, 4) subjective concreteness rating, 5) lexical frequency, 6) syllable length, 7) naming agreement, and 8) objective visual complexity. For the action category rating, I asked participants to rate embodied experience. The purpose of the norming steps was to match the action categories on potentially confounding variables and to ensure that embodied experience was the principal factor differentiating them.

It should be noted that the category rating reflects subjective experience. Therefore, even when an action was classified as non-physical (e.g., a change of state such as bending), it could still be associated subjectively with high embodied experience. In such cases, I followed the linguistic convention for verb class (Levin, 1993). Aside from the typical lexical and visual variables, I included the visual movement rating. I reasoned that this measure was more appropriate for action concepts because participants were prompted to think about visual motion. It is comparable to traditional imageability and perhaps more suitable for action concepts, since it taps a certain degree of motoric or visual motion experience.

The single word lexical and visual norming confirmed that the initial categorization of physical vs. non-physical actions, following the traditional verb class classification, was largely consistent with subjective embodied experience. Although the classification of physical and non-physical actions was relative rather than absolute, it reflected a general consensus on relative embodied experience and usage. In addition, both categories showed similar visual motion, familiarity, and concreteness ratings; where differences were statistically significant, the effects were small. These factors are reported to support cross-condition comparisons for action pairs (e.g., word vs. picture, physical vs. non-physical action).

Action Pair Selection

Participants

Fourteen participants (8 male; mean age = 35.5 years, SD = 10.3) who had participated neither in the main experiment nor in the pretest surveys were recruited through Amazon Mechanical Turk (Buhrmester, Kwang, & Gosling, 2011) under a protocol approved by the local IRB at Temple University. English was the primary language of all participants, and the average number of years of education was 15.2 (SD = 1.5).

Pair Selection

Forty pairs of physical actions and 40 pairs of non-physical actions served as experimental stimuli. Twenty-four pairs of stimuli in each action category were based on the similarity of the manner of action (i.e., how the action is performed), and the remaining 16 pairs were based on associative relations. To verify semantic relatedness, participants evaluated 1) semantic similarity, 2) visual similarity, and 3) association strength on a 7-point Likert scale (see Appendix A for task instructions). A mean rating higher than 4 on the 7-point Likert scale was considered more similar or more associated. Action pairs from the two categories did not differ statistically on any of the three relation norms. Table 3-2 shows the means and standard deviations of the pair-relation results, and Appendix B lists all experimental pairs.

Eighty unrelated pairs, half physical and half non-physical actions, were also created quasi-randomly by taking a probe from one related pair and pairing it with the probe of another; the same rule was applied to targets. In this manner, all items appeared in both positions (i.e., probe and target). The unrelated pairs were screened for any obvious association or similarity, and no stimulus was used more than four times in the experiments.
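One illustrative way to implement this re-pairing scheme is sketched below; the function name and pair lists are hypothetical, and the final screening for residual association or similarity, along with the four-use cap, was applied by hand.

    # Quasi-random unrelated-pair construction: probes are re-paired with other
    # probes and targets with other targets, so every item appears in both positions.
    import random

    def make_unrelated_pairs(related_pairs, seed=0):
        rng = random.Random(seed)

        def derange(items):
            shuffled = items[:]
            while any(a == b for a, b in zip(items, shuffled)):
                rng.shuffle(shuffled)  # reshuffle until no item keeps its slot
            return shuffled

        probes = [p for p, _ in related_pairs]
        targets = [t for _, t in related_pairs]
        return list(zip(probes, derange(probes))) + list(zip(targets, derange(targets)))

    # Hypothetical usage; the actual pair lists appear in Appendix B.
    print(make_unrelated_pairs([("wiping", "rubbing"), ("reading", "writing"), ("catching", "grabbing")]))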

The master experimental stimulus list contained 80 related and 80 unrelated stimulus-pairs. Two sub-lists were created and counterbalanced in terms of pair-relation ratings, such as manner similarity, association strength, and visual similarity (all ps > .05). Both sub-lists (i.e., A and B) contained 40 related and 40 unrelated stimulus-pairs. Participants were presented with both lists, but half of the participants received list A first whereas the other half received list B first. Trial order was randomized for each participant.

Non-action pair selection. An additional 24 noun stimulus-pairs were included as a non-action condition (see Appendix C for the list of non-action pair stimuli). Stimulus-pairs were randomly drawn from the 96 items developed by Kalenine et al. (2009) and supplemented with Shelton and Martin (1992) and the South Florida Association Norms (Nelson, McEvoy, & Schreiber, 1998). Stimulus images were mainly drawn from the Snodgrass standardized norms (Snodgrass & Vanderwart, 1980), with a few exceptions from other sources. All images had high naming agreement (i.e., > 80%). Related pairs were counterbalanced following the original stimulus manipulation in Kalenine et al. (2009) (i.e., living/non-living, semantically-similar/thematic-associative relation). Twenty-four unrelated stimulus-pairs were also created by re-pairing a probe from one related pair with another probe, or a target from one related pair with another target item.


Table 3-1. Mean characteristics of action stimuli (standard deviations in parentheses)

Action Type    %NA        SVC            REm          VM           Fam         CNC         Freq         WL
Physical       .86 (.16)  29.60 (8.30)   6.10 (.59)   5.05 (1.03)  6.54 (.41)  6.20 (.48)  57.2 (89.5)  4.5 (1.02)
Non-physical   .84 (.15)  28.09 (10.59)  3.68 (1.33)  4.49 (.99)   6.37 (.58)  5.74 (.55)  43.7 (71.8)  4.7 (1.18)

%NA = percentage of naming agreement, SVC = objective visual complexity (KB), REm = relative embodied experience on a 7-point Likert scale (1 = not at all), VM = visual movement rating on a 7-point Likert scale, Fam = subjective familiarity rating on a 7-point Likert scale (1 = not at all familiar), CNC = subjective concreteness rating on a 7-point Likert scale (1 = not at all concrete), Freq = SUBTLEX frequency in third-person singular form, WL = length in third-person singular form

Table 3-2. Means and standard deviations of the 7-point Likert scale ratings from the pair-relation survey

Pair Relation            Semantic Similarity   Association Strength   Visual Similarity
Physical
  Similarity             4.93 (.61)            5.01 (.93)             3.76 (1.16)
  Association            3.28 (.51)            5.12 (.71)             3.88 (1.13)
Non-physical
  Similarity             4.99 (.75)            5.07 (.89)             3.75 (1.26)
  Association            3.52 (.80)            5.06 (.72)             3.85 (1.46)


CHAPTER 4
ACTION SEMANTIC JUDGMENT TASK

Overview

I conducted two eyetracking experiments. The first was a semantic task in which healthy young participants were asked to determine the semantic congruence of pairs of actions and objects while their eye movements were recorded. Two stimulus formats were included to directly compare semantic processing from pictures and words. In addition, the type of action category was manipulated to investigate the effect of embodied experience during semantic processing. That is, participants viewed actions they could perform with their own body part effectors (e.g., running, catching) and actions that mostly demonstrate changes to an object or the environment (e.g., melting, raining).

The second experimental condition added a concurrent motor task to the semantic task. This semantic + motor dual task condition served as an additional manipulation to examine the embodied experience effect, aside from manipulating the type of action category. It was hypothesized that the motor task would induce dual task interference effects 1) for action pairs overall (i.e., actions vs. objects), and 2) especially for physical actions.

Participants

Thirty-five healthy young participants (9 male) were recruited from Temple University, Philadelphia, PA. Inclusion criteria were as follows: 1) age between 18 and 30 years (M = 21.5 years, SD = 2.38), 2) right-handedness as verified by the Edinburgh Handedness Inventory (Oldfield, 1971), 3) native English speaker, 4) normal or corrected-to-normal vision, 5) normal hearing, 6) no history of self-reported neurological or psychiatric disorders, 7) global cognition within age-based norms (i.e., > 26 of 30 possible) on the Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005), and 8) no glare-resistant eyeglasses or hard contact lenses. All participants provided informed consent and were compensated as approved by the local Institutional Review Board.

General Procedure

The goal of the dissertation project was to examine semantic processing of action concepts as a function of input modality and embodied experience. Two experimental task conditions were conducted to target Aim I, the direct vs. indirect access of action knowledge, and Aim II, the effect of physical vs. non-physical experience on action comprehension. Participants made semantic relatedness judgments with and without a concurrent, continuous motor task while their eye movements were measured. After the pre-test screening, participants completed the two experimental conditions during two lab visits, one visit per week for two weeks.

Cognitive Testing. Aside from providing informed consent, participants completed the following neuropsychological screenings: the MoCA (Nasreddine et al., 2005), a Simon test (Simon & Rudell, 1967), and the Shipley vocabulary test (Zachary, 1986). The Shipley test assessed vocabulary knowledge; inhibition and executive function were examined via a Simon test created based on Martin, Kohen, Kalinyak-Fliszar, Soveri, and Laine (2012).

Semantic Task and Procedure

Trial structure. Participants were asked to judge whether pairs of concepts appearing on the screen 'go together.' They responded to each pair by saying "Yes" or "No" aloud as quickly and accurately as possible. Participants were instructed to think about all aspects of a concept. For example, pairs that 'go together' could be similar in the way they are performed, such as "wiping" and "rubbing," or associated in meaning, such as "reading" and "writing."


Trials consisted of two stimulus formats, one with pictures and the other with words. All stimulus slides were standardized to 500 by 500 pixels. The lexical stimuli were presented as black lowercase letters with an -ing ending on a white background (Arial, size 50). Participants first viewed a fixation cross paired with a simultaneous pure tone for 600 ms to begin each trial. A probe and a scrambled target then appeared on the screen, arranged horizontally, for 500 ms; the left/right locations of the probe and target on the screen were randomized. Following a white screen for 250 ms, the actual probe-target pair appeared. It disappeared automatically after 3000 ms if there was no response, and the trial was counted as an error. Following another white screen for 1000 ms, a neutral mask (i.e., #######) remained on the screen for 600 ms. A final white screen stayed on for 750 or 1500 ms at the end of each trial. See Figure 4-1 for a depiction of the trial procedure. Behavioral (e.g., accuracy and reaction time) and eyetracking (e.g., fixation location, fixation duration) data were recorded.
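The trial timeline above can be captured as a simple event sequence. The sketch below uses hypothetical slide labels and the durations just described, and it also derives the 1350 ms of pre-target presentation time referenced in the analysis section:

    # Trial timeline as an ordered (slide, duration-in-ms) sequence.
    TRIAL_TIMELINE = [
        ("fixation_cross_plus_tone", 600),
        ("probe_with_scrambled_target", 500),
        ("blank", 250),
        ("probe_target_pair", 3000),  # response window; a timeout counts as an error
        ("blank", 1000),
        ("neutral_mask", 600),        # i.e., #######
        ("final_blank", 750),         # or 1500 ms on some trials
    ]

    # Time elapsed before the target slide appears: 600 + 500 + 250 = 1350 ms.
    PRE_TARGET_MS = sum(duration for _, duration in TRIAL_TIMELINE[:3])
    assert PRE_TARGET_MS == 1350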

The experiment consisted of 320 action trials (half in pictorial form and half in word form). Picture and word targets were presented in separate blocks; within each, two balanced lists (i.e., A and B) were created. Each list contained 80 stimulus pairs (half consistent, half inconsistent). The order of block presentation, stimulus presentation within a list, and the location of probe/target (i.e., left or right) were randomized across four versions (2 input modalities x 2 screen loci) to reduce carry-over effects. Ninety-six non-action trials (i.e., noun stimulus-pairs) were run as two separate blocks, half in pictorial and half in word form, following the same counterbalancing method. Before the main experiment, a practice session with feedback on correctness was implemented. Participants were instructed as follows in both the practice and experimental sessions:


“You are going to see pairs of stimuli (either words or pictures). You are required to think about all aspects of a concept and judge how well they go together. Indicate yes (match) or no (mismatch) by saying your answer aloud. Respond as quickly and accurately as possible.”

Motor Familiarization Task and Semantic-Motor Control Task

Two tasks were implemented prior to the semantic-motor task to ensure that participants were familiar with the finger tapping task and trial procedure. The familiarization procedure included a baseline finger tapping task and a semantic-motor control task.

Baseline finger tapping task. Participants performed a modified finger tapping task on a number pad connected to a PC within the eyetracking paradigm. They were instructed to type a number sequence in order (i.e., 4-5-6-6-5-4) with their dominant or non-dominant hand as soon as they saw a cross (+) paired with an audible tone. The cross remained on the screen for 60 seconds, and participants were instructed to continue typing the sequence as quickly and accurately as possible until the cross disappeared and a mask slide (i.e., #######) appeared. The total number of correct sequential tapping sets completed in 60 seconds was calculated for each participant.
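The exact scoring rules (e.g., how typos were handled) are not spelled out here, but one plausible scoring scheme counts non-overlapping occurrences of the target sequence in the recorded keypress stream:

    # Count completed 4-5-6-6-5-4 sets in a keypress stream, scanning left to right.
    TARGET_SEQUENCE = "456654"

    def count_correct_sets(keypresses):
        count, i = 0, 0
        while i + len(TARGET_SEQUENCE) <= len(keypresses):
            if keypresses[i:i + len(TARGET_SEQUENCE)] == TARGET_SEQUENCE:
                count += 1
                i += len(TARGET_SEQUENCE)  # consume the completed set
            else:
                i += 1                     # skip past a stray keypress
        return count

    # Hypothetical stream with one typo midway: three correct sets.
    print(count_correct_sets("456654456654455654456654"))  # -> 3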

Semantic-motor control task. The second part of the motor task familiarization was a semantic control task in which the probe and target pictures/words were scrambled while participants performed finger tapping. A Photoshop scramble filter was used to break the stimuli into 2 x 2 pixel tiles and then spatially rearrange these tiles at random. Two stimulus sub-lists were created from list A and list B, respectively. Twenty-four stimulus-pairs (12 related, 12 unrelated) were randomly selected and presented in both stimulus types (i.e., picture and word); the control task therefore consisted of 48 scrambled stimulus-pairs.
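The Photoshop filter itself is proprietary, but an equivalent 2 x 2 tile scramble can be sketched as follows, assuming a 500 x 500 grayscale image loaded as a NumPy array:

    # Break an image into 2 x 2 pixel tiles and spatially rearrange them at random.
    import numpy as np

    def scramble_tiles(image, tile=2, seed=0):
        rng = np.random.default_rng(seed)
        h, w = image.shape
        assert h % tile == 0 and w % tile == 0
        positions = [(r, c) for r in range(0, h, tile) for c in range(0, w, tile)]
        blocks = [image[r:r + tile, c:c + tile] for r, c in positions]
        order = rng.permutation(len(blocks))  # random reassignment of tiles
        out = np.empty_like(image)
        for k, (r, c) in enumerate(positions):
            out[r:r + tile, c:c + tile] = blocks[order[k]]
        return out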

In addition, practice trials (N = 4 in each of the pictorial and word forms) were given during the control task, so the control task contained 56 stimulus-pairs in total. Participants received the control task using either list A or list B of the experimental stimuli via random assignment. The trial procedure was the same as in the semantic judgment task, and the total number of correct sequential tapping sets was calculated.

Motor Task and Procedure

After participants were familiarized with finger tapping and the task structure, the main semantic-plus-motor task was implemented. The trial structure was identical to the semantic task, but participants also performed concurrent finger tapping (i.e., 4-5-6-6-5-4) during the semantic judgments.

Eyetracking Recording and Apparatus

Stimuli were presented and various aspects of eye movements were monitored using an infrared, table-mounted eyetracking system (SMI iView X RED; SensoMotoric Instruments Inc., Boston, MA). This eyetracker samples binocular eye movements at a rate of 120 Hz with a spatial resolution of < 0.03 degrees.

The eyetracking study took place in the Memory, Concepts, Cognition Laboratory at Temple University. Participants were positioned on an optical chin-rest 50 cm away from a 17-inch PC monitor, and the distance between participants' eyes and the eyetracker was held between 60 and 70 cm. I first conducted a 5-point calibration and validation procedure in which the participant tracked moving dots across the screen. The eyetracker was considered well calibrated when the fit between the target dots and the observed movement was < 0.5 degrees. After optimal eyetracking recording was confirmed, participants completed a brief familiarization sequence and then began the experimental trials.


Data Design and Analysis

Behavioral Analysis

Semantic task analyses were conducted on response accuracy and reaction time. Participants' verbal semantic judgments were recorded via a TASCAM digital recorder for offline accuracy and reaction time coding. Reaction time was first labeled and measured from the onset of the fixation tone to the onset of the verbal response using the Audacity audio editor. I then entered participants' response time on the target slide specifically into the analysis (i.e., excluding all trial presentation time before the target slide, 1350 ms in total).
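In other words, a valid latency on the target slide is the tone-locked latency minus the 1350 ms of preceding slides. A sketch of this correction, including the 4350 ms timeout used for trial exclusion:

    # Reaction time correction: latencies were coded from the fixation tone,
    # so the pre-target presentation time is subtracted out.
    from typing import Optional

    PRE_TARGET_MS = 1350   # fixation + tone (600) + scrambled pair (500) + blank (250)
    TARGET_SLIDE_MS = 3000
    MAX_VALID_MS = PRE_TARGET_MS + TARGET_SLIDE_MS  # 4350 ms exclusion cutoff

    def target_rt(rt_from_tone_ms: float) -> Optional[float]:
        if rt_from_tone_ms > MAX_VALID_MS:
            return None  # timeout; the trial is excluded
        return rt_from_tone_ms - PRE_TARGET_MS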

Response accuracy and reaction time were entered into subject (F1, t1) and item (F2, t2) analyses, respectively, using repeated measures and mixed analyses of variance (ANOVA). Trials were excluded from the analyses when participants gave no response or responded after 4350 ms (i.e., did not respond before the target slide disappeared). Fixed factors included Handedness (Dominant, Non-dominant), Task condition (Semantic, Motor), Task order (Semantic-first, Motor-first), Domain (Action, Object), Modality (Picture, Word), Category (Physical/Living, Non-physical/Non-living), and Relation (Associatively-related, Semantically-similar).

For exploratory purposes, the initial omnibus subject (F1, t1) analysis included Handedness and Task order as between-subject variables, and Task condition, Domain, Modality, Relation, and Category as within-subject variables. The initial omnibus item analysis (F2, t2) treated Domain, Relation, and Category as between-item factors, and Modality, Handedness, Task condition, and Task order as within-item variables. Final statistical analyses collapsed across Handedness and Task order and were conducted separately by Domain.

The behavioral variable of interest for the motor task analyses was tapping rate (i.e., time per correct set of sequential tapping). The SMI eyetracker recorded participants' keypresses throughout the entire experiment run time (i.e., as user events). Tapping rate was calculated by dividing the experiment run time by the total number of correct sequential tapping sets, which yielded the time it took to type one correct set of taps (i.e., 4-5-6-6-5-4). A repeated measures ANOVA examined 1) whether tapping rates in experimental trials differed significantly from baseline, and 2) whether tapping rates differed among experimental trials.
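As a worked example of this computation (values hypothetical):

    # Tapping rate: milliseconds per correct 4-5-6-6-5-4 set.
    def tapping_rate_ms(run_time_ms, correct_sets):
        return run_time_ms / correct_sets

    # Hypothetical block: 60,000 ms of run time with 40 correct sets.
    print(tapping_rate_ms(60_000, 40))  # -> 1500.0 ms per set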

Eyetracking Analysis

Eye movements were recorded from the onset of the fixation cross to the neutral mask (i.e., #######) for all experimental trials. I conducted a target fixation analysis examining a series of gaze measures inside the target area of interest (AOI). In the current eyetracking setup, a fixation was defined as dwell time within a circumscribed area exceeding 80 ms in duration.
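The event-detection settings of the SMI software are not reproduced here, but a simple dispersion-based detector consistent with this definition can be sketched as follows; the 30 px dispersion threshold is an assumption, and at the 120 Hz sampling rate described above, 80 ms corresponds to roughly ten samples:

    # Dispersion-based fixation detection: a fixation is a run of samples that
    # stays within a small spatial window for at least min_duration_ms.
    import numpy as np

    def detect_fixations(t_ms, x, y, max_dispersion_px=30.0, min_duration_ms=80.0):
        fixations, start = [], 0
        for end in range(1, len(t_ms)):
            xs, ys = x[start:end + 1], y[start:end + 1]
            if (xs.max() - xs.min()) + (ys.max() - ys.min()) > max_dispersion_px:
                if t_ms[end - 1] - t_ms[start] >= min_duration_ms:
                    fixations.append((t_ms[start], t_ms[end - 1],
                                      xs[:-1].mean(), ys[:-1].mean()))
                start = end  # restart the candidate window at the outlying sample
        if t_ms[-1] - t_ms[start] >= min_duration_ms:
            fixations.append((t_ms[start], t_ms[-1], x[start:].mean(), y[start:].mean()))
        return fixations  # (start_ms, end_ms, centroid_x, centroid_y) tuples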

The eyetracking variables of interest were the number of fixations inside an AOI, the total fixation duration in an AOI, and the number of backtracks (i.e., regressions, reported below as revisits) to a previously visited AOI. I focused on these gaze measures within the target AOI during the target slide (~3000 ms).

The number of fixations inside the AOI counts the fixations landing in the area of interest during the presentation time of the slide (i.e., 3000 ms). Previous studies suggest that the total number of fixations is indicative of search efficiency and/or task difficulty (Holmqvist et al., 2011): a higher number of fixations could reflect higher task demand before a decision is reached, so the more difficult the task, the more fixations to the AOI are expected. Total fixation duration in an AOI sums the fixation durations inside that AOI and indexes how long participants' eyes remain there; the greater the processing demand of the AOI, the longer the duration observed. The number of backtracks to an AOI is the number of re-fixations to the AOI after the first fixation; an increased number of backtracks could be indicative of confirmatory scanning, again suggesting task difficulty.
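Given a list of detected fixations, the three measures can be computed per trial as follows; this is a sketch that assumes fixation tuples of the form produced by the detector above and a rectangular AOI:

    # Per-trial AOI measures from fixations of the form (start_ms, end_ms, x, y).
    def aoi_measures(fixations, aoi):
        left, top, right, bottom = aoi  # rectangular AOI in screen pixels
        inside = [left <= x <= right and top <= y <= bottom
                  for (_, _, x, y) in fixations]
        fixation_count = sum(inside)
        total_duration_ms = sum(end - start
                                for (start, end, _, _), hit in zip(fixations, inside)
                                if hit)
        # Backtracks/revisits: entries into the AOI after the first visit.
        entries = sum(1 for prev, cur in zip([False] + inside, inside)
                      if cur and not prev)
        revisits = max(entries - 1, 0)
        return fixation_count, total_duration_ms, revisits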


Similar to the behavioral analyses, I entered the eyetracking output for the target AOI into repeated measures and mixed ANOVAs with the same final predictors as the behavioral analyses (i.e., Task condition, Domain, Modality, Category, and Relation), by subject and by item, respectively.

Research Questions Remarks and Predictions

The first aim examined direct vs. indirect access to action semantic concepts. If one input modality carried a processing advantage over the other, fewer fixations, fewer backtracks, and/or shorter fixation durations would be observed for it, in addition to behavioral advantages such as higher accuracy and shorter overt response latencies.

The second aim was to examine the effect of embodied experience during semantic processing. I predicted processing benefits for physical actions over non-physical actions due to first-person motor enactment. In addition, the concurrent motor task was expected to affect processing of physical actions to a greater extent than non-physical actions during the action semantic judgment task; therefore, reaction times in the semantic judgment task should change significantly for physical actions, relative to non-physical actions, under the semantic + motor dual task condition. Moreover, the eyetracking measures should reflect these processing demands and changes.


Figure 4-1. Example of the trial procedure


CHAPTER 5
RESULTS

Data Trimming

Fourteen percent of the experimental trials were classified as errors, including response errors (i.e., no answer or timeouts over 4350 ms) and incorrect responses; all were excluded from the statistical analyses of response latency. Data trimming was also implemented by subject and by item. At the subject level, one participant was excluded due to exceptionally poor accuracy (z < -3) in one or more of her experimental conditions. At the item level, five item pairs were excluded from the action domain because more than one experimental trial for the pair elicited poor accuracy (z < -3). The five pairs were fighting-biting (associative, physical), floating-hanging (semantically-similar, non-physical), filling-stacking (semantically-similar, non-physical), hanging-flying (associative, non-physical), and petting-kissing (associative, physical). After elimination, 82% of the data remained for the behavioral and eyetracking analyses.

Behavioral Results

An initial omnibus repeated measures ANOVA with all fixed factors indicated no reliable main effects of Handedness or Task order; therefore, all subjects, regardless of Handedness and Task order, were entered into the final analyses of reaction time and accuracy. Initial investigation also indicated a significant main effect of Domain: action pairs were judged less accurately (.83 vs. .94) and required longer reaction times (993 vs. 719 ms) than object pairs. Individual analyses were therefore conducted for action and object concepts separately to characterize these differences. Final fixed factors entered into the analyses included Task condition (Single, Dual), Modality (Picture, Word), Category (Physical/Living, Non-physical/Non-living), and Relation (Associatively-related, Semantically-similar).


Response Accuracy

Overall, object pairs (.94) were judged with higher accuracy than action pairs (.83). Participants were also more accurate in the semantic single task condition [.89 vs. .88; trend by subject, t1(34) = 1.88, p = .069; t2(91) = 2.71, p = .006], but the effect was largely driven by action concepts. The patterns of response accuracy are depicted in Figure 5-1 for action concepts and Figure 5-2 for object concepts. Table 5-1 summarizes the F statistics, and Table 5-2 summarizes the means and standard deviations of the accuracy measures.

For the object pairs, a main effect of Relation was observed: object pairs based on association had higher accuracy than pairs based on similarity [.95 vs. .90, t1(34) = 3.12, p = .004; trend by item, t2(20) = 1.89, p = .07]. There was a significant Relation x Category interaction such that non-living objects had higher accuracy in associative pairs [.99 vs. .91, t1(34) = 5.69, p < .001; trend by item, t2(20) = 1.85, p = .08] whereas living objects had higher accuracy in semantically-similar pairs [.92 vs. .87, t1(34) = 2.53, p = .015, t2(20) = 1.2, n.s.].

For the action pairs, there were main effects of Task condition, Modality, and Category. The dual task condition yielded lower accuracy than the single task [.73 vs. .76; trend by subject, t1(34) = -1.88, p = .07; t2(71) = -4.14, p < .001]. Participants also responded with lower accuracy in the picture condition than in the word condition [.75 vs. .81, t1(34) = -15.2, p = .001, t2(71) = -2.52, p = .014] and for physical vs. non-physical actions [.76 vs. .80, t1(34) = -3.36, p = .003, t2(71) = -1.19, n.s.]. A two-way Modality x Relation interaction indicated higher accuracy for semantically-similar pairs in the word condition [.84 vs. .77, t1(34) = 3.65, p = .001, t2(71) = 4.67, p < .001] but higher accuracy for associative pairs in the picture condition [.78 vs. .73, t1(34) = 3.24, p = .002, t2(71) = .018, n.s.]. Moreover, a Relation x Category interaction showed that the greater accuracy of non-physical pairs (.83) relative to physical pairs (.73) was observed only in the semantically-similar relation [t1(34) = 7.29, p < .001, t2(71) = 2.86, p = .005]; although a reverse pattern appeared for associative pairs (physical vs. non-physical = .79 vs. .76), the difference did not reach statistical significance. A significant Relation x Category x Modality interaction revealed that physical action pairs had lower accuracy than non-physical pairs regardless of Relation in the word condition; in the picture condition, however, physical actions had higher accuracy than non-physical actions for associative pairs (.83 vs. .73) but lower accuracy for semantically-similar pairs (.66 vs. .79).

Reaction Time

Main effects of Task condition for both conceptual domains (i.e., action, object) indicated a dual task effect: people took longer to judge pair relations in the dual task condition [Action: dual = 1049 ms, single = 937 ms, t1(34) = 3.93, p < .001, t2(71) = 11.57, p < .001; Object: dual = 768 ms, single = 670 ms, t1(34) = 3.98, p < .001, t2(20) = 12.09, p < .001]. However, a main effect of Modality was observed only for action concepts, and a main effect of Relation only for object concepts. The patterns of reaction time are graphed in Figure 5-3 for action concepts and Figure 5-4 for object concepts. Table 5-3 summarizes the F statistics and Table 5-4 shows the means and standard deviations of the reaction times.

For object pairs, it took longer to judge similarity than association [764 vs. 674 ms, t1(34) = 6.87, p < .001, t2(20) = 2.49, p = .02]; the pattern was consistent in both picture and word modalities, with a particularly robust effect for words [mean difference = 135 ms, t1(34) = 6.49, p < .001, t2(20) = 3.97, p = .001]. When considering Category, it was faster to judge pair association for non-living than living objects [635 vs. 713 ms, t1(34) = -6.62, p < .001, t2(20) = -2.01, p = .058]. However, the pattern reversed when judging pair similarity: it was faster to judge similarity for living than non-living object pairs [726 vs. 802 ms, t1(34) = -4.46, p < .001, t2(20) = -1.48, n.s.].


For action pairs, it always took longer to judge semantic relatedness in the picture than the word modality [1027 vs. 959 ms, t1(34) = 3.7, p = .001, t2(71) = 4.22, p < .001]. Additionally, significant interaction effects revealed modulation by Category and Relation. Action pairs based on similarity yielded longer reaction times than associatively related pairs in the picture modality, but the pattern reversed in the word modality [Picture: Association = 995 ms, Similarity = 1095 ms, t1(34) = -5.33, p < .001, t2(71) = -1.64, n.s.; Word: Association = 982 ms, Similarity = 936 ms, t1(34) = 2.67, p = .01, t2(71) = 2.17, p = .03]. When considering Category, a general trend showed that it took longer to judge non-physical than physical action pairs related associatively [1010 vs. 967 ms, t1(34) = 2.52, p = .02, t2(71) = 1.03, n.s.]. However, the pattern reversed in the semantically-similar condition: it took longer to judge physical than non-physical actions [1017 vs. 977 ms, t1(34) = 3.29, p = .002, t2(71) = 1.45, n.s.].

Motor Rates

Three subjects were removed from the motor rate analysis due to instrument error and two outliers were excluded, leaving 31 subjects in the final motor rate analysis. Conditions with a motor tapping rate greater than 5000 ms (> 3 SD) were also excluded from the analysis. After data trimming, 84% of the original data remained.

Motor tapping rate was computed by dividing the total run time (trial start to trial end) by the total number of correct sequential tapping sets for each trial condition (i.e., 60-s motor familiarization, semantic-motor control task, action-picture, action-word, object-picture, object-word). Initial examination showed no significant effects of Handedness or Task order for any trial condition (i.e., Domain x Modality); additionally, there was no significant effect of Task list (A, B) for the motor control trials. Therefore, a repeated measures ANOVA was conducted to examine whether participants' tapping rate was moderated by trial condition. The lack of statistical significance indicated that motor tapping rate did not differ across trial conditions.


An additional analysis evaluated the experimental trials with Domain and Modality as within-subject variables for the action vs. object comparison. Again, no significant effects were identified.

Visual inspection indicated a trend of increased (i.e., slower) tapping rate for action concepts; therefore, a secondary analysis examined the percentage of rate change for each experimental condition relative to the two baseline motor tasks. The percentage of rate change was defined as the tapping rate difference between the experimental task and the baseline task, divided by the tapping rate of the baseline task. Changes from the two baseline tasks were particularly large in the action picture condition, which differed by 17-21% from the two baselines respectively, about two to three times the change observed for object concepts (4-8% from baseline). Table 5-5 summarizes the means, standard deviations, and percentages of tapping rate change in each trial condition, and Figure 5-5 depicts the trend.
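For clarity, the percentage of rate change reduces to a one-line computation; a worked example with hypothetical values:

    # Percentage of tapping rate change relative to a baseline task.
    def pct_rate_change(task_rate_ms, baseline_rate_ms):
        return 100.0 * (task_rate_ms - baseline_rate_ms) / baseline_rate_ms

    # Hypothetical: 1800 ms/set during action pictures vs. a 1500 ms/set baseline.
    print(pct_rate_change(1800, 1500))  # -> 20.0 (percent slower than baseline)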

Behavioral Summary

Mixed ANOVAs with subject and item as random factors indicated an overall processing advantage for object concepts relative to actions. Participants took more time to determine the semantic relations of action pairs than object pairs (262 ms difference), and overall accuracy was lower for action concepts as well (10% difference).

In general, Task condition (i.e., semantic single task, motor dual task) induced a significant but small dual task effect on the semantic judgment task. Participants took more time to respond to action (112 ms difference) and object (98 ms difference) pairs; however, only action pairs became less accurate in the dual task condition (3% difference). Moreover, although the tapping rate differences did not reach statistical significance, action concepts generally elicited slower tapping than the motor baseline and the object pairs. In particular, action pictures demonstrated the greatest percentage of rate change from the baseline tasks among all experimental conditions (17-21% vs. 10-13% for action words vs. 4-8% for object concepts).

Separate analyses by Domain indicated that participants' responses were modulated by different independent variables (i.e., Modality, Relation, and Category). For action concepts, Modality had a small but significant main effect, indicating longer reaction times (68 ms difference) and lower accuracy (6% difference) when processing action picture pairs. For object concepts, on the other hand, Relation was the dominant main effect, resulting in higher accuracy (5% difference) and shorter reaction times (90 ms difference) when judging associative object pairs relative to semantically-similar object pairs.

Both conceptual domains showed significant Relation x Modality and Relation x Category interaction effects on response accuracy and reaction time. For object concepts, associative pairs were judged faster than semantically-similar pairs in both input modalities (37 ms; 3% difference), especially in the word condition (135 ms; 8% difference). Between object categories, the non-living category was judged faster in associative relations (78 ms; 7% difference), whereas there was an opposite trend of faster reaction times for the living category in semantically-similar relations (76 ms; 5% difference). For action concepts, the Relation x Modality interaction showed a processing advantage for associative pairs in the picture modality (64 ms; 6% difference) but for semantically-similar pairs in the word modality (46 ms; 6% difference). The Relation x Category interaction likewise revealed a processing advantage for the physical category in associative relations (43 ms; 3% difference) but for the non-physical category in semantically-similar relations (40 ms; 10% difference). It should be noted that these effect sizes were small even where statistically significant.


Interim Discussion of the Behavioral Results

These behavioral results revealed processing differences between objects and actions and the dynamics of semantic processing in both conceptual domains.

The behavioral results first supported study aim 1: healthy adults performed differently in response to input modality, especially for actions. For action concepts, participants showed a consistent processing 'disadvantage,' reflected in longer reaction times and lower accuracy rates than for object concepts. Although there was no main effect of Modality for object concepts, they demonstrated a category-specific effect consistent with previous studies (Hoenig, Sim, Bochev, Herrnberger, & Kiefer, 2008; Kiefer, 2001; Saffran, Coslett, & Keener, 2003): the living category carried a processing advantage for semantically-similar pairs, whereas the non-living category carried a processing advantage for associative pairs.

Such an interaction was also found for action concepts, along with a dissociated Relation x Modality interaction: it was easier to process associative action pairs in the picture modality but semantically-similar pairs in the word modality. This pattern was opposite to that for object concepts, for which it was typically faster to judge semantically-similar pairs in the picture modality because of shared semantic features. I reasoned that because one action concept can have many pictorial depictions, people could not rely on a fixed set of invariant perceptual cues to make relatedness judgments. Moreover, many distinct physical actions can be performed with the same body part effector without sharing category membership (e.g., pounding vs. grabbing). Consequently, it took more processing time to determine semantic relatedness in the picture modality for physical actions. The Relation x Category interaction showed a processing advantage for physical action pairs in associative relations and for non-physical action pairs in semantically-similar relations. It could be that there were fewer varieties of visual motion or motoric features within the non-physical action category; for example, "spinning" and "rolling" are both related to motion around an axis according to the verb class classification (Levin, 1993). Thus, an advantage emerged for the non-physical category in semantically-similar relations.

The behavioral results did not provide strong evidence for study aim 2, an action semantic processing advantage resulting from embodied experience. Two manipulations addressed embodied experience during semantic processing. In the first, the semantic judgment task compared physical to non-physical actions (e.g., running vs. leaking); no effect emerged for action category or its interactions, apart from the Relation x Category interaction mentioned previously.

The second manipulation added a concurrent motor task. It was hypothesized that the motor task would have a greater dual task effect on the physical action category due to resource competition. Although motor tapping did not show a significant tapping rate difference from baseline or among experimental trials, action pairs showed greater tapping rate changes from the motor baseline (248-397 ms/set) than did object pairs (107-256 ms/set), and action pictures in particular showed the slowest tapping. Moreover, performing a concurrent motor task alongside the semantic judgment task influenced actions more than objects, producing longer reaction times and lower accuracy overall. However, no effects of the motor task on action category were observed in the semantic judgment task.

This dual task effect was attributed to increased executive demand or general cognitive resource allocation. There were no main effects of Handedness (dominant, non-dominant) or Task order (semantic single task first vs. semantic-and-motor dual task first) in the initial omnibus investigation: participants' outcomes did not differ significantly by which hand performed the motor task or by which task condition came first. The greater dual task effect observed for actions vs. objects could be due to the complex nature of actions, which demand greater resources during judgment.

The lack of a significant motor-semantic interaction could reflect that the tapping task was not demanding enough and that participants adapted to it quickly. A post-experimental survey revealed that 12 participants found the motor task easier than the semantic judgment task; moreover, the average difficulty rating for the motor task across participants was near neutral (4.4 out of 7, with 7 being very difficult). Since the order of the experimental trials was randomized, subjects could also have adapted to the motor task and performed at a faster rate in later trials. The demographic questionnaire further revealed that many participants were actively involved in sports, played musical instruments, and/or had computer/video game experience. Therefore, the level of task difficulty and participants' motor skills could have contributed to the absence of a significant motor-semantic interaction.

Alternatively, the lack of a consistent handedness effect raises the question of whether the motor tapping task and action semantic processing engaged the same resources in the brain. A theoretical embodied account would predict motor interference when an overt motor task and action semantic processing share the same body part effector. Furthermore, greater interference with the dominant hand would be expected, since neuroimaging findings on action representation are mostly left-lateralized (see Kemmerer, 2015 for a review). However, I found no difference between the dominant and non-dominant hand. The dual task effect in the behavioral results therefore appears to reflect a general cost of performing two tasks at the same time (Pashler, 1994). Note that the current evidence showed only an overall trend of interference and is not sufficient for a definitive claim; further empirical investigation is required.

Eyetracking Results

Fixation Count

Overall, action concepts elicited more fixations to the target AOI than object concepts [4.21 vs. 3.8, t1(34) = 7.83, p < .001, t2(91) = 2.56, p = .01]. Participants also fixated more on pictures than on words [4.4 vs. 3.6, t1(34) = 10.9, p < .001, t2(91) = 9.22, p < .001]. The F statistics of the ANOVAs (F1, F2) are summarized in Table 5-6 and graphed in Figure 5-6. The means and standard deviations for fixation count in the target region are shown in Table 5-7.

For action concepts, only the main effects of Modality and Relation reached significance. Pairs in the picture condition (4.7) yielded more fixations to the target AOI than pairs in the word condition (3.7) [t1(34) = 13.78, p < .001, t2(71) = 9.78, p < .001]. More fixations were also apparent for semantically-similar pairs than associative pairs [4.22 vs. 4.17, t1(34) = 2, p = .05, t2(71) = .53, n.s.]. A two-way Modality x Relation interaction indicated fewer fixations to associative pairs than semantically-similar pairs in the picture condition [4.57 vs. 4.81, t1(34) = -3.79, p = .001, t2(71) = -1.12, n.s.], but the pattern reversed in the word condition (3.77 vs. 3.61).

For the object domain, a reliable main effect was observed for Modality: participants made more fixations when viewing object pictures than object words [4.1 vs. 3.5, t1(34) = 5.94, p < .001, t2(20) = 6.5, p < .001].

Revisits

The single task yielded more revisits than the dual task condition [1.17 vs. 1.08, t1(34) = 2.44, p = .02, t2(91) = 5.25, p < .001]. However, object and action concepts did not differ significantly in revisits (Action = 1.13 vs. Object = 1.13). A main effect of Modality showed that the picture condition continued to yield more revisits than the word condition [1.20 vs. 1.05, t1(34) = 4.34, p < .001, t2(91) = 7, p < .001], and semantically-similar pairs tended to require more revisits than associative pairs [1.17 vs. 1.09, t1(34) = 4, p < .001, t2(91) = 2.08, p = .04]. The F statistics of the ANOVAs (F1, F2) are summarized in Table 5-8 and graphed in Figure 5-7. The means and standard deviations for the number of revisits to the target region are shown in Table 5-9.

For action concepts, there was a significant main effect of Modality and a two-way Modality x Relation interaction. Action pictures had more revisits than action words [1.17 vs. 1.08; trend by subject, t1(34) = 1.93, p = .06; t2(71) = 2.96, p = .004]. The Modality x Relation interaction showed that semantically-similar pairs had more revisits than associative pairs in the picture modality [1.19 vs. 1.14, t1(34) = 2.3, p = .03, t2(71) = 1.59, n.s.], whereas there was a trend toward more revisits for associative pairs than semantically-similar pairs in the word modality (1.09 vs. 1.07).

For object concepts, reliable main effects were observed for Modality, Task condition, and Relation. There were more fixation revisits in the semantic single task condition [1.2 vs. 1.06, t1(34) = 2.64, p = .01, t2(20) = 4.48, p < .001]. Object picture pairs had more revisits than object word pairs [1.2 vs. 1.01, t1(34) = 4.6, p < .001, t2(20) = 6.67, p < .001]. The main effect of Relation demonstrated that semantically-similar object pairs tended to require more revisits [1.20 vs. 1.06, t1(34) = 3.88, p < .001, t2(20) = 2.14, p = .04].

Fixation Duration

Overall, fixation durations were longer for action concepts than object concepts [1245 vs. 1172 ms, t1(34) = 4.62, p < .001, t2(91) = 2.01, p = .047], and there was a trend toward longer fixation durations in the picture than the word modality that did not reach statistical significance (1224 vs. 1193 ms, n.s.). The F statistics of the ANOVAs (F1, F2) are summarized in Table 5-10 and graphed in Figure 5-8. The means and standard deviations for fixation duration in the target region are shown in Table 5-11.

For action concepts, action pictures elicited longer fixation durations than action words [1277 vs. 1213 ms, t1(34) = 2.51, p = .02, t2(71) = 2.46, p = .02]. A significant Relation x Category interaction reflected that physical actions yielded shorter fixation durations than non-physical actions for associative pairs [1190 vs. 1286 ms, t1(34) = -5.59, p < .001, t2(71) = -2.04, p = .05], whereas physical actions yielded longer fixation durations than non-physical actions for semantically-similar pairs [1267 vs. 1238 ms, t1(34) = 2.45, p = .02, t2(71) = 0.9, n.s.].

For object concepts, no reliable effect was found in the by-subject and by-item analyses. Pairwise comparisons of the Modality x Relation x Category interaction indicated completely opposite directions that could contribute to the lack of significant results. For object pictures, non-living items had longer fixation durations than living items when paired associatively, whereas living items had longer fixation durations when paired by similarity. For object words, however, living items had longer fixation durations for associative pairs but shorter fixation durations for semantically-similar pairs.

Eyetracking Summary

Three eyetracking measures were entered into repeated measures and mixed ANOVAs to assess processing variability. All three measures have been used in previous studies to index changes during information processing, such as task difficulty (Holmqvist et al., 2011) or semantic informativeness relative to unrelated/filler information (Odekar, Hallowell, Kruse, Moates, & Lee, 2009); here they were used to infer processing differences. I reasoned that faster or easier access to semantic memory would result in less visual attention during the decision process when viewing pairs of stimuli. A general trend indicated different degrees of processing load by conceptual domain (i.e., action, object) and input modality (i.e., picture, word).

In the fixation count analysis, the number of fixations within the target AOI was calculated; the higher the number, the more visual attention and search were allocated inside the AOI. There were small but significant effects of more fixations for action concepts (0.4 difference) and in the picture modality (0.8 difference). A main effect of pictures over words was observed for both object (0.6 difference) and action (0.95 difference) concepts. For action concepts, greater fixation counts were found for semantically-similar pairs than associative pairs (0.23 difference); furthermore, semantically-similar action pairs had significantly more fixations in the picture modality (0.53 difference) than in the word modality, whereas a trend toward more fixations in the word modality was found for associative action pairs (0.2 difference). For the object domain, only Modality showed statistical significance: object pictures had more fixation counts than object words (0.6 difference).

In the revisit analysis, I examined the number of revisits to a target AOI that participants had previously fixated. I reasoned that a larger number of revisits would be indicative of greater processing demand and of a need to (re)confirm the information (Holmqvist et al., 2011). No difference was found between action and object concepts. More revisits were elicited 1) in the semantic single task condition compared to the motor dual task condition (.09 difference), 2) in the picture modality compared to the word modality (.15 difference), and 3) for semantically-similar pairs compared to associative pairs (.08 difference). For action concepts, more revisits were found for semantically-similar pairs in the picture modality (.05 difference), whereas associative pairs had more revisits in the word modality (.02 difference). For object concepts, more revisits were evidenced 1) in the semantic single task condition (.13 difference), 2) in the picture modality (.20 difference), and 3) for semantically-similar pairs (.13 difference).

In the fixation duration analysis, I examined the sum of fixation durations inside the target AOI; longer durations were taken to indicate greater processing demand. Fixation durations were longer for action concepts (73 ms difference). Within action concepts, action pictures had longer fixation durations than action words (64 ms difference). The Relation x Category interaction showed that non-physical actions had consistently longer fixation durations for associative pairs (98 ms difference) but shorter fixation durations for semantically-similar pairs. No reliable fixation duration difference was found for object concepts.

Interim Discussion of the Eyetracking Results

The eyetracking measures also illustrated processing differences between action and object concepts. Both conceptual domains showed flexible gaze patterns in response to experimental manipulations. Overall, although the effect was small, action concepts elicited more fixations and longer fixation durations than object concepts; fixation revisits, however, were comparable between the two domains. This could reflect the level of processing demanded by the information conveyed in actions and objects. Actions could be more difficult, leading participants to allocate more attention to interpreting the target AOI (i.e., more fixations or longer fixation durations) as part of the decision process, with revisits reduced accordingly. On the other hand, the information conveyed in objects was less complex and could be activated quickly with fewer fixations within the AOI before the relatedness judgment; therefore, revisits were not always required during decision making.

Supporting aim 1, the eyetracking results showed that more fixations and revisits occurred in the picture modality than in the word modality, and this pattern appeared in both action and object concepts. It could be attributed to the fact that the visual information in pictures is generally greater than in words (Snodgrass & Vanderwart, 1980). Therefore, more visual scanning and processing demand, as reflected by fixation counts and revisits, was found in the picture modality for both conceptual domains; moreover, the effect was greater for action concepts.

The fixation duration measure also showed a significant main effect of Modality for action concepts, indicating greater visual attention allocated to action pictures than action words. Object pictures and object words, however, showed comparable results, owing to the opposite directions of the Relation x Category dissociation in the two modalities.

In regard to the second research question, on embodied experience during action semantic processing, the eyetracking measures offered no significant evidence: there was no robust main effect of Category or related interactions. The lack of eyetracking evidence for objects could be due to rapid activation of semantic information from visual recognition (i.e., faster recognition from one look); therefore, fixation counts and revisits did not reflect the (small) category-specific effect seen in the behavioral measures.

Although a close look at the pairwise comparisons showed some trends of category differences in the interaction terms, no reliable statistical significance (i.e., by subject and by item) was evident for effects of Category or Task condition. Action pairs with or without physical experience may have been weighted comparably in the semantic judgment task, either equally hard or equally abstract; therefore, no differential processing demand was found.

Secondary Analysis

A secondary analysis was conducted excluding non-physical items whose images contained human body parts. I applied strict exclusion criteria, eliminating non-physical items that contained human body parts whether or not the body part was actively involved in the non-physical action concept (i.e., an active effector). For example, the "folding" image involved two hands folding a paper; the "ripping" image depicted a person's ripped pants, and even though the person performed no action, this image was also excluded. This step examined any interference or facilitation effect resulting from the presence of body parts. Nine items were excluded from the analysis: breaking-ripping (whole body), dropping-falling (whole body/hand), dropping-breaking, folding-crumpling (hand), swerving-sliding (whole body), swinging-rolling (whole body), twisting-turning (hand), unfolding-wrapping (foot), and splattering-raining (whole body); the parenthetical labels indicate the body part depicted in the image.

Subject and item analyses were conducted for all outcome measures after data elimination. The secondary analysis indicated only nuanced differences from the original results: all major effects remained significant, and trends continued to show similar patterns. Therefore, the analyses before item elimination reflected general differences between physical and non-physical action experience.

Discussion of the Experimental Findings

Both the behavioral and eyetracking measures indicated greater processing demand for action concepts relative to object concepts. These differences emerged while controlling for task condition, modality, relation, and category; further, the effects persisted in the secondary analysis with its strict item exclusion criterion. The behavioral measures first showed a significant effect of Modality during processing of action concepts: participants took longer to process action picture pairs and showed lower accuracy for action pictures relative to words. The greater processing demand reflected by all eyetracking measures for the action picture pairs was consistent with the behavioral findings. The eyetracking measures not only supported the behavioral outcomes but also allowed us to localize where the processing differences may occur.


The eyetracking measures showed more fixation counts for object pictures than for object words. I reasoned that pictures in general include more visual information, resulting in more visual scanning during the visual recognition stage. Moreover, this processing could be modulated by perceptual similarity, potentially increasing the visual scanning demand. However, the behavioral results also indicated faster reaction times and higher accuracy for object concepts. Together, the behavioral and eyetracking measures could indicate faster semantic processing in the object domain despite an early visual recognition 'disadvantage' relative to words. That is, the direct and superior access to object semantic memory from the surface form of pictures allows us to get past the early visual recognition demand shown in the eyetracking measures.

On the other hand, action concepts elicited lengthier processing in both the behavioral and eyetracking measures. Furthermore, the difficulties were observed in both input modalities, with a greater effect for action pictures. I interpreted this finding as suggesting that action concepts are in general processed more effortfully than static objects regardless of input modality. In addition, the 'disadvantage' of pictorial access to action concepts placed more processing load on the early visual recognition stage. Therefore, both the early visual recognition stage and the later semantic interpretation process could contribute to the increased semantic processing demand for action concepts relative to objects.

Both the behavioral and eyetracking measures showed no strong supporting evidence for study aim 2 (embodied cognition). I examined the effect of embodied experience during semantic processing via 1) action pairs with vs. without embodied experience, and 2) the semantic task with or without a concurrent motor task. Both manipulations showed comparable effects of action category during the semantic judgment task. Even though I included action pairs that were similar in manner of performance (e.g., wiping-rubbing, spinning-turning), similarity of physical movement did not always confer a processing advantage. In addition, the dual task condition induced an overall processing delay across all experimental trials (i.e., baseline, actions, and objects), though a greater effect was found for action pictures. No reliable dual task effect was found in the eyetracking measures.


Table 5-1. Summary of F statistics for the ANOVA results examining response accuracy in action and object concepts via subject (F1) and item (F2) analysis

                                            Action                  Object
                                          Subject     Item        Subject     Item
Effect                                    F1(1, 34)   F2(1, 71)   F1(1, 34)   F2(1, 20)
Modality                                  12.23***    6.36**      .03         .00
TaskCon                                   3.94^       17.6***     .40         .60
Relation                                  .06         .01         9.7**       3.64^
Category                                  10.42**     1.42        1.6         .22
Relation x Category                       31.8***     4.53*       25.34***    4.7*
Relation x Modality                       38.3***     7.80**      8.26**      2.72
Relation x TaskCon                        .87         1.38        .49         .60
Category x Modality                       4.35*       .46         .04         .01
Category x TaskCon                        .10         .25         5.68*       2.78
Modality x TaskCon                        .59         .53         .02         .03
Relation x Category x Modality            43.2***     5.76*       4.75*       .92
Relation x Category x TaskCon             3.3^        4.03*       .02         .01
Relation x Modality x TaskCon             .13         .10         .39         .75
Category x Modality x TaskCon             .09         .08         4.42*       3.55^
Relation x Category x Modality x TaskCon  .09         .09         13.3***     8.30**
* p < .05; ** p < .01; *** p < .001; ^ denotes marginal effect; TaskCon = task condition (single, dual)

Table 5-2. Mean and standard deviation of response accuracy

                            Action                                           Object
                Associative             Semantic                Associative             Semantic
                Phy         NonP        Phy         NonP        Living      NonL        Living      NonL
Picture Single  0.85 (.17)  0.74 (.16)  0.67 (.17)  0.82 (.12)  0.91 (.11)  0.98 (.07)  0.93 (.12)  0.89 (.16)
Picture Dual    0.80 (.21)  0.73 (.18)  0.64 (.20)  0.77 (.16)  0.88 (.12)  0.99 (.04)  0.94 (.09)  0.87 (.16)
Word    Single  0.77 (.17)  0.79 (.18)  0.82 (.13)  0.89 (.13)  0.95 (.10)  1.00 (.03)  0.92 (.17)  0.84 (.20)
Word    Dual    0.74 (.19)  0.79 (.15)  0.79 (.15)  0.85 (.14)  0.91 (.12)  0.99 (.05)  0.88 (.16)  0.90 (.17)
Single = Semantic judgment task only, Dual = Semantic + Motor task, Phy = physical action, NonP = non-physical action, NonL = non-living object


Table 5-3. Summary of F statistics for the ANOVA results examining reaction time in action and object concepts via subject (F1) and item (F2) analysis

                                            Action                  Object
                                          Subject     Item        Subject     Item
Effect                                    F1(1, 34)   F2(1, 71)   F1(1, 34)   F2(1, 20)
Modality                                  13.7***     18.2***     .77         .20
TaskCon                                   15.5***     133.8***    15.8***     146.04***
Relation                                  .76         .07         47.1***     6.18*
Category                                  .01         .00         .01         .14
Relation x Category                       15.6***     2.88^       43.1***     6.11*
Relation x Modality                       27.1***     10.2**      19.99***    8.23**
Relation x TaskCon                        4.54*       1.68        2.07        5.11*
Category x Modality                       .61         .59         5.2*        1.19
Category x TaskCon                        .26         .01         .10         .14
Modality x TaskCon                        .09         .06         2.61        15.9***
Relation x Category x Modality            .20         .29         2.74        1.43
Relation x Category x TaskCon             2.45        1.34        3.58^       3.11
Relation x Modality x TaskCon             .18         .00         2.0         3.66
Category x Modality x TaskCon             .64         .42         .69         2.99
Relation x Category x Modality x TaskCon  .12         .16         6.24**      11.88**
* p < .05; ** p < .01; *** p < .001; ^ denotes marginal effect; TaskCon = task condition (single, dual)

Table 5-4. Mean and standard deviation of reaction time (ms)

                              Action                                                      Object
                Associative                     Semantic                      Associative                   Semantic
                Phy             NonP            Phy             NonP          Living         NonL           Living         NonL
Picture Single  928.7 (259.1)   965.6 (259.2)   999.9 (266.7)   981.8 (266.6)   702.8 (183.5)  614.3 (140.9)  683.0 (197.8)  702.5 (159.7)
Picture Dual    1016.9 (285.0)  1068.5 (312.8)  1147.8 (334.1)  1106.4 (313.6)  795.7 (263.9)  655.1 (174.4)  708.5 (207.2)  823.7 (254.0)
Word    Single  928.6 (303.5)   939.9 (265.1)   897.1 (217.0)   851.8 (205.5)   629.9 (150.5)  587.7 (151.3)  670.4 (160.1)  766.2 (232.8)
Word    Dual    995.3 (261.8)   1064.4 (316.7)  1024.1 (248.5)  969.6 (247.7)   723.7 (228.7)  681.9 (191.1)  840.5 (211.8)  916.1 (250.1)
Single = Semantic judgment task only, Dual = Semantic + Motor task, Phy = physical action, NonP = non-physical action, NonL = non-living object


Table 5-5. Mean, standard deviation, and percent rate change of the motor tapping rate

      Motor        Motor       Object      Object      Action      Action
      familiarity  control     picture     word        picture     word
Rate  1938 (493)   1995 (632)  2086 (608)  2079 (870)  2335 (973)  2186 (722)
Δ1                 2.9         7.6         7.3         20.5        12.8
Δ2                             4.6         4.2         17.1        9.6
Δ1 = % rate change from the motor familiarity condition, Δ2 = % rate change from the motor control condition
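For clarity, the percent rate changes in Table 5-5 appear to follow Δ1 = (rate - rate_familiarity) / rate_familiarity x 100 and Δ2 = (rate - rate_control) / rate_control x 100. For example, for object pictures, Δ1 = (2086 - 1938) / 1938 x 100 ≈ 7.6 and Δ2 = (2086 - 1995) / 1995 x 100 ≈ 4.6, which matches the tabled values.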

Table 5-6. Summary of F statistics for the ANOVA results examining fixation count in the target region in action and object concepts via subject (F1) and item (F2) analysis

                                            Action                  Object
                                          Subject     Item        Subject     Item
Effect                                    F1(1, 34)   F2(1, 71)   F1(1, 34)   F2(1, 20)
Modality                                  190.8***    96.2***     35.34***    42.4***
TaskCon                                   2.9         11.1***     1.98        7.14*
Relation                                  4.01*       2.98^       5.06*       .19
Category                                  .00         .03         8.2**       .94
Relation x Category                       9.25**      .08         13.25***    2.37
Relation x Modality                       14.27***    2.52        1.26        .73
Relation x TaskCon                        1.85        .75         .05         .08
Category x Modality                       18.15***    1.06        1.27        .00
Category x TaskCon                        .25         .93         .01         .00
Modality x TaskCon                        3.78^       3.51^       1.86        3.88^
Relation x Category x Modality            22.84***    1.91        13.53***    2.48
Relation x Category x TaskCon             2.76        1.31        .08         .02
Relation x Modality x TaskCon             1.99        1.38        .70         .73
Category x Modality x TaskCon             .20         .10         .17         .03
Relation x Category x Modality x TaskCon  .03         .02         .15         .04
* p < .05; ** p < .01; *** p < .001; ^ denotes marginal effect; TaskCon = task condition (single, dual)

Table 5-7. Mean and standard deviation of the fixation count in the target AOI

                            Action                                       Object
                Associative           Semantic              Associative           Semantic
                Phy        NonP       Phy        NonP       Living     NonL       Living     NonL
Picture Single  4.5 (0.7)  4.9 (0.9)  5.0 (0.8)  4.8 (0.8)  3.9 (1.0)  4.6 (1.1)  4.5 (1.0)  4.2 (1.0)
Picture Dual    4.2 (0.7)  4.6 (0.7)  4.8 (0.6)  4.7 (0.8)  3.7 (0.8)  4.3 (1.0)  4.1 (1.0)  3.9 (1.2)
Word    Single  3.8 (0.8)  3.8 (0.8)  3.8 (0.7)  3.6 (0.7)  3.4 (1.0)  3.6 (1.1)  3.6 (1.0)  3.7 (1.0)
Word    Dual    3.8 (0.9)  3.6 (0.9)  3.7 (0.7)  3.7 (0.7)  3.3 (1.0)  3.5 (1.4)  3.6 (1.1)  3.6 (1.0)
Single = Semantic judgment task only, Dual = Semantic + Motor task, Phy = physical action, NonP = non-physical action, NonL = non-living object


Table 5-8. Summary of F statistics for the ANOVA results examining revisits in the target region in action and object concepts via subject (F1) and item (F2) analysis

                                            Action                  Object
                                          Subject     Item        Subject     Item
Effect                                    F1(1, 34)   F2(1, 71)   F1(1, 34)   F2(1, 20)
Modality                                  3.72^       8.85**      21.59***    44.85***
TaskCon                                   1.46        5.2*        6.97**      20.2***
Relation                                  1.80        .16         15.54***    4.66*
Category                                  .08         .003        2.88        1.37
Relation x Category                       2.9         .17         .01         .21
Relation x Modality                       4.0*        5.44*       3.96        3.08
Relation x TaskCon                        .25         .92         .28         .00
Category x Modality                       6.44*       3.30^       1.48        .94
Category x TaskCon                        .13         .06         .63         1.28
Modality x TaskCon                        .62         2.82        .16         .55
Relation x Category x Modality            .45         .38         .05         .00
Relation x Category x TaskCon             .81         .39         .73         .18
Relation x Modality x TaskCon             .36         .18         3.07^       3.56^
Category x Modality x TaskCon             .45         1.12        .13         .06
Relation x Category x Modality x TaskCon  .10         .50         .50         .38
* p < .05; ** p < .01; *** p < .001; ^ denotes marginal effect; TaskCon = task condition (single, dual)

Table 5-9. Mean and standard deviation of revisits in the target AOI

                            Action                                       Object
                Associative           Semantic              Associative           Semantic
                Phy        NonP       Phy        NonP       Living     NonL       Living     NonL
Picture Single  1.1 (0.4)  1.2 (0.4)  1.2 (0.4)  1.2 (0.3)  1.2 (0.5)  1.2 (0.5)  1.4 (0.6)  1.4 (0.6)
Picture Dual    1.1 (0.4)  1.2 (0.4)  1.2 (0.4)  1.2 (0.4)  1.1 (0.4)  1.2 (0.5)  1.2 (0.5)  1.2 (0.7)
Word    Single  1.1 (0.5)  1.1 (0.4)  1.2 (0.4)  1.1 (0.4)  1.0 (0.6)  1.0 (0.6)  1.1 (0.6)  1.2 (0.6)
Word    Dual    1.1 (0.5)  1.0 (0.5)  1.1 (0.4)  1.0 (0.4)  0.8 (0.4)  0.9 (0.5)  1.0 (0.5)  1.1 (0.4)
Single = Semantic judgment task only, Dual = Semantic + Motor task, Phy = physical action, NonP = non-physical action, NonL = non-living object


Table 5-10. Summary of F statistics for the ANOVA results examining fixation duration in the target region in action and object concepts via subject (F1) and item (F2) analysis

                                            Action                  Object
                                          Subject     Item        Subject     Item
Effect                                    F1(1, 34)   F2(1, 71)   F1(1, 34)   F2(1, 20)
Modality                                  6.3*        6.07*       .00         .02
TaskCon                                   .78         8.5**       .43         1.31
Relation                                  1.72        .38         .03         .08
Category                                  13.2***     1.33        .03         .00
Relation x Category                       29.5***     4.78*       2.0         .33
Relation x Modality                       5.73*       .59         .52         .52
Relation x TaskCon                        .02         .09         6.06*       2.73
Category x Modality                       10.5**      .80         1.27        .02
Category x TaskCon                        .26         .14         .59         .65
Modality x TaskCon                        1.39        1.26        1.29        7.3**
Relation x Category x Modality            17.1***     1.51        10.1**      2.72
Relation x Category x TaskCon             2.16        .80         .18         .33
Relation x Modality x TaskCon             2.91        1.61        .66         .28
Category x Modality x TaskCon             .11         .00         .22         .24
Relation x Category x Modality x TaskCon  .34         .14         .00         .00
* p < .05; ** p < .01; *** p < .001; ^ denotes marginal effect; TaskCon = task condition (single, dual)

Table 5-11. Mean and standard deviation of fixation duration in the target AOI (ms)

                            Action                                           Object
                Associative             Semantic                Associative             Semantic
                Phy         NonP        Phy         NonP        Living      NonL        Living      NonL
Picture Single  1191 (227)  1384 (235)  1348 (249)  1282 (188)  1107 (293)  1209 (318)  1246 (277)  1144 (309)
Picture Dual    1143 (216)  1308 (251)  1282 (168)  1278 (215)  1119 (329)  1286 (335)  1167 (320)  1096 (354)
Word    Single  1204 (293)  1224 (308)  1237 (292)  1201 (259)  1142 (255)  1081 (299)  1146 (346)  1153 (406)
Word    Dual    1222 (374)  1229 (355)  1200 (281)  1190 (279)  1235 (458)  1207 (621)  1202 (437)  1208 (412)
Single = Semantic judgment task only, Dual = Semantic + Motor task, Phy = physical action, NonP = non-physical action, NonL = non-living object


Figure 5-1. Main effects of Task Condition (single, dual) and Modality (picture, word) for accuracy for action concepts


Figure 5-2. Main effect of Relation (association, similarity) for accuracy for object concepts


Figure 5-3. Main effect of Task Condition (single, dual) and Modality (picture, word) for reaction time for action concepts


Figure 5-4. Main effect of Task Condition (single, dual) for reaction time for object concepts


Figure 5-5. Finger tapping rate by baseline control and experimental trials (Domain x Modality)


Figure 5-6. Main effects of Domain (action, object) and Modality (picture, word) for fixation count


Figure 5-7. Main effect of Modality (picture, word) for revisits


Figure 5-8. Main effect of Domain (action, object) for fixation duration


CHAPTER 6 GENERAL DISCUSSION

Previous studies have examined semantic representation and/or processing for action concepts. However, some elements of action processing remain unclear, including the moderating effects of modality and self-initiated embodied cognition. This dissertation included two experimental tests of these factors.

The results supported a processing difference as a function of input modality (i.e., pictures, words) for action concepts. Although some category effects were observed in the semantic task, the motor condition did not provide strong support for current embodied theories of motor-language interaction. In the discussion that follows, I interpret the current findings in the context of direct vs. indirect semantic processing and of motor experience during semantic processing, and then offer an overview of action semantics.

Direct vs. Indirect Semantic Processing

Previous investigations of input modality have provided substantial evidence for the picture superiority effect (Coccia et al., 2004; Taikh, Hargreaves, Yap, & Pexman, 2014; Thompson-Schill et al., 2001). Here, I showed that participants could access an apple's semantic features, such as color, shape, and smell, as quickly from words as from pictures (also see Dorjee, Devenney, & Thierry, 2010). Although there was no significant main effect of Modality for the object pairs in the current study, a small trend showed shorter reaction times for object picture pairs. The eyetracking results provided supplementary evidence on this pictorial semantic access advantage. During early visual recognition, the greater fixation counts indicate more visual scanning demand for object pictures than for object words; thus, the visual/perceptual information in object pictures is not without complexity and still draws visual attention. However, despite the visual attention needed during the early visual recognition stage, the relatively faster response times suggested a very strong connection between surface form recognition and semantic knowledge for object concepts, resulting in faster semantic activation from the picture modality than from words.

The link between visual form and semantic knowledge is less direct for action concepts. The amount of visual attention drawn also indicated the greater degree of visual complexity an action picture can carry. Moreover, action semantic processing depends on context and/or visual co-occurrence (Sadeghi, McClelland, & Hoffman, in press). For example, for the “biting-fighting” pair we presented a picture of a girl biting an apple and a picture of two women fighting. While participants judged the association of the two action concepts with high accuracy in the word modality (.81), they usually judged the two as conceptually unrelated in the picture modality (.10). Both pictures had fair naming agreement (.83), so this picture-word discrepancy cannot be attributed solely to the quality of the images.

It is likely that, aside from the visual complexity of action concepts, the lack of veridical correspondence between an action picture and its meaning also undermines the processing advantage a picture would otherwise carry. Within an object category, take “dog”: although there are many kinds of dogs, they all share similarities (e.g., a snout, four legs, a friendly demeanor). For actions, by contrast, one word can represent several distinct concepts (i.e., polysemy), and concepts within the same category can look very different (Miller & Fellbaum, 1991) (see Figure 6-1 for a schematic depiction of action semantic processing). Similar to abstract concepts, there is no prototypical visual form for an action. Take the word “biting”: one could picture someone biting an apple or a dog biting someone. Therefore, action concepts are constrained by the context of the image, whereas action words encapsulate all possibilities and offer more top-down access to meaning. Without a shared visual context, it becomes more effortful to make semantic judgments about actions.


Motor Experience during Semantic Processing

Contrary to a number of previous studies, I found no effect of motor experience during action semantic processing. Although a greater dual task effect was observed for the action semantic judgment task with a concurrent motor task than for objects, no difference was found between physical and non-physical actions, and no interaction was found between physical actions and the motor task. This raises the question of under what circumstances a motor-semantic interaction effect will be observed.

The semantic-motor interaction has support on neurophysiological and behavioral grounds. Previous neuroimaging studies typically found motor activation in the brain during lexical decision tasks (e.g., is this a real/action word or not?), during sentence reading containing action words (Hauk et al., 2004; Pulvermuller et al., 2001), while listening to action verbs in isolation or in sentences (Raposo et al., 2009), and while viewing or naming action pictures (Berlingeri et al., 2008). Behavioral studies also found evidence of a semantic-motor effect, though whether it reflects facilitation or interference remains debated, during a reaching task (Boulenger et al., 2006), a verbal fluency task (Rodriguez, McCabe, Nocera, & Reilly, 2012), and a motor-semantic priming task (Morsella, 2002).

Recently, one study also highlighted the importance of embodied experience during action picture naming (Sidhu, Kwan, Pexman, & Siakaluk, 2014). In their second experiment, the authors conducted a secondary analysis of the relationship between their subjective embodiment ratings and action naming latencies extracted from an existing, well-controlled database (IPNP; Szekely et al., 2004). A facilitation effect was found: the more likely an action picture was to involve human body parts, the faster it was named.

Perhaps such semantic-motor interaction is task dependent. That is, the more a task highlights the motoric aspects of action knowledge, the more likely a motor-semantic interaction is to be observed (Taylor & Zwaan, 2009; Yee et al., 2013). Previous investigations usually included lexico-semantic tasks involving viewing or naming an action word (with or without motor observation/execution). However, if motor representation is essential to action concepts, 1) it should apply to all tasks targeting action semantic processing, and 2) a greater effect should be observed when an action readily involves the human body.

Nonetheless, strong semantic-motor interactions have not always been observed. In Kable's studies (Kable, Lease-Spellmeyer, & Chatterjee, 2002; Kable & Chatterjee, 2006), triads of action words/pictures were presented and participants' brain activation was measured as they made semantic judgments. Greater activation was actually found in posterior brain regions rather than in frontal motor regions. Additionally, the meta-analysis by Watson, Cardillo, Ianni, and Chatterjee (2013) did not provide strong support for the necessity of motor enactment during action semantic processing. Behaviorally, Kemmerer et al. (2013) conducted action semantic similarity judgments in individuals with Parkinson's disease (PD, N = 10) and neurotypical controls. Participants were asked to choose the most related pair from triads of action words (e.g., limp-stroll-trudge). The stimuli included both action verbs (i.e., the Running, Hitting, Cutting, and Speaking subtypes) and non-action verbs (i.e., Change of State and Mental verbs). Though an overall delay in processing speed was identified in the PD group, no substantial differences were identified among the verb types (e.g., action vs. non-action). Similar to Kemmerer et al. (2013), I did not find differences between physical and non-physical actions. Moreover, the current study was unique in that 1) it tested two input modalities (i.e., picture, word), and 2) it controlled for visual motion salience (Bedny et al., 2008) and embodied experience. Even so, there was still no strong effect of category.

My experimental task forced participants to engage in deep semantic processing (e.g., judging whether pairs were semantically similar or conceptually associated). Such a task could require participants to judge in a higher-order, abstract fashion that does not require motor enactment. However, since only a few studies have examined verb-verb or action-action semantic processing (Bushell & Martin, 1997; Kemmerer et al., 2013; Rosler, Streb, & Haan, 2001; Vinson & Vigliocco, 2008), and relative embodied experience has not been considered extensively before, more evidence is needed before any definitive conclusions can be drawn.

Action Semantic Processing – A Summary

In the current study, we examined two aspects of semantic processing for action concepts: the effect of input modality and the influence of embodied experience. As previous studies suggest, measuring the impact of early visual recognition can be challenging when processing pictures of action concepts (Bird, Howard, & Franklin, 2003; d'Honincthun & Pillon, 2008). Factors such as visual co-occurrence and image variability can significantly affect people's judgments of semantic similarity or association (Bonin, Peereman, Malardier, Méot, & Chalard, 2003). However, the current study also showed that processing discrepancies between objects and actions cannot be attributed solely to visual form (or to a variety of other psycholinguistic variables). Action concepts are harder to categorize than static objects. Members of the same action category may not share the perceptual or semantic similarity seen in object concepts, resulting in a less hierarchically clustered, matrix-like structure (Miller & Fellbaum, 1991). Furthermore, English verbs and action words are often polysemous and flexible in meaning depending on context (Fellbaum, 1990). Therefore, semantic judgment could be more effortful. Together, the processing disadvantage for action pictures potentially emerges from at least two stages of semantic processing: the visual recognition stage and the semantic knowledge stage.

With regard to action understanding, the notion of overlapping neural substrates underlying overt or covert motor-language interaction, or action production/comprehension, typically entails the necessity of embodied motor representation or enactment during action processing. Neuropsychological results from individuals with motor neuron impairment also demonstrate a certain degree of action processing difficulty. For example, patients with motor neuron disease showed greater impairment of action verb comprehension (Bak, O'Donovan, Xuereb, Boniface, & Hodges, 2001). Patients with Parkinson's disease (PD) have also demonstrated impairments of comprehension and/or production of action verbs relative to object nouns (Boulenger et al., 2008).

However, recent studies also indicate that sensorimotor experience may have a graded effect on action semantic processing. This view does not reject the contribution of sensorimotor experience but holds that it is context-dependent. For example, when participants read action verbs used literally (e.g., the single word kick, or the sentence kick the ball) versus idiomatically (e.g., kick the bucket), fMRI activation in motor regions was found only for the literal uses, not for the metaphors (Raposo et al., 2009). Similarly, the current project asked participants to think about all aspects of an action concept, which required deeper semantic processing; participants could therefore have been prompted to think beyond the sensorimotor level of semantic knowledge. Accordingly, no prominent semantic-motor interaction was identified.

The investigation of conceptual flexibility for action concepts has theoretical and clinical implications. Asymmetric impairment across the verbal and nonverbal modalities has been frequently reported in brain injury studies of object concepts (McCarthy & Warrington, 1988); however, very few studies have investigated this specific effect for action concepts in neurologically impaired populations (e.g., stroke, Parkinson's disease). To the best of my knowledge, there exists only one neuropsychological assessment that includes both verbal and nonverbal assessment of action semantic processing (i.e., the Kissing and Dancing test; Bak & Hodges, 2003). Moreover, the specificity of this test remains unsatisfactory: the semantic relation it examines is association-based, and the type of action concept is limited to physical actions. In the current investigation, action relation type consistently interacted with input modality, further underscoring the need to address different semantic relations when assessing action semantic processing.

Additionally, previous case reports have indicated that action concepts can be selectively impaired through the loss of salient features. Some studies have demonstrated that patients with semantic dementia have difficulties with concrete verbs (e.g., hitting) but not with abstract verbs (e.g., thinking), a reversal of the concreteness effect (Yi, Moore, & Grossman, 2007). Yi et al. attributed this phenomenon to the selective and disproportionate loss of visuospatial features in this population. If this is true, we would need to consider how pathology or brain representation correlates with action features instead of treating action impairment as an all-or-none event (also see Bushell & Martin, 1997). Recently, Kemmerer (2015) proposed a functionally and anatomically plausible model of the feature representation of action concepts that addresses two potential loci for the visual and motoric components of actions. He suggested that the visual-motion and motor components map broadly onto posterior and anterior areas of the brain, respectively. Although the proposal is still in its infancy, it provides a theoretical testing ground for the embodied framework (e.g., physical actions high in motoric features vs. non-physical actions defined by visual-motion features) as well as for potential dissociations of action impairments in clinical populations (e.g., anterior lesions for motoric feature impairments, posterior temporal lobe lesions for visual-motion feature impairments).

Limitations and Future Directions

I have reported results that provide further evidence of semantic processing differences between actions and objects. I interpreted these findings as a reversal of the pictorial superiority effect for action concepts and suggested that the effect arises at two levels of semantic processing: the visual recognition stage of an action picture and the semantic store for actions. Although the motor dual task slowed processing for both action and object concepts, with a greater effect for action concepts, there was no strong supporting evidence of embodied experience as reflected in the action category manipulation (i.e., physical vs. non-physical) or in the motor dual task condition.

The finding of a processing difference between objects and actions points to the need for a broader approach to assessing semantic processing. Because the optimal input pathway to conceptual knowledge differs across domains (e.g., pictures for objects, language for actions), assessments may be designed to target this difference. That is, there is a general benefit to using pictorial or video training stimuli in language intervention for object concepts. However, if pictorial input does not provide better access to semantic memory for action concepts than verbal input does, such training formats may lose their advantage for actions.

Additionally, pictorial input limits the context of the action concept being assessed. A failure on such an assessment does not necessarily indicate an action impairment; it could instead result from a failure to process action semantics from that specific input channel or context.

Both conceptual domains demonstrated various degrees of conceptual flexibility in the current investigation. There were trends of dissociation as a function of Modality, Relation, or Category. Although I used a smaller set of object stimuli, the findings were comparable with previous reports of category-specific or modality-specific effects (Saffran, Coslett, & Keener, 2003). With regard to actions, pair relation (i.e., semantically similar, associative) also modulated action semantic processing to differing degrees. Future work should consider this potential dissociation as a consequence of conceptual flexibility for actions. For example, physical action pairs based on semantic similarity (i.e., manner of performance) required longer reaction times and elicited lower accuracy. Furthermore, participants took longer to judge physical actions in the picture modality than in words. This raises the questions of whether the motoric feature is always activated at an early stage of processing and why pictorial input did not facilitate this feature for physical actions. Future research should also consider how we make action semantic judgments, namely whether they are usually association-based or feature-similarity-based.

Although we did not find a significant effect of embodied experience, two potential avenues could be explored: 1) embodied experience may be graded rather than absolute, so it could be modeled as a continuous rather than a categorical factor; and 2) the relationship between embodied experience and task demand could be tested to evaluate potential task constraints during action semantic processing. Future studies should also focus on the timing of, and circumstances underlying, the semantic-motor interaction, treating it as a dynamic process instead of a fixed effect.


Figure 6-1. Schematic depiction of action semantic processing


APPENDIX A INSTRUCTIONS FOR PSYCHOLINGUISTIC RATINGS

1) Action category/type: Please rate how commonly the action is performed by human body parts or is used to describe changes to an object. [7 = very commonly used as an action a human performs; 1 = not common, or often used to describe a change of state or object use] For example, you might say a person coughs, whereas a clock would not. When you think of thawing, your mental image is that of an object (e.g., meat) warming up. In some cases, an action word can describe both a human-related action and changes to an object; please try your best to determine how commonly and frequently the action is used to describe human performance versus object state.

Not common /                               Very common /
change to object                           human body performs
1------2------3------4------5------6------7

2) Action magnitude (visual motion): In this questionnaire, we are interested in the extent to which the meanings of different words make you imagine visual movement. That is, when you read or hear a word, do you see something moving in your mind's eye? You are to rate each word on a scale of 1 to 7 as to how much that word makes you think of visual movement. On this scale, 1 represents "NOT AT ALL," 2 represents "VERY LITTLE," 4 represents "SOMEWHAT," and 7 represents "VERY MUCH." Please do your best to use the full range of the scale.

Not at all                                 Extremely
1------2------3------4------5------6------7

3) Familiarity: Please rate how familiar you are with the word according to how usual or unusual the action word is in your realm of experience. On the following pages is a list of words. Some of these words will be words that you know and use very often; that is, you will be VERY FAMILIAR with these words. Others will be words that you might not have ever seen before and surely do not recognize; thus, you are VERY UNFAMILIAR with these words. Of course, there will also be words that fall between those two extremes. Some words you might recognize but not use or hear very often; you are FAMILIAR with them. Others you might just slightly recognize, perhaps having seen them a few times, and thus they are SLIGHTLY FAMILIAR to you. Conversely, there will be words that you barely recognize (SLIGHTLY UNFAMILIAR) or words that you can recognize but perhaps have only heard or read once or twice (UNFAMILIAR).

Not at all familiar                        Extremely familiar
1------2------3------4------5------6------7


4) Concreteness: Please rate how concrete the word is. Concreteness is defined as how well you are able to see, touch, or manipulate what the word represents. [1 = not at all able to see, touch, or manipulate; 7 = extremely able to see, touch, or manipulate]

Not at all concrete                        Extremely concrete
1------2------3------4------5------6------7

5) Pair manner similarity: In the following questions you will see pairs of words (verbs). Please determine to what extent the two words are similar in terms of how the action is performed. Think about how the action is performed or executed. The actions running and walking may be fairly similar because both involve movement done by humans primarily using the legs in a similar manner. Now compare running and sprinting: these may again be similar, and perhaps even more similar, because both involve quicker movement than walking does. Now compare running and sleeping: these actions may not be very similar, except that both are actions that humans do.

Not similar                                Very similar
1------2------3------4------5------6------7

6) Pair association strength: In the following questions you will see pairs of words (verbs). Please determine to what extent the two words are commonly used together (conceptual association). Conceptual association is defined by how commonly the concepts represented by the words OCCUR TOGETHER. For example, pitching and catching are highly associated, since both actions occur frequently in the same scenario or event (e.g., a baseball game). On the other hand, cooking and yawning are not associated, since they rarely occur in the same context under normal circumstances.

Not associated                             Very associated
1------2------3------4------5------6------7

7) Pair visual similarity: Please carefully rate how similar the two images look, that is, their degree of visual similarity.

Not alike                                  Very alike
1------2------3------4------5------6------7


APPENDIX B ACTION STIMULI IN SETS 1 AND 2

Table B-1. Experimental stimuli in the semantic judgment task

Set 1 physical action           Set 2 physical action
Prime        Target             Prime        Target
wiping       rubbing            kneading     squeezing
slapping     knocking           knocking     ringing
tapping      hammering          punching     stabbing
pinching     touching           petting      combing
catching     holding            carrying     holding
juggling     throwing           flipping     juggling
swatting     waving             erasing      scrubbing
reading      watching           jumping      skipping
hiking       marching           marching     dancing
running      climbing           crawling     climbing
squatting    kneeling           kneeling     bowing
bouncing     jumping            diving       jumping
pulling      picking            poking       fencing

Set 1 non-physical action       Set 2 non-physical action
Prime        Target             Prime        Target
leaking      dripping           breaking     ripping
dropping     falling            twisting     turning
pouring      straining          floating     hanging
sinking      dipping            swerving     sliding
cutting      tearing            spinning     rolling
folding      crumpling          crushing     crumpling
revolving    twisting           swinging     rocking
erupting     popping            splattering  raining
popping      ejecting           popping      breaking
hatching     growing            dissolving   melting
spraying     raining            wilting      deflating
spilling     pouring            flooding     spilling
cracking     ripping            unfolding    opening

Set 1 physical action (association)    Set 2 physical action (association)
Prime        Target             Prime        Target
petting      kissing            pulling      pushing
catching     running            planting     picking
laughing     clapping           bowing       clapping
diving       sailing            swimming     sailing
drumming     conducting         reading      drawing
fighting     biting             marching     saluting
carrying     pulling            wiping       vacuuming


Table B-1. Continued.

Set 1 non-physical action (association)    Set 2 non-physical action (association)
Prime        Target             Prime        Target
unfolding    wrapping           leaking      raining
bending      breaking           dropping     breaking
dissolving   sinking            sinking      floating
swerving     crashing           hanging      flying
wilting      blossoming         erupting     burning
melting      boiling            hatching     splitting
spreading    spilling           wilting      falling


APPENDIX C NON-ACTION STIMULI

Table C-1. Non-action pairs in the semantic judgment task

Living – similarity             Non-living – similarity
Prime        Target             Prime        Target
carrot       tomato             saw          screwdriver
nose         mouth              bowl         fork
cherry       pear               broom        sponge
dog          bear               bed          chair
spider       ant                train        bike
duck         cow                couch        desk

Living – association            Non-living – association
Prime        Target             Prime        Target
ear          telephone          shirt        iron
hand         ring               axe          logs
apple        knife              key          lock
monkey       banana             motorcycle   helmet
sheep        wool               castle       knight
bird         nest               truck        road


LIST OF REFERENCES

Akinina, Y., Malyutina, S., Ivanova, M., Iskra, E., Mannova, E., & Dragoy, O. (2014). Russian normative data for 375 action pictures and verbs. Behavior Research Methods.

Allport, D. A. (1985). Distributed memory, modular subsystems and dysphasia. In S. K. Newman & R. Epstein (Eds.), Current Perspectives in Dysphasia (pp. 32-60). Edinburgh: Churchill Livingstone.

Amsel, B. D., Urbach, T. P., & Kutas, M. (2013). Alive and grasping: stable and rapid semantic access to an object category but not object graspability. Neuroimage, 77, 1-13.

Andrews, M., Vigliocco, G., & Vinson, D. (2009). Integrating experiential and distributional data to learn semantic representations. Psychological Review, 116(3), 463-498.

Aziz-Zadeh, L., Wilson, S. M., Rizzolatti, G., & Iacoboni, M. (2006). Congruent embodied representations for visually presented actions and linguistic phrases describing actions. Current Biology, 16(18), 1818-1823.

Azizian, A., Watson, T. D., Parvaz, M. A., & Squires, N. K. (2006). Time course of processes underlying picture and word evaluation: an event-related potential approach. Brain Topography, 18(3), 213-222.

Bak, T. H., & Hodges, J. R. (2003). Kissing and dancing - a test to distinguish the lexical and conceptual contributions to noun/verb and action/object dissociation. Preliminary results in patients with frontotemporal dementia. Journal of Neurolinguistics, 16(2-3), 169-181.

Bak, T. H., O'Donovan, D. G., Xuereb, J. H., Boniface, S., & Hodges, J. R. (2001). Selective impairment of verb processing associated with pathological changes in Brodmann areas 44 and 45 in the motor neurone disease-dementia-aphasia syndrome. Brain, 124(Pt 1), 103-120.

Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577-660.

Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617-645.

Barsalou, L. W. (2009). Simulation, situated conceptualization, and prediction. Philosophical Transactions of the Royal Society B-Biological Sciences, 364(1521), 1281-1289.

Bedny, M., Caramazza, A., Grossman, E., Pascual-Leone, A., & Saxe, R. (2008). Concepts are more than percepts: The case of action verbs. Journal of Neuroscience, 28(44), 11347-11353.

Beilock, S. L., Lyons, I. M., Mattarella-Micke, A., Nusbaum, H. C., & Small, S. L. (2008). Sports experience changes the neural processing of action language. Proceedings of the National Academy of Sciences of the United States of America, 105(36), 13269-13273.


Berlingeri, M., Crepaldi, D., Roberti, R., Scialfa, G., Luzzatti, C., & Paulesu, E. (2008). Nouns and verbs in the brain: grammatical class and task specific effects as revealed by fMRI. Cognitive Neuropsychology, 25(4), 528-558.

Berndt, R. S., Mitchum, C. C., Haendiges, A. N., & Sandson, J. (1997). Verb retrieval in aphasia. 1. Characterizing single word impairments. Brain and Language, 56(1), 68-106.

Bird, H., Howard, D., & Franklin, S. (2000). Why is a verb like an inanimate object? Grammatical category and semantic category deficits. Brain and Language, 72(3), 246-309.

Bird, H., Howard, D., & Franklin, S. (2003). Verbs and nouns: the importance of being imageable. Journal of Neurolinguistics, 16(2-3), 113-149.

Black, M., & Chiat, S. (2003). Noun-verb dissociations: A multi-faceted phenomenon. Journal of Neurolinguistics, 16, 231-250.

Bonin, P., Peereman, R., Malardier, N., Méot, A., & Chalard, M. (2003). A new set of 299 pictures for psycholinguistic studies: French norms for name agreement, image agreement, conceptual familiarity, visual complexity, image variability, age of acquisition, and naming latencies. Behavior Research Methods, Instruments, & Computers, 35(1), 158-167.

Boulenger, V., Mechtouff, L., Thobois, S., Broussolle, E., Jeannerod, M., & Nazir, T. A. (2008). Word processing in Parkinson's disease is impaired for action verbs but not for concrete nouns. Neuropsychologia, 46(2), 743-756.

Boulenger, V., Roy, A. C., Paulignan, Y., Deprez, V., Jeannerod, M., & Nazir, T. A. (2006). Cross-talk between language processes and overt motor behavior in the first 200 msec of processing. Journal of Cognitive Neuroscience, 18(10), 1607-1615.

Bright, P., Moss, H., & Tyler, L. K. (2004). Unitary vs multiple semantics: PET studies of word and picture processing. Brain and Language, 89(3), 417-432.

Brysbaert, M., & New, B. (2009). Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods, 41(4), 977-990.

Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data? Perspectives on Psychological Science, 6(1), 3-5.

Bushell, C. M., & Martin, A. (1997). Automatic semantic priming of nouns and verbs in patients with Alzheimer's disease. Neuropsychologia, 35(8), 1059-1067.


Calvo-Merino, B., Glaser, D. E., Grezes, J., Passingham, R. E., & Haggard, P. (2005). Action observation and acquired motor skills: an FMRI study with expert dancers. Cerebral Cortex, 15(8), 1243-1249.

Caramazza, A. (1996). Neuropsychology: Pictures, words and the brain. Nature, 380, 485-486.

Chainay, H., & Humphreys, G. W. (2002). Privileged access to action for objects relative to words. Psychonomic Bulletin & Review, 9(2), 348-355.

Chatterjee, A. (2010). Disembodying cognition. Language and Cognition, 2(1), 79-116.

Chatterjee, A., Southwood, M. H., & Basilico, D. (1999). Verbs, events and spatial representations. Neuropsychologia, 37(4), 395-402.

Coccia, M., Bartolini, M., Luzzi, S., Provinciali, L., & Lambon Ralph, M. A. (2004). Semantic memory is an amodal, dynamic system: Evidence from the interaction of naming and object use in semantic dementia. Cognitive Neuropsychology, 21(5), 513-527.

Collins, A. M., & Quillian, M. R. (1969). Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior, 9(2), 240-247.

Crepaldi, D., Berlingeri, M., Cattinelli, I., Borghese, N. A., Luzzatti, C., & Paulesu, E. (2013). Clustering the lexicon in the brain: a meta-analysis of the neurofunctional evidence on noun and verb processing. Frontiers in Human Neuroscience, 7, 303.

d'Honincthun, P., & Pillon, A. (2008). Verb comprehension and naming in frontotemporal degeneration: the role of the static depiction of actions. Cortex, 44(7), 834-847.

Damasio, H., Tranel, D., Grabowski, T., Adolphs, R., & Damasio, A. (2004). Neural systems behind word and concept retrieval. Cognition, 92(1-2), 179-229.

de Zubicaray, G., Arciuli, J., & McMahon, K. (2013). Putting an "end" to the motor cortex representations of action words. Journal of Cognitive Neuroscience, 25(11), 1957-1974.

Dell, G. S., & O'Seaghdha, P. G. (1992). Stages of lexical access in language production. Cognition, 42(1-3), 287-314.

Dorjee, D., Devenney, L., & Thierry, G. (2010). Written words supersede pictures in priming semantic access: a P300 study. Neuroreport, 21(13), 887-891.

Druks, J. (2002). Verbs and nouns-a review of the literature. Journal of Neurolinguistics, 15, 289-315.

Druks, J., & Masterson, J. (2000). An object and action naming battery. London: Psychology Press.


Druks, J., & Shallice, T. (2000). Selective preservation of naming from description and the "restricted preverbal message". Brain and Language, 72(2), 100-128.

Fellbaum, C. (1990). English Verbs as a Semantic Net. International Journal of Lexicography, 3(4), 278-301.

Fung, T. D., Chertkow, H., Murtha, S., Whatmough, C., Peloquin, L., Whitehead, V., & Templeman, F. D. (2001). The spectrum of category effects in object and action knowledge in dementia of the Alzheimer's type. Neuropsychology, 15(3), 371-379.

Gallese, V., & Lakoff, G. (2005). The brain's concepts: The role of the sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22(3-4), 455-479.

Glaser, W. R. (1992). Picture naming. Cognition, 42(1-3), 61-105.

Glaser, W. R., & Glaser, M. O. (1989). Context effects in stroop-like word and picture processing. Journal of Experimental Psychology: General, 118(1), 13-42.

Hauk, O., Johnsrude, I., & Pulvermuller, F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron, 41(2), 301-307.

Hauk, O., & Pulvermuller, F. (2004). Neurophysiological distinction of action words in the fronto-central cortex. Human Brain Mapping, 21(3), 191-201.

Hauk, O., Shtyrov, Y., & Pulvermüller, F. (2008). The time course of action and action-word comprehension in the human brain as revealed by neurophysiology. Journal of Physiology-Paris, 102(1-3), 50-58.

Hoenig, K., Sim, E.-J., Bochev, V., Herrnberger, B., & Kiefer, M. (2008). Conceptual flexibility in the human brain: Dynamic recruitment of semantic maps from visual, motor, and motion-related areas. Journal of Cognitive Neuroscience, 20(10), 1799-1814.

Holmqvist, K., Nystrom, M., Andersson, R., Dewhurst, R., Jarodzka, H., & van de Weijer, J. (2011). Eye tracking: A comprehensive guide to methods and measures. Oxford: Oxford University Press.

Howard, D., & Patterson, K. (1992). Pyramids and Palm Trees: A test of semantic access from pictures and words. Bury St. Edmunds, UK: Thames Valley Test Company.

Hung, J., Reilly, J., & Edmonds, L. A. (2014). Reversal of the picture superiority effect for action semantic processing: An eyetracking investigation. Manuscript submitted for publication.

Huttenlocher, J., & Lui, F. (1979). The semantic organization of some simple nouns and verbs. Journal of Verbal Learning and Verbal Behavior, 18(2), 141-162.


Jensen, L. R. (2000). Canonical structure without access to verbs? Aphasiology, 14(8), 827-850.

Kable, J. W., & Chatterjee, A. (2006). Specificity of action representations in the lateral occipitotemporal cortex. Journal of Cognitive Neuroscience, 18(9), 1498-1517.

Kable, J. W., Kan, I. P., Wilson, A., Thompson-Schill, S. L., & Chatterjee, A. (2005). Conceptual representations of action in the lateral temporal cortex. Journal of Cognitive Neuroscience, 17(12), 1855-1870.

Kable, J. W., Lease-Spellmeyer, J., & Chatterjee, A. (2002). Neural substrates of action event knowledge. Journal of Cognitive Neuroscience, 14(5), 795-805.

Kalenine, S., Peyrin, C., Pichat, C., Segebarth, C., Bonthoux, F., & Baciu, M. (2009). The sensory-motor specificity of taxonomic and thematic conceptual relations: A behavioral and fMRI study. Neuroimage, 44(3), 1152-1162.

Kemmerer, D. (2014). Word classes in the brain: Implications of linguistic typology for cognitive neuroscience. Cortex, 58(0), 27-51.

Kemmerer, D. (2015). Visual and Motor Features of the Meanings of Action Verbs: A Cognitive Neuroscience Perspective. In R. G. de Almeida & C. Manouilidou (Eds.), Cognitive Science Perspectives on Verb Representation and Processing (pp. 189-212): Springer International Publishing.

Kemmerer, D., Castillo, J. G., Talavage, T., Patterson, S., & Wiley, C. (2008). Neuroanatomical distribution of five semantic components of verbs: Evidence from fMRI. Brain and Language, 107(1), 16-43.

Kemmerer, D., Miller, L., Macpherson, M. K., Huber, J., & Tranel, D. (2013). An investigation of semantic similarity judgments about action and non-action verbs in Parkinson's disease: implications for the Embodied Cognition Framework. Frontiers in Human Neuroscience, 7, 146.

Kiefer, M. (2001). Perceptual and semantic sources of category-specific effects: event-related potentials during picture and word categorization. Memory & Cognition, 29(1), 100-116.

Lambon Ralph, M. A., & Howard, D. (2000). Gogi aphasia or semantic dementia? Simulating and assessing poor verbal comprehension in a case of progressive fluent aphasia. Cognitive Neuropsychology, 17(5), 437-465.

Levelt, W. J. (2001). Spoken word production: a theory of lexical access. Proceedings of the National Academy of Sciences of the United States of America, 98(23), 13464-13471.

Levin, B. (1993). English verb classes and alternations: A preliminary investigation. Chicago: University of Chicago Press.


Liljestrom, M., Tarkiainen, A., Parviainen, T., Kujala, J., Numminen, J., Hiltunen, J., . . . Salmelin, R. (2008). Perceiving and naming actions and objects. Neuroimage, 41(3), 1132-1141.

Luzzatti, C., Raggi, R., Zonca, G., Pistarini, C., Contardi, A., & Pinna, G. D. (2002). Verb-noun double dissociation in aphasic lexical impairments: the role of word frequency and imageability. Brain and Language, 81(1-3), 432-444.

Lyons, I. M., Mattarella-Micke, A., Cieslak, M., Nusbaum, H. C., Small, S. L., & Beilock, S. L. (2010). The role of personal experience in the neural processing of action-related language. Brain and Language, 112(3), 214-222.

Martin, A., & Chao, L. L. (2001). Semantic memory and the brain: structure and processes. Current Opinion in Neurobiology, 11(2), 194-201.

Martin, N., Kohen, F., Kalinyak-Fliszar, M., Soveri, A., & Laine, M. (2012). Effects of working memory load on processing of sounds and meanings of words in aphasia. Aphasiology, 26(3-4), 462-493.

Matzig, S., Druks, J., Masterson, J., & Vigliocco, G. (2009). Noun and verb differences in picture naming: past studies and new evidence. Cortex, 45(6), 738-758.

McCarthy, R., & Warrington, E. K. (1988). Evidence for modality-specific meaning systems in the brain. Nature, 334(6181), 428-430.

McCarthy, R. A., & Warrington, E. K. (2015). Past, present, and prospects: Reflections 40 years on from the selective impairment of semantic memory (Warrington, 1975). The Quarterly Journal of Experimental Psychology, 1-28.

McRae, K., Ferretti, T. R., & Amyote, L. (1997). Thematic roles as verb-specific concepts. Language and Cognitive Processes, 12(2/3), 137-176.

Miller, G. A., & Fellbaum, C. (1991). Semantic networks of English. Cognition, 41, 197-229.

Morsella, E. E. (2002). The motor components of semantic representation. (Doctoral Dissertation, Columbia University). Retrieved from http://proquest.umi.com/pqdweb?did=726451981&sid=5&Fmt=2&clientId=20179&RQT =309&VName=PQD

Nasreddine, Z. S., Phillips, N. A., Bedirian, V., Charbonneau, S., Whitehead, V., Collin, I., . . . Chertkow, H. (2005). The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society, 53(4), 695-699.

Nelson, D. L., McEvoy, C. L., & Schreiber, T. (1998). The University of South Florida word association, rhyme, and word fragment norms, from http://www.usf.edu/FreeAssociation/


Noppeney, U., Josephs, O., Kiebel, S., Friston, K. J., & Price, C. J. (2005). Action selectivity in parietal and temporal cortex. Cognitive Brain Research, 25(3), 641-649.

Odekar, A., Hallowell, B., Kruse, H., Moates, D., & Lee, C.-Y. (2009). Validity of eye movement methods and indices for capturing semantic (associative) priming effects. Journal of Speech, Language, and Hearing Research, 52(1), 31-48.

Oldfield, R. C. (1971). The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia, 9(1), 97-113.

Paivio, A. (1991). Dual coding theory: Retrospect and current status. Canadian Journal of Psychology/Revue canadienne de psychologie, 45(3), 255-287.

Paivio, A., Yuille, J. C., & Madigan, S. A. (1968). Concreteness, imagery, and meaningfulness values for 925 nouns. Journal of Experimental Psychology, 76(1, Pt.2), 1-25.

Papeo, L., Vallesi, A., Isaja, A., & Rumiati, R. I. (2009). Effects of TMS on different stages of motor and non-motor verb processing in the primary motor cortex. PLoS One, 4(2), e4508.

Pashek, G. V., & Tompkins, C. A. (2002). Context and word class influences on lexical retrieval in aphasia. Aphasiology, 16(3), 261-286.

Pashler, H. (1994). Dual-task interference in simple tasks: data and theory. Psychological Bulletin, 116(2), 220-244.

Plaut, D. C. (2002). Graded modality-specific specialisation in semantics: A computational account of optic aphasia. Cognitive Neuropsychology, 19(7), 603-639.

Potter, M. C., & Faulconer, B. A. (1975). Time to understand pictures and words. Nature, 253, 437-438.

Pulvermuller, F., Harle, M., & Hummel, F. (2001). Walking or talking? Behavioral and neurophysiological correlates of action verb processing. Brain and Language, 78(2), 143-168.

Pulvermuller, F., Hauk, O., Nikulin, V. V., & Ilmoniemi, R. J. (2005). Functional links between motor and language systems. European Journal of Neuroscience, 21(3), 793-797.

Raposo, A., Moss, H. E., Stamatakis, E. A., & Tyler, L. K. (2009). Modulation of motor and premotor cortices by actions, action words and action sentences. Neuropsychologia, 47(2), 388-396.


Rodriguez, A. D., McCabe, M. L., Nocera, J. R., & Reilly, J. (2012). Concurrent word generation and motor performance: further evidence for language-motor interaction. PLoS One, 7(5), e37094.

Rosler, F., Streb, J., & Haan, H. (2001). Event-related brain potentials evoked by verbs and nouns in a primed lexical decision task. Psychophysiology, 38(4), 694-703.

Saccuman, M. C., Cappa, S. F., Bates, E. A., Arevalo, A., Della Rosa, P., Danna, M., & Perani, D. (2006). The impact of semantic reference on word class: an fMRI study of action and object naming. Neuroimage, 32(4), 1865-1878.

Sadeghi, Z., McClelland, J. L., & Hoffman, P. (in press). You shall know an object by the company it keeps: An investigation of semantic representations derived from object co-occurrence in visual scenes. Neuropsychologia.

Saffran, E. M., Coslett, H., Martin, N., & Boronat, C. B. (2003). Access to knowledge from pictures but not words in a patient with progressive fluent aphasia. Language and Cognitive Processes, 18(5-6), 725-757.

Saffran, E. M., Coslett, H. B., & Keener, M. T. (2003). Differences in word associations to pictures and words. Neuropsychologia, 41(11), 1541-1546.

Saxe, R., Jamal, N., & Powell, L. (2006). My body or yours? The effect of visual perspective on cortical body representations. Cerebral Cortex, 16(2), 178-182.

Schwitter, V., Boyer, B., Meot, A., Bonin, P., & Laganaro, M. (2004). French normative data and naming times for action pictures. Behavior Research Methods, Instruments, & Computers, 36(3), 564-576.

Seifert, L. S. (1997). Activating representations in permanent memory: different benefits for pictures and words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(5), 1106-1121.

Shelton, J. R., & Martin, R. C. (1992). How semantic is automatic semantic priming? Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(6), 1191-1210.

Sidhu, D. M., Kwan, R., Pexman, P. M., & Siakaluk, P. D. (2014). Effects of relative embodiment in lexical and semantic processing of verbs. Acta Psychologica, 149, 32-39.

Simmons, W. K., Hamann, S. B., Harenski, C. L., Hu, X. P., & Barsalou, L. W. (2008). fMRI evidence for word association and situated simulation in conceptual processing. Journal of Physiology-Paris, 102(1-3), 106-119.

Simon, J. R., & Rudell, A. P. (1967). Auditory S-R compatibility: the effect of an irrelevant cue on information processing. Journal of Applied Psychology, 51, 300-304.


Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology Human Learning and Memory, 6(2), 174-215.

Szekely, A., & Bates, E. (2000). Objective visual complexity as a variable in studies of picture naming. Center for Research in Language Newsletter, 12(2), 1-33.

Székely, A., Jacobsen, T., D’Amico, S., Devescovi, A., Andonova, E., Herron, D., . . . Bates, E. (2004). A new on-line resource for psycholinguistic studies. Journal of Memory and Language, 51(2), 247-250.

Taikh, A., Hargreaves, I. S., Yap, M. J., & Pexman, P. M. (2014). Semantic classification of pictures and words. The Quarterly Journal of Experimental Psychology, 1-17.

Taylor, L. J., & Zwaan, R. A. (2009). Action in cognition: The case of language. Language and Cognition, 1(1), 45-58.

Theios, J., & Amrhein, P. C. (1989). Theoretical analysis of the cognitive processing of lexical and pictorial stimuli: reading, naming, and visual and conceptual comparisons. Psychological Review, 96(1), 5-24.

Thompson-Schill, S. L., D'Esposito, M., Aguirre, G. K., & Farah, M. J. (1997). Role of left inferior prefrontal cortex in retrieval of semantic knowledge: A reevaluation. Proceedings of the National Academy of Sciences of the United States of America, 94(26), 14792-14797.

Thompson-Schill, S. L., Kan, I. P., & Oliver, R. T. (2001). Functional Neuroimaging of Semantic Memory. In R. Cabeza & A. Kingstone (Eds.), Handbook of functional neuroimaging of cognition (pp. 149-190). Cambridge, MA: MIT Press.

Tranel, D., Manzel, K., Asp, E., & Kemmerer, D. (2008). Naming dynamic and static actions: Neuropsychological evidence. Journal of Physiology-Paris, 102(1-3), 80-94.

Vigliocco, G., & Kita, S. (2006). Language-specific properties of the lexicon: Implications for learning and processing. Language and Cognitive Processes, 21(7-8), 790-816.

Vigliocco, G., Vinson, D. P., Druks, J., Barber, H., & Cappa, S. F. (2011). Nouns and verbs in the brain: A review of behavioural, electrophysiological, neuropsychological and imaging studies. Neuroscience and Biobehavioral Reviews, 35(3), 407-426.

Vigliocco, G., Vinson, D. P., Lewis, W., & Garrett, M. F. (2004). Representing the meanings of object and action words: The featural and unitary semantic space hypothesis. Cognitive Psychology, 48(4), 422-488.

Vinson, D. P., & Vigliocco, G. (2002). A semantic analysis of grammatical class impairments: Semantic representations of object nouns, action nouns and action verbs. Journal of Neurolinguistics, 15(3-5), 317-351.

Vinson, D. P., & Vigliocco, G. (2008). Semantic feature production norms for a large set of objects and events. Behavior Research Methods, 40(1), 183-190.

Warrington, E. K. (1975). The selective impairment of semantic memory. The Quarterly Journal of Experimental Psychology, 27(4), 635-657.

Warrington, E. K., & Shallice, T. (1984). Category specific semantic impairments. Brain, 107(3), 829-854.

Watson, C. E., Cardillo, E. R., Ianni, G. R., & Chatterjee, A. (2013). Action concepts in the brain: an activation likelihood estimation meta-analysis. Journal of Cognitive Neuroscience, 25(8), 1191-1205.

Yee, E., Chrysikou, E. G., & Thompson-Schill, S. L. (2013). The Cognitive Neuroscience of Semantic Memory. In K. Ochsner & S. Kosslyn (Eds.), The Oxford Handbook of Cognitive Neuroscience (Vol. 1, pp. 353-374). Oxford University Press.

Yi, H. A., Moore, P., & Grossman, M. (2007). Reversal of the concreteness effect for verbs in patients with semantic dementia. Neuropsychology, 21(1), 9-19.

Zachary, R. (1986). Shipley Institute of Living Scale: Revised manual. Los Angeles: Western Psychological Services.

BIOGRAPHICAL SKETCH

Ching-I (Jinyi) Hung received her bachelor’s degree in Foreign Languages and Literature from National Cheng Kung University, Tainan, Taiwan, in 2006. In 2007, Jinyi continued her academic studies in the U.S. in speech-language pathology at the University of Tennessee Health Science Center. After obtaining her master’s degree, Jinyi entered the doctoral program in Speech, Language and Hearing Sciences at the University of Florida.

While completing her doctoral dissertation, Jinyi also works as a research associate in the Memory, Concepts, Cognition Laboratory at Temple University (PI: Dr. Jamie Reilly), where her main responsibility is providing structured language intervention for individuals with Alzheimer’s disease and semantic variant frontotemporal degeneration. Upon graduation, Jinyi plans to work as a postdoctoral fellow and continue her research in language and cognition.
