A Magnetoencephalography Investigation of the Role of Stuttering Anticipation on the Preparation and Execution of Speech in Adults Who Stutter

By

Anna-Maria Mersov

A thesis submitted in conformity with the requirements for the degree of Master of Science, Speech-Language Pathology, University of Toronto

© Copyright by Anna-Maria Mersov 2015

A Magnetoencephalography investigation of the role of stuttering anticipation on the preparation and execution of speech in adults who stutter

Anna Mersov

Master of Science

Department of Speech-Language Pathology

University of Toronto

2015

ABSTRACT

Coordination of the speech neural network prior to the initiation of individual speech utterances has not been explored in adults with developmental stuttering. The objectives of this study were (a) to characterize sensory-motor recruitment during the preparation and execution of speech in adults who stutter (AWS) using magnetoencephalography, and (b) to investigate the effect of stuttering anticipation on this sensory-motor recruitment. Neural oscillatory activity was recorded during a cued overt word-repetition task. No differences between high- and low-anticipation words were found. However, AWS demonstrated stronger bilateral recruitment of the mouth motor cortex than controls during both speech preparation and execution, and recruited the right mouth motor cortex before the left, whereas controls showed a preference for the left mouth motor cortex. This pattern is proposed to reflect facilitative mechanisms adopted within a limited speech motor network. The study constitutes a unique contribution to understanding speech motor recruitment in AWS.


ACKNOWLEDGEMENTS

I would like to thank the following individuals, whose contributions made the completion of this project feasible and enjoyable: my supervisor, Luc De Nil, who guided the conceptual framework of the study design and was always available to provide constructive feedback to push me forward; Douglas Cheyne, for providing his insight and expertise on the data analysis; Elina Mainela-Arnold, for her insightful and careful feedback on my work; and Pascal Van Lieshout, who was very helpful in providing input whenever I came to him with questions. I would also like to thank all the students in Dr. Cheyne's lab for their help on a topic that was largely new to me, especially Cecilia Jobst, who offered her time to show me how to use the lab's analysis software. Lastly, I would like to thank Jed Meltzer, who kindly offered his time to help with my inquiries about the analysis approach even though he was not directly involved in the project.


TABLE OF CONTENTS

INTRODUCTION
Developmental stuttering: an introduction
Brain correlates of speech production in persons who stutter
Review of brain regions involved in typical speech production
White matter tract thinning in adults and children who stutter
Functional haemodynamic differences and the right hemisphere
Functional differences between stuttered and fluent utterances
Limitations of fMRI and PET and implications for stuttering research
Brief review of high-temporal resolution neural measures of brain function
Sensory-motor recruitment prior to speech production
The role of the auditory cortex
Current knowledge on speech preparation and execution in AWS
Evoked neural responses in the left inferior frontal cortex
Evoked auditory suppression
Atypical recruitment of the motor cortex
Sensory-motor aspects in behavioral speech and non-speech studies
Movement accuracy and kinematics
Reduced motor learning ability
Dual task interference
The role of stuttering anticipation
Summary of study objectives and hypotheses

METHODS
Participant criteria
Stimuli selection
Visit 1: word ranking task
Stuttering anticipation ranking-task scoring
Visit 2: MEG task procedure
Brief justification for the implemented task design
Task procedure
Data acquisition
Reliability of severity and stuttering measures
EMG recording
Data processing
SAM beamformer analysis
Statistical analyses

RESULTS
BEHAVIOURAL
Consistency of stuttering anticipation rankings
Stuttering and anxiety
Group effects on response times
Addressing negative response times
Addressing response type
Trial effect on response times
Task-induced stuttering
NEUROMAGNETIC: all stimuli combined
Localization of beta (15-25Hz) band ERD
Bilateral visual beta suppression
Bilateral motor beta suppression
Quantifying beta ERD in preparation and execution stages of speech
Speech preparation, comparing ERD extent
Speech preparation, comparing latencies
Comparing visual and motor ERD latencies
Speech execution, comparing ERD extent
Speech execution, examining latencies
Correlations with stuttering severity
Localization of alpha band (8-13Hz) ERD
Alpha-beta combined
Comparing latencies across all regions
Comparing auditory-motor ERD power and latencies
Modulations in high beta, low and high gamma
NEUROMAGNETIC: words split into HLS and LLS
Contrasting beta and alpha localizations in HLS and LLS

DISCUSSION
Addressing differences between stimulus-locked and EMG-locked datasets
Modulations of beta ERD in the motor cortex
Speech preparation
Speech execution
Role of the left hemisphere in AWS
Role of the right hemisphere in AWS
Correlation of beta response with stuttering severity
Modulations of alpha ERD in the auditory cortex
Alpha-beta in the bilateral cuneus
Effects of stuttering anticipation

CONCLUSION AND LIMITATIONS
Limitations

REFERENCES


LIST OF TABLES

Table 1 Participant summary of measured variables
Table 2 Number of HLS and LLS words collected from each subject
Table 3 SSI scores and task-induced stuttering numbers
Table 4 SAM-localized alpha and beta ERD coordinates and pseudo-T group peak on the group average image
Table 5 STIM-locked beta ERD onsets and first ERD peak latencies in the precentral gyrus
Table 6 STIM-locked beta ERD onsets and first ERD peak latencies in the cuneus
Table 7 SAM-localized beta and alpha ERD compared between HLS and LLS datasets
Table A Final trial numbers in the EMG-locked and stimulus-locked data sets
Table B Number of trials stuttered during the MEG task
Table C Common phonemes in HLS and LLS categories
Table D Number of trials where EMG onset preceded the speech cue
Table E Single subject left-hemisphere Talairach coordinates of beta ERD
Table F Single subject right-hemisphere Talairach coordinates of beta ERD
Table G Single subject left-hemisphere Talairach coordinates of alpha ERD
Table H Single subject right-hemisphere Talairach coordinates of alpha ERD
Table I STIM-locked alpha ERD onsets and first ERD latencies in the auditory cortex


LIST OF FIGURES

Figure 1 Task schematic and time course of two successive trials
Figure 2 Location of bipolar EMG electrode pairs
Figure 3 Score difference between the two repeated ranking tasks
Figure 4 Group differences in state and trait anxiety scores
Figure 5 Distribution of response times relative to the speech cue, obtained from all trials. Top: AWS; bottom: FS. The percent of negative response times is indicated
Figure 6 Participant-averaged response times
Figure 7 Regression of response time on trial number
Figure 8 Correspondence of stuttered words with their anticipation ranking for subjects who stuttered (ID S27 to S06, Table 3)
Figure 9 Correlations of percent stuttered trials with SSI and STAI scores
Figure 10A STIM-locked localization of beta (15-25Hz) ERD on the group average image
Figure 10B EMG-locked localization of beta (15-25Hz) ERD relative to EMG onset
Figure 11A STIM-locked virtual sensors from the left cuneus (BA 18) for AWS and FS
Figure 11B EMG-locked virtual sensors from the left cuneus (BA 18) for AWS and FS
Figure 12 SAM localization of beta ERD on the brain surface
Figure 13A STIM-locked TFR plots of virtual sensors extracted from the left and right precentral gyrus (BA6)
Figure 13B STIM-locked time-course of beta suppression (15-25Hz) in the bilateral precentral gyrus (BA6)
Figure 14A EMG-locked TFR plots of virtual sensors extracted from the left and right precentral gyrus
Figure 14B EMG-locked time-course of beta suppression (15-25Hz) in the bilateral precentral gyrus (BA6)
Figure 15 Combined EMG- and STIM-locked group average time course of beta (15-25Hz) suppression in the left precentral gyrus
Figure 16 Beta ERD during speech preparation
Figure 17 STIM-locked beta ERD latencies in the bilateral precentral gyrus
Figure 18 Beta ERD in the precentral gyrus during speech execution
Figure 19 Correlations between stuttering severity and beta ERD in the right precentral gyrus
Figure 20 Beta band power in the left and right precentral gyrus compared across PREP (P) and EXEC (E) stages
Figure 21 Laterality index during the EXEC and PREP stages compared between mild and severe participants
Figure 22A STIM-locked localization of alpha (8-13Hz) ERD
Figure 22B EMG-locked localization of alpha (8-13Hz) ERD
Figure 23 SAM localization of alpha ERD on the brain surface
Figure 24A STIM-locked TFR plots of virtual sensors extracted from the left and right auditory cortex (BA41, 22, 13)
Figure 24B STIM-locked time-course of alpha suppression (8-13Hz) in the bilateral auditory cortex (BA13, 41, 22)
Figure 25A EMG-locked TFR plots of virtual sensors extracted from the left and right auditory cortex (BA41, 22, 13)
Figure 25B EMG-locked time-course of alpha suppression (8-13Hz) in the bilateral auditory cortex (BA13, 41, 22)
Figure 26 Close-up of STIM-locked alpha and beta ERD peaks across all observed regions
Figure 27A Right-hemisphere time-courses of alpha-beta suppression contrasted between the auditory, motor, and visual cortices
Figure 27B Left-hemisphere time-courses of alpha-beta suppression contrasted between the auditory, motor, and visual cortices
Figure 28 PREP stage ERD power compared between motor (beta) and auditory (alpha) cortices
Figure 29 SAM localization of 70-100Hz ERS
Figure 30A STIM-locked localization of beta (15-25Hz) and alpha (8-13Hz) ERD compared between HLS and LLS datasets
Figure 30B EMG-locked localization of beta (15-25Hz) and alpha (8-13Hz) ERD compared between HLS and LLS datasets
Figure 31A STIM-locked time-course of beta suppression (15-25Hz) in the bilateral precentral gyrus (BA6)
Figure 31B EMG-locked time-course of beta suppression (15-25Hz) in the bilateral precentral gyrus (BA6)
Figure 32A STIM-locked time-course of alpha suppression (8-13Hz) in the bilateral auditory cortex (BA41, 22, 13)
Figure 32B EMG-locked time-course of alpha suppression (8-13Hz) in the bilateral auditory cortex (BA41, 22, 13)
Figure A SAM localization compared between 15-25Hz and 20-30Hz bands
Figure B Time-courses of 15-25Hz and 20-30Hz modulation in the precentral gyrus (BA6) of AWS


LIST OF APPENDICES

Appendix A Additional tables
Appendix B Additional figures
Appendix C Abbreviations


1. INTRODUCTION

1.1 Developmental stuttering: an introduction

Developmental stuttering is a disorder of speech fluency that persists in about 1% of the adult population, affects approximately four males for every female, and is characterized by prolongations, blocks and repetitions of syllables during speech (Bloodstein & Ratner, 2007). It typically surfaces during the early stages of speech development, at about 18-60 months of age (Yairi & Ambrose, 1999), and is not associated with any physical trauma or injury; it thus has a different etiology from neurogenic stuttering, in which onset is sudden and occurs shortly after a traumatic event. From this point forward, "stuttering" refers to developmental stuttering unless otherwise specified. Although there is high variability in the way stuttering is expressed, a few fairly common characteristics are an increased frequency of stuttering on initial syllables rather than middle or final ones, on content rather than function words, and during spontaneous conversation rather than on single words. Stuttering is also often associated with secondary behaviours, the extent of which varies with severity. Examples are facial grimaces, contortions of the orofacial muscles, loss of eye contact when "stuck", tapping of the hand or foot, swaying, or, more generally, a visible physical effort that accompanies an attempt to overcome a stutter (Bloodstein & Ratner, 2007). Consequently, stuttering often becomes a source of anxiety and emotional stress and may affect an individual's social integration, employment opportunities, and general quality of life (Craig, Blumgart & Tran, 2009; Hughes, Gabel, Irani & Schlagheck, 2010; Klein & Hood, 2004). One of the interesting questions in this area of research concerns the difference between the children who recover from stuttering, estimated at about 70-80% by the age of 6 (Blomgren, 2013; Månsson, 2000; Yairi & Ambrose, 1999), and those who do not and become part of the current adult statistic.
While the causes of stuttering are still under investigation, there is evidence for genetic components (Suresh et al., 2006), environmental factors such as daily stress (Blood, Wertz, Blood, Bennett & Simpson, 1997), temperamental factors such as trait and social anxiety (Blumgart, Tran & Craig, 2010; Craig & Tran, 2014; Iverach & Rapee, 2013), and differences in brain structure and function underlying the condition (Belyk, Kraft & Brown, 2015; Brown, Ingham, Ingham, Laird & Fox, 2005). Stuttering is therefore best viewed as a multi-factorial condition. As we will see, the current research focus lies primarily on the sensory-motor aspects involved in speech production and in general motor control. More recent studies are also shifting their focus to the behavioural aspects surrounding stuttering and its effect on quality of life. While the role of sensory and motor factors in stuttering has been studied quite extensively, the applicability of research findings to treatment is generally not clear. A number of speech therapy programs focus on rebuilding and reshaping speech in order to improve control of speech preparation and execution. The idea is to break speech down to its basics and eventually return to a normal speech rate while maintaining as much conscious control as possible. Results are frequently short-lived, however, with a high percentage of relapse (Arya, 2013; Baxter et al., 2015). We believe that progress in characterizing the processes underlying the speech coordination network and motor learning mechanisms can expand our understanding of the nature of dysfluencies in this population. While there is an abundance of behavioural studies on sensory-motor control and coordination of speech and non-speech kinematics in stuttering adults, studies of such processes at the level of the brain, and with sufficient temporal resolution, are greatly lacking. The study described herein was designed to fill a current gap in the literature on the sensory-motor neural processes underlying the millisecond-scale time window of speech preparation and execution in adults who stutter (AWS). Below we discuss the main findings and perspectives that led to the current investigation.

1.2 Brain correlates of speech production in persons who stutter

Typical speech production has been studied using a variety of structural and functional neuroimaging techniques. Structural measures, such as diffusion tensor imaging (DTI) or structural MRI, can evaluate the integrity of the thick bundles of neural fibres connecting functional regions throughout the brain, otherwise known as white matter (Assaf & Pasternak, 2008; Horsfield & Jones, 2002), or quantify gray matter volume and cortical thickness (Ashburner & Friston, 2000; Hutton, Draganski, Ashburner & Weiskopf, 2009; Panizzon et al., 2009). Reduced white matter integrity may reflect reduced myelination of axonal fibres or a disorganization of fibre structure and direction; in either case, the result could be impaired transmission between regions within the speech-motor network that would typically be functionally synchronized (Fields, 2010). Functional measures, such as fMRI and PET imaging, measure the haemodynamic response in cortical and sub-cortical gray matter regions either during a task, which reflects task-induced recruitment of the necessary regions, or during rest, which reflects the default mode of functional connectivity within the brain (Mier & Mier, 2015; Uludağ et al., 2004). Functional measures can also refer to more direct imaging of the neural activity underlying the haemodynamic response, such as electro- and magneto-encephalography (EEG, MEG); we discuss these neural measures separately in the following sections. Together these measures describe a complex speech-motor network that coordinates and monitors speech production.

1.2.1 Review of brain regions involved in typical speech production

Brain imaging of typical speakers using PET, fMRI and DTI techniques, primarily during speaking and reading tasks, has revealed a wide network of regions engaged during speech production, reflecting the linguistic, sensory and motor components essential for speech. These regions include the bilateral auditory cortex (superior temporal gyrus, Heschl's gyrus), supplementary motor area, pre- and post-central gyri, rolandic operculum (specifically the left, or Broca's area), cerebellum and basal ganglia, as well as the right inferior frontal gyrus (Behroozmand et al., 2015; Bohland & Guenther, 2006; Ghosh, Tourville & Guenther, 2008). It is acknowledged that at least some of these regions show left-lateralized function during linguistic processing (Greve et al., 2011; Ocklenburg, Hugdahl & Westerhausen, n.d.; Price, 2010), as well as during speech production, such as the left inferior frontal gyrus (BA44, 45, 47), precentral gyrus (BA4, 6), anterior insula (BA13), parieto-temporal area (BA41, 22, 40, 7), and, although less consistently reported, the left auditory cortex (Bohland & Guenther, 2006; Kell, Morillon, Kouneiher & Giraud, 2011; Price, 2010; Ries et al., 2004; Simmonds, Leech, Collins, Redjep & Wise, 2014). Left-lateralization can also be observed in sensory regions during a 3-second preparatory period before speech onset (Kell et al., 2011). The wide spectrum of speech studies has allowed the scientific community to infer specific functional roles for several implicated regions: the left inferior frontal gyrus in the formation of the articulatory plan (Beal et al., 2015; Hickok, 2012; Papoutsi et al., 2009); the ventral premotor cortex at the interface between linguistic articulatory components and motor execution (Tremblay & Small, 2011; van Geemen, Herbet, Moritz-Gasser & Duffau, 2014); the superior temporal and auditory regions in sensory-auditory monitoring of speech (Christoffels, Formisano & Schiller, 2007; Paus, Perry, Zatorre, Worsley & Evans, 1996; Tourville, Reilly & Guenther, 2008); and the inferior parietal cortex in the integration of sensory and motor inputs (Hickok et al., 2000; Shum, Shiller, Baum & Gracco, 2011; Watkins, Smith, Davis & Howell, 2008). DTI studies demonstrate that these regions are highly inter-connected by bidirectional white matter projections, such as the arcuate fasciculus, a bundle of neural fibres running between frontal and motor cortices as well as posterior parietal and temporal areas (Dick & Tremblay, 2012; Fridriksson, Guo, Fillmore, Holland & Rorden, 2013; López-Barroso et al., 2013; Saporta, Kumar, Govindan, Sundaram & Chugani, 2011). The inferior frontal gyrus and ventral primary motor cortex also send back-projections (Bernal & Altman, 2010; Petrides & Pandya, 2009), and the parietal cortex projects to the posterior temporal gyrus (Klingberg et al., 2000). In persons who stutter, this complex speech production network appears to behave rather atypically. Differences are expressed in white matter connectivity, in the functional recruitment of cortical and sub-cortical regions, and consequently also in functional lateralization during speech tasks, demonstrating a wide network of impacted regions.

1.2.2 White matter tract thinning in adults and children who stutter

Adults who stutter (AWS) exhibit reduced integrity of the white matter tracts integrating motor and language regions within the speech network. One of the most commonly observed locations of white matter thinning is the left rolandic operculum (Cai et al., 2014; Chang, Synnestvedt, Ostuni & Ludlow, 2010; Price et al., 1996; Sommer et al., 2002; Watkins et al., 2008; Wise, Greene, Büchel & Scott, 1999), neighbouring the inferior frontal gyrus and the mouth motor cortex (Grabski et al., 2012; Takai, Brown & Liotti, 2010). The left rolandic operculum, which includes BA 44 and 45 that make up Broca's area, has an acknowledged role in establishing neural speech-motor programs, or speech representations, that are then fed to the precentral gyrus for execution and to posterior superior temporal and parietal cortices for sensory analysis (Beal et al., 2015; Hickok, 2012; Papoutsi et al., 2009). Reduced white matter integrity in the left rolandic operculum of AWS is therefore proposed to impair proper integration of sensory information from the oral muscles and the auditory regions with the frontal and precentral areas involved in coordinating speech commands. Structural (and functional) connectivity between the left inferior frontal gyrus (BA44) and premotor as well as sub-cortical basal ganglia structures is also reduced, both in adults (Chang et al., 2010) and, from recent observations, even in children who stutter (CWS) (Chang, Zhu, Choo & Angstadt, 2015; Chang & Zhu, 2013). The occurrence of such effects in young children suggests that abnormal integration within the speech-motor network may be a causal link in the developmental trajectory of stuttering, and not merely an outcome seen in adults after many years of stuttering experience.

Of recent interest is the reported aberrant developmental trajectory of Broca's area among CWS. Gray matter volume in this region is reduced starting from an early age (Beal et al., 2015; Beal, Gracco, Brettschneider, Kroll & De Nil, 2013; Chang, Erickson, Ambrose, Hasegawa-Johnson & Ludlow, 2008) and shows similar signs of slower maturation in the older teenage groups (Beal et al., 2015). In line with the critical role of Broca's area in speech acquisition (Guenther & Vladusich, 2012; Hickok, Houde & Rong, 2011; Hickok, 2012), Beal et al. (2015) therefore propose that stuttering is induced by impaired neural growth of this region during the early stages of speech acquisition, leaving insufficient neural resources for formulating speech representations in the early developmental years. Stuttering teens then fail to show the synaptic pruning typically observed in fluent speakers, which results in a region that is less optimized for initiating speech production plans.

Altered white matter connectivity is also found underlying the left ventral premotor cortex (Cai, Chan, Yan & Peng, 2014; Chang et al., 2010; Connally, Ward, Howell & Watkins, 2014; Watkins et al., 2008), another critical region in speech-motor execution (Tremblay & Small, 2011; van Geemen et al., 2014). Notably, the neural projections extending posteriorly from the left ventral premotor cortex are reported to be cut short in AWS: they do not extend as far back to the superior temporal and parietal regions as they do in controls, thereby disturbing critical communication between the left premotor and auditory cortices (Chang et al., 2010). Of similar consequence would be the reduced neural projections through the arcuate fasciculus in AWS (Cieslak, Ingham, Ingham & Grafton, 2015), a major pathway connecting frontal and motor cortices with posterior parietal and temporal regions that primarily engage in auditory and sensory processing (Dick & Tremblay, 2012; Fridriksson et al., 2013; López-Barroso et al., 2013; Saporta et al., 2011). Motor-auditory integration is an accepted critical component of speech control and monitoring, whereby sensory input from generated speech is fed back to motor regions through the superior temporal and parietal cortices for corrective measures (Agnew, McGettigan, Banks & Scott, 2013; Alho et al., 2014; Bowers, Saltuklaroglu, Harkrider & Cuellar, 2013; Guenther, Ghosh & Tourville, 2006; Guenther, Hampson & Johnson, 1998; Hickok, 2012; Möttönen, Dutton & Watkins, 2013; Terband, Maassen, Guenther & Brumberg, 2014). Reduced connectivity between the superior temporal regions and the rest of the speech network has been proposed to impair appropriate sensory processing during speech production in AWS, resulting in improper speech correction maneuvers (Daliri & Max, 2015). The aforementioned aberrant neural proliferation in Broca's area of CWS could also result in poorly formed auditory predictions of speech targets, which would contribute to an auditory-motor discord throughout adulthood. Such an interpretation would be in line with predictions made by the DIVA model of speech production (Guenther et al., 2006, 1998).

1.2.3 Functional haemodynamic differences and the role of the right hemisphere

In light of such structural abnormalities in the neural projections between motor and sensory regions, it is not surprising that they are accompanied by functional changes. Specifically relevant to the discussion above is the reduced activation of the left inferior frontal gyrus (BA47), the left mouth motor cortex, and the bilateral superior and middle temporal gyri (e.g., Heschl's gyrus), as reported in recent meta-analyses of fluent speech in AWS performing reading tasks (Belyk et al., 2015; Brown et al., 2005; Ingham, Grafton, Bothe & Ingham, 2012; Ludlow & Loucks, 2003). We emphasize that the large majority of the functional imaging of speech in AWS, on which these functional differences are based, was recorded during fluent rather than stuttered speech. This is an important distinction, as it implies an inherent abnormality in the recruitment of the speech motor network underlying "typical" speech in AWS, not one specific to dysfluencies.


In contrast to the predominantly left-hemispheric under-activation, the right hemisphere of AWS is consistently over-activated, most commonly in the right inferior and superior frontal gyri (BA44, 9), pre-supplementary motor area (pre-SMA), inferior parietal lobule (BA40), ventral insula (BA13), and the right precentral gyrus including the lip motor cortex (BA6, 4) (Belyk et al., 2015; Brown et al., 2005). Underlying some of these right-hemisphere regions is in fact increased white matter density, specifically between the right homologues of the inferior frontal and premotor regions (Chang et al., 2010). Authors have proposed that the right-hemisphere over-activation and the augmented neural density underlying some right-homologue speech regions may be a functional response that develops over years of stuttering to compensate for a loss of intrinsic automaticity of speech coordination in the left hemisphere (Chang et al., 2008; De Nil & Kroll, 2001; Preibisch et al., 2003). Over-activation has also been observed in the motor-control sub-regions of the basal ganglia and cerebellum (Giraud et al., 2008; Paulin, 1993; Thach, Keating, Thach, Goodkin & Keating, 1992; Watkins et al., 2008), also proposed to reflect an exaggerated conscious effort to monitor and control one's own speech (De Nil & Kroll, 2001). In line with this view is the observed return to a more typical left-lateralized speech motor network in AWS following intensive speech therapy programs: activation decreased in the right inferior frontal gyrus (BA46) and increased in the left inferior frontal gyrus (BA47), precentral gyrus (BA6), and bilateral superior temporal gyrus (BA41, 22) during reading (Kell et al., 2009; Neumann et al., 2003, 2005). A return to normal levels was also observed in cerebellar functional connectivity (Lu et al., 2012).

Generally, the ipsilateral motor haemodynamic response increases either when task complexity increases or when the motor control network is compromised, as in the elderly population (Graziadio, Nazarpour, Gretenkord, Jackson & Eyre, 2015; Mattay et al., 2002; Naccarato et al., 2006; Verstynen, Diedrichsen, Albert, Aparicio & Ivry, 2005; Wu & Hallett, 2005; Zimerman, Heise, Gerloff, Cohen & Hummel, 2014). The reduction of right-hemisphere engagement post-therapy could therefore suggest an improvement in the motor coordination process and a return to the speech-dominant left hemisphere, with a reduced need for right-hemisphere facilitation. The role of the right hemisphere remains under debate, however, as arguments have also been made that it is a source of interference with what is typically a left-dominant speech coordination process (Bohland & Guenther, 2006; Kell et al., 2011; Price, 2010; Ries et al., 2004; Simmonds et al., 2014). The nature of such possible compensatory or interference mechanisms could be clarified if we better characterize where within the speech coordination time-line these mechanisms are initiated, and whether there is an association with the occurrence of a dysfluency. For example, interference effects would presumably take place prior to speech production and would be positively correlated with increased stuttering occurrence.

1.2.4 Functional differences between stuttered and fluent utterances

As we have alluded to above, the main body of neuroimaging findings concerning speech in the stuttering population is based on fluent, rather than stuttered, utterances. There are three main reasons for the paucity of findings on stuttered speech in AWS: (a) it is difficult to induce stuttering in unnatural experimental speech scenarios devoid of the other factors that contribute to a spontaneous speaking environment; (b) speaking tasks have often been very simple utterances, such as single-word reading or even syllable production; and (c) stuttering instances are often accompanied by secondary behaviours that generate movement and signal artifacts during scanning. Evidently, studies conducted in fluent conditions demonstrate a different functionality underlying general speech processes in AWS. While this is of great interest, it limits our understanding of the causes underlying the stuttered moment itself. Examining stuttered utterances will add critical information on the way in which the abnormal speech functionality of AWS is perturbed to produce a dysfluency. In a meta-analysis of the eight studies to date in which some degree of stuttering was observed during reading tasks, Belyk et al. (2015) separated functional differences associated with fluent speech in AWS from those associated with stutter-prone speech in AWS. The meta-analysis showed that while fluent speech demonstrated the established left-hemisphere under-activation and right-hemisphere over-activation in homologous speech regions (e.g., inferior frontal gyrus, supplementary motor area, auditory cortices, precentral gyrus), stutter-prone speech was characterized by a more diverse pattern, with functional changes that were less lateralized and often affected both hemispheres (Belyk et al., 2015).
The most distinctive difference was a strong association of fluent speech with decreased activation in the left lip and larynx motor cortex but conversely an association of stutter-prone speech with an over-activation of the right homologue. The differential activation of the left and right mouth motor cortex implies a potential role of orofacial muscle mis-coordination in inducing a
stuttering moment. With regard to the role of the right hemisphere, its greater engagement during stutter-prone speech may imply either that it compensates for the interruption in an attempt to correct it, or that it actually causes the dysfluency by interfering with the speech coordination process. While these are intriguing findings, only three of the eight studies included in the meta-analysis actually separated single stuttered from fluent utterances (Jiang, Lu, Peng, Zhu & Howell, 2012; Ouden, Adams, Montgomery & den Ouden, 2014; Wymbs, Ingham, Ingham, Paolin & Grafton, 2013), and one of these three was a single-subject design. The remaining studies compared fluency-inducing (chorus or metronome speech) with dysfluency-inducing (solo reading, sentence generation) conditions (Braun et al., 1997; Toyomura, Fujii & Kuriki, 2011), or reported stuttering-associated activation from 4-second trials that included some percentage of stuttered utterances mixed with fluent utterances (Fox et al., 2000; Ingham, Fox, Ingham & Zamarripa, 2000; Ingham et al., 2012). To date this remains the extent of the data available on processes underlying stuttered speech in AWS. For this reason, improved study designs that are better able to a) generate stuttering in an experimental task environment, and b) separate trials of stuttered from fluent utterances, are of critical importance for understanding the mechanisms that contribute to a stuttering moment.

1.2.5 Limitations of fMRI and PET and implications for stuttering research

In addition to changing task designs for better identification of stuttered utterances, the previous section emphasizes the need to move beyond fMRI and PET imaging methods. These techniques provide a haemodynamic measure of brain function, quantifying the extent of blood flow and oxygenation to regions recruited during a task (Aguirre, Zarahn & Esposito, 1998; Dale et al., 2000). Their limitation in studying speech processes is the relatively long lag time of 3-4 seconds before a signal can be detected (Aguirre et al., 1998; Dale et al., 2000). Because speech motor and sensory coordination takes place in the millisecond range, this lag limits our ability to separate processes that occur prior to speech onset (i.e., "preparation") from those that occur following it (i.e., "execution"). Such a separation is not only critical for expanding our knowledge of how the speech network is assembled prior to speech onset in both typical speakers and persons who stutter, but would also characterize what occurs
prior to a speech dysfluency. Moreover, such a separation would help identify where within the speech coordination process specific regions become engaged, which would better inform our conclusions about their role in stuttering behaviour. For example, the hypothesized interference effect of the right hemisphere would gain more ground if right-homologue regions were observed to engage specifically in preparation for speech, and if this engagement was followed by dysfluent utterances. These considerations therefore necessitate the application of high temporal-resolution imaging techniques to study speech in AWS, which we introduce in the following sections.

1.3 Brief review of high-temporal resolution neural measures of brain function

Brain imaging techniques with high temporal resolution (~1 ms), such as magneto- and electro-encephalography (MEG, EEG), can help separate the preparation and execution components of the neural speech coordination network, provided the proper task design is implemented. Any cognitive event, be it sensory or motor, results in the recruitment of neural resources for the performance or processing of that event. The engagement of neural resources is expressed in one of two ways. The first is the generation of an event-related potential (ERP) response, which results from stimulus-triggered, localized post-synaptic firing. ERP responses are both time- and phase-locked to the trigger, and are generally referred to as "evoked" responses (Da Silva & Fernando, 2006; Pfurtscheller & Da Silva, 1999). The second expression of neural activity is event-induced change in the degree of synchrony of interactions among local neural networks. Synchronous behaviour manifests as oscillations, and events induce power changes in this oscillatory activity. Such modulation of oscillatory activity occurs in specific frequency bands: functional roles have been attributed to the theta (3-7 Hz), alpha (8-13 Hz), beta (low: 15-25 Hz, high: 25-30 Hz), and gamma (low: 30-50 Hz, high: 70-100 Hz) frequency ranges, listed from slowest to fastest oscillations. Changes in these frequency bands are expressed either as an event-related synchronization (ERS), reflecting a power increase in oscillatory activity, or an event-related desynchronization (ERD), reflecting a power decrease. In contrast to ERPs, triggered ERD and ERS are not phase-locked to the stimulus, and are generally referred to as "induced" responses. Although the names would suggest otherwise, both oscillatory power increases and
decreases can reflect task-induced engagement or disengagement of the regions in which they are observed; similarly, the same region can express both ERS and ERD depending on the task and the underlying complex neural network. Quantification of task-induced ERS and ERD typically involves comparing oscillatory power to a defined neutral baseline, as described later in this report. The correspondence of ERS and ERD measures with haemodynamic responses, as measured by PET and fMRI, is also highly region-specific, with recent studies showing bi-directional correlations (Hall, Robson, Morris & Brookes, 2014; Hermes et al., 2012; Ritter, Moosmann & Villringer, 2009). The functional significance of such oscillatory changes has been explored across a wide spectrum of healthy brain function, including language (Salmelin, 2007), motor performance and somatosensory processing (Cheyne, 2013), and attention (Klimesch, 2012). Abnormal oscillatory behaviour occurs in a wide range of pathologies, including aphasia, Parkinson's disease, and traumatic brain injury (Brown, 2003; Huang et al., 2009; Meltzer, Wagage, Ryder, Solomon & Braun, 2013). The current report will use changes in oscillatory modulation, neural rhythms, and induced oscillatory responses as synonymous terms. Similarly, an ERD in the alpha or beta frequency range may be referred to as an induced alpha or beta suppression, or as an increased or decreased task-induced suppression depending on the directionality of the ERD effect. ERP measures, on the other hand, will be referred to as evoked responses. Below we discuss the relevance of induced and evoked neural responses to studying the underlying coordination of speech preparation and execution sensory-motor processes.
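For concreteness, the baseline-relative quantification of induced responses referred to above is conventionally expressed as a percentage power change relative to a reference window (a sketch following the standard definition of Pfurtscheller & Da Silva, 1999; the exact formulation used in later chapters may differ):

```latex
% Conventional baseline-relative quantification of induced power change
% (Pfurtscheller & Da Silva, 1999); illustrative, not necessarily the
% specific formula used in this thesis.
%   A = band power in the activation (task) window
%   R = band power in the reference (baseline) window
\[
  \mathrm{ERD/ERS}\,(\%) \;=\; \frac{A - R}{R} \times 100
\]
% Negative values correspond to ERD (a power decrease relative to
% baseline); positive values correspond to ERS (a power increase).
```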

1.3.1 Sensory-motor recruitment prior to speech production

Speech production is characterized by a sequence of evoked neural responses leading up to the moment of actual speech onset. Given a visual stimulus and the cue to speak the presented word out loud, evoked activity initiates in the primary visual cortex, travels to the superior temporal regions for lexical-semantic breakdown, continues to Wernicke's area for phonological encoding, then to the left inferior frontal regions for segmentation and articulatory planning, and finally to the sensorimotor and pre-motor regions involved in recruiting the orofacial muscles for speech execution (Carota et al., 2010; Herdman, Pang, Ressel, Gaetz & Cheyne,
2007; Indefrey & Levelt, 2004; Indefrey, 2011; Salmelin, Schnitzler, Schmitz & Freund, 2000; Salmelin, 2007). In terms of induced oscillatory modulation, preparation for overt repetition of visually presented words has been characterized by alpha and beta ERD in the bilateral premotor and primary motor cortex, as well as the parietal, superior temporal and occipital lobes, beginning 350 ms after the speech preparation cue (Gehrig, Wibral, Arnold & Kell, 2012; Jenson, Thornton, Saltuklaroglu & Harkrider, 2014). Specifically, auditory regions demonstrated alpha suppression, while premotor regions demonstrated beta suppression. Gehrig, Wibral, Arnold, and Kell (2012) proposed that alpha and beta ERD reflect the setting up of information routes between these inter-connected speech regions. Preparation for speech production was also shown to induce neural coherence in the high beta range (25-31 Hz) between the bilateral primary motor and premotor cortices and the inferior and middle temporal gyri (Liljeström, Jan, Stevenson & Salmelin, 2014). Neural coherence reflects large-scale cortical interactions between different neural circuits across the speech-motor network and is proposed to be a mechanism for transferring information from region to region (Bressler & Kelso, 2001; Fries, 2005). Beta suppression in the motor cortex has long been correlated with the expectation of a planned upcoming motor act, preceding response onset by almost 1 s (Alegre et al., 2006; Bai, Mari, Vorbach & Hallett, 2005; Jurkiewicz, Gaetz, Bostan & Cheyne, 2006; Kilavik, Zaepffel, Brovelli, MacKay & Riehle, 2013; Kilner, Bott & Posada, 2005; Pfurtscheller & Da Silva, 1999; Tan et al., 2012; Tzagarakis, Ince, Leuthold & Pellizzer, 2010). These authors propose that pre-movement beta suppression reflects early preparatory processes and can be modulated by the planning load or by the certainty of the upcoming motor act (Cheyne, 2013; Kilavik et al., 2013).
Similar beta suppression of the motor cortex during speech preparation is proposed to reflect a feed-forwarding of the motor speech plan to the motor effectors and to sensory regions required for monitoring speech output, while alpha suppression of the auditory regions is proposed to reflect the priming of sensory feedback loops required for speech production (Bowers et al., 2013; Crawcour, Bowers, Harkrider & Saltuklaroglu, 2009; Cuellar, Bowers, Harkrider, Wilson & Saltuklaroglu, 2012; Engel & Fries, 2010; Gehrig et al., 2012; Kilavik et al., 2013; Klimesch, 2012; Liljeström et al., 2014). Such interpretations are rendered plausible
given the extensive knowledge we have of neural projections interconnecting motor and sensory speech-relevant regions.

1.3.2 The role of the auditory cortex

An important sensory region that appears to be modulated in anticipation of speech is the auditory cortex. The role of the auditory cortex in speech monitoring is primarily inferred from the reported suppression of evoked peaks in the auditory cortex during speech production, a mechanism proposed to reflect sensory monitoring of speech output by comparison to the predicted motor plan (Beal et al., 2010; Curio, Neuloh, Numminen & Jousmäki, 2000; Daliri & Max, 2015; Flinker et al., 2010; Guenther et al., 2006; Houde, Nagarajan, Sekihara & Merzenich, 2002; Ventura, Nagarajan & Houde, 2009). Suppression of the evoked auditory N100 peak is observed during speech preparation as well, occurring at least 200 ms prior to speech onset (Daliri & Max, 2015). Stimulating the motor and auditory cortex during speech preparation with a transcranial magnetic stimulation (TMS) pulse also demonstrates strong excitability of these regions in anticipation of upcoming speech, where excitability is quantified by the amplitude and latency of the auditory evoked N100 and by the motor evoked potential of the stimulated muscle (Mock, Foundas & Golob, 2011; Seyal, Mull, Bhullar, Ahmad & Gage, 1999). Measures of induced oscillatory activity have also demonstrated increased connectivity between the temporo-parietal gyri and the premotor cortex bilaterally during auditory overt repetition and picture naming tasks (Alho et al., 2014; Liljeström et al., 2014), and alpha suppression was likewise seen in the left auditory cortex during overt speech preparation (Gehrig et al., 2012). Early priming of these regions prior to speech onset might therefore reflect preparation for incoming sensory input (Crawcour et al., 2009; Cuellar et al., 2012; Daliri & Max, 2015; Max, Guenther, Gracco, Ghosh & Wallace, 2004). The auditory-motor interface is thus a critical component of the sensorimotor integration required for speech preparation and execution.
The literature therefore highlights the advantage of using MEG and EEG designs to identify separate stages of preparation and execution and to characterize the recruitment of the speech network within milliseconds of production by quantifying speech-induced oscillatory modulation.


1.4 Current knowledge on speech preparation and execution in AWS

Despite the key role that induced neural oscillations as well as evoked ERPs across sensory and motor cortices seem to play in coordinating the speech network for production, as discussed above, neural processes of speech in AWS have not been extensively examined. Only a handful of studies using MEG, EEG, and TMS have reported some notable differences in auditory perception and speech preparation processes in AWS, which we discuss below. Findings generally point towards three affected regions: the left inferior frontal (LIF) cortex, the bilateral auditory cortices, and the bilateral motor cortices. The application of high temporal resolution imaging to stuttering research is critical for its potential ability to reveal the level at which speech processing and preparation may be abnormal.

1.4.1 Evoked neural responses in the left inferior frontal (LIF) cortex

The LIF cortex is an acknowledged key region in the articulatory planning of speech that communicates with the left premotor cortex and posterior temporal regions during speech production (Clerget, Badets, Duqué & Olivier, 2011; Papoutsi et al., 2009; Poeppel, Emmorey, Hickok & Pylkkänen, 2012). Biermann-Ruben, Salmelin, and Schnitzler (2005) sought to characterize evoked responses in AWS during perception of acoustically presented words and sentences in preparation for cued overt repetition. An equivalent current dipole analysis showed that AWS responded with a stronger response in the LIF cortex specifically prior to the sentence repetition task. The findings were interpreted as increased articulatory-motor planning due to an anticipated increase in the articulatory load associated with sentence production. In a single-subject study designed to disambiguate fluent from stuttered words, Sowman, Crain, Harrison and Johnson (2012) found that blocking instances during a cued overt vowel production task were in fact associated with reduced evoked activity in the LIF cortex (BA47). Together these results suggest that an increased LIF response in the fluent speech of AWS may be a mechanism to avoid stuttering when articulatory load increases, while a reduction in LIF engagement could render speech more susceptible to a dysfluency due to insufficient preparatory engagement of this region. It was also reported that the haemodynamic response of the LIF
cortex (BA47) during fMRI-imaged reading significantly increased following an intensive stuttering therapy program (Kell et al., 2009). It is possible that fluency-shaping programs that focus on prosody, rhythm, and breathing techniques to control a stuttered onset induce a greater facilitative mechanism in this region that increases articulatory-motor planning. The proposed effect of increased articulatory planning needs to be investigated in scenarios where perceived articulatory load and risk of stuttering are manipulated, and where stuttering can be generated in sufficient numbers in order to test the consequences of such effects.

1.4.2 Evoked auditory suppression

Abnormalities during speech production and preparation are also found in the auditory cortex. The auditory cortex is known to generate two evoked ERP peaks at 50 ms (M50) and 100 ms (M100) following an auditory stimulus (Cardy, Ferrari, Flagg, Roberts & Roberts, 2004; Chait, Simon, Poeppel & Simon, 2004; Cheng, Baillet, Hsiao & Lin, 2015; Cardy, Flagg, Roberts & Roberts, 2008). These peaks are consistently suppressed during speech production, a phenomenon termed speech-induced auditory suppression, which is proposed to play a key role in auditory-motor integration and feedback monitoring during speech production (Curio et al., 2000; Flinker et al., 2010; Gunji, Hoshiyama & Kakigi, 2001; Heinks-Maldonado, Mathalon, Gray & Ford, 2005; Houde et al., 2002; Martikainen, Kaneko & Hari, 2005). While AWS did not differ from FS in the degree of speech-induced auditory suppression, M100 latencies were shorter in AWS, appearing earlier in the right auditory cortex, whereas FS latencies did not differ between hemispheres (Beal et al., 2010). Similar findings were observed in CWS, but affecting the M50 latency instead (Beal et al., 2011). Reduced modulation of the auditory cortex has also been observed prior to actual speech onset. In a delayed reading task, Daliri and Max (2015) recorded the evoked EEG N100 response of the auditory cortex to probe tones during speech preparation (in the interval between word presentation and the cue to speak the word out loud). AWS did not show a statistically significant modulation of the N100 amplitude, while FS demonstrated the expected induced suppression, reflecting preparation of the auditory cortex for the efferent copy of the motor speech plan. In light of such findings, Daliri and Max (2015) and others (Beal et al., 2010; Cai, Beal, Ghosh, Guenther & Perkell, 2014) proposed that preparation of appropriate sensory feedback networks, such as the auditory-motor
interface, is improperly primed prior to speech in AWS. A shortcoming of these studies is that their conclusions are derived primarily from auditory stimuli, while the modulation of the auditory cortex specifically in preparation for an overt speech task has not been examined in AWS. Investigating auditory engagement in a task requiring a motor speech plan is important because it can tap into the motor-sensory integration processes that are key to speech production and, as seen in typical speakers, can induce auditory preparatory modulations.

1.4.3 Atypical recruitment of the motor cortex

The second important component of the speech network is the motor cortex. Unlike the studies on the auditory cortex in AWS, which have generated a few hypotheses about the auditory component of stuttering, the application of high temporal-resolution brain imaging specifically to the motor component of speech in AWS has so far been minimal. A recent EEG study found abnormal oscillatory neural connectivity in AWS during rest (i.e., no task) (Joos, De Ridder, Boey & Vanneste, 2014). Neural connectivity quantifies the degree of integration and communication of information across large-scale neural circuits. The benefit of quantifying neural connectivity in the absence of any task is that any abnormal activity within the speech network at rest will presumably underlie any additional task-related group differences. Joos, De Ridder, Boey and Vanneste (2014) found decreased inter-hemispheric functional connectivity in the beta (13-30 Hz) and low gamma (31-44 Hz) ranges between the bilateral inferior frontal gyrus (BA 44 and 45) and the premotor and motor areas (BA 4 and 6). Notably, these regions also show reduced white matter tract integrity in AWS (Cai et al., 2014; Chang et al., 2010; Connally et al., 2014; Watkins et al., 2008). Considering that communication between these regions is critical for the formulation of the motor plan (Guenther et al., 2006) and that speech muscles are bilaterally innervated (Grabski et al., 2012), Joos et al. (2014) proposed that reduced connectivity between these regions at rest may cause a desynchronization of articulatory muscle groups once speech is initiated, and contribute to a stuttering moment.
Indeed, recruitment of the mouth motor cortices specifically showed abnormal timing in AWS during an overt production task: beta (15-25 Hz) suppression emerged in the right hemisphere before the left, while FS engaged the left hemisphere first (Salmelin et al., 2000). A few TMS studies also demonstrated that delivering a
TMS pulse to the mouth motor cortex prior to speech onset resulted in exaggerated right mouth motor cortex excitability in AWS relative to FS (Barwood, Murdoch, Goozee & Riek, 2013; Neef et al., 2011), while others reported reduced excitability of the left-hemisphere motor cortex with no mention of differences in the right hemisphere (Neef, Hoang, Neef, Paulus & Sommer, 2015). These findings indicate that motor coordination prior to speech may be atypical in AWS, yet only one of these studies involved a speech task resembling a typical speaking environment (i.e., without a TMS pulse) (Salmelin et al., 2000). For this reason, motor recruitment prior to the speech motor act in AWS remains poorly understood. However, a multitude of behavioural studies have generated strong and robust evidence for atypical motor coordination and overall movement kinematics in this population, which is reviewed in detail below.

1.5 Sensory-motor aspects in behavioural speech and non-speech studies

Behavioural studies on motor control in AWS add insight into the way in which brain functionality differences may affect performance on simple speech and non-speech motor tasks. Motor learning and movement accuracy in AWS have been studied in both speech tasks, including word and pseudo-word repetition as well as goal-oriented orofacial muscle movements (Archibald & De Nil, 1999; Bauerly & De Nil, 2011; Byrd, Vallely, Anderson & Sussman, 2012; Loucks, De Nil & Sasisekaran, 2007; Max & Gracco, 2005; Sasisekaran & Weisberg, 2014), and non-speech motor tasks, such as finger tapping, goal oriented reaching, and movement tracking (Daliri, Prokopenko, Flanagan & Max, 2014; Hulstijn, Summers, Van Lieshout & Peters, 1992; Max & Yudman, 2003; Smits-Bandstra & De Nil, 2007, 2013; Smits-Bandstra, De Nil & Rochon, 2006). Differences found in AWS motor performance can be summarized in three general categories: a) reduced movement accuracy and slower performance, b) slower and less efficient motor learning, and c) susceptibility to interference from dual tasks.

1.5.1 Movement accuracy and kinematics

During fluent syllable and word production tasks, AWS show greater variability, increased amplitude, and longer durations of specific movements in the oral and laryngeal system
(Max, Caruso & Gracco, 2003; Max & Gracco, 2005; McClean, Goldsmith & Cerf, 1984), with similar findings in non-word reading tasks (Sasisekaran & Weisberg, 2014). AWS also show larger displacements when asked to open the jaw or lips the smallest possible distance without actual speech production (Archibald & De Nil, 1999; De Nil & Abbs, 1991; Loucks et al., 2007; Loucks & De Nil, 2006, 2012). Notably, these differences occurred regardless of fluency: more often than not AWS were fluent in experimental speech tasks, yet motor differences were still observed, again suggesting that differences underlie even what is perceived as typical motor speech movement. In non-speech motor tasks, such as learning finger-tapping sequences, AWS show longer reaction times and longer overall execution times (Smits-Bandstra et al., 2006). Similarly, in a task requiring fast and accurate hand movement towards a visual target, AWS showed reduced accuracy and slower movements relative to fluent controls (Daliri et al., 2014). To some authors, larger movements and slower response initiation and execution times in AWS suggest poorer control and coordination of fine motor movements (Alm, Karlsson, Sundberg & Axelson, 2013; Alm, 2004; Neef et al., 2011), but to others these kinematic differences reflect an adopted motor control strategy rather than an impairment (Namasivayam & Van Lieshout, 2008, 2011). This is an important distinction, as it carries different implications for the type of effect we should observe prior to stuttered speech. In the former view, we would expect impaired control to induce more stuttering and to be enhanced in more severe cases. In the motor-strategy view, slower and larger movements should increase movement-related sensory feedback and enable better monitoring of the motor act in a motor network that may already be inefficient.
Consequently, one would expect fluent speech in AWS to succeed when this motor control strategy is efficient, and dysfluency to result when the strategy breaks down; the difference between a severe and a mild case may then depend on how well the strategy is adopted. There is therefore a need to investigate these theories specifically in speech production control, with emphasis on fluent and stuttering conditions.

Group differences in motor performance become more pronounced when external visual feedback sources are removed and when task complexity increases. When asked to make minimal jaw and hand movements requiring good accuracy and control, AWS were shown to perform better when aided by visual feedback (Archibald & De Nil, 1999; De Nil & Abbs, 1991;
Loucks & De Nil, 2006). This dependence on external feedback implies that motor control may be impaired due either to aberrant kinesthetic processing or to an inability to efficiently use kinesthetic feedback for movement guidance and adjustment (Kalveram, 2001; Max et al., 2004). Task complexity effects were implied in early studies that reported delayed speech onset for long words compared to short words in AWS (Peters, Hulstijn & Starkweather, 1989) and increased stuttering incidence with growing word length during reading (Dworzynski, Howell & Natke, 2003). More recently, AWS have shown reduced production accuracy as well as an increased number of attempts on non-word repetition tasks as syllable length increases, while control subjects are negligibly affected (Byrd et al., 2012; Sasisekaran & Weisberg, 2014; Smith, Sadagopan, Walsh & Weber-Fox, 2010). Task complexity effects are proposed to reflect increased demands on a speech-motor system that is maladapted to perform them accurately once a certain motor load threshold is surpassed (Kenneth & Conture, 1995; Kleinow & Smith, 2000).

1.5.2 Reduced motor learning ability

Slower motor learning in AWS is inferred from the relatively slow improvement they demonstrate with motor practice and their reduced retention of practice effects at follow-up sessions (Namasivayam & Van Lieshout, 2008, 2011; Sasisekaran & Weisberg, 2014; Smits-Bandstra & De Nil, 2007, 2013; Smits-Bandstra et al., 2006). In a nonsense-syllable sequence learning task, AWS still demonstrated slower reaction times when tested on the retention of previously learned, and therefore familiar, nonsense syllable sequences, both explicitly and implicitly (Smits-Bandstra & De Nil, 2013; Smits-Bandstra et al., 2006). Similar findings were reported in a non-word practice task, where AWS continued to show high variability in lip aperture trajectories across repeated trials, while FS demonstrated a practice-induced increase in movement consistency indicative of motor sequence learning (Sasisekaran & Weisberg, 2014). Although a few studies did show improvement with practice in AWS, in both finger-tapping sequence learning (Bauerly & De Nil, 2011) and repeated reading tasks (Balasubramanian, Cronin & Max, 2010; Max & Baldwin, 2010), improvement was still less marked than in controls, and skill maintenance was not tested past a 24-hour retention period.


1.5.3 Dual task interference

Dual task studies are commonly used to measure the level of automaticity achieved in practicing a first task, such that available attentional resources remain for a concurrent task (Logan, 1985). The added cognitive load of attending to two unrelated tasks simultaneously, typically a motor task paired with a non-motor cognitive task, seems to further impair performance in AWS (Bosshardt, 2002; Forster & Webster, 2001; Smits-Bandstra & De Nil, 2009). For example, stuttering increased during word repetition when participants were instructed to simultaneously memorize a different word from the one being repeated (Bosshardt, 2002). AWS also demonstrated poorer accuracy and longer response initiation when a colour recognition task was added to either a reading task or a finger-tapping sequence learning task, whereas the impact on controls was not as significant (Smits-Bandstra & De Nil, 2009). Normally, if a task is well practiced, greater attention can be directed to performing a second task simultaneously (Logan, 1985). The susceptibility to interference observed in AWS possibly reflects reduced efficiency in automatizing a new motor task as a result of slower motor learning (Bauerly & De Nil, 2011). Susceptibility to interference from simultaneous motor and cognitive processes may further impair the ability of AWS to transfer their therapy skills to real-world conversations, and could be a contributing factor to the 14-70% relapse reported within a year following therapy (Arya, 2013; Baxter et al., 2015). In fact, a focus on improving speech skills in the presence of such cognitive distracters (e.g., telling a story while performing a decision-making task) has recently been proposed in new treatment approaches (Metten et al., 2011).

The wide spectrum of non-speech motor tasks in which AWS demonstrate differences in motor performance and sensory monitoring, together with the differences in orofacial kinematics observed during fluent speech in AWS, indicates that speech dysfluency is only one expression of a more extensive underlying learning and coordination impairment in the motor network. Overall, the majority of studies point towards a possible deficit in coordinated motor skill performance, acquisition, and long-term retention. The missing component in the motor control literature is the way in which such motor (and sensory) coordination differs prior to the onset of speech itself, and the brain regions responsible for it. On one hand, there is a wide range of DTI, fMRI, and PET literature that demonstrates atypical coordination in the
speech network during speech production but lacks the temporal resolution to zoom in on processes prior to speech onset itself, which is required in order to separate what are presumably distinct stages of speech preparation and execution. On the other hand, there is a wide range of behavioural data with millisecond resolution in kinematic analyses of motor control, but these data capture processes downstream of the motor control centers in the brain. There is therefore a window of opportunity to fill a critical gap in our knowledge of motor speech coordination.

1.6 The role of stuttering anticipation

The previous section discussed a wide range of evidence for factors that increase stuttering frequency, such as the presence of co-occurring cognitive demands that require divided attention, and increased linguistic and motor-task complexity. Such common characteristics among AWS suggest that the speech-motor network may be more susceptible to interference from external factors that allow speech to be interrupted. One such factor that has received limited attention so far is the anticipation of stuttering. In the current report, the term anticipation of stuttering refers to an individual's conscious ability to predict whether or not they will make an error (stutter) on a given word, by drawing on past experiences and associations with specific sounds and words. Although stuttering can be a random occurrence, a few patterns are common across the majority of AWS, such as increased stuttering frequency on consonants rather than vowels, on words embedded in a sentence rather than single words, and a tendency to stutter at sentence onset (Bloodstein & Ratner, 2007). AWS were found to stutter more often on a clause placed at the beginning of a sentence than on the same clause at the end of a sentence or on its own (Jayaram, 1984), and similarly to stutter more on a clause appearing at the onset of a longer sentence (Tornick & Bloodstein, 1976), suggesting a potential key role of anticipation in inducing stuttering once it is ingrained in individual habits. In fact, AWS can predict on average 80-95% of their stutters (Bloodstein & Ratner, 2007). This prediction ability is typically lower and more variable in younger groups but improves significantly with age, presumably because individual patterns become more consistent and recur on the same words or sounds (Neelley & Timmons, 1967; Williams, Silverman & Kools, 1969). In this situation, it is possible that the


growing anticipation acts as a source of positive feedback, with consistency increasing as one anticipates stuttering on the same phoneme sequences, which further strengthens one’s anticipation on these same phonemes in the future. Several studies have tested AWS on their ability to anticipate their own stuttering behaviour. AWS were asked to a) read a passage silently and identify words that they were likely to stutter on (Brutten & Janssen, 1979), b) read out loud and signal prior to a word of high anticipated stuttering (Avari & Bloodstein, 1974), or c) indicate “yes”, “no”, or “maybe” prior to each individually presented word (Brocklehurst, Lickley & Corley, 2012). Recent studies have also tried to quantify the degree of stuttering anticipation by asking AWS to rate different consonants and vowels on a scale of 1 to 9 (Bowers, Saltuklaroglu & Kalinowski, 2012) or to rate a list of 50 words on a continuous scale of 0 (“no expectation to stutter”) to 100 (Arenas, 2012). Anticipation has been linked to physiological measures in at least one study, in which AWS had increased skin conductance prior to speaking words beginning with their self-rated feared sounds relative to their neutral sounds (Bowers et al., 2012). Skin conductance is commonly used to measure anticipatory autonomic arousal, or nervousness, in response to an upcoming aversive event (Bach, Friston & Dolan, 2010). Consequently, it may be hard to differentiate between the anticipatory thought and the downstream physiological effect that it induces in AWS, which can be a freezing response, reduced heart rate, or other expressions of fear (Alm, 2004). The findings discussed above have been summarized in a recently proposed theoretical model that outlines the consolidation of stuttering anticipation through a speech monitoring system that assesses whether an upcoming word is associated with previous stuttering incidences (Garcia-Barrera, 2015).
The association itself is strengthened over time as a conditioned response, often physiological, to a stuttering experience. Garcia-Barrera (2015) therefore argues for a very natural and potent role that anticipation plays in potentially triggering stuttering behaviour. In a study of thirty adults who stutter, Jackson, Yaruss, Quesal, Terranova and Whalen (2015) found that 87% of participants reported responding to their momentary stuttering anticipation with a variety of learned self-management strategies, such as increased focus and preparation for speech, muscle relaxation, and the application of therapy tools designed to increase control of speech onset, such as the reduction of speech rate. Coping strategies varied depending on the type of speech therapy participants were exposed to.


Stuttering anticipation is therefore becoming a growing area of interest as evidence for its role in stuttering behaviour continues to evolve. Functional evidence for the role of anticipation in stuttering comes in part from the observed over-activation of the anterior cingulate cortex (ACC) in AWS during silent and oral reading tasks (Braun et al., 1997; De Nil, Kroll & Houle, 2001; De Nil, Kroll, Lafaille & Houle, 2003; De Nil & Kroll, 2001; Stager, Jeffries & Braun, 2003). Such activation was absent in fluent control speakers. Increased activation of the ACC has been associated with preparation for perceived complex stimuli (Paus, Koski, Caramanos & Westbury, 1998; Paus et al., 1996) and more recently was found to respond proportionally to the perceived likelihood of error and the error consequence (Brown & Braver, 2005, 2007). The ACC is also widely reported to partake in both speech and non-speech error monitoring. An “error-related negativity” (ERN), a negative deflection in the event-related potential originating in the ACC, seems to be associated with both speech and non-speech errors (Holroyd & Coles, 2002; Ullsperger, 2006). Specifically in speech production tasks, increased ERN was observed in situations where pressure on correct speech performance was introduced (Ganushchak, Christoffels & Schiller, 2011), following a spoonerism (Möller, Jansma, Rodriguez-Fornells & Münte, 2007), following incorrect responses in a fast picture naming task (Riès, Janssen, Dufau, Alario & Burle, 2011), and following vocal slips of the tongue in a Stroop test (Masaki, Tanaka, Takasawa & Yamazaki, 2001). Interestingly, similar effects are observed during action-monitoring tasks in patients with obsessive-compulsive disorder (Gehring, Himle & Nisenson, 2000). In a recent study, AWS showed heightened ERN peaks in the ACC during rhyming tasks that were independent of whether a mistake was actually made during the task (Arnstein, Lakey, Compton & Kleinow, 2011).
The authors proposed that this finding reflects a hyper-vigilant monitoring system that could be prompted by anticipatory processes. Theoretical models have proposed that the anticipated chance of impending speech errors can render AWS hyper-vigilant in their preparation for speech (Brocklehurst et al., 2012; Postma, 2000). We have reviewed abundant evidence for a role of stuttering anticipation in the speech monitoring of AWS. However, the effect of stuttering anticipation on speech coordination processes is unknown. On the one hand, stuttering anticipation prompts AWS to apply strategies in order to control a dysfluency, which can resolve a potential stutter and pass unnoticed. On the


other hand, stuttering anticipation may result in unnecessary vigilance of speech and consequently a fluency block, resulting in either an avoidance of the word altogether or in a stutter. Such processes have not yet been investigated in this population. Given the strategic maneuvers that AWS report applying in moments of high stuttering anticipation, it is possible that increased speech-error monitoring induces oscillatory changes in the speech network components coordinating speech preparation. Increased stuttering anticipation may increase oscillatory desynchronization in the left inferior frontal gyrus and bilateral premotor cortex prior to speech onset as a reflection of increased articulatory-motor planning to overcome or to monitor an expected fluency disruption. We have already reviewed evidence for a possible association of the left inferior frontal gyrus with increased articulatory-motor load in AWS (Biermann-Ruben et al., 2005; Sowman et al., 2012). Considering that stuttering anticipation varies for different words and sounds between individuals, studying its effect on speech-motor processes may require a method of identifying word-specific anticipation separately for each participant. Such an approach could reveal whether group differences in speech preparation between AWS and typical speakers are modulated by the presence of such anticipation in AWS. The findings discussed herein therefore suggest that the perceived likelihood of stuttering, whether or not it results in a stuttering incident, could affect the mechanism of speech preparation. The current study strictly defines stuttering anticipation as word-specific prediction of whether or not a stutter will occur, as opposed to feelings of anxiety or fear that may be associated with such an anticipatory response. In this way the study narrows down anticipation to a very short-term conscious response that is specific to an upcoming speech utterance.
Although such momentary anticipation of stuttering may still result in conditioned anxiety and fear (Garcia-Barrera, 2015; Jackson et al., 2015), both of which are inarguably contributing factors to stuttering (Alm, 2004; Brocklehurst et al., 2012; Craig & Tran, 2014; Van Lieshout, Ben-David, Lipski & Namasivayam, 2014), we view anxiety as a secondary downstream response to the conscious anticipation of word-specific stuttering. We also recognize that the physiological expression of anxiety (e.g., freezing response, reduced heart rate) may happen without the conscious anticipatory thought, as a conditioned physiological response to specific sounds or words following consistent stuttering occurrence, but our focus is only on the conscious


declarative word-specific anticipation (which may or may not be followed by a downstream physiological response).

1.7 Summary of study objectives and hypotheses

The presented review highlighted the need to combine a high temporal resolution imaging modality with an appropriate task design that would allow investigation of the recruitment of the speech motor network in AWS prior to speech onset itself and better separation of what are presumed to be distinct stages of preparation and execution of speech. Furthermore, including a more complex overt-speaking task with an individually tailored set of stimuli in the study design may elicit greater task-induced stuttering, which to date has not been achieved in a sufficient number of studies. The primary objective was therefore to study the sensory and motor neural modulations during speech preparation and execution in AWS. The secondary objective was to investigate the effect, if any, of word-specific stuttering anticipation on these neural modulations within AWS, particularly in the speech preparation phase. We pursued these objectives using advanced algorithms for optimized source localization of whole-brain MEG data.

Regarding general speech production, the hypotheses are the following:

1. AWS will show increased bilateral motor beta suppression relative to fluent speakers (FS) during preparation and execution of speech. If confirmed, this will reflect increased task complexity due to reduced automaticity in the motor-speech skill.

2. Bilateral motor engagement will correlate positively with stuttering severity, and this correlation will be stronger for the right hemisphere. Such a correlation would reflect increased motor load and effort during speech and a reliance on the right hemisphere for a facilitative or compensatory response in more severe stuttering cases.

3. AWS will show reduced alpha or beta ERD in the bilateral auditory regions during preparation and execution of speech. This would suggest aberrant sensory monitoring and possibly abnormal auditory-motor integration in the speech network.


4. AWS will show reduced engagement of the left inferior frontal gyrus in alpha, beta, or gamma range. Such a finding would be interpreted as a consequence of the reduced white matter underlying this region and would reflect impaired recruitment necessary for speech preparation.

Further differences are proposed regarding the effects of high and low stuttering anticipation within AWS.

5. Stimuli associated with a high anticipation of stuttering will increase oscillatory power in the bilateral premotor cortex. This will be especially pronounced in the left inferior frontal gyrus and premotor cortex during the speech preparation stage. If confirmed, such a finding would indicate increased articulatory-motor planning in speech preparation and execution in order to overcome an expected fluency disruption.

6. High stuttering anticipation will exaggerate the group difference between AWS and controls. This would also suggest that group differences may be modulated by stuttering anticipation.

2. METHODS

2.1 Participant criteria

Twelve AWS and twelve FS were recruited. Consent was obtained under protocols approved by the Research Ethics Boards of the Hospital for Sick Children and the University of Toronto. All participants were right-handed, as determined by the Edinburgh Handedness Inventory (Oldfield, 1971), had no neurological conditions affecting motor ability, speech, vision, or hearing, and reported that English was their primary language of use. Control participants reported no history of speech or language therapy. Stuttering participants reported no history of speech or language therapy other than stuttering-specific therapy. To be included in the stuttering cohort, participants had to score at least “mild” on the Stuttering Severity Instrument (SSI-IV) (Riley & Bakker, 2009; Riley, 1972) and self-report having stuttered since early childhood. Control participants were selected to match the stuttering participants by age and sex.

2.2 Stimuli selection

Single words were generated from the English Lexicon Project (Balota et al., 2007), a database commonly used for stimulus generation (Brennan, Lignos, Embick & Roberts, 2014; Keuleers, Diependaele & Brysbaert, 2010; Plummer, Perea & Rayner, 2014; Simon, Lewis & Marantz, 2012; Yap & Balota, 2009). Controlled variables were syllable number (2), phoneme length (5-7), letter length (5-9), bigram frequency sum (18000-28000), word frequency (1-20 per million), and naming reaction time (set to 630 ms maximum). These filters generated 471 words. From this list, morphemic derivatives and proper nouns were removed, as well as general and stuttering-specific threat words as suggested by recent studies (Hennessey, Dourado & Beilby, 2014; Van Lieshout et al., 2014), resulting in a list of 414 semantically neutral words.
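For illustration, the combined filters can be expressed as a single predicate. The field names below are hypothetical; the English Lexicon Project export defines its own column names, so treat this purely as a sketch of the criteria listed above.

```python
def passes_filters(word):
    """Return True if a candidate word satisfies the stimulus filters.

    `word` is a dict with hypothetical keys; the English Lexicon Project
    export uses its own field names, so this is a sketch only.
    """
    return (
        word["n_syllables"] == 2
        and 5 <= word["n_phonemes"] <= 7
        and 5 <= word["n_letters"] <= 9
        and 18000 <= word["bigram_freq_sum"] <= 28000
        and 1 <= word["freq_per_million"] <= 20
        and word["naming_rt_ms"] <= 630
    )
```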

2.3 Visit 1: word ranking task

In the first study visit, AWS were asked to rank each word on how likely they would be to stutter on it in a spontaneous conversation setting. Each word from the randomized 414-word list was presented one at a time at the center of the computer screen. Participants were asked to press a number 1 through 6 to indicate their ranking (1 = “Extremely unlikely”, 2 = “Very unlikely”, 3 = “Somewhat unlikely”, 4 = “Somewhat likely”, 5 = “Very likely”, 6 = “Extremely likely”), after which the next word was presented. Similar methods of identifying stuttering-prone words were applied in previous studies (Bowers et al., 2012; Ouden et al., 2014). As some participants noted that they had days when they did not stutter at all and other days when the stuttering was very disruptive (examples of “bad days” given by participants were lack of sleep, stress, a meeting at work or with a client, or a random occasion), they were asked to base the ranking on what the likely outcome would be if the word were to occur on one of their bad days. An option was given to take a 2-minute break at the halfway point. In order to provide the control group with the same familiarization with the word list, FS performed the same ranking task but were asked instead to rate each word on the likelihood that they had used it during the previous week. The same ranking scale was used. As this was purely a familiarization task, the control rankings were not used in the study.


Following the completion of the ranking task, we administered the Vocabulary and Digit Span subtests from the Wechsler Adult Intelligence Scale-III (WAIS-III; Wechsler, 1997) to both participant groups. These subtests are commonly used in lieu of the full-length WAIS-III (Ardila, Ramos & Barrocas, 2011; Bennett, Madden, Vaidya, Howard & Howard, 2011; Bosch et al., 2012). They were used to confirm that there were no group differences in vocabulary or working memory (Table 1). Had such group differences been present, they could have been argued to affect linguistic and motor processes in the speech production task described below.

Table 1

Participant summary of measured variables

                 AWS           FS            p-value
Participants     12            12            -
Sex              10 M, 2 F     10 M, 2 F     -
Age              32 (6)        30 (8)        0.46
Vocabulary       12.4 (2.4)    13.0 (2.1)    0.5
Digit Span       11.3 (2.9)    12.4 (2.1)    0.26
STAI - State     32.4 (11.9)   28.9 (8.9)    0.42
STAI - Trait     40.1 (8.6)    32.7 (7.5)    0.02
STAI Total       73 (19)       62 (16)       0.13

Note. Values show mean and SD. Significant group differences are indicated by the p-value.

For the stuttering participants only, the WAIS was followed by the SSI, which included a 385-syllable reading task and a 10-minute conversation sample. Both groups were then asked to repeat the word ranking task in a newly randomized order. Participants were told, “You may recognize some words, but please do not try to recall your previous ranking for this word.” No participants showed a clear recognition of the repeated word list. At the end of the session we administered the State-Trait Anxiety Inventory (STAI; Spielberger & Gorsuch, 1983), which


evaluates both the “state” anxiety, pertaining to the participant’s state at the end of the session, and general day-to-day (“trait”) anxiety levels. Given the acknowledged association of anxiety with stuttering (Craig & Tran, 2014) and with stuttering anticipation (Bowers et al., 2012; Garcia-Barrera, 2015), this measure can serve as a covariate if group and word category differences are observed.

2.4 Stuttering anticipation ranking-task scoring

Each word’s score was averaged across the two trials. A word was included in the final list if it satisfied two conditions: 1) the average score across the two trials was either between 4.5 and 6 or between 1 and 2 (corresponding to the two ends of the scale), and 2) the difference between the two trial scores was less than 2. Scores of 4 to 6 corresponded to high-likelihood-of-stuttering (HLS) words, while scores of 1 to 2.5 (and 3 if necessary) corresponded to low-likelihood-of-stuttering (LLS) words. Because a score of 4 was on the lower end (“somewhat likely”), a word with this score was only added to the final word list if it obtained it on both trials. Only those who had at least 80 words in both categories were invited back for the imaging component of the study. It was reasoned that these subjects would have stronger anticipatory responses, as they were better able to separate words of low and high likelihood of stuttering. Thus, 25 stuttering adults performed the ranking task until 12 such participants could be identified. Table 2 lists the number of words that fell in each category for the final 12 selected participants who stutter. An exception was made for two participants who achieved only 77 and 58 HLS words (S07, S10), and for a few participants who had very few LLS words (S21, S18). The latter were still included due to their high number of HLS ratings, which were also on the extreme end of the scale. If a subject obtained fewer than 110 words in either category, additional words were added (in descending score order for HLS and ascending score order for LLS words) to bring each category total to this number. If more than 110 words fell into either category (S18 to S27), preference was given to the highest- (for HLS) or lowest- (for LLS) scored words.
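The core inclusion rules can be summarized in a short sketch. Note that the borderline handling described above (e.g., scores of 3 "if necessary", or the later addition of extra words to reach 110 per category) involved further judgment calls that are not captured here.

```python
def categorize_word(score1, score2):
    """Assign a word to the HLS or LLS category from its two trial scores.

    Simplified sketch of the selection rules: consistent scores at the
    high end of the 1-6 scale yield HLS, at the low end LLS; inconsistent
    or mid-range words are excluded (None). A score of 4 counts as HLS
    only if it was obtained on both trials.
    """
    if abs(score1 - score2) >= 2:
        return None  # inconsistent across the two trials: excluded
    avg = (score1 + score2) / 2
    if avg >= 4.5 or (score1 == 4 and score2 == 4):
        return "HLS"
    if avg <= 2:
        return "LLS"
    return None  # mid-range score: excluded from the initial list
```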


Table 2

Number of HLS and LLS words collected from each subject.

AWS ID    HLS    LLS
S02        83    143
S06       108    155
S07        77    144
S09       116     64
S10        58     52
S15        88    274
S18       343     25
S21       190     16
S23       130    202
S24       180    110
S25       246    112
S27       172    159

2.5 Visit 2: MEG task procedure

A minimum of two weeks (range of 13-90 days, and 11 days for one participant) following the ranking-task session, the selected participants were invited back for the experimental task. Each participant’s specific list of 110 HLS and 110 LLS words were visually presented for cued overt speaking while their brain activity was recorded. The stimulus was either an HLS or LLS word (in random order) presented at the onset of a carrier phrase “[X] is a word”. Each fluent speaker received the same words as their matched (by sex and age) adult who stutters.

2.5.1 Brief justification for the implemented task design

Studying neural speech processes is complicated by the presence of speech-induced movement and muscle artifacts. For this reason many previous studies have focused on covert response tasks (Ganushchak et al., 2011; Okada, Smith, Humphries & Ca, 2003; Perani et al., 2003; Sabbah et al., 2003; Wildgruber, Ackermann & Grodd, 2001). Although covert and overt speech involve similar networks, motor and auditory regions, and generally regions that are more

involved in feedback processing, are significantly more activated during overt tasks (Huang, Carr & Cao, 2001; Shuster & Lemieux, 2005), suggesting a somewhat different underlying process. Another problem is that separating processes of speech preparation and execution is nearly impossible during continuous speech, as they are most likely happening concurrently. For this reason a cue-target paradigm can be used, in which a cue is first presented with information about an upcoming target such that preparation for the target can be initiated (Tanji & Evarts, 1976). Delaying the target stimulus would presumably allow for greater separation of preparatory processes from the time of speech execution and is thus a reasonable approach to study speech (Mock et al., 2011; Salmelin et al., 2000). A similar approach is therefore adopted in the current study.

2.5.2 Task procedure

Our task sequence consisted of 220 trials and was performed while participants were seated. Total trial number was determined based on the inherently low signal in MEG, which requires at least 100 trials in order to maximize the signal-to-noise ratio, as suggested by the MEG literature (Gross et al., 2013). Trial number was maximized given the amount of time available for the MEG task and the inter-trial intervals implemented in the task design. As shown in Figure 1, each trial started with a central fixation cross presented for a duration randomly alternating between 1 and 2 seconds. The stimulus (“[X] is a word”) then appeared for 500 ms, followed by a 500 ms blank screen. A cue then appeared in the form of “< )))” and remained for 3 seconds. Subjects were to speak the stimulus sentence following the speech cue. The cue was followed by the fixation cross of the next trial. Every 50 trials (the quarter mark) a 5-second message was displayed to notify the subject of their progress (e.g., “1/4 Done!”). As part of the preparation before the scan, participants were acquainted with the task using a sequence of eight test words. Participants were told that in the event of stuttering they were to complete the entire utterance even if it ran into the next trial. All presented text was white, height 1.7 cm, centered on a gray projection screen 75 cm away. No participant required glasses during the task. Stimulus presentation was performed using PsychoPy software (http://www.psychopy.org/).
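The trial timing above can be expressed as a simple schedule generator. This is a sketch only; the actual task was implemented in PsychoPy, and the function name and structure here are illustrative.

```python
import random

def make_trial_schedule(n_trials=220, seed=0):
    """Build per-trial event durations (in seconds) for the cued task:
    fixation of 1 or 2 s, stimulus sentence 0.5 s, blank 0.5 s, and
    speech cue 3 s, as described in the task procedure."""
    rng = random.Random(seed)
    return [
        {"fixation": rng.choice([1.0, 2.0]),
         "stimulus": 0.5,
         "blank": 0.5,
         "cue": 3.0}
        for _ in range(n_trials)
    ]
```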


Figure 1. Task schematic and time course of two successive trials. Task sequence (top) includes a fixation (+) of alternating length (1 or 2s), stimulus sentence (S, 0.5s), a blank (B, 0.5s), and the cue to speak (<))), 3s). Lip EMG signal (middle) and voice signal (bottom) are taken from one subject. The determined speech onset with respect to the lip EMG signal is shown by the dotted line. Both a stuttered utterance (left) and a succeeding fluent utterance (right) are shown for the chosen subject.

2.5 Data acquisition

Neuromagnetic activity was recorded continuously using a 151-channel whole-head CTF Omega system (Hospital for Sick Children, Toronto). All data signals were collected at 12,000 samples/second and band-passed at 0-4000 Hz to preserve the acoustic integrity of the speech signal. MEG data were filtered off-line to 0.4-250 Hz and down-sampled to 1000 samples/second. Verbal responses were recorded using a Rode NTG-2 directional condenser microphone placed about 1.8 meters from the subject and recorded as an auxiliary channel. Stimulus onset was


indicated by a luminance sensor on the back of the projection screen to correct for the signal delay between the presentation computer and the display. Participants were asked to avoid blinking between the stimulus and the start of speech, and to minimize head movement during the overt response. Participant head position was measured continuously during the MEG recordings using fiducial coils placed at the right and left pre-auricular and nasion points for later co-registration with the anatomical MRI. A T1-weighted structural MRI (MPRAGE gradient echo sequence; flip angle = 9°, TE/TR = 2.96 ms/2300 ms, 192 sagittal slices, 1 mm thick, 256×256 matrix, 25.6 cm FOV) was acquired for each subject on a 3T Siemens Trio MRI scanner at the Hospital for Sick Children immediately following the MEG session.

2.6 Reliability of severity and stuttering measures

Speech was monitored offline for stuttering incidences in the AWS group and assessed by two independent listeners, one a registered speech-language pathologist. The two listeners rated each utterance as “yes”, “no”, or “maybe”. Trials were considered stuttered if classified by both raters as a definite stutter (yes/yes) or if classified by one rater as a definite stutter and by the second as a possible stutter (yes/maybe). A discrepancy between the two raters was only counted if a trial was considered a definite stutter by one and definitely fluent by the other (yes/no). The speech-language pathologist quantified stuttering in the MEG task videos of 3 participants who stutter, with agreement of 90%, 93%, and 96%. Ambiguous trials (maybe/maybe) did not exceed 2%, 7%, and 6% across the two raters. The speech-language pathologist also assessed 4 participants who stutter on their stuttering severity and arrived at a total SSI score without having seen the assessment of the first rater. Final SSI inter-rater differences were 0, 1, 3, and 5 score points, but remained within the same severity category for all 4 participants. Any potential sources of discrepancy were discussed.
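As a rough illustration, simple percent agreement between the two raters could be computed as follows. This sketch treats every label mismatch the same way; the study's specific rules for combining yes/maybe trials are not reproduced here.

```python
def percent_agreement(ratings_a, ratings_b):
    """Percentage of trials on which two raters gave the same label.

    Each argument is a list of 'yes'/'no'/'maybe' labels, one per trial.
    """
    assert len(ratings_a) == len(ratings_b)
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100.0 * matches / len(ratings_a)
```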


2.7 EMG recording

Surface EMG was measured from the orbicularis oris muscle bilaterally using two pairs of bipolar EMG electrodes (AMBU adhesive electrodes, oval design, 22×30 mm) above and below the lip (Goncharova, McFarland, Vaughan & Wolpaw, 2003; Liljeström et al., 2014; Saarinen, Laaksonen, Parviainen & Salmelin, 2006; Salmelin et al., 2000). Each pair of electrodes (upper and lower lip) was processed via differential channels, and the final subtracted signal for the right and left mouth underwent the same processing as the MEG channels (12,000 samples/second, band-passed at 0-4000 Hz, filtered to 0.4-250 Hz, down-sampled to 1000 Hz). Speech onsets and offsets were identified and marked offline on the rectified 0.4-250 Hz band-passed signal (Salmelin & Sams, 2002; Salmelin et al., 2000). An automated script was used to identify onsets at 1 standard deviation above baseline. Only one channel was selected for onset marking, based on visual inspection for the cleaner signal. For half of the participants, three more pairs of electrodes were placed unilaterally on the left temporalis, frontalis, and masseter muscles (Boxtel, 1983; Goncharova et al., 2003). These were used to observe any patterns of speech-related muscle artifacts. Figure 2 displays all electrode locations.
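The automated onset marking can be sketched as follows. This is a simplified illustration, assuming the signal has already been band-pass filtered; the thesis's actual script and its exact threshold logic may differ.

```python
from statistics import mean, stdev

def detect_emg_onset(emg, n_baseline, fs=1000):
    """Mark speech onset where the rectified EMG first exceeds the
    baseline mean by one standard deviation.

    `emg` is a list of samples (already band-pass filtered), the first
    `n_baseline` of which form the quiet baseline; `fs` is the sampling
    rate in Hz. Returns the onset time in seconds, or None.
    """
    rectified = [abs(x) for x in emg]
    base = rectified[:n_baseline]
    threshold = mean(base) + stdev(base)
    for i in range(n_baseline, len(rectified)):
        if rectified[i] > threshold:
            return i / fs
    return None
```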


Figure 2. Location of bipolar EMG electrode pairs. Adapted from Goncharova et al. (2003).

2.8 Data processing

Continuous MEG data were segmented off-line into 8-second epochs. One epoched dataset was time-locked to lip-movement onset determined from the EMG signal, and the other was time-locked to the presentation of the stimulus (-4 to 4 seconds). These datasets included all stimuli. A second set of epoched data was generated by separating the stimuli into HLS and LLS trials, deriving a stimulus-locked and an EMG-locked dataset for each stimulus category. While all participants had 220 raw trials, one control participant (F07) had only 145 trials recorded due to technical problems during the scan. Trials in each dataset were visually inspected and removed if any of the following was observed: a) the fixation period was contaminated with voicing or EMG signal from the previous trial, b) EMG artifacts occurred between stimulus presentation and speech onset or during fixation, or c) MEG activity exceeded 5 picotesla (typically corresponding to eye blinks or muscle artifact). This inspection was performed separately for the stimulus-locked and EMG-locked epoched datasets. Each dataset had two additional

exclusion criteria. The stimulus-locked dataset additionally excluded trials in which premature voicing occurred prior to the speech cue and contaminated the early processing time window. These trials were maintained in the EMG-locked dataset because they still contained a viable motor response from which execution processes could be quantified. The EMG-locked dataset excluded trials in which EMG onset could not be clearly identified; these trials were maintained in the stimulus-locked dataset because they were otherwise clean and the EMG activity did not contaminate the early processing time windows. No trials were removed due to excessive head movement, which was below 0.6 cm for the majority of trials. Across all subjects, only 30 trials showed head movement between 0.6 and 0.8 cm, and for one subject 80 of 220 trials were between 0.7 and 1 cm. The final trial numbers for both datasets are listed in Table A (min = 134, max = 217, Appendix A). The selection criteria above resulted in some subject-specific differences between the two datasets. However, within-group one-sample and two-sample t-tests confirmed there was no difference in trial number between the datasets. There were also no group differences in trial number. Of a total of 2640 trials in the stuttering group, 314 (12%) stuttered trials were identified (Table B, Appendix A). Stuttered trials were not specifically removed in the current analysis in order to maintain the same signal-to-noise ratio between the two groups by having an equivalent number of trials. The HLS and LLS datasets in the stuttering group began with 110 trials per category but, following the trial removal procedure above, contained 66 to 102 trials across the twelve subjects in the HLS dataset and 73 to 99 trials in the LLS dataset. No difference was found between final trial numbers in the two conditions.
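The amplitude criterion (c above) lends itself to a simple automated flag. This is a minimal sketch, assuming epochs are stored as nested lists of samples in tesla; the actual inspection was performed visually alongside this threshold.

```python
def reject_by_amplitude(epochs, threshold=5e-12):
    """Return a keep (True) / reject (False) flag per trial based on the
    5 pT peak-amplitude criterion described above.

    `epochs` is assumed to be nested as [trial][channel][sample], in
    tesla; trials whose peak absolute amplitude exceeds 5 pT (typically
    eye blinks or muscle artifact) are flagged for removal.
    """
    keep = []
    for trial in epochs:
        peak = max(abs(s) for ch in trial for s in ch)
        keep.append(peak <= threshold)
    return keep
```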

2.9 SAM beamformer analysis

Source analysis of frequency specific power changes was conducted using the Synthetic Aperture Magnetometry (SAM) algorithm (Robinson, 1999) implemented in C++ and Matlab (Mathworks, Natick, MA) using the BrainWave toolbox (http://cheynelab.utoronto.ca/brainwave). Pseudo-T images of power changes over time were generated using a sliding active window of 200 ms duration defined at 50 ms intervals starting


from stimulus presentation (0 ms) to 1400 ms for the stimulus-locked data, and from 1200 ms prior to EMG onset to 200 ms post-onset for the EMG-locked dataset. The same fixed 200 ms baseline window during the fixation period (-500 to -300 ms preceding stimulus onset) was used for both datasets. This baseline window was visually inspected, once averaged across all subjects, to confirm the absence of time-locked eye blinks. Pseudo-T images were computed for alpha (8-13 Hz), beta (15-25 Hz and 25-30 Hz), low gamma (31-50 Hz), and high gamma (70-100 Hz) frequency bands over the entire brain at 4 mm resolution. Resulting pseudo-T images were averaged across subjects and spatially normalized to the MNI template using SPM8 (http://www.fil.ion.ucl.ac.uk/spm/software/spm8/). Group averages were then scanned for maximum peaks of power changes and corrected for multiple comparisons using a non-parametric permutation approach (Nichols & Holmes, 2002) adapted for beamformer source imaging (Singh, Barnes & Hillebrand, 2003). MNI coordinates were also converted to Talairach coordinates using the mni2tal conversion for reference to the Talairach brain atlas anatomical locations (www.talairach.org).
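The sliding-window scheme above (200 ms active windows advanced in 50 ms steps) can be sketched as a small generator of window onset/offset pairs:

```python
def sliding_windows(start_ms, end_ms, width_ms=200, step_ms=50):
    """Generate (onset, offset) pairs in ms for the sliding SAM active
    windows: fixed-width windows advanced in fixed steps across the
    analysis interval, as described above."""
    windows = []
    t = start_ms
    while t + width_ms <= end_ms:
        windows.append((t, t + width_ms))
        t += step_ms
    return windows
```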

2.10 Statistical analyses

Time windows and coordinates at which maximal source power changes were localized were then used to extract single-trial source activity over time (virtual sensors), using the peak location in each subject’s non-normalized SAM image within a 10 mm search radius from the peak location in the group image (Cheyne, Bakhtazad & Gaetz, 2006). Selected peak locations are described in more detail in the text. Time series were band-pass filtered between 1 and 100 Hz and used to generate time-frequency representation (TFR) plots using a Morlet wavelet-based decomposition (wavelet cycles = 5, frequency step = 1 Hz). TFR plots were inspected to assess changes across frequency bands. Power was averaged across each frequency band of interest to generate an envelope time-course of event-related desynchronization (ERD) or event-related synchronization (ERS). Group and hemisphere comparisons on the time courses were made in two ways. First, for descriptive purposes, time-courses were averaged across successive 50 ms windows and submitted to parametric t-tests. Second, power time-courses were integrated across subject-specific time windows. Two stages of speech were defined: a) speech preparation (PREP), and b)


speech execution (EXEC), which will be explained in the following sections. The latencies and integrated power were analyzed using ANOVAs and standard t-tests. A laterality index (LI) was computed from the integrated power according to the equation LI = (Left - Right)/|Left + Right|. When integrating across ERD, left-laterality corresponds to LI < 0 and right-laterality to LI > 0.
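The Morlet decomposition and laterality index described above can be sketched as follows. This is a minimal illustrative implementation (the thesis used in-house lab software, so windowing and normalization details here are assumptions):

```python
import numpy as np

def morlet_tfr(signal, sfreq, freqs, n_cycles=5):
    """Time-frequency power via Morlet wavelet convolution
    (5 wavelet cycles, 1 Hz frequency steps, as in the text)."""
    n = len(signal)
    tfr = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)           # wavelet width (s)
        t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))   # unit energy
        conv = np.convolve(signal, wavelet, mode='same')
        tfr[i] = np.abs(conv)**2                         # power over time
    return tfr

def laterality_index(left_power, right_power):
    """LI = (Left - Right)/|Left + Right|. With ERD (negative integrated
    power), LI < 0 indicates left-lateralization, as defined in the text."""
    return (left_power - right_power) / abs(left_power + right_power)
```

Averaging the TFR rows across, e.g., 15-25 Hz yields the band envelope from which ERD/ERS time-courses and the LI are derived.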

3. RESULTS

3.A. Behavioural results

3.A.1 Consistency of stuttering anticipation rankings

Table 2 (see Methods) displays the number of words collected from each participant who stutters that matched the scoring criteria. Ten of the twelve participants had over 80 words in the HLS category. Two participants had under 25 words in the LLS list, but still ranked over 110 words as very likely to be stuttered on and were kept for this reason. The common phonemes recorded in each category are listed in Table C (Appendix A). The ranking results were generally consistent across subjects: vowels typically received low stuttering anticipation rankings, while plosives received high ones. This is consistent with the general characteristics of stuttering, in which plosives are stuttered on significantly more often than vowels. The cross-trial score difference was collected across all words (414) and all twelve stuttering participants (4968 total trials) and displayed in a histogram to evaluate consistency (see Figure 3). Almost 90% of all trials were ranked either the same or with a difference of 1.
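The cross-trial consistency computation can be illustrated with simulated rankings (the 5-point scale below is an assumption for illustration; the real values came from the two ranking passes):

```python
import numpy as np

# Each of the 414 words is ranked twice; the absolute score difference
# between the two passes is tallied per word. Data here are simulated.
rng = np.random.default_rng(0)
first_pass = rng.integers(1, 6, size=414)    # assumed 5-point anticipation scale
second_pass = np.clip(first_pass + rng.integers(-1, 2, size=414), 1, 5)

diff = np.abs(first_pass - second_pass)      # cross-trial score difference
counts = np.bincount(diff, minlength=5)      # histogram of differences
consistent = counts[0] + counts[1]           # ranked the same or off by 1
print(f"{100 * consistent / len(diff):.1f}% of words ranked same or within 1")
```

Pooling these per-word differences over the twelve AWS gives the 4968-trial histogram of Figure 3.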


Figure 3. Score difference between the two repeated ranking tasks. Includes all words for all 12 AWS participants (4968 total trials).

3.A.2 Stuttering and anxiety

To confirm that the word ranking task had no significant effect on the anxiety of AWS, and that they were not under stress while performing the task, we tested for group differences in STAI scores. A two-way ANOVA was run separately on the raw scores for state and trait anxiety, as well as on the combined state and trait score. A main effect of group was significant only for trait anxiety, with higher anxiety scores in AWS (p=0.02, Fig.4); no interaction with severity class (grouped into "Mild" and "Severe") was found.

Figure 4. Group differences in state and trait anxiety scores.


3.A.3 Group effects on response times

In order to assess whether neural oscillatory group differences may be associated with a behavioural difference in speech response time, we checked for group differences in the lip EMG onset relative to the speech cue (a consistent 1 s following stimulus presentation). Histograms of response times collected from every trial across all subjects are displayed in Figure 5. Medians of response times across all trials were 0.231 s and 0.227 s for AWS and FS respectively, and no group difference was observed.

Figure 5. Distribution of response times relative to the speech cue, obtained from all trials. Top: AWS, bottom: FS. The percent of negative response times is indicated.

3.A.3.1 Addressing negative response times

The group histograms (Fig.5) show 13 to 15% negative response times across the groups, corresponding to trials where EMG onset preceded the cue to speak. The earliest premature response across the groups occurred 0.385 s before the speech cue, but the majority of premature trials fell

within 0.250 s preceding the speech cue. It is important to note that the voice onsets for these trials still followed the speech cue and that these trials were otherwise clean. Given that there was no jitter between the stimulus presentation and the speech cue (consistently set at 1 s), the premature onset could be an anticipatory EMG response acquired as the task continues. However, a linear regression of negative response time on trial number showed no effect; later trials did not specifically result in more negative onset times. The number of participant-specific negative response trials is listed in Table D (Appendix A). In order to confirm that the presence of negative response times did not obscure an existing group difference, we tested for a group effect on response time separately for negative and positive response times. We found no group difference in either the number of premature trials or the mean negative or positive response times. We therefore combined all response times, averaged across trials, and obtained subject-specific mean trial responses, resulting in group average response times of 0.230 +/- 0.083 s and 0.225 +/- 0.103 s (mean +/- SD) for AWS and FS respectively (p>0.5, one-sample t-test, Fig.6A). Although subject-specific coefficients of variation did not differ at the group level, they were quite high for both groups (Fig.6B).
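The per-participant summary used above (mean response time and coefficient of variation relative to that mean) can be sketched as:

```python
import numpy as np

def rt_summary(trial_rts):
    """Per-participant mean response time (s) and coefficient of
    variation (sample SD divided by the mean), as plotted in Fig.6.
    trial_rts: 1-D array of a participant's single-trial response times."""
    trial_rts = np.asarray(trial_rts, dtype=float)
    mean_rt = trial_rts.mean()
    cv = trial_rts.std(ddof=1) / mean_rt
    return mean_rt, cv
```

Averaging the per-participant means across a group then yields the reported group values (e.g., 0.230 s for AWS).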

Figure 6. Participant averaged response times. A: Mean response times relative to the speech cue, averaged across all trials for each participant and collapsed across response type (HLS +LLS). B: Coefficient of variation (CV) relative to each participant’s mean response, collapsed across response type (HLS+LLS).

3.A.3.2 Addressing response type

The analysis above was repeated separately for HLS and LLS category trials. This was done for AWS only, in order to check whether increased stuttering anticipation results in

slower speech initiation. Had a difference in motor recruitment been found between the HLS and LLS categories, it could then have been associated with a potential effect on response time as well. However, there was no effect of category on the number of premature responses, and no main effect of group or category (HLS, LLS) on response time.

3.A.4 Trial effect on response times

As an indirect measure of subject alertness during the task, we tested whether response times became longer as the task continued. Such an effect could suggest task-induced fatigue or distractedness. A significant effect of trial number on response time was indeed observed, but in the opposite direction: regressing all trial response times against trial number showed shorter response times with increasing task duration (p<0.0001, Fig.7). The groups did not differ in the regression slope, showing an identical trial effect. It is possible that this reflects a form of impatience as the repetitive task continued.
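The trial-number regression can be sketched as follows; the data are simulated with a small negative slope to mirror the reported speeding-up effect, so the slope and p-value here are illustrative only:

```python
import numpy as np
from scipy import stats

# Simulated single-subject data: 220 trials, response time in seconds,
# drifting slightly faster over the session plus Gaussian noise.
rng = np.random.default_rng(1)
trial = np.arange(1, 221)
rt = 0.25 - 0.0001 * trial + rng.normal(0, 0.02, size=220)

# Ordinary least-squares regression of response time on trial number
res = stats.linregress(trial, rt)
print(f"slope = {res.slope:.2e} s/trial, p = {res.pvalue:.3g}")
```

A group comparison of slopes (as in the text) would fit this per participant and test the slope estimates between groups.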

Figure 7. Regression of response time on trial number. For display purposes only the group average trial response is shown, rather than plotting all participants.

3.A.5 Task-induced stuttering

Given the consistency found between word-specific stuttering anticipation and actual stuttering occurrence in previous studies (Garcia-Barrera, 2015), it was of interest to see whether our task design achieved similar consistency between the words AWS ranked as highly likely to be stuttered on and the words they actually stuttered on during the task itself. The number and percent


(of 220 total) trials stuttered during the MEG task are displayed for each subject in Table 3 in order of increasing stuttering occurrence, along with the total SSI scores. One participant (S09) demonstrated strong stuttering anticipation during the ranking task but showed no stuttering behaviour, receiving an SSI score of 0. The subject was still included in the study once a diagnosis was confirmed by the Speech and Stuttering Institute in Toronto, where the participant was approved for a stuttering therapy program. Of the twelve AWS, nine stuttered during the experimental task, and eight of the nine stuttered on more than 8% of the trials. Notably, participant S09 stuttered on 9% of trials. Across all subjects who stuttered, a total of 314 stuttered trials were observed.

Table 3 SSI scores and task-induced stuttering numbers

AWS ID  SSI Class    SSI Total  Stuttered Trials  % Stuttered  % HLS  % LLS
S07     Very mild    17         0                 0            -      -
S10     Very mild    12         0                 0            -      -
S21     Mild         18         0                 0            -      -
S27     Mild         24         6                 3            100    0
S24     Mild         22         18                8            83     17
S09     None         0          19                9            95     5
S25     Severe       32         22                10           100    0
S23     Very mild    16         34                15           74     26
S15     Mild         23         35                16           71     29
S18     Very severe  37         40                18           85     15
S02     Mild         19         59                27           68     32
S06     Severe       34         81                38           66     34

Note. Subject S09 showed no stuttering behaviour during the SSI assessment in visit 1 and therefore received a score of 0. The subject was included in the study due to a confirmed stuttering status from the patient’s speech therapy clinic.

Indeed we found a strong correspondence between stuttered trials and the stuttering anticipation rankings. Across the nine participants who stuttered during the task, 70-100% of the stuttered


trials coincided with stimulus words ranked in the first visit as having a high likelihood of stuttering. For four participants this correspondence surpassed 85% ("% HLS", "% LLS", Table 3, Fig. 8). However, the frequency of task-induced stuttering was not related to STAI or SSI scores (Fig.9). A multiple linear regression confirmed that neither state nor trait anxiety, nor SSI scores, were predictive of task-induced stuttering frequency (p=0.34). Considering that stuttering instances are typically difficult to induce in experimental settings such as these, the occurrence of 314 stuttered trials was surprising and reflects positively on the choice of designing participant-specific word lists.
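The multiple regression above can be sketched with ordinary least squares; the predictor names and simulated values below are illustrative stand-ins, not the study data:

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary least squares for y ~ intercept + X, a minimal stand-in
    for the regression testing whether state/trait anxiety and SSI
    predict task-induced stuttering frequency."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

# Simulated example: 12 participants, predictors = [state, trait, SSI]
rng = np.random.default_rng(2)
predictors = rng.normal(size=(12, 3))
stutter_pct = rng.normal(10, 5, size=12)        # unrelated to predictors
coefs = fit_multiple_regression(predictors, stutter_pct)
```

With predictors unrelated to the outcome, as simulated here, the fitted slopes carry no predictive value, mirroring the null result reported above (p=0.34).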

Figure 8. Correspondence of stuttered words with their anticipation ranking for subjects who stuttered (ID S27 to S06, Table 3).

Figure 9. Correlations of percent stuttered trials with SSI and STAI scores.


3.B. Neuromagnetic results – all stimuli combined

Two analyses were carried out. The first included both word categories (HLS and LLS, Section 3.B), and the second separated the trials into HLS or LLS words and analyzed the epoched datasets separately (Section 3.C). The analyses were performed on both the stimulus-locked and EMG-locked datasets and are discussed below concurrently for comparison. Each was used to define a key window of interest in the speech process. We will refer to these datasets as "STIM" and "EMG" throughout the sections below. The stimulus-locked dataset defined the preparatory speech stage, presumed to occur between stimulus presentation and the cue to speak. The EMG-locked dataset defined the execution speech stage, taking place from the start of the detected lip movement onset. These stages will be described in more detail in the sections that follow.

3.B.1 Localization of beta (15-25Hz) band ERD

Beta ERD was localized to visual and premotor regions, specifically to the bilateral cuneus (BA18, 17) and precentral gyrus (BA6), in both the EMG and STIM datasets. Group SAM-localized coordinates of beta ERD are listed in Table 4 along with group peak values (pseudo-T) (alpha localizations in Table 4 will be discussed in Section 3.B.3). Group localizations are displayed on the glass brain in Figure 10A for a few selected time-windows. In the STIM dataset, the cuneus region peaked bilaterally at 300-500 ms, after which the intensity decreased. The precentral gyrus first peaked bilaterally at 300-500 ms, began to decrease, and then increased once more at 800-1000 ms. These peak time windows were used for virtual sensor extraction; virtual sensor time-courses are discussed below. The EMG dataset (Fig.10B) shows a continuous increase towards speech onset.


Table 4 SAM-localized alpha and beta ERD coordinates and pseudo-T group peak on the group average image

STIMULUS-LOCKED

                         AWS                        FS
       Anatomy     Coordinates    Pseudo-T    Coordinates    Pseudo-T
Beta   L BA6       -46, -2, 28    -4.6        -46, -2, 28    -3.14
       R BA6        50,  2, 31    -3.4         50,  2, 31    -1.35
       L BA18      -18, -77, 17   -4.53       -18, -81, 21   -4.11
       R BA18       22, -73, 17   -4.99        22, -81, 21   -4.86
Alpha  L BA13/41   -32, -44, 22   -2.09       -53, -23, 10   -1.31
       R BA41       53, -19, 14   -1.5         50, -23, 7    -1.01
       L BA18      -14, -81, 17   -3.82       -22, -84, 21   -4.03
       R BA18       22, -77, 20   -3.73        14, -85, 13   -4.3

EMG-LOCKED

                         AWS                        FS
       Anatomy     Coordinates    Pseudo-T    Coordinates    Pseudo-T
Beta   L BA6       -42, -6, 28    -4.2        -46, -2, 28    -2.75
       R BA6        53, -2, 35    -3.49        42, -3, 24    -1.21
       L BA18      -18, -81, 13   -1.83       -18, -81, 21   -2.69
       R BA18       22, -73, 20   -2.52        22, -77, 20   -3.04
Alpha  L BA13/41   -42, -18, 25   -1.79       -50, -23, 10   -1.3
       R BA41       53, -19, 14   -1.26        53, -23, 7    -1.02
       L BA18      -18, -77, 17   -2.36       -22, -84, 21   -3.26
       R BA18       22, -77, 13   -2.07        18, -81, 17   -3.6

Note. Alpha localizations will be discussed in section 3B.3.


Figure 10A. STIM-locked localization of beta (15-25 Hz) ERD on the group average image. 1 = left BA18, 2 = right BA18, 3 = left BA6, 4 = right BA6. Three time-windows are shown: 150-350 ms (+150 ms), 300-500 ms (+300 ms), 1000-1200 ms (+1000 ms). Speech is indicated based on the mean lip EMG onset across all participants and trials (1228 ms). The right BA6 in the fluent speakers is not visible at the applied image threshold.


Figure 10B. EMG-locked localization of beta (15-25 Hz) ERD relative to EMG onset. 1 = left BA18, 2 = right BA18, 3 = left BA6, 4 = right BA6. Three time-windows are shown: -1200 to -1000 ms (-1200 ms), -700 to -500 ms (-700 ms), and -400 to -200 ms (-400 ms). The right BA6 in the fluent speakers is not visible at the applied threshold.

3.B.1.1 Bilateral visual beta suppression

Virtual sensors were extracted from the 300-500 ms window, where the peak maximum was observed. In the STIM data, a sharp, prominent beta band ERD was observed bilaterally in the cuneus following stimulus presentation, preceded by a burst of beta ERS right at stimulus presentation (0 s) (Fig.11A). A second beta ERD immediately follows the speech cue presentation (1 s), preceded once again by an even stronger ERS (13-40 Hz), which may be

associated with the blank at 0.5 s. A similar visual response was observed in the EMG data but is notably weaker in amplitude, with the first ERD peak appearing to be washed out (Fig.11B), despite the close proximity of the identified peaks in the two datasets (Table 4). This is expected, as the visual response should be time-locked to the stimulus presentation. For this reason only the visual response extracted from the STIM data was analyzed further. Participant-specific onset times of the first beta ERD in the cuneus were extracted bilaterally from each participant. A two-sample t-test found a group difference in the left BA18 first onset times, which began earlier in AWS (AWS: 55 ms, SD=34 ms; FS: 103 ms, SD=35 ms; p=0.001). Descriptively, AWS also showed a higher amplitude in the second ERS peak of the bilateral cuneus compared to FS, though this was not quantified.
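Onset extraction of this kind can be sketched with a simple threshold rule; the exact onset criterion used in the thesis is not specified in this section, so the baseline-SD threshold below is an assumption for illustration:

```python
import numpy as np

def erd_onset(times, power, baseline_mask, k=2.0):
    """Estimate ERD onset as the first time the band-power envelope drops
    below (baseline mean - k * baseline SD). An assumed criterion, not
    the thesis's documented one.
    times: sample times (s); power: band-averaged envelope;
    baseline_mask: boolean mask selecting the fixation baseline samples."""
    base = power[baseline_mask]
    thresh = base.mean() - k * base.std()
    below = np.flatnonzero(power < thresh)
    return times[below[0]] if below.size else None
```

Applying this per participant to the left and right cuneus envelopes would yield onset latencies of the kind compared above.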

Figure 11A. STIM-locked virtual sensors from the left cuneus (BA 18) for AWS and FS. Top: TFR plot showing prominent beta (15-25 Hz) ERD immediately following stimulus presentation (0 s) and speech cue (1 s). Bottom: Time-courses collapsed across the beta band (15-25 Hz) for the bilateral cuneus. The onset of the first ERD peak is identified.


Figure 11B. EMG-locked virtual sensors from the left cuneus (BA 18) for AWS and FS. Top: TFR plot showing prominent beta (15-25 Hz) ERD roughly following stimulus presentation (~ -1.1 s) and the speech cue (~ -0.2 s). Bottom: Time-courses collapsed across the beta band (15-25 Hz) for the bilateral cuneus.

3.B.1.2 Bilateral motor beta suppression

Participant-specific coordinates of the localized beta ERD in BA6 from both the STIM and EMG datasets are listed in Tables E-F (Appendix A). A two-sample t-test on the x, y, and z coordinates confirmed there was no significant difference between the two datasets (p>0.2). The average over participant-specific coordinates was in good agreement with the SAM-localized group peaks, suggesting there was no influence of participant bias; that is, no single participant was skewing the SAM localization of the group average peak. Virtual sensors were extracted from the 300-500 ms window in the STIM data and from the -400 to -200 ms window in the EMG data. Source localization for these two selected time windows is displayed on the brain


surface in Figure 12, confirming the location as that of the mouth motor cortex in both datasets.

Figure 12. SAM localization of beta ERD on the brain surface. Left: EMG-locked ERD, time window -400 to -200 ms. Right: STIM-locked ERD, time window 300 to 500 ms. Both datasets localize to the bilateral mouth motor cortex (BA6, precentral gyrus).

In the STIM dataset, a clear beta band suppression is evident immediately after stimulus presentation (Figure 13A). The time-courses demonstrate an early preparatory ERD peak around 300 ms following stimulus onset, followed by a stronger ERD that continues to increase after the speech cue and during speech itself (Figure 13B). Descriptively, this early ERD peak appears bilaterally in AWS but is absent in the right BA6 of FS. Similar early ERD suppression peaks have been observed in both speech and non-speech motor tasks (Salmelin et al., 2000; Tzagarakis et al., 2010). Two-sample t-tests on 50 ms time windows across the time courses show bilateral mouth motor engagement in AWS and reduced right motor cortex engagement in FS, most apparent from 200 to 1600 ms (Figure 13B).


Figure 13A. STIM-locked TFR plots of virtual sensors extracted from the left and right precentral gyrus (BA6). Strong beta band suppression is indicated by the dashed line. Stimulus = 0 s, cue = 1 s, fixation = -1 to 0 s.


Figure 13B. STIM-locked time-course of beta suppression (15-25 Hz) in the bilateral precentral gyrus (BA6). Top: group contrast; bottom: hemisphere contrast. Differences are indicated based on a two-sided t-test on the averaged amplitude across 50 ms windows. The first ERD peak is indicated in the top left plot. Error bars reflect standard error of the mean.

The EMG-locked data show a similar beta suppression pattern (Figure 14A), but the early preparatory ERD peak no longer appears (Figure 14B). This is most likely due to the jitter of the EMG onset relative to the stimulus presentation time. Considering that 50% of trials had a response time within 100 ms of the group mean (1228 ms), this is a significant inter-trial jitter that would average out the sharp ERD peak observed in the STIM data. Consequently, there appears to be a clearer separation of the AWS and FS time-courses as early as 1000 ms prior to speech onset in BA6 bilaterally. We consider this a misleading effect that does not truly reflect the underlying neural speech processes; the STIM dataset more likely provides accurate insight into processes occurring this early prior to speech production.


On the other hand, the speech execution stage may be more tightly time-locked to the lip EMG onset. We can infer this from the much smoother time-course and relatively smaller standard error of the mean during speech execution in the EMG dataset. Evidently, the time-courses across the two datasets show a preparatory response component that is time-locked to stimulus presentation, and an execution component time-locked to EMG onset. We therefore examine these processes separately in the sections below.

Figure 14A. EMG-locked TFR plots of virtual sensors extracted from the left and right precentral gyrus. Strong beta band suppression is indicated. EMG speech onset (0 s) and average fixation time are indicated. Pre-speech beta suppression is missing in the right BA6 of fluent speakers (top right).


Figure 14B. EMG-locked time-course of beta suppression (15-25 Hz) in the bilateral precentral gyrus (BA6). Top: group contrast; bottom: hemisphere contrast. The ERD suppression curve no longer shows an early ERD peak (compare with the STIM-locked data, Fig.13B). Significant windows are indicated based on a two-sided t-test on the averaged amplitude across 50 ms windows. Error bars reflect standard error of the mean.

3.B.2 Quantifying beta ERD in preparation and execution stages of speech

To quantify differences in the magnitude of beta ERD, the source power was integrated over the relevant time-course to obtain the area under the curve, using 1 ms time windows. The speech preparation (PREP) stage was defined in the STIM dataset as the time from ERD onset (extracted from each participant) to the speech cue (a fixed 1 s following stimulus presentation). The speech execution (EXEC) stage was defined in the EMG dataset as the time from lip EMG onset (fixed at 0 s) to the participant-specific ERD offset (Figure 15). Latency comparisons will be made using ERD onset times and the first ERD peak, defined relative to the STIM dataset, as

well as the ERD offset times defined relative to the EMG dataset. In order to compute overall ERD duration, the onset and offset times must be defined relative to the same time-point. We chose to define overall ERD duration relative to the EMG data.
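The integration step can be sketched as a simple area-under-the-curve computation over the stage window:

```python
import numpy as np

def integrated_erd(times, envelope, t_start, t_end):
    """Area under the ERD envelope between t_start and t_end
    (trapezoidal rule over the sampling grid; the text used 1 ms
    windows). With ERD expressed as a negative power change, a more
    negative integral means stronger suppression."""
    m = (times >= t_start) & (times <= t_end)
    t, e = times[m], envelope[m]
    return np.sum(0.5 * (e[1:] + e[:-1]) * np.diff(t))
```

For PREP the limits would be the participant's ERD onset and the speech cue (1 s); for EXEC, the EMG onset (0 s) and the participant's ERD offset. The resulting left/right integrals feed the LI formula of Section 2.10.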

Figure 15. Combined EMG and STIM-locked group average time course of beta (15-25 Hz) suppression in the left precentral gyrus. Speech preparation (PREP) is defined from stimulus presentation to the speech cue (0 to 1 s). Speech execution (EXEC) is defined from EMG speech onset to subject specific beta ERD offset. The STIM-locked ERD onset and first ERD peak are also shown.

3.B.2.1 Speech preparation, comparing ERD extent

A significant group effect was seen only in the right BA6 ERD, appearing much stronger in AWS (p=0.007, two-sample t-test, Fig.16A). Descriptively, AWS seem to engage the left BA6 more than FS as well, but this is not significant. A one-sample t-test on the laterality index (LI) revealed significant left-dominance (LI<0) only for FS (p=0.01) (Fig.16B). An outlier was observed in the AWS group with an LI of +0.99 due to negligible left motor cortex recruitment. When this outlier was removed the LI was significantly left-lateralized in AWS as well (p=0.01). The wider range of LIs in the FS group is due to four FS participants who showed no right motor recruitment and thus an LI of –1.


Figure 16. Beta ERD during speech preparation. A: Integrated ERD power in each hemisphere compared between groups. B: Hemisphere differences expressed as the LI computed from A. Note. AWS group was significantly left-lateralized when the outlier (LI=+0.99) was removed (*).

3.B.2.2 Speech preparation, comparing latencies

Participant-specific STIM-locked beta ERD onset latencies were obtained to investigate latency differences. Both the left and right mouth motor cortex were recruited, on average, just before stimulus presentation (Table 5). This could be an effect of anticipation of the stimulus, despite the randomized fixation duration between 1 s and 2 s.


Table 5

STIM-locked beta ERD onsets and first ERD peak latencies in the precentral gyrus

               ERD ONSET LATENCIES            ERD PEAK LATENCIES
   ID          AWS            FS              AWS            FS
AWS   FS   LEFT   RIGHT   LEFT   RIGHT    LEFT   RIGHT   LEFT   RIGHT
 2     3    119    -400    158     193     440     377    335     882
 6     4     -8     127    196    1081     374     297     NA    1151
 7     5    109      70   -345    -404     342     252    598     301
 9     6    140     221    150    1122     535     274    438     928
10     7    181       0    129     332     402     349    421     446
15     8   -350       0   -225      57     331     322    243     128
18     9   -281    -212    163     395     429     352    327     514
21    10    -47       0   -371     -72     324     398    399     412
23    11   -104     -81      0      71     416     115    432     401
24    12      0     150    249    1026      NA     310    377      NA
25    13   -297    -259   -341    1873     333     393    344      NA
27    14   -319       0    153      78     309     348    353     261
MEAN        -71     -32     -7     479     385     316    388     542
SD          196     181    241     655      68      78     90     332

At the group level, right motor ERD was initiated earlier in AWS (AWS: -32 ms, SD=181 ms; FS: 479 ms, SD=655 ms; p=0.02, two-sample t-test, Fig.17A). At the hemisphere level, the right BA6 in FS was significantly delayed (left: -7 ms, SD=241 ms; right: 479 ms, SD=655 ms; p=0.03, two-sample t-test, Fig.17A), while no hemisphere differences were seen in AWS (left: -71 ms, SD=196 ms; right: -32 ms, SD=181 ms). The marginal significance is likely due to the high variability in the right BA6 onset times in the FS group, despite the notably higher mean. The latencies of the first ERD peak (identified in Figure 15) were also compared as a secondary measure (Fig.17B). There are two key observations: 1) there is a noticeable reduction in group SD for the ERD peak latencies compared with the ERD onsets, and 2) while the results are comparable, there is an additional hemisphere difference in AWS that is now moderately significant, with the right preceding the left BA6 (right: 316 ms, SD=78 ms; left: 385 ms,


SD=67 ms; p=0.03, two-sample t-test). The right mouth motor cortex in FS thus shows weaker time-locking to the stimulus presentation, as suggested by the high variability in its peak latency.

Figure 17. STIM-locked beta ERD latencies in the bilateral precentral gyrus. A: ERD onset times, B: ERD peak latencies. Average latency and standard deviation (mean (SD)) are displayed in milliseconds relative to stimulus-onset.

3.B.2.3 Comparing visual and motor ERD latencies

Table 6 also lists the beta ERD onsets in the bilateral cuneus. Because both the precentral gyrus and the cuneus demonstrate an early ERD peak time-locked to the stimulus, and because the cuneus response was much stronger than that of the precentral gyrus, we wanted to confirm that the early visual response was not driving that of the precentral gyrus. Comparing ERD onset times yielded no significant differences between the recruitment of the two regions. Descriptively, ERD onsets in the bilateral cuneus consistently followed the stimulus presentation, with the earliest onset at 0 s and relatively small variability in this value (e.g., a latency of 103 ms, SD=35 ms, in the left BA18 of FS, Table 6). The precentral gyrus, on average, was recruited prior to the stimulus (descriptively) and shows greater variability around this value (e.g., left BA6 FS: -7 ms, SD=241 ms; left BA6 AWS: -71 ms, SD=196 ms; Table 5). The bilateral cuneus response therefore appears to be more time-locked to the stimulus appearance itself, while the preparatory motor


beta modulation is possibly more susceptible to participant-specific differences in anticipatory motor preparation, which can result in pre-stimulus initiation. The difference between cuneus and precentral gyrus engagement was revealed only in the ERD peak latencies. As expected, beta suppression peaked first in the bilateral cuneus, followed by an ERD peak in the bilateral mouth motor cortex (p=0.0001, two-sample t-test). Although the peaks were sometimes difficult to define, the emergence of a significant difference and the notably smaller variability in the group means suggest that the STIM-locked ERD peak latency may be a functional measure of visual processing and motor preparatory response that is possibly more accurate than the ERD onsets.

Table 6

STIM-locked beta ERD onsets and first ERD peak latencies in the cuneus

               ERD ONSET LATENCIES            ERD PEAK LATENCIES
   ID          AWS            FS              AWS            FS
AWS   FS   LEFT   RIGHT   LEFT   RIGHT    LEFT   RIGHT   LEFT   RIGHT
 2     3      0      88     85      82     143     134    181     176
 6     4     63      94    118     108     203     192    221     206
 7     5      0       0    114      70     218     179      -       -
 9     6     90      62    104      74     168     178    171     170
10     7    307      90     82     388     205     203      -       -
15     8    109      87     99      59     233     207    208     182
18     9    134     222      -       -       -       -      -       -
21    10     53      47    157     135     141     188    277     205
23    11     97     102    128     129     174     219    290     250
24    12     44      49     48     176     174     159      -       -
25    13     86     101     53      45     167     169    169     189
27    14      0      85    142     113     173     175    289     202
MEAN         55      92    103      94     164     204    219     195
SD           45      78     35      33      45      66     49      25


3.B.2.4 Speech execution, comparing ERD extent

First, there was a significant increase in motor engagement bilaterally for both groups from the PREP stage to the EXEC stage (p<0.001, two-sample t-test), demonstrating speech-induced recruitment of the motor cortex. The EXEC stage itself shows a pattern very similar to that of the PREP stage. A two-factor ANOVA (group, hemisphere) found a significant main effect of group, with greater bilateral engagement in AWS (p=0.006). AWS continued to engage the right hemisphere more strongly than FS throughout the execution stage (p=0.03, two-sample t-test, Fig.18A). AWS also engaged the left hemisphere more, but only when one outlier in the FS group was removed (p=0.01, two-sample t-test). In contrast to the left-lateralized LI observed in the PREP stage, the LI values during the EXEC stage were not significantly different from zero. LI values were less negative, and some participants appeared more positive and right-lateralized (particularly among FS), thereby appearing more bilateral during speech production (Fig.18B).

Figure 18. Beta ERD in the precentral gyrus during speech execution. A: Integrated ERD power in each hemisphere compared between groups. B: Hemisphere differences expressed as the LI computed from A. No laterality effect was found within or between groups. Note. Left BA6 group difference was significant only when the outlier in the FS group was removed (*).

3.B.2.5 Speech execution, examining latencies

No significant differences were found in either the EMG-locked ERD offset times or the overall ERD duration; that is, beta ERD did not last longer in either group or hemisphere.


3.B.2.6 Correlations with stuttering severity

A multiple linear regression was performed to see whether beta ERD power in the precentral gyrus during the PREP or EXEC stages was predictive of stuttering severity, but yielded no significant results. However, moderate positive correlations were observed between stuttering severity score and the right motor ERD during both the PREP (r=0.43) and EXEC (r=0.51) stages (Fig.19); because ERD is a negative power change, this means that greater severity corresponded to less engagement of the right motor cortex. Correlations with the left motor ERD were very weak (r<0.1, data not shown). In order to evaluate the extent of execution-induced motor cortex recruitment, we obtained the difference between the PREP and EXEC stages using the absolute integrated ERD power (EXEC-PREP) and defined it as "speech-induced engagement". A smaller positive value therefore indicates a smaller difference between engagement in the EXEC and PREP stages, and thus less speech-induced engagement. This difference correlated negatively with the SSI only in the right motor cortex, again indicating less speech-induced recruitment of the right mouth motor cortex in the severe cases (Fig.19).
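The "speech-induced engagement" measure and its severity correlation can be sketched as follows; the values below are simulated for illustration (the thesis reports r = 0.43-0.51 for the right motor cortex):

```python
import numpy as np

def speech_induced_engagement(prep_power, exec_power):
    """Difference in absolute integrated ERD power between execution and
    preparation stages, as defined in the text (|EXEC| - |PREP|).
    Smaller positive values mean less speech-induced engagement."""
    return np.abs(exec_power) - np.abs(prep_power)

# Pearson correlation with severity over simulated participants
rng = np.random.default_rng(3)
ssi = rng.uniform(10, 40, size=11)            # 11 AWS (S09 excluded, SSI=0)
engagement = rng.normal(5, 2, size=11)        # simulated |EXEC|-|PREP| values
r = np.corrcoef(ssi, engagement)[0, 1]
```

With ERD integrals being negative, e.g. PREP = -3 and EXEC = -5 gives an engagement of 2, i.e. stronger suppression during execution than preparation.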

Figure 19. Correlations between stuttering severity and beta ERD in the right precentral gyrus. Participant S09 was excluded from the correlations due to an SSI of 0.


In order to better understand the changes in the severe individuals relative to the rest of the group, we plotted each subject's integrated beta ERD transition from the PREP to the EXEC stage (Fig.20A-B). The three severe AWS participants had moderately smaller slopes between the PREP and EXEC stages in the right hemisphere and weaker EXEC-stage recruitment of the right mouth motor cortex compared to the mild AWS participants, as reflected by the smaller ERD power values. Moreover, the right hemisphere values of the severe AWS participants in the EXEC stage also appear smaller than those of the majority of FS participants (Fig.20C-D). This is notable given that AWS as a group over-engaged the right motor cortex relative to FS in both the PREP and EXEC stages; evidently, this group difference appears to be driven primarily by the mildly stuttering subjects. As a consequence of reduced right hemisphere engagement during the EXEC stage, the severe participants appear more strongly left-lateralized than the rest of the group in this phase. Subject-specific LI values for both stages are displayed in Figure 21.

Figure 20. Beta band power in the left and right precentral gyrus compared across PREP (P) and EXEC (E) stages. A,B: severe against mild. C, D: severe against controls.


Figure 21. Laterality index during the EXEC and PREP stages compared between mild and severe participants. The SSI score is indicated for the three severe cases.

3.B.3 Localization of alpha band (8-13 Hz) ERD

Localization results for alpha ERD are shown in Figure 22. Alpha changes localized bilaterally to the cuneus (BA17, 18), in the same locations as the previously reported beta ERD (Table 4). Virtual sensors from the bilateral cuneus were extracted from the same time-windows used for the beta response. We compared induced alpha and beta time-courses (obtained from the same coordinates) and found that the alpha time-course closely followed (descriptively) that of the beta response (see Figure 27). The visual alpha suppression was therefore not analyzed further. The second source of alpha modulation was localized to the bilateral posterior insula and superior temporal gyrus (BA 13, 41). Participant-specific localizations are shown in Tables G-H

(Appendix A) for STIM and EMG datasets, with a majority of participants localizing to the superior temporal gyrus (BA 41, BA 22), and some to the inferior parietal lobule (BA 40) and posterior insula (BA13). No statistical differences in localized coordinates were observed between the EMG and STIM-locked data sets. The group average co-ordinates are in close proximity to the location of the primary auditory cortex (Desai, Liebenthal, Possing, Waldron & Binder, 2005; Kopčo et al., 2012; Rademacher et al., 2001; Wasserthal, Brechmann, Stadler, Fischl & Engel, 2014; Weeks et al., 2000).

Figure 22A. STIM-locked localization of alpha (8-13Hz) ERD. 1 = left BA18, 2 = right BA18, 3 = left BA13, 4 = left BA41, 5 = right BA41, 6 = right BA13, 7 = left BA6. Three time-windows are shown: 600-800ms (+600ms), 750-950ms (+750ms), and 950-1150ms (+950ms). The time of speech is indicated as the mean EMG onset across all subjects (1228ms).


Figure 22B. EMG-locked localization of alpha (8-13Hz) ERD. 1 = left BA18, 2 = right BA18, 3 = left BA13, 4 = left BA41, 5 = right BA41. Three time-windows are shown relative to speech onset: -500 to -300ms (-500ms), -200 to 0ms (-200ms), and 0 to 200ms (0ms).

In the STIM data, the left auditory ERD first peaks around 850ms (750-950ms time window), joined by the right at 950ms (950-1150ms window) (Fig. 22A). Virtual sensors were extracted for the left and right hemispheres at 850ms and 1050ms, respectively. In the EMG-locked data, virtual sensors were extracted from the -500 to -300ms window, where alpha ERD peaked bilaterally (Fig. 22B). Source localization for the selected time windows is displayed on the brain surface in Figure 23, highlighting the bilateral auditory region.


Figure 23: SAM localization of alpha ERD on the brain surface. Left: EMG-locked ERD, time window -500 to -300ms. Right: STIM-locked ERD, time-window 750 to 950ms. Both data-sets localize to the bilateral auditory cortex (BA41,22,13, superior temporal gyrus, posterior insula).

The TFR plots and time-courses show no significant group or hemisphere differences, as determined from two-sample t-tests on averaged 50ms intervals along the time-courses (Fig. 24-25). The extent of alpha ERD was integrated across the same PREP and EXEC stages as in the beta band analysis above. No group or hemisphere differences were observed in ERD extent, ERD latencies, or overall duration. In other words, the groups did not differ in the engagement of the auditory cortices, and both demonstrated a significant bilateral increase in alpha suppression from the PREP to the EXEC stage (p<0.007, two-sample t-test), corresponding to the expected increase in auditory engagement during speech production. The auditory cortex also shows a STIM-locked early alpha ERD peak around 300 ms, which is once again not apparent in the EMG-locked dataset. Alpha ERD onsets and peak latencies are displayed in Table I (Appendix A).
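The interval-wise statistics described above can be sketched as follows. This is an illustrative reconstruction, not the original analysis scripts: `bin_50ms` and `compare_time_courses` are our own names, and SciPy's `ttest_ind` stands in for whatever implementation was actually used.

```python
import numpy as np
from scipy import stats

def bin_50ms(tc, fs, bin_ms=50):
    """Average time-courses (subjects x samples) into consecutive 50 ms bins."""
    step = int(round(fs * bin_ms / 1000.0))
    n_bins = tc.shape[-1] // step
    return tc[..., :n_bins * step].reshape(tc.shape[0], n_bins, step).mean(axis=-1)

def compare_time_courses(group_a, group_b, fs):
    """Two-sample t-test in each 50 ms bin; returns t- and p-values per bin."""
    a, b = bin_50ms(group_a, fs), bin_50ms(group_b, fs)
    return stats.ttest_ind(a, b, axis=0)
```

Applied to two groups' single-subject time-courses, this yields one t- and p-value per 50 ms bin, as in the group and hemisphere contrasts of Figures 24-25.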


Figure 24A. STIM-locked TFR plots of virtual sensors extracted from the left and right auditory cortex (BA41, 22, 13). Alpha suppression is indicated by the dashed line.


Figure 24B. STIM-locked time-course of alpha suppression (8-13Hz) in the bilateral auditory cortex (BA13, 41, 22). Top: group contrast; bottom: hemisphere contrast. No significant differences were found between the time-courses. The first ERD peak is indicated. Error bars reflect the standard error of the mean.


Figure 25A. EMG-locked TFR plots of virtual sensors extracted from the left and right auditory cortex (BA41, 22, 13). Alpha suppression is indicated by the dashed line.


Figure 25B. EMG-locked time-course of alpha suppression (8-13Hz) in the bilateral auditory cortex (BA13, 41, 22). Top: group contrast, bottom: hemisphere contrast. No significant differences were found between the time-courses.

3.B.4 Alpha-beta combined

3.B.4.1 Comparing latencies across all regions

It was of interest to see whether there was a temporal sequence in the recruitment of the visual, auditory, and motor regions. Comparing STIM-locked ERD onsets between the three regions bilaterally using two-sample t-tests revealed significantly earlier recruitment in the beta ERD of the bilateral cuneus (p<0.0001) and no latency difference between the motor-beta and the auditory-alpha recruitment. However, more differences were revealed when ERD peak latencies were compared. Given the strong STIM-locked suppression peak occurring in the bilateral visual, motor, and auditory cortices within 500ms of stimulus presentation, we combined these time-courses in Figure 26 to highlight the differential time-courses between these regions. The cuneus beta suppression peak precedes that of the precentral gyrus bilaterally (p<0.001). The precentral gyrus peak in turn precedes the auditory alpha peaks, but does so significantly only in the left hemisphere of FS and in the right hemisphere of AWS (p<0.01). There is no significant difference between motor and auditory ERD peak latencies in the right hemisphere of FS or in the left hemisphere of AWS. Notably, compared to the beta cuneus response, the alpha ERD peaks of the bilateral cuneus appear significantly later (p<0.001) and overlap with the motor-beta and auditory-alpha responses (p>0.1).

Figure 26. Close-up of STIM-locked alpha and beta ERD peaks across all observed regions. Significant differences in peak latencies are indicated: * p < 0.01, ** p < 0.001, *** p < 0.0001. The right motor beta peak in FS (top right) was not visually identifiable due to low subject consistency, but was on average at 542 +/- 332 ms based on the extracted participant peaks (see Table 5).
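The peak-latency comparisons above depend on picking the suppression maximum in each subject's time-course; a minimal version of that step might look like this (our own illustration, assuming ERD is expressed as a negative power change so the peak is the most negative sample in the search window):

```python
import numpy as np

def erd_peak_latency(tc, times, window=(0.0, 0.5)):
    """Latency of the ERD peak (most negative deflection) within a window."""
    mask = (times >= window[0]) & (times <= window[1])
    return times[mask][np.argmin(tc[mask])]
```

Applied, for example, to a cuneus time-course, this would return a latency near its sharp early post-stimulus suppression peak.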


3.B.4.2 Comparing auditory-motor ERD power and latencies

Given the acknowledged integrated role of auditory and motor regions in speech, we contrasted the integrated ERD power between the beta band (mouth motor cortex) and the alpha band (auditory cortex). First, we display the STIM-locked time-courses of ERD in all observed regions (bilateral cuneus, precentral gyrus, and auditory cortex) (Fig. 27). We added the difference between the motor-beta and auditory-alpha time-courses (Fig. 27, dashed line). The EMG-locked time-courses are not shown, as they are not relevant to the discussion below. Descriptively, these plots demonstrate a moderate difference between motor and auditory stimulus-induced suppression in AWS.

Figure 27A. Right-hemisphere time-courses of alpha-beta suppression contrasted between the auditory, motor, and visual cortices. Blue dashed line indicates the difference between the auditory and motor time courses (Auditory – motor, or alpha – beta).


Figure 27B. Left-hemisphere time-courses of alpha-beta suppression contrasted between the auditory, motor, and visual cortices. Blue dashed line indicates the difference between the auditory and motor time courses (Auditory – motor, or alpha – beta).

This difference was quantified by comparing the integrated auditory and motor ERD power using two-sample t-tests. During the PREP stage, the left motor cortex in AWS appears to be more engaged than the left auditory cortex (p=0.038, Fig. 28A). The right hemisphere shows a similar effect, but it reaches significance only when an outlier is removed (p=0.02, Fig. 28B). Thus we see an imbalance between motor and auditory engagement bilaterally in AWS, with no such trend in FS. No comparable differences were observed in the EXEC stage.


Figure 28. PREP stage ERD power compared between motor (beta) and auditory (alpha) cortices. A: left hemisphere, B: right hemisphere. Note. Auditory-motor difference is significant in B only after the removal of the outlier (*).

While there were no differences in STIM-locked ERD onset times, EMG-locked ERD offset times were significantly later for the left auditory cortex than for the left mouth motor cortex in both groups (p<0.002), meaning that the auditory cortex remained engaged for longer. However, this effect was not apparent in the right hemisphere. Additionally, for the FS group only, the overall duration of left auditory ERD (time-locked to EMG onset) was longer than that of the left motor ERD (p=0.002). This effect was only marginally significant in AWS (p=0.054) and not significant in the right hemisphere of either group.

3.B.5 Modulations in high beta, low and high gamma

High beta (20-30Hz, 25-30Hz) and low gamma (31-50Hz) ERD were localized to the same bilateral BA6 coordinates. Upon inspection of the pseudo-T values of the localizations on the glass brain, it was found that the 25-30Hz and 31-50Hz peaks were weaker than those in the 15-25Hz range, and for this reason they were not analyzed further. A post-hoc analysis added an additional band in the 20-30Hz range. This was done because the AWS TFR plots for localization of the 15-25Hz band in Figure 13-14A showed a distinct shift to the 20-30Hz range. It was therefore necessary to check whether the 20-30Hz band would result in a different localization and a stronger time-course. As Figure A (Appendix B) shows, the two frequency bands localized to the same coordinates. Although the 20-30Hz suppression is slightly stronger prior to EMG onset, the time-courses are otherwise identical (Figure B, Appendix B). This confirms that our analysis of the 15-25Hz band was not compromised. High gamma (70-100Hz) ERS was localized outside the brain, to what is likely the mouth area, and coincided with speech onset (Fig. 29A). The time-frequency decomposition of a virtual sensor from the peak voxel in this region shows a dominant broadband high gamma component (40-100Hz, Fig. 29B) and a low-frequency component (0.5-3Hz), the latter likely corresponding to articulatory movement during speech. This suggests that the high-frequency ERS seen in the TFR plots of the mouth motor cortex beta rhythms was associated with speech-related artifact during speech movement, which was successfully differentiated by the beamformer algorithm.

Figure 29. SAM localization of 70-100Hz ERS. A: localized to the proximity of the mouth motor cortex. B: TFR plot shows broadband artifact spanning 0.1-3Hz and 35-120Hz (right).


3.C. Neuromagnetic results: words split into HLS and LLS

3.C.1 Contrasting beta and alpha localizations in HLS and LLS

The above analysis was repeated for the separately epoched HLS and LLS trial datasets of the AWS group. Beta (15-25Hz) and alpha (8-13Hz) localizations were observed bilaterally in the same mouth motor cortex and auditory cortex regions as those observed above. These localizations are shown in Figure 30A-B for a selected time window from which virtual sensors were extracted. Group average peaks are displayed in Table 7. HLS and LLS time-courses (Fig. 31A-B, 32A-B) were integrated as in the analysis described above. No differences between HLS and LLS were observed in the PREP or EXEC stages in terms of ERD extent or laterality index. There was also no difference in STIM-locked peak ERD latencies or ERD onsets. The twofold reduction in the number of trials in this analysis did not appear to affect the obtained time-courses or localizations. Because no differences were found in the AWS group, this analysis was not repeated for FS.


Figure 30A. STIM-locked localization of beta (15-25 Hz) and alpha (8-13 Hz) ERD compared between HLS and LLS datasets. Time windows are 350-550ms for beta ERD and 750-900ms for alpha ERD. 1 = left BA6, 2 = right BA6, 3 = left BA41, 4 = left BA13. The right BA22 and right BA40 peaks in the HLS and LLS conditions, respectively, do not appear at the applied threshold (see Table 7).


Figure 30B. EMG-locked localization of beta (15-25 Hz) and alpha (8-13 Hz) ERD compared between HLS and LLS datasets. Time window is -400 to -200 ms. 1 = left BA6, 2 = right BA6, 3 = left BA40, 4 = right BA41, 5 = left BA41, 6 = right BA40.


Table 7

SAM-localized beta and alpha ERD compared between HLS and LLS datasets

STIMULUS-LOCKED
Band    Anatomy      HLS Coord       HLS Pseudo-T    LLS Coord       LLS Pseudo-T
Beta    L BA6        -46, -6, 28     -2.3            -42, -2, 28     -1.81
Beta    R BA6         53,  2, 31     -1.37            53, -2, 31     -1.45
Alpha   L BA13/41    -38, -30, 14    -0.81           -46, -30, 22    -0.82
Alpha   R BA41        61, -15, 6     -0.46            50, -26, 22    -0.50

EMG-LOCKED
Band    Anatomy      HLS Coord       HLS Pseudo-T    LLS Coord       LLS Pseudo-T
Beta    L BA6        -42, -6, 28     -2.43           -50, -5, 32     -2.16
Beta    R BA6         53,  2, 31     -1.62            53, -2, 35     -1.97
Alpha   L BA40/13    -46, -18, 21    -0.53           -46, -30, 22    -0.69
Alpha   R BA41/40     57, -16, 6     -0.47            61, -23, 14    -0.40

Note. Group image coordinates are given in Talairach space.


Figure 31A. STIM-locked time-course of beta suppression (15-25 Hz) in the bilateral precentral gyrus (BA6). Top: condition contrast (HLS, LLS), bottom: hemisphere contrast. No significant differences between HLS and LLS conditions were found.


Figure 31B. EMG-locked time-course of beta suppression (15-25 Hz) in the bilateral precentral gyrus (BA6). Top: condition contrast, bottom: hemisphere contrast. No significant differences between HLS and LLS conditions were found.


Figure 32A. STIM-locked time-course of alpha suppression (8-13 Hz) in the bilateral auditory cortex (BA41,22,13). Top: condition contrast, bottom: hemisphere contrast. No significant differences between HLS and LLS conditions were found.


Figure 32B. EMG-locked time-course of alpha suppression (8-13 Hz) in the bilateral auditory cortex (BA41,22,13). Top: condition contrast, bottom: hemisphere contrast. No significant differences between HLS and LLS conditions were found.


5. DISCUSSION

The current study offers insights into the neural processes of typical speech coordination and extends our knowledge of speech coordination in developmentally stuttering adults. The study adopted a cued overt speech task that allowed separating what are presumed to be the preparatory and execution stages of speech coordination. We found that visual, premotor, and auditory components of the speech network are primed bilaterally immediately following the visual presentation of the speech sequence, preceding the cue to speak by about a second. While the visual response consisted of both alpha (8-13 Hz) and beta (15-25 Hz) rhythms, the premotor cortex, specifically the mouth motor cortex, showed a beta (15-25 Hz) suppression, while the auditory cortex showed a distinctly alpha (8-13 Hz) suppression. The bilateral cuneus showed a sharp suppression immediately following the stimulus and then the speech cue. The mouth motor cortex and the auditory cortex showed a stimulus-locked sharp suppression peak around 300-500 ms, followed by a gradually increasing suppression that peaked during speech production and returned to baseline thereafter. While motor recruitment was left-lateralized in the preparation stage, auditory and visual cortex recruitment was bilateral throughout. Group differences were observed only in the mouth motor cortex, with stronger bilateral recruitment seen in AWS, who also showed premature activation specifically of the right hemisphere. First, we address the discrepancies between the STIM- and EMG-locked datasets and justify our analysis approach. Second, we present interpretations of the observed effects in the motor and auditory regions. Third and last, we address the response observed in the bilateral visual cortex.

5.1 Addressing differences between stimulus-locked and EMG-locked datasets

The STIM and EMG data were analyzed with the SAM algorithm using the same fixation baseline window. The datasets showed identical localization of alpha suppression in the bilateral cuneus and auditory cortex, and of beta suppression in the bilateral cuneus and mouth motor cortex. There are a few differences, however, that are important to address.


Time-courses of both beta suppression in the mouth motor cortex and alpha suppression in the auditory cortex showed a distinct stimulus-locked ERD peak around 300-500 ms post stimulus presentation. The latencies of this ERD peak showed significantly lower group variability (SD ~60-70 ms) than latencies determined from the ERD onset (SD ~180-240 ms), which on average also began prior to stimulus presentation. Most importantly, this early 300 ms ERD peak disappeared from the time-course when time-locked to the EMG onset and was replaced by a smoother, slower increase in beta suppression preceding speech. Similar effects were observed in the cuneus response, which showed strong time-locking to the stimulus presentation. Given the relatively small group variability of the ERD peak, it is very likely that the stimulus-to-EMG-onset jitter, which had a group variability of 90 ms (SD), smeared the peak out across the EMG-locked trials. In fact, we saw that the time-courses extracted from both datasets are essentially the same other than the presence of the early ERD peak. Therefore we posit that this early suppression peak in the mouth motor and auditory cortices is a functionally significant response in the speech preparation stage, and it is discussed in more detail below. We also confirmed that this early ERD peak was not driven by the much stronger response in the bilateral cuneus, which occurred significantly earlier. It is possible that the ERD onset is a more subject-specific anticipatory motor response that arises from the expectation of an upcoming stimulus, while the ERD peak in the beta and alpha suppression time-course is a more specific motor preparatory response that occurs when the phonemes to be articulated are revealed and a motor plan can be initiated.
The EMG-locked data did, however, show a smoother time-course with smaller standard errors of the mean in the period following EMG onset, suggesting that the execution stage may be better time-locked to the EMG dataset. For these reasons we defined the speech preparation stage relative to the STIM dataset and the speech execution stage relative to the EMG dataset. A similar analysis approach was used by Tzagarakis, Ince, Leuthold, and Pellizzer (2010).
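The smearing argument above can be illustrated with a toy simulation. All numbers here are illustrative, loosely matching the text: a brief suppression transient at 300 ms and a stimulus-to-EMG jitter of ~90 ms SD.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                                   # sampling rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)

def transient(center, width=0.03):
    """A brief Gaussian ERD transient (negative deflection) at `center` s."""
    return -np.exp(-((t - center) ** 2) / (2.0 * width ** 2))

n_trials = 300
# per-trial stimulus-to-EMG-onset jitter, SD ~90 ms as reported in the text
jitter = rng.normal(0.0, 0.09, n_trials)

# Stimulus-locked: the 300 ms transient lines up across trials and survives
# averaging at full amplitude.
stim_locked = np.mean([transient(0.3) for _ in range(n_trials)], axis=0)

# EMG-locked: re-epoching shifts the transient by each trial's jitter,
# flattening it in the average.
emg_locked = np.mean([transient(0.3 - j) for j in jitter], axis=0)
```

In the averaged EMG-locked trace the transient shrinks to roughly a third of its stimulus-locked amplitude (in the large-trial limit the ratio is width/sqrt(width² + jitter²)), consistent with the early ERD peak's disappearance from the EMG-locked dataset.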


5.2 Modulations of beta ERD in the motor cortex

5.2.1 Speech preparation

Our design allowed a sufficiently long time interval during which preparatory processes could be quantified. Beta ERD was observed bilaterally in the lip and tongue motor cortex in both groups, at locations in close proximity to those reported in recent meta-analyses of brain imaging in typical speech and in stuttering (Belyk et al., 2015; Brown et al., 2005). Time-courses showed an early beta suppression peak around 300ms post stimulus presentation, followed by a continuous suppression that peaked during speech execution. A similar beta suppression pattern in the premotor cortex prior to speech onset has been observed in several studies on typical fluent speakers (Gehrig et al., 2012; Jenson et al., 2014) and is generally acknowledged to accompany a variety of self-paced movement tasks, not necessarily speech (Alegre et al., 2006; Bai et al., 2005; Cheyne et al., 2006; Doyle et al., 2005; Erbil & Ungan, 2007; Pfurtscheller & Da Silva, 1999; Tzagarakis et al., 2010). We found that both AWS and FS showed left-lateralized recruitment of the mouth motor cortex during speech preparation, as is expected for speech processes (Blank, Bird, Turkheimer & Wise, 2003; Riecker, Ackermann, Wildgruber, Dogil & Grodd, 2000; Watkins, Strafella & Paus, 2003; Wilson, Saygin, Sereno & Iacoboni, 2004). Yet, in agreement with hypothesis 1, AWS showed stronger recruitment bilaterally, which was more pronounced in the right hemisphere. In fact, some FS showed no significant recruitment of the right hemisphere. This difference was expressed in the laterality indices of the motor beta suppression. Although there was no group difference in this laterality index, the FS group had four subjects who were 100% left-lateralized (LI = -1). AWS showed smaller negative LI values, due to their right-hemisphere engagement, but were still significantly left-dominant as a group once one strongly right-lateralized subject (LI = 0.99) was removed as an outlier.
The weaker right motor engagement in FS was also evident from the absence of the early ERD peak in the time-course of their right hemisphere, despite a strong peak being present in the left around 300 ms post stimulus. A comparison of the ERD latencies confirmed that right motor cortex engagement in FS is also significantly delayed, being recruited between 250 ms and 1000 ms post stimulus presentation for half of the participants. The very high variability in the onset time of this region suggests that this response is less time-locked to the neural speech preparation process in FS. AWS, however, recruit the right motor cortex 70 ms before the left. These findings replicate a previous study by Salmelin, Schnitzler, Schmitz, and Freund (2000), who were the first and, to date, only group to study beta band motor recruitment in speech production of AWS. We proposed earlier that the ERD peak may be functionally different from the ERD onset and may reflect the initiation of a specific motor plan when the phonemic sequence is revealed to the subject. Preparatory beta suppression is proposed to reflect the anticipation of the upcoming motor act as a predictive model for upcoming sensory monitoring (Engel & Fries, 2010; Gehrig et al., 2012; Kilavik et al., 2013; Klimesch, 2012; Liljeström et al., 2014). This is a key element in speech preparation, as motor speech plans from the inferior frontal and motor cortex need to be forwarded to posterior temporal regions for real-time analysis of the production outcome, an integral component of typical fluent speech (Gehrig et al., 2012; Jenson et al., 2014; Liljeström et al., 2014). It thus appears that both hemispheres of AWS are engaged at this early stage of motor planning.

5.2.2 Speech execution

During speech execution we saw that both groups were no longer significantly left-lateralized, as both showed increased right motor cortex engagement going from the preparation to the execution stage. This may be consistent with the role of the left inferior frontal gyrus in translating speech representations into articulatory commands via communication with the left ventral premotor cortex (Beal et al., 2015; Hickok, 2012; Papoutsi et al., 2009), left-lateralized processes that are likely to occur prior to actual speech onset. Left-lateralized oscillatory modulation in the beta (and alpha) bands has been previously reported in a similar cued task design (Gehrig et al., 2012). Once speech motor execution is initiated, however, the motor response may be more bilateral. This is reasonable, as the orofacial articulators are bilaterally innervated and coordination of both right and left muscular control is required for proper articulation (Grabski et al., 2012). Unlike the haemodynamic studies that point to a left-lateralized speech production network (Blank et al., 2003; Riecker et al., 2000; Watkins et al., 2003; Wilson et al., 2004), in which differentiation between speech stages is compromised by low temporal resolution, we demonstrated that left-lateralization may be specific to the preparatory stages rather than to speech execution itself. Yet despite overall bilateral activation in both groups, AWS maintained a more strongly engaged right hemisphere throughout execution, and also appeared to engage the left hemisphere more strongly, barring an outlier in the FS group. In other words, while there were no group differences in the laterality index, AWS were more strongly engaged bilaterally in the mouth motor cortex during speech production.

5.2.3 Role of the left hemisphere in AWS

Our finding of greater left mouth motor engagement in AWS, particularly during speech execution, contrasts with the reported hypo-activation of the BOLD response in the left larynx and tongue motor cortex (Belyk et al., 2015; Brown et al., 2005; Sommer et al., 2002). In a TMS study, Neef et al. (2015) report that left-hemispheric facilitation of the mouth motor cortex prior to speech occurred only in FS and was not observed in AWS, proposing impaired or insufficient left motor recruitment in this population. Reduced beta ERD in the left mouth motor cortex of AWS was also observed by Salmelin et al. (2000), who used an equivalent current dipole analysis in the first study on oscillatory modulation in this population. However, the paucity of studies on neural speech processes in AWS renders it difficult to reason through our contradictory findings. Increased left motor engagement of the precentral gyrus in AWS relative to FS was observed in only one study on overt cued repetition, which reported increased evoked amplitude in this region in preparation for speech (Biermann-Ruben et al., 2005). Biermann-Ruben et al. (2005) argue that this may indicate greater feedforward control and articulatory-motor planning for upcoming speech. Our findings support this interpretation, especially given the proximity of the localized mouth motor cortex to key regions related to speech-motor planning, such as the inferior frontal gyrus and the ventral premotor cortex. Both of these regions are also underlain by deficient white matter tracts in AWS (Cai et al., 2014; Chang et al., 2008; Cykowski, Fox, Ingham, Ingham & Robin, 2010; Watkins et al., 2008), which could be a key factor affecting the motor speech preparation process in this population. Although it is not clear how reduced white matter density would translate into changes in beta rhythms in these or nearby regions, we can expect that aberrant structure in neural projections will be reflected in the task-induced recruitment of the affected areas.

5.2.4 Role of the right hemisphere in AWS

The finding of greater right beta suppression in the mouth motor cortex of AWS is consistent with the wide spectrum of studies reporting BOLD response hyper-activity in the right laryngeal and right lip primary motor cortex (Belyk et al., 2015). Recent investigations using both fMRI and MEG measures on the same task sequence showed that the extent of beta desynchronization specifically in the motor cortex is positively correlated with the BOLD response (Hall et al., 2014; Hermes et al., 2012; Ritter et al., 2009). Therefore the increased beta engagement and the hyper-activated BOLD signal in the right mouth motor cortex may reflect the same underlying atypical process. A similar increase in right-hemisphere beta suppression was also reported by Salmelin et al. (2000), who, however, reported only on the speech execution stage. We expand this finding by showing that right-hemisphere engagement is significant already in the speech preparation phase. This implies that the effect of the right hemisphere, whether compensatory, interfering, or facilitative, is already in play prior to the initiation of speech, a first-time finding in a simple speech production task in this population. Multiple authors have interpreted such right-hemisphere activation as an interference effect that induces speech dysfluencies (Barwood et al., 2013; Fox et al., 2000; Fox et al., 1996; Ingham et al., 2000; Jiang et al., 2012; Neef et al., 2011; Ouden et al., 2014; Sowman et al., 2012; Toyomura et al., 2011; Wymbs et al., 2013). Two TMS studies probed the mouth motor cortex in AWS prior to speech onset and tongue contraction and found over-excitability on the right relative to controls, compelling these authors to suggest slower or delayed inhibitory control of the right tongue region (Barwood et al., 2013; Neef et al., 2011).
Haemodynamic neuroimaging studies found that over-activation of the right mouth motor cortex and anterior insula was associated with a greater incidence of stuttering in an averaged reading trial, while fluent and stutter-free speech resulted in a shift to the left hemisphere (Ingham et al., 2000; Jiang et al., 2012; Ouden et al., 2014; Toyomura et al., 2011; Wymbs et al., 2013). If the interference hypothesis is true, one would expect right-hemisphere engagement in speech preparation to be associated with a more frequent occurrence of dysfluencies once speech is initiated. Yet this could not be confirmed in the current report due to the mixing of fluent and stuttered trials (12%) and would require a separation of these utterances in a secondary analysis. Alternatively, right-hemisphere over-engagement has been proposed to be a compensatory strategy for deficient left-hemisphere processes (Kell et al., 2009; Preibisch et al., 2003). This proposition is based primarily on the observed activation of right homologue regions following left-hemisphere damage in stroke or brain injury (Blank et al., 2003). Such a view is not exactly applicable in our case, given that we do not necessarily find a left-hemispheric impairment, but rather an over-engagement of the motor cortex bilaterally, with a more prominent effect in the right hemisphere. We therefore propose instead that the bilateral over-engagement in AWS reflects a facilitative strategy adopted to optimize a poorly coordinated speech process. Studies show that engagement of the motor cortex becomes more strongly bilateral when the complexity of unimanual tasks increases (Catalan, Honda, Weeks, Cohen & Hallett, 1998; Shibasaki et al., 1993; Verstynen et al., 2005). A few studies also found that pre-movement beta ERD can be modulated by task difficulty, such as increasing the load opposing self-paced index finger and hand extensions (Nakayashiki, Saeki, Takata, Hayashi & Kondo, 2014; Stančák, Riml & Pfurtscheller, 1997) or increasing task speed (Pastötter, Berchtold & Bäuml, 2012). Moreover, the motor cortex becomes more bilaterally engaged in elderly populations performing unimanual motor tasks, as a facilitative mechanism in a degrading motor control network (Graziadio et al., 2015; Mattay et al., 2002; Naccarato et al., 2006; Wu & Hallett, 2005; Zimerman et al., 2014).
Disrupting specifically the ipsilateral motor cortex with TMS in older adults while they performed a complex unimanual motor task significantly degraded performance (Zimerman et al., 2014). Elderly populations with declining motor automaticity therefore demonstrate an increased reliance on the ipsilateral hemisphere, but also greater general bilateral motor recruitment during unimanual motor tasks, a pattern that is mirrored in our current findings in developmentally stuttering adults. Furthermore, Krings et al. (2000) report reduced motor cortex activity in piano players compared to controls when performing a complex finger movement task, proposing that increased practice and automaticity in piano players requires less cortical recruitment and, conversely, that the absence of such learned automaticity in inexperienced players results in over-activation. Similarly, authors have proposed reduced automaticity in AWS based on over-activated cerebellar and right homologue regions during speech (Belyk et al., 2015; De Nil & Kroll, 2001), and behavioural studies have demonstrated different kinematics of orofacial movements in AWS that could be a further reflection of less optimal speech coordination (Namasivayam & van Lieshout, 2011). This view would be in line with the motor-skill theory approach, which proposes that AWS may use strategies to stabilize a motor speech system that has not perfected an automatized coordination of this complex motor skill, due to an overall reduced capability of motor skill learning (Namasivayam & Van Lieshout, 2008; Van Lieshout, Hulstijn & Peters, 1996; Van Lieshout, Hulstijn & Peters, 2004). Such motor learning and performance impairment has been robustly shown in behavioural studies on a wide range of speech and non-speech motor tasks, which report slower initiation and execution, greater reliance on external (e.g., visual) movement feedback, and atypical orofacial muscle kinematics (Archibald & De Nil, 1999; Loucks et al., 2007; Max et al., 2003; Sasisekaran & Weisberg, 2014). AWS may therefore require additional resources for proper performance, such as early recruitment of the right motor cortex. In this way we can interpret bilateral over-activation as facilitating the execution of a more difficult task. Specifically, the over-engagement of the left hemisphere may be associated with an increased motor-articulatory planning load.

5.3 Correlation of beta response with stuttering severity

A finding that supports the idea of facilitative recruitment, specifically in the right hemisphere, is the moderate, although non-significant, correlation we found between stuttering severity and engagement of the right mouth motor cortex. Contrary to hypothesis 3, the three severe AWS participants showed a tendency toward weaker engagement of the right mouth motor cortex, particularly during the execution stage. This also contrasts with previous findings by Salmelin et al. (2000), who reported increased right-hemisphere engagement in four severe AWS participants (out of a total of nine). Additionally, these participants showed less speech-induced engagement in the right hemisphere, with a smaller change from preparation to execution. Specifically, in the execution stage two of the three severe AWS participants had the weakest right motor ERD of the group and, as a consequence, were the most strongly left-lateralized. Although these numbers are small, if this reflects a true effect it would argue against current theories of right-hemisphere compensation and interference, according to which more severe cases should demonstrate greater right-lateralized engagement. Instead, in our adopted view of a facilitative motor strategy, this effect could mean that severe cases are less able to recruit the additional right-hemisphere resources required to stabilize their speech, and that milder cases are those that can successfully adopt a new control strategy. Fluent speakers do not need the additional resources of the right hemisphere and therefore do not engage it to the same extent, as seen in the group differences in right-hemisphere motor ERD; notably, the severe AWS participants were in the same range as fluent speakers. If these findings are replicated more robustly, they could highlight differential roles of the left and right hemispheres in speech motor control, especially if studied in the time window preceding stuttered speech.
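The severity relationship described here can be illustrated with a simple rank correlation. The sketch below is a hypothetical illustration rather than the thesis's actual analysis: the severity and ERD arrays are invented placeholders, and Spearman's rho is computed from scratch to keep the example dependency-free.

```python
def rank(values):
    """Assign 1-based ranks, averaging ranks over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical numbers, for illustration only (not the study's data).
# ERD is a negative % change, so values closer to zero mean weaker ERD.
severity = [12, 18, 21, 25, 31, 38]          # e.g., severity-scale totals
right_erd = [-28, -25, -26, -20, -15, -12]   # right motor beta ERD (%)
rho = spearman(severity, right_erd)
```

On toy data like this, a positive rho corresponds to the trend reported here: greater severity accompanied by weaker (less negative) right-hemisphere motor ERD.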

5.4 Modulations of alpha ERD in the auditory cortex

Localization of alpha oscillations revealed alpha suppression in the bilateral visual cortex, at nearly identical coordinates to the bilateral beta suppression in the cuneus, and additionally in the bilateral auditory cortex. Both AWS and FS showed auditory alpha suppression that was initiated about 1 s prior to speech onset and was significantly enhanced when speech execution began, in line with recent studies of speech in typical speakers (Gehrig et al., 2012; Jenson et al., 2014). Contrary to hypothesis 3, no group differences in the degree or latency of auditory recruitment were observed. This was surprising, as auditory under-activation is one of the most commonly reported effects in the stuttering population (Belyk et al., 2015; Brown et al., 2005; Ingham et al., 2012) and lies in proximity to deficient white matter tracts (Cieslak et al., 2015). However, previous authors who instead studied evoked auditory responses in AWS did find meaningful group differences. For example, AWS failed to show modulation of the N100 amplitude during preparation for speech (Daliri & Max, 2015), a response typically observed in FS (Curio et al., 2000; Flinker et al., 2010; Gunji et al., 2001; Houde et al., 2002). The latencies of the MEG equivalent of the N100 (M100) were also much shorter in AWS and shifted to appear earlier in the right auditory cortex, whereas FS latencies were bilaterally comparable (Beal et al., 2010). In light of such findings, many authors have proposed that the preparation of appropriate sensory feedback networks, such as the auditory-motor interface, is improperly primed prior to speech in AWS (Beal et al., 2010; Daliri & Max, 2015). In this regard, our comparison of motor-auditory engagement in the groups did, however, yield an intriguing result. While the extent of ERD in the auditory and motor cortices was equivalent in FS, in AWS the left mouth motor cortex was significantly more engaged than the left auditory cortex, albeit in different frequency ranges. This effect was significant only during the speech preparation stage; the right hemisphere of AWS showed a similar effect, but it did not reach significance. Since we found group differences only in the motor beta band and not in the auditory alpha band, the observed auditory-motor imbalance is entirely due to the differential behaviour of the mouth motor cortex. We also found that alpha ERD duration in the auditory cortex surpassed that of the motor beta ERD in the left hemisphere of FS only, and not in AWS. Preparation for overt speech was previously shown to induce bilateral oscillatory connectivity between the temporo-parietal gyri, which include the auditory areas, and the premotor cortex during an auditory-stimulus overt-repetition task and a picture-naming task (Alho et al., 2014; Liljeström et al., 2014). Moreover, alpha suppression has been widely recorded in visual and somatosensory regions engaged in task-specific processing, and in this way has become a recognized index of induced engagement (Klimesch, Sauseng, & Hanslmayr, 2007; Klimesch, 2012).
For these reasons, the equivalent extent of recruitment in the auditory and motor areas that we see in FS could reflect auditory-motor communication and the transmission of an efference copy of the motor plan in preparation for speech. It is possible that the imbalance during the speech preparation stage of AWS reflects some level of asynchrony between the two regions. Moreover, we can hypothesize from our findings that this imbalance is driven by abnormal recruitment in the mouth motor cortex rather than by abnormal auditory behaviour. As auditory modulation of oscillatory rhythms prior to speech has not yet been investigated in AWS, the absence of group differences specifically in the auditory response in the current study may suggest either an effect of task, or that the differences detected by evoked auditory responses are not expressed via oscillatory modulations. It is also possible that oscillatory modulations are better time-locked to voice onset itself, as was the case in the evoked-response studies by Beal et al. (2010, 2011), but this analysis was not performed on the current data.
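The alpha and beta suppression values compared throughout this section are conventionally expressed as percent band-power change relative to a pre-stimulus baseline (the classic ERD/ERS measure of Pfurtscheller). A minimal sketch of that formula, assuming band-limited power values per time sample have already been extracted; the function and index names are illustrative, not from the thesis:

```python
def erd_percent(power, baseline_idx, task_idx):
    """Percent band-power change relative to a pre-stimulus baseline:
    negative values indicate ERD (suppression), positive values ERS."""
    base = sum(power[i] for i in baseline_idx) / len(baseline_idx)
    task = sum(power[i] for i in task_idx) / len(task_idx)
    return 100.0 * (task - base) / base

# Toy example: beta power drops from 4.0 (baseline) to 3.0 (speech window).
erd = erd_percent([4.0, 4.0, 3.0, 3.0], baseline_idx=[0, 1], task_idx=[2, 3])
# erd == -25.0, i.e., a 25% beta suppression
```

The same formula applies per region and per band, which is what allows the extent of auditory alpha ERD and motor beta ERD to be compared on a common percent scale.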

5.5 Alpha-beta in the bilateral cuneus

The last region observed in our task was the bilateral cuneus (BA 17, 18), where both alpha and beta suppression were localized. The response first peaked about 300 ms after stimulus presentation, followed by what appeared to be a rebound ERS with a high positive peak around 600 ms after the blank, and then a second ERD following the speech cue (>1000 ms). The strong alpha-beta ERS following the blank may indicate disengagement of the visual cortex once only a blank black screen is presented (Klimesch, 2012). It was therefore surprising that AWS showed a noticeably stronger ERS peak than FS, especially in the left cuneus. AWS also recruited the left cuneus earlier than FS. This latter effect may reflect increased word scanning in AWS, expressed as faster visual recruitment while they evaluate whether the presented stimulus contains any difficult sounds. If it does, this might result in excessive fixation on the visual characters as they prepare to speak the sentence out loud, possibly followed by a stronger "release," or disengagement, when the blank screen appears, generating a higher ERS rebound peak.

5.6 Effects of stuttering anticipation

We predicted that stuttering anticipation would modulate group differences between AWS and FS in how they recruit the speech network in preparation for speech production. Specifically, hypotheses 5-6 predicted that high stuttering anticipation would increase articulatory-motor planning, expressed as increased suppression in the left inferior frontal gyrus and the left premotor cortex. However, the current study did not yield differences between words of high and low stuttering anticipation. Localizations of alpha and beta ERD appeared in the same regions for both HLS and LLS stimuli, and time-courses were highly overlapping.

Behavioural data likewise showed no effect of stimulus type on mean response times or on the response-time coefficient of variation. Group differences were therefore not driven by the degree of anticipated stuttering. Instead, AWS showed stronger motor engagement bilaterally for all utterances, even for stimuli that they would be extremely unlikely to stutter on. We also saw no activity in the anterior cingulate cortex, despite past evidence of ACC engagement in preparation for perceived complex stimuli (Paus et al., 1998) and in response to the perceived likelihood of error and the degree of error consequence (Brown & Braver, 2005, 2007). In past studies the ACC was also observed to generate an "error-related negativity" (ERN), a peak component of the evoked ERP response following speech and non-speech errors, where greater ERN peaks reflect greater error monitoring (Ganushchak et al., 2011; Ganushchak & Schiller, 2006, 2008; Holroyd & Coles, 2002; Ullsperger, 2006). Specifically in speech production, increased ERN has been observed in various tasks that induced spoonerisms and other slips of the tongue (Masaki et al., 2001; Möller et al., 2007; Riès et al., 2011). Critical to this study, as it was a key motivation for exploring the effect of anticipation on oscillatory modulation during speech in AWS, was the finding of a greater stimulus-locked ERN in the ACC of AWS during a non-speech rhyming decision task (Arnstein et al., 2011). The ERN was enhanced in AWS throughout the task regardless of performance outcome, which was proposed to reflect hyper-vigilant monitoring of speech in this population. Considering that the ERN is an evoked response, it is possible that the same error-monitoring process is simply not expressed via induced oscillatory changes, or does not necessarily result in changes in motor and sensory recruitment. Despite these null findings, the word ranking task itself yielded very encouraging results.
First, strong anticipation of stuttering was for the most part associated with plosives and was weaker for vowels, a common trend in the stuttering population. Second, although 13 of the original 25 participants had to be disqualified due to insufficient ranking scores, it was possible to identify those who had strong anticipatory responses. These selected participants showed good consistency between the two ranking trials, with 90 percent of rankings falling within a score difference of 1. The disqualified participants showed, for the most part, consistently low to moderate rankings, with few words of high stuttering anticipation. Third, task-induced stuttering rates were surprisingly high (314 total trials), which would allow us to conduct an analysis of stuttered utterances only. Most importantly, words that were stuttered during the task corresponded to high anticipation-of-stuttering rankings in 70-100% of cases, despite the 2-4 weeks that passed between the first and second visits. Such high consistency is likely a result of having pre-selected participants based on their ranking-task results in the first visit, which emphasizes the utility of the word-ranking approach when the goal is to capture stuttering moments, especially in light of the lack of correlation between task-induced stuttering and the other behavioural measures we obtained, such as stuttering severity and anxiety. Lastly, this study confirmed that AWS had significantly higher trait anxiety than fluent controls. This is in line with recent reviews confirming the prevalence of trait anxiety among stuttering adults and the need to address it in therapy programs (Craig & Tran, 2014). State anxiety measured at the end of the first visit did not differ between the groups, which indicates that the word ranking was not done in a high-pressure, high-anxiety situation that could bias the ranking process. This is a very positive outcome given the difficulty of eliciting stuttering in experimental settings. We propose that such a methodology be used in stuttering research in order to induce more stuttering moments within experimental tasks.
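The consistency figures reported in this section (about 90% of rankings within one point across the two ranking trials, and stuttered words matching high-anticipation rankings in 70-100% of cases) reduce to simple agreement proportions. A hypothetical sketch of such checks; the function names, the "high anticipation" threshold, and the toy rankings are illustrative assumptions, not the thesis's scoring code:

```python
def ranking_consistency(trial1, trial2, tolerance=1):
    """Fraction of words whose two anticipation rankings agree
    within `tolerance` points across the two ranking trials."""
    agree = sum(1 for a, b in zip(trial1, trial2) if abs(a - b) <= tolerance)
    return agree / len(trial1)

def anticipation_hit_rate(stuttered_words, rankings, threshold=4):
    """Fraction of stuttered words whose anticipation ranking met or
    exceeded an assumed 'high anticipation' threshold."""
    hits = sum(1 for w in stuttered_words if rankings[w] >= threshold)
    return hits / len(stuttered_words)

# Toy data for illustration only.
consistency = ranking_consistency([5, 4, 1, 2], [5, 3, 1, 4])     # 0.75
hit_rate = anticipation_hit_rate(["pat", "bog"],
                                 {"pat": 5, "ape": 1, "bog": 4})  # 1.0
```

Applied per participant, measures like these support both the inclusion criterion (ranking stability across visits) and the validity check (stuttered words having been ranked as high anticipation).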

6. CONCLUSIONS AND LIMITATIONS

The current study presented a very feasible approach to separating and quantifying the preparation and execution stages of speech. We observed induced oscillatory modulation in the bilateral visual, mouth motor, and auditory cortices as soon as the stimulus sentence was presented. Time courses showed an early stimulus-locked suppression in the motor and auditory cortices bilaterally, followed by increased suppression during speech production itself. Specifically in the motor cortex, while the ERD was initiated on average prior to stimulus presentation, the suppression peak occurred consistently around 300 ms, following the visual response. We propose that this reflects essentially two components of speech preparation. The first is an anticipatory modulation of the motor cortex that is fairly participant-specific, given that participants were aware that a new motor response would be required after every fixation period. The second was a sharper stimulus-specific response that took place when the exact articulatory sequence was presented to the participant. A third component was then associated with the execution task itself. An interesting finding was that left-lateralization of the mouth motor cortex was stronger in the speech preparation phase, while execution was more bilateral. This may reflect the left-lateralized speech planning regions, such as the left inferior frontal gyrus and ventral premotor cortex, and the bilateral innervation of the orofacial articulators recruited for speech execution. The auditory cortex demonstrated the same early preparatory suppression peak, mostly occurring after the motor response and continuing to increase towards speech production. Unlike the mouth motor cortex, no lateralization was observed in either speech stage, supporting a bilateral role of auditory sensory processing during self-generated speech. The concurrent engagement of auditory and motor regions may reflect their close cooperation in setting up both the motor commands and the sensory feedback mechanisms required for speech production. Contrary to our expectation, no engagement of the left inferior frontal gyrus was observed in the current data. This was surprising considering the key role this region plays in integrating articulatory and motor commands. Differences in speech-network recruitment between fluent speakers and developmentally stuttering adults were observed only in the engagement of the mouth motor cortex. Our group of stuttering adults showed increased bilateral engagement of the mouth motor cortex during both speech preparation and execution, and specifically showed right mouth motor engagement in the preparatory stage when fluent speakers did not. Adults who stutter also appeared to recruit the right mouth motor cortex before the left, while fluent speakers showed a left-hemisphere preference. An intriguing finding was the reduced engagement of the right hemisphere in the three severe participants.
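The lateralization statements above can be made concrete with a hemispheric lateralization index. The thesis does not state its exact formula, so the sketch below uses one common definition, computed on ERD magnitudes (absolute percent suppression); the function name and toy values are illustrative assumptions:

```python
def lateralization_index(left_erd, right_erd):
    """LI = (|L| - |R|) / (|L| + |R|) on ERD magnitudes:
    +1 = fully left-lateralized, 0 = bilateral, -1 = fully right-lateralized."""
    l, r = abs(left_erd), abs(right_erd)
    return (l - r) / (l + r)

# Toy values only: preparation more left-lateralized than execution.
li_prep = lateralization_index(-30.0, -10.0)   # 0.5
li_exec = lateralization_index(-25.0, -20.0)   # ~0.11, closer to bilateral
```

An index of this kind makes "stronger left-lateralization in preparation than execution" a single comparable number per participant and stage.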
We propose that facilitative engagement specifically of the right motor cortex may be an important asset in the speech coordination of stuttering adults, and that insufficient recruitment of this asset may result in a loss of motor-speech control that is expressed as increased stuttering severity, characterized by an increase in stuttering frequency and in secondary behaviours such as facial grimaces or finger tapping. Furthermore, although we did not observe our predicted differences in auditory engagement per se, the over-active motor cortex resulted in an imbalanced engagement of the motor and auditory regions in the stuttering adults, whereas fluent speakers engaged both regions to the same extent. The observed imbalance between these areas therefore encourages a more thorough connectivity analysis, or a correlation of individual-trial time-courses, as a follow-up analysis to better quantify the integration of these two critical regions. Contrary to our hypotheses, we observed no effect of stuttering anticipation on the speech preparation or execution process. However, the study still demonstrated the feasibility of identifying stuttering individuals with strong anticipatory responses. This was evident from the ranking-task results and from the high consistency between task-induced stuttered words and their anticipatory ratings. We would therefore recommend using such an approach in future studies on stuttering in order to generate more stuttered utterances in experimental settings. Lastly, given the prevalence of studies showing evoked, rather than induced, responses to anticipated and actual speech errors, and given the strong stuttering anticipation expressed in our pool of participants, one option for a follow-up analysis is to quantify evoked responses in the current data. Using an established measure of the ERN response may determine whether there is a real quantitative difference between the high and low stuttering-anticipation conditions in our participant pool.
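The evoked-versus-induced distinction invoked here comes down to the order of operations: averaging raw trials first preserves only phase-locked (evoked) activity such as an ERN, while computing power per trial before averaging also captures non-phase-locked (induced) oscillatory changes. A toy sketch of the two pipelines (a simplification; real analyses would band-pass filter and baseline-correct first):

```python
def evoked_amplitude(trials):
    """Evoked response: average the raw trials sample by sample;
    activity whose phase varies across trials cancels out."""
    n, t = len(trials), len(trials[0])
    return [sum(tr[i] for tr in trials) / n for i in range(t)]

def induced_power(trials):
    """Induced response: compute power per trial, then average;
    survives even when phase varies from trial to trial."""
    n, t = len(trials), len(trials[0])
    return [sum(tr[i] ** 2 for tr in trials) / n for i in range(t)]

# Two antiphase toy trials: the evoked average is flat (all zeros),
# but the induced power is clearly non-zero.
trials = [[1.0, -1.0], [-1.0, 1.0]]
# evoked_amplitude(trials) -> [0.0, 0.0]
# induced_power(trials)    -> [1.0, 1.0]
```

This is why a process expressed mainly in the evoked domain, like the ERN, can remain invisible to induced oscillatory measures, and vice versa.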

6.1 Limitations

A limitation of the current data is that the observed bilateral over-engagement, and specifically the early recruitment of the right mouth motor cortex, cannot be associated with stuttering behaviour itself. That is, we do not know whether bilateral over-engagement promotes stuttering occurrences or prevents them. This is because the 12% of trials that were stuttered were not removed from the data, in order to maintain an equivalent signal-to-noise ratio between the two groups. Therefore, at the moment, these findings can only be interpreted as reflecting a general trait of stuttering adults, rather than an association with the stuttering moment itself. The next step is therefore to remove all stuttered trials from the data. Several outcomes are possible. First, the removal of stuttered trials may have no effect on the observed group differences, which would mean that greater bilateral recruitment and reliance on the right motor hemisphere is a general trait of fluent speech production in the stuttering population. Second, the removal of stuttered trials may result in a greater group effect: AWS would be even more strongly engaged than currently observed. This could suggest that stuttered trials are associated with reduced motor engagement, and that combining them with fluent utterances only reduces the true group difference. Such a result would promote a facilitative role of motor over-engagement that prevents stuttering and supports fluent speech. This would, however, need to be confirmed by analyzing the 314 stuttered trials on their own, matching them to the same number of trials in the control group. The stuttered dataset should then yield a smaller group difference, meaning that AWS would still be over-engaged, but less so than when they are fluent. The third outcome is that the removal of stuttered trials results in no group difference at all. This is the most worrisome outcome, as it could mean either that only stuttering is associated with strong bilateral over-engagement, or that the resulting trial number is too low for any group effect to be observed. It was in consideration of this last possibility that we chose to first analyze the entire dataset, regardless of stuttering, rather than risk a null effect due to a low signal-to-noise ratio. Lastly, the small number of severe participants in our study, a consequence of the small overall sample size, limits our conclusions about the differential effects observed for the three severe participants. There is therefore a need to investigate the severe stuttering population more broadly.
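The trial-matching step proposed above (analyzing the 314 stuttered trials against an equal number of control trials to equate signal-to-noise ratio) can be sketched as random down-sampling of the larger set. The function name and fixed seed are illustrative choices, not the thesis's procedure:

```python
import random

def matched_subsample(trials_a, trials_b, seed=0):
    """Randomly down-sample the larger of two trial sets so that both
    groups enter the analysis with the same trial count (comparable SNR)."""
    rng = random.Random(seed)  # fixed seed keeps the subsample reproducible
    n = min(len(trials_a), len(trials_b))
    return rng.sample(trials_a, n), rng.sample(trials_b, n)

# Toy example: 10 AWS trials vs. 4 control trials -> 4 and 4.
aws_trials, ctrl_trials = matched_subsample(list(range(10)), list(range(4)))
```

Because trial count drives the SNR of averaged ERD estimates, equating it before the group contrast prevents the larger set from appearing "cleaner" for purely statistical reasons.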

REFERENCES

Agnew, Z. K., McGettigan, C., Banks, B., & Scott, S. K. (2013). Articulatory movements modulate auditory responses to speech. NeuroImage, 73, 191–9. http://doi.org/10.1016/j.neuroimage.2012.08.020

Aguirre, G. K., Zarahn, E., & D'Esposito, M. (1998). The variability of human, BOLD hemodynamic responses. NeuroImage, 8(4), 360–369.

Alegre, M., Imirizaldu, L., Valencia, M., Iriarte, J., Arcocha, J., & Artieda, J. (2006). Alpha and beta changes in cortical oscillatory activity in a go/no go randomly-delayed-response choice reaction time paradigm. Clinical Neurophysiology : Official Journal of the International Federation of Clinical Neurophysiology, 117(1), 16–25. http://doi.org/10.1016/j.clinph.2005.08.030

Alho, J., Lin, F.H., Sato, M., Tiitinen, H., Sams, M., & Jääskeläinen, I. P. (2014). Enhanced neural synchrony between left auditory and premotor cortex is associated with successful phonetic categorization. Frontiers in Psychology, 5(May), 394. http://doi.org/10.3389/fpsyg.2014.00394

Alm, P. (2004). Stuttering and the basal ganglia circuits: a critical review of possible relations. Journal of Communication Disorders, 37(4), 325–69. http://doi.org/10.1016/j.jcomdis.2004.03.001

Alm, P. (2004). Stuttering, emotions, and heart rate during anticipatory anxiety: a critical review. Journal of Fluency Disorders, 29(2), 123–33. http://doi.org/10.1016/j.jfludis.2004.02.001

Alm, P., Karlsson, R., Sundberg, M., & Axelson, H. (2013). Hemispheric lateralization of motor thresholds in relation to stuttering. PloS One, 8(10), e76824. http://doi.org/10.1371/journal.pone.0076824

Archibald, L., & De Nil, L. F. (1999). The relationship between stuttering severity and kinesthetic acuity for jaw movements in adults who stutter. Journal of Fluency Disorders, 24(1), 25–42. http://doi.org/10.1016/S0094-730X(98)00023-0

Ardila, A., Ramos, E., & Barrocas, R. (2011). Patterns of stuttering in a Spanish/English bilingual: A case report. Clinical Linguistics & Phonetics, 25(1), 23–36. http://doi.org/10.3109/02699206.2010.510918

Arenas, R. M. (2012). The role of anticipation and an adaptive monitoring system in stuttering: A theoretical and experimental investigation.

Arnstein, D., Lakey, B., Compton, R. J., & Kleinow, J. (2011). Preverbal error-monitoring in stutterers and fluent speakers. Brain and Language, 116(3), 105–15. http://doi.org/10.1016/j.bandl.2010.12.005

Arya, P. (2013). Factors related to Recovery and Relapse in Persons with Stuttering Following Treatment: A Preliminary Study. Disability, CBR & Inclusive Development, 24(1), 82–98. http://doi.org/10.5463/dcid.v24i1.189

Ashburner, J., & Friston, K. J. (2000). Voxel-based morphometry--the methods. NeuroImage, 11(6 Pt 1), 805–21. http://doi.org/10.1006/nimg.2000.0582

Assaf, Y., & Pasternak, O. (2008). Diffusion tensor imaging (DTI)-based white matter mapping in brain research: a review. Journal of Molecular Neuroscience : MN, 34(1), 51–61. http://doi.org/10.1007/s12031-007-0029-0

Avari, D. N., & Bloodstein, O. (1974). Adjacency and prediction in school-age stutterers. Journal of Speech, Language, and Hearing Research, 17(1), 33–40.

Bach, D. R., Friston, K. J., & Dolan, R. J. (2010). Analytic measures for quantification of arousal from spontaneous skin conductance fluctuations. International Journal of Psychophysiology : Official Journal of the International Organization of Psychophysiology, 76(1), 52–5. http://doi.org/10.1016/j.ijpsycho.2010.01.011

Bai, O., Mari, Z., Vorbach, S., & Hallett, M. (2005). Asymmetric spatiotemporal patterns of event-related desynchronization preceding voluntary sequential finger movements: a high-resolution EEG study. Clinical Neurophysiology : Official Journal of the International Federation of Clinical Neurophysiology, 116(5), 1213–21. http://doi.org/10.1016/j.clinph.2005.01.006

Balasubramanian, V., Cronin, K. L., & Max, L. (2010). Dysfluency levels during repeated readings, choral readings, and readings with altered auditory feedback in two cases of acquired neurogenic stuttering. Journal of Neurolinguistics, 23(5), 488–500. http://doi.org/10.1016/j.jneuroling.2009.04.004

Balota, D. A., Yap, M. J., Cortese, M. J., Hutchison, K. A., Kessler, B., Loftis, B., … Treiman, R. (2007). The English Lexicon Project. Behavior Research Methods, 39(3), 445–59. Retrieved from http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3193910&tool=pmcentrez&rendertype=abstract

Barwood, C. H. S., Murdoch, B. E., Goozee, J. V, & Riek, S. (2013). Investigating the neural basis of stuttering using transcranial magnetic stimulation: Preliminary case discussions. Speech, Language and Hearing, 16(1), 18–27. http://doi.org/10.1179/2050571X12Z.0000000001

Bauerly, K. R., & De Nil, L. F. (2011). Speech sequence skill learning in adults who stutter. Journal of Fluency Disorders, 36(4), 349–60. http://doi.org/10.1016/j.jfludis.2011.05.002

Baxter, S., Johnson, M., Blank, L., Cantrell, A., Brumfitt, S., Enderby, P., & Goyder, E. (2015). The state of the art in non-pharmacological interventions for developmental stuttering. Part 1: a systematic review of effectiveness. International Journal of Language & Communication Disorders, n/a–n/a. http://doi.org/10.1111/1460-6984.12171

Beal, D. S., Cheyne, D. O., Gracco, V. L., Quraan, M. A, Taylor, M. J., & De Nil, L. F. (2010). Auditory evoked fields to vocalization during passive listening and active generation in adults who stutter. NeuroImage, 52(4), 1645–53. http://doi.org/10.1016/j.neuroimage.2010.04.277

Beal, D. S., Gracco, V. L., Brettschneider, J., Kroll, R. M., & De Nil, L. F. (2013). A voxel-based morphometry (VBM) analysis of regional grey and white matter volume abnormalities within the speech production network of children who stutter. Cortex; a Journal Devoted to the Study of the Nervous System and Behavior, 49(8), 2151–61. http://doi.org/10.1016/j.cortex.2012.08.013

Beal, D. S., Lerch, J. P., Cameron, B., Henderson, R., Gracco, V. L., & De Nil, L. F. (2015). The trajectory of gray matter development in Broca’s area is abnormal in people who stutter. Frontiers in Human Neuroscience, 9(March), 89. http://doi.org/10.3389/fnhum.2015.00089

Beal, D. S., Quraan, M. A, Cheyne, D. O., Taylor, M. J., Gracco, V. L., & De Nil, L. F. (2011). Speech-induced suppression of evoked auditory fields in children who stutter. NeuroImage, 54(4), 2994–3003. http://doi.org/10.1016/j.neuroimage.2010.11.026

Behroozmand, R., Shebek, R., Hansen, D. R., Oya, H., Robin, D. A, Howard, M. A, & Greenlee, J. D. W. (2015). Sensory-motor networks involved in speech production and motor control: an fMRI study. NeuroImage, 109, 418–28. http://doi.org/10.1016/j.neuroimage.2015.01.040

Belyk, M., Kraft, S. J., & Brown, S. (2015). Stuttering as a trait or state - an ALE meta-analysis of neuroimaging studies. The European Journal of Neuroscience, 41(2), 275–84. http://doi.org/10.1111/ejn.12765

Bennett, I. J., Madden, D. J., Vaidya, C. J., Howard, J. H., & Howard, D. V. (2011). White matter integrity correlates of implicit sequence learning in healthy aging. Neurobiology of Aging, 32(12), 2317.e1–12. http://doi.org/10.1016/j.neurobiolaging.2010.03.017

Bernal, B., & Altman, N. (2010). The connectivity of the superior longitudinal fasciculus: a tractography DTI study. Magnetic Resonance Imaging, 28(2), 217–25. http://doi.org/10.1016/j.mri.2009.07.008

Biermann-Ruben, K., Salmelin, R., & Schnitzler, A. (2005). Right rolandic activation during speech perception in stutterers: a MEG study. NeuroImage, 25(3), 793–801. http://doi.org/10.1016/j.neuroimage.2004.11.024

Blank, S. C., Bird, H., Turkheimer, F., & Wise, R. J. S. (2003). Speech production after stroke: The role of the right pars opercularis, 310–320.

Blomgren, M. (2013). Behavioral treatments for children and adults who stutter: A review. Psychology Research and Behavior Management, 6, 9–19. http://doi.org/10.2147/PRBM.S31450

Blood, I. M., Wertz, H., Blood, G. W., Bennett, S., & Simpson, K. C. (1997). The effects of life stressors and daily stressors on stuttering. Journal of Speech, Language, and Hearing Research, 40(1), 134–143.

Bloodstein, O., & Ratner, N. (2007). A Handbook on Stuttering (6th ed.). Thomson: Delmar Learning.

Blumgart, E., Tran, Y., & Craig, A. (2010). Social anxiety disorder in adults who stutter. Depression and Anxiety, 27(7), 687–92. http://doi.org/10.1002/da.20657

Bohland, J. W., & Guenther, F. H. (2006). An fMRI investigation of syllable sequence production. NeuroImage, 32(2), 821–41. http://doi.org/10.1016/j.neuroimage.2006.04.173

Bosch, B., Arenaza-Urquijo, E. M., Rami, L., Sala-Llonch, R., Junqué, C., Solé-Padullés, C., … Bartrés-Faz, D. (2012). Multiple DTI index analysis in normal aging, amnestic MCI and AD. Relationship with neuropsychological performance. Neurobiology of Aging, 33(1), 61–74. http://doi.org/10.1016/j.neurobiolaging.2010.02.004

Bosshardt, H.G. (2002). Effects of concurrent cognitive processing on the fluency of word repetition: comparison between persons who do and do not stutter. Journal of Fluency Disorders, 27(2), 93–113; quiz 113–4. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/12145987

Bowers, A., Saltuklaroglu, T., Harkrider, A., & Cuellar, M. (2013). Suppression of the µ rhythm during speech and non-speech discrimination revealed by independent component analysis: implications for sensorimotor integration in speech processing. PloS One, 8(8), e72024. http://doi.org/10.1371/journal.pone.0072024

Bowers, A., Saltuklaroglu, T., & Kalinowski, J. (2012). Autonomic arousal in adults who stutter prior to various reading tasks intended to elicit changes in stuttering frequency. International Journal of Psychophysiology : Official Journal of the International Organization of Psychophysiology, 83(1), 45–55. http://doi.org/10.1016/j.ijpsycho.2011.09.021

van Boxtel, A. (1983). Changes in electromyogram power spectra of facial and jaw-elevator muscles during fatigue.

Braun, A. R., Varga, M., Stager, S., Schulz, G., Selbie, S., Maisog, J. M., … Ludlow, C. L. (1997). Altered patterns of cerebral activity during speech and language production in developmental stuttering. An H2(15)O positron emission tomography study. Brain: A Journal of Neurology, 120(Pt 5), 761–84. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/9183248

Brennan, J., Lignos, C., Embick, D., & Roberts, T. P. L. (2014). Spectro-temporal correlates of lexical access during auditory lexical decision. Brain and Language, 133, 39–46. http://doi.org/10.1016/j.bandl.2014.03.006

Bressler, S. L., & Kelso, J. A. S. (2001). Cortical coordination dynamics and cognition. Trends in Cognitive Sciences, 5(1), 26–36.

Brocklehurst, P. H., Lickley, R. J., & Corley, M. (2012). The influence of anticipation of word misrecognition on the likelihood of stuttering. Journal of Communication Disorders, 45(3), 147–60. http://doi.org/10.1016/j.jcomdis.2012.03.003

Brown, J., & Braver, T. (2005). Learned predictions of error likelihood in the anterior cingulate cortex. Science (New York, N.Y.), 307(5712), 1118–21. http://doi.org/10.1126/science.1105783

Brown, J., & Braver, T. (2007). Risk prediction and aversion by anterior cingulate cortex. Cognitive, Affective & Behavioral Neuroscience, 7(4), 266–77. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/18189000

Brown, P. (2003). Oscillatory nature of human basal ganglia activity: relationship to the pathophysiology of Parkinson’s disease. Movement Disorders : Official Journal of the Movement Disorder Society, 18(4), 357–63. http://doi.org/10.1002/mds.10358

Brown, S., Ingham, R. J., Ingham, J. C., Laird, A. R., & Fox, P. T. (2005). Stuttered and fluent speech production: An ALE meta-analysis of functional neuroimaging studies. Human Brain Mapping, 25(1), 105–117. http://doi.org/10.1002/hbm.20140

Brutten, G., & Janssen, P. (1979). An eye-marking investigation of anticipated and observed stuttering. Journal of Speech, Language, and Hearing Research, 22(1), 20–28.

Byrd, C. T., Vallely, M., Anderson, J. D., & Sussman, H. (2012). Nonword repetition and phoneme elision in adults who do and do not stutter. Journal of Fluency Disorders, 37(3), 188–201. http://doi.org/10.1016/j.jfludis.2012.03.003

Cai, L., Chan, J. S. Y., Yan, J. H., & Peng, K. (2014). Brain plasticity and motor practice in cognitive aging. Frontiers in Aging Neuroscience, 6(March), 31. http://doi.org/10.3389/fnagi.2014.00031

Cai, S., Beal, D. S., Ghosh, S. S., Guenther, F. H., & Perkell, J. S. (2014). Impaired timing adjustments in response to time-varying auditory perturbation during connected speech production in persons who stutter. Brain and Language, 129, 24–9. http://doi.org/10.1016/j.bandl.2014.01.002

Cai, S., Tourville, J., Beal, D., Perkell, J., Guenther, F., & Ghosh, S. (2014). Diffusion imaging of cerebral white matter in persons who stutter: evidence for network-level anomalies. Frontiers in Human Neuroscience, 8(February), 54. http://doi.org/10.3389/fnhum.2014.00054

Cardy, J. E. O., Ferrari, P., Flagg, E. J., Roberts, W., & Roberts, T. P. L. (2004). Prominence of M50 auditory evoked response over M100 in childhood and autism. NeuroReport, 15(12), 1867–1870.

Carota, F., Posada, A., Harquel, S., Delpuech, C., Bertrand, O., & Sirigu, A. (2010). Neural dynamics of the intention to speak. Cerebral Cortex (New York, N.Y. : 1991), 20(8), 1891–1897. http://doi.org/10.1093/cercor/bhp255

Catalan, M. J., Honda, M., Weeks, R. A., Cohen, L. G., & Hallett, M. (1998). The functional neuroanatomy of simple and complex sequential finger movements: a PET study. Brain : A Journal of Neurology, 121(Pt 2), 253–264.

Chait, M., Simon, J. Z., Poeppel, D., & Simon, C. A. J. Z. (2004). Auditory M50 and M100 responses to broadband noise: functional implications. Neuroreport, 15(16), 2455–8. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/15538173

Chang, S., Synnestvedt, A., Ostuni, J., & Ludlow, C. L. (2010). Similarities in speech and white matter characteristics in idiopathic developmental stuttering and adult-onset stuttering. Journal of Neurolinguistics, 23(5), 455–469. http://doi.org/10.1016/j.jneuroling.2008.11.004

Chang, S., & Zhu, D. C. (2013). Neural network connectivity differences in children who stutter. Brain : A Journal of Neurology, 136(Pt 12), 3709–26. http://doi.org/10.1093/brain/awt275

Chang, S., Zhu, D. C., Choo, A. L., & Angstadt, M. (2015). White matter neuroanatomical differences in young children who stutter. Brain : A Journal of Neurology, 138(Pt 3), 694–711. http://doi.org/10.1093/brain/awu400

Chang, S. E., Erickson, K. I., Ambrose, N. G., Hasegawa-Johnson, M. A., & Ludlow, C. L. (2008). Brain anatomy differences in childhood stuttering. NeuroImage, 39(3), 1333–44. http://doi.org/10.1016/j.neuroimage.2007.09.067

Cheng, C.H., Baillet, S., Hsiao, F.J., & Lin, Y.Y. (2015). Effects of aging on the neuromagnetic mismatch detection to speech sounds. Biological Psychology, 104, 48–55. http://doi.org/10.1016/j.biopsycho.2014.11.003

Cheyne, D. (2013). MEG studies of sensorimotor rhythms: a review. Experimental Neurology, 245, 27–39. http://doi.org/10.1016/j.expneurol.2012.08.030

Cheyne, D., Bakhtazad, L., & Gaetz, W. (2006). Spatiotemporal mapping of cortical activity accompanying voluntary movements using an event-related beamforming approach. Human Brain Mapping, 27(3), 213–29. http://doi.org/10.1002/hbm.20178

Christoffels, I. K., Formisano, E., & Schiller, N. O. (2007). Neural correlates of verbal feedback processing: an fMRI study employing overt speech. Human Brain Mapping, 28(9), 868–79. http://doi.org/10.1002/hbm.20315

Cieslak, M., Ingham, R., Ingham, J., & Grafton, S. (2015). Anomalous White Matter Morphology in Adults Who Stutter. Journal of Speech, Language, and Hearing Research, 58(2), 268–277.

Clerget, E., Badets, A., Duqué, J., & Olivier, E. (2011). Role of Broca’s area in motor sequence programming: a cTBS study. Neuroreport, 22(18), 965–9. http://doi.org/10.1097/WNR.0b013e32834d87cd

Connally, E. L., Ward, D., Howell, P., & Watkins, K. E. (2014). Disrupted white matter in language and motor tracts in developmental stuttering. Brain and Language, 131, 25–35. http://doi.org/10.1016/j.bandl.2013.05.013

Craig, A., Blumgart, E., & Tran, Y. (2009). The impact of stuttering on the quality of life in adults who stutter. Journal of Fluency Disorders, 34(2), 61–71. http://doi.org/10.1016/j.jfludis.2009.05.002

Craig, A., & Tran, Y. (2014). Trait and social anxiety in adults with chronic stuttering: Conclusions following meta-analysis. Journal of Fluency Disorders, 40, 35–43. http://doi.org/10.1016/j.jfludis.2014.01.001

Crawcour, S., Bowers, A., Harkrider, A., & Saltuklaroglu, T. (2009). Mu wave suppression during the perception of meaningless syllables: EEG evidence of motor recruitment. Neuropsychologia, 47(12), 2558–63. http://doi.org/10.1016/j.neuropsychologia.2009.05.001

Cuellar, M., Bowers, A., Harkrider, A. W., Wilson, M., & Saltuklaroglu, T. (2012). Mu suppression as an index of sensorimotor contributions to speech processing: evidence from continuous EEG signals. International Journal of Psychophysiology : Official Journal of the International Organization of Psychophysiology, 85(2), 242–8. http://doi.org/10.1016/j.ijpsycho.2012.04.003

Curio, G., Neuloh, G., Numminen, J., Jousmäki, V., & Hari, R. (2000). Speaking modifies voice-evoked activity in the human auditory cortex. Human Brain Mapping, 9(4), 183–191.

Cykowski, M. D., Fox, P. T., Ingham, R. J., Ingham, J. C., & Robin, D. A. (2010). A study of the reproducibility and etiology of diffusion anisotropy differences in developmental stuttering: a potential role for impaired myelination. NeuroImage, 52(4), 1495–504. http://doi.org/10.1016/j.neuroimage.2010.05.011

Lopes da Silva, F. H. (2006). Event-related neural activities: what about phase? Progress in Brain Research, 159, 3–17. http://doi.org/10.1016/S0079-6123(06)59001-6

Dale, A. M., Liu, A. K., Fischl, B. R., Buckner, R. L., Belliveau, J. W., Lewine, J. D., … Halgren, E. (2000). Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron, 26(1), 55–67.

Daliri, A., & Max, L. (2015). Modulation of auditory processing during speech movement planning is limited in adults who stutter. Brain and Language, 143, 59–68. http://doi.org/10.1016/j.bandl.2015.03.002

Daliri, A., Prokopenko, R. A., Flanagan, J. R., & Max, L. (2014). Control and Prediction Components of Movement Planning in Stuttering Versus Nonstuttering Adults. Journal of Speech, Language, and Hearing Research, 57(6), 2131–2141. http://doi.org/10.1044/2014

De Nil, L., & Abbs, J. (1991). Kinaesthetic acuity of stutterers and non-stutterers for oral and non-oral movements. Brain : A Journal of Neurology, 114(Pt 5), 2145–2158. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/1933239

De Nil, L., & Kroll, R. (2001). Searching for the neural basis of stuttering treatment outcome: recent neuroimaging studies. Clinical Linguistics & Phonetics, 15(1-2), 163–8. http://doi.org/10.3109/02699200109167650

De Nil, L., Kroll, R., & Houle, S. (2001). Functional neuroimaging of cerebellar activation during single word reading and verb generation in stuttering and nonstuttering adults. Neuroscience Letters, 302(2-3), 77–80. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11290391

De Nil, L., Kroll, R., Lafaille, S., & Houle, S. (2003). A positron emission tomography study of short- and long-term treatment effects on functional brain activation in adults who stutter. Journal of Fluency Disorders, 28(4), 357–380. http://doi.org/10.1016/j.jfludis.2003.07.002

Desai, R., Liebenthal, E., Possing, E. T., Waldron, E., & Binder, J. R. (2005). Volumetric vs. surface-based alignment for localization of auditory cortex activation. NeuroImage, 26(4), 1019–1029. http://doi.org/10.1016/j.neuroimage.2005.03.024

Dick, A. S., & Tremblay, P. (2012). Beyond the arcuate fasciculus: consensus and controversy in the connectional anatomy of language. Brain : A Journal of Neurology, 135(Pt 12), 3529–50. http://doi.org/10.1093/brain/aws222

Doyle, L. M. F., Kühn, A., Hariz, M., Kupsch, A., Schneider, G.H., & Brown, P. (2005). Levodopa-induced modulation of subthalamic beta oscillations during self-paced movements in patients with Parkinson’s disease. The European Journal of Neuroscience, 21(5), 1403–12. http://doi.org/10.1111/j.1460-9568.2005.03969.x

Dworzynski, K., Howell, P., & Natke, U. (2003). Predicting stuttering from linguistic factors for German speakers in two age groups. Journal of Fluency Disorders, 28(2), 95–113. http://doi.org/10.1016/S0094-730X(03)00009-3

Engel, A. K., & Fries, P. (2010). Beta-band oscillations--signalling the status quo? Current Opinion in Neurobiology, 20(2), 156–65. http://doi.org/10.1016/j.conb.2010.02.015

Erbil, N., & Ungan, P. (2007). Changes in the alpha and beta amplitudes of the central EEG during the onset, continuation, and offset of long-duration repetitive hand movements. Brain Research, 1169(2004), 44–56. http://doi.org/10.1016/j.brainres.2007.07.014

Fields, R. D. (2010). Change in the Brain’s White Matter. Science, 330(November), 768–769.

Flinker, A., Chang, E. F., Kirsch, H. E., Barbaro, N. M., Crone, N. E., & Knight, R. T. (2010). Single-trial speech suppression of auditory cortex activity in humans. The Journal of Neuroscience : The Official Journal of the Society for Neuroscience, 30(49), 16643–50. http://doi.org/10.1523/JNEUROSCI.1809-10.2010

Forster, D. C., & Webster, W. G. (2001). Speech-motor control and interhemispheric relations in recovered and persistent stuttering. Developmental Neuropsychology, 19(2), 125–45. http://doi.org/10.1207/S15326942DN1902_1

Fox, P., Ingham, R., Ingham, J., Hirsch, T., Downs, H., Martin, C., … Lancaster, J. (1996). A PET study of the neural systems of stuttering. Nature, 158–162.

Fox, P. T., Ingham, R. J., Ingham, J. C., Zamarripa, F., Xiong, J. H., & Lancaster, J. L. (2000). Brain correlates of stuttering and syllable production. A PET performance-correlation analysis. Brain : A Journal of Neurology, 123(Pt 10), 1985–2004. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11004117

Fridriksson, J., Guo, D., Fillmore, P., Holland, A., & Rorden, C. (2013). Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia. Brain : A Journal of Neurology, 136(Pt 11), 3451–60. http://doi.org/10.1093/brain/awt267

Fries, P. (2005). A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9(10), 474–80. http://doi.org/10.1016/j.tics.2005.08.011

Ganushchak, L. Y., Christoffels, I. K., & Schiller, N. O. (2011). The use of electroencephalography in language production research: a review. Frontiers in Psychology, 2(September), 208. http://doi.org/10.3389/fpsyg.2011.00208

Ganushchak, L. Y., & Schiller, N. O. (2006). Effects of time pressure on verbal self-monitoring: an ERP study. Brain Research, 1125(1), 104–15. http://doi.org/10.1016/j.brainres.2006.09.096

Ganushchak, L. Y., & Schiller, N. O. (2008). Motivation and semantic context affect brain error-monitoring activity: an event-related brain potentials study. NeuroImage, 39(1), 395–405. http://doi.org/10.1016/j.neuroimage.2007.09.001

Garcia-Barrera, M. A. (2015). Anticipation in stuttering: A theoretical model of the nature of stutter prediction. Journal of Fluency Disorders, 44, 1–15.

Gehrig, J., Wibral, M., Arnold, C., & Kell, C. A. (2012). Setting up the speech production network: how oscillations contribute to lateralized information routing. Frontiers in Psychology, 3(June), 169. http://doi.org/10.3389/fpsyg.2012.00169

Gehring, W. J., Himle, J., & Nisenson, L. G. (2000). Action-monitoring dysfunction in obsessive-compulsive disorder. Psychological Science, 11(1), 1–6.

Ghosh, S. S., Tourville, J. A., & Guenther, F. H. (2008). A neuroimaging study of premotor lateralization and cerebellar involvement in the production of phonemes and syllables. Journal of Speech, Language, and Hearing Research, 51(5), 1183–1202. http://doi.org/10.1044/1092-4388(2008/07-0119)

Giraud, A.L., Neumann, K., Bachoud-Levi, A.C., von Gudenberg, A. W., Euler, H. A., Lanfermann, H., & Preibisch, C. (2008). Severity of dysfluency correlates with basal ganglia activity in persistent developmental stuttering. Brain and Language, 104(2), 190–9. http://doi.org/10.1016/j.bandl.2007.04.005

Goncharova, I., McFarland, D., Vaughan, T., & Wolpaw, J. (2003). EMG contamination of EEG: spectral and topographical characteristics. Clinical Neurophysiology, 114(9), 1580–1593. http://doi.org/10.1016/S1388-2457(03)00093-2

Grabski, K., Lamalle, L., Vilain, C., Schwartz, J.L., Vallée, N., Tropres, I., … Sato, M. (2012). Functional MRI assessment of orofacial articulators: neural correlates of lip, jaw, larynx, and tongue movements. Human Brain Mapping, 33(10), 2306–21. http://doi.org/10.1002/hbm.21363

Graziadio, S., Nazarpour, K., Gretenkord, S., Jackson, A., & Eyre, J. A. (2015). Greater intermanual transfer in the elderly suggests age-related bilateral motor cortex activation is compensatory. Journal of Motor Behavior, 47(1), 47–55. http://doi.org/10.1080/00222895.2014.981501

Greve, D. N., Haegen, L. V., Cai, Q., Stufflebeam, S., Sabuncu, M. R., Fischl, B., & Brysbaert, M. (2011). A surface-based analysis of language lateralization and cortical asymmetry. Journal of Cognitive Neuroscience, 1477–1492. http://doi.org/10.1162/jocn

Gross, J., Baillet, S., Barnes, G. R., Henson, R. N., Hillebrand, A., Jensen, O., … Schoffelen, J. M. (2013). Good practice for conducting and reporting MEG research. NeuroImage, 65, 349–363. http://doi.org/10.1016/j.neuroimage.2012.10.001

Guenther, F. H., Ghosh, S. S., & Tourville, J. A. (2006). Neural modeling and imaging of the cortical interactions underlying syllable production. Brain and Language, 96(3), 280–301. http://doi.org/10.1016/j.bandl.2005.06.001

Guenther, F. H., Hampson, M., & Johnson, D. (1998). A theoretical investigation of reference frames for the planning of speech movements. Psychological Review, 105(4), 611–633.

Guenther, F. H., & Vladusich, T. (2012). A Neural Theory of Speech Acquisition and Production. Journal of Neurolinguistics, 25(5), 408–422. http://doi.org/10.1016/j.jneuroling.2009.08.006

Gunji, A., Hoshiyama, M., & Kakigi, R. (2001). Auditory response following vocalization: a magnetoencephalographic study. Clinical Neurophysiology, 112(3), 514–520.

Hall, E. L., Robson, S. E., Morris, P. G., & Brookes, M. J. (2014). The relationship between MEG and fMRI. NeuroImage, 102 Pt 1, 80–91. http://doi.org/10.1016/j.neuroimage.2013.11.005

Heinks-Maldonado, T. H., Mathalon, D. H., Gray, M., & Ford, J. M. (2005). Fine-tuning of auditory cortex during speech production. Psychophysiology, 42(2), 180–90. http://doi.org/10.1111/j.1469-8986.2005.00272.x

Hennessey, N. W., Dourado, E., & Beilby, J. M. (2014). Anxiety and speaking in people who stutter: An investigation using the emotional Stroop task. Journal of Fluency Disorders, 40, 44–57. http://doi.org/10.1016/j.jfludis.2013.11.001

Herdman, A. T., Pang, E. W., Ressel, V., Gaetz, W., & Cheyne, D. (2007). Task-related modulation of early cortical responses during language production: an event-related synthetic aperture magnetometry study. Cerebral Cortex (New York, N.Y. : 1991), 17(11), 2536–43. http://doi.org/10.1093/cercor/bhl159

Hermes, D., Miller, K. J., Vansteensel, M. J., Aarnoutse, E. J., Leijten, F. S. S., & Ramsey, N. F. (2012). Neurophysiologic correlates of fMRI in human motor cortex. Human Brain Mapping, 33(7), 1689–99. http://doi.org/10.1002/hbm.21314

Hickok, G. (2012). The cortical organization of speech processing: feedback control and predictive coding in the context of a dual-stream model. Journal of Communication Disorders, 45(6), 393–402. http://doi.org/10.1016/j.jcomdis.2012.06.004

Hickok, G., Erhard, P., Kassubek, J., Helms-Tillery, A. K., Naeve-Velguth, S., Strupp, J. P., … Ugurbil, K. (2000). A functional magnetic resonance imaging study of the role of left posterior superior temporal gyrus in speech production: implications for the explanation of conduction aphasia. Neuroscience Letters, 287(2), 156–160. http://doi.org/10.1016/S0304-3940(00)01143-5

Hickok, G., Houde, J., & Rong, F. (2011). Sensorimotor integration in speech processing: computational basis and neural organization. Neuron, 69(3), 407–422. http://doi.org/10.1016/j.neuron.2011.01.019

Holroyd, C. B., & Coles, M. G. H. (2002). The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109(4), 679–709. http://doi.org/10.1037//0033-295X.109.4.679

Horsfield, M. A., & Jones, D. K. (2002). Applications of diffusion-weighted and diffusion tensor MRI to white matter diseases - a review. NMR in Biomedicine, 15(7-8), 570–7. http://doi.org/10.1002/nbm.787

Houde, J. F., Nagarajan, S. S., Sekihara, K., & Merzenich, M. M. (2002). Modulation of the auditory cortex during speech: an MEG study. Journal of Cognitive Neuroscience, 14(8), 1125–38. http://doi.org/10.1162/089892902760807140

Huang, J., Carr, T. H., & Cao, Y. (2001). Comparing cortical activations for silent and overt speech using event-related fMRI. Human Brain Mapping, 15(1), 39–53.

Huang, M., Theilmann, R., Robb, A., Angeles, A., Nichols, S., Drake, A., … Lee, R. R. (2009). Integrated imaging approach with MEG and DTI to detect mild traumatic brain injury in military and civilian patients. Journal of Neurotrauma, 26(8), 1213–1226. http://doi.org/10.1089/neu.2008.0672

Hughes, S., Gabel, R., Irani, F., & Schlagheck, A. (2010). University students’ perceptions of the life effects of stuttering. Journal of Communication Disorders, 43(1), 45–60. http://doi.org/10.1016/j.jcomdis.2009.09.002

Hulstijn, W., Summers, J. J., Van Lieshout, P., & Peters, H. F. M. (1992). Timing in finger tapping and speech: A comparison between stutterers and fluent speakers. Human Movement Science, 11, 113–124.

Hutton, C., Draganski, B., Ashburner, J., & Weiskopf, N. (2009). A comparison between voxel-based cortical thickness and voxel-based morphometry in normal aging. NeuroImage, 48(2), 371–380. http://doi.org/10.1016/j.neuroimage.2009.06.043

Indefrey, P. (2011). The spatial and temporal signatures of word production components: a critical update. Frontiers in Psychology, 2(October), 255. http://doi.org/10.3389/fpsyg.2011.00255

Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101–44. http://doi.org/10.1016/j.cognition.2002.06.001

Ingham, R., Fox, P., Ingham, C., & Zamarripa, F. (2000). Is overt stuttered speech a prerequisite for the neural activations associated with chronic developmental stuttering? Brain and Language, 75(2), 163–94. http://doi.org/10.1006/brln.2000.2351

Ingham, R., Grafton, S., Bothe, A., & Ingham, J. (2012). Brain activity in adults who stutter: similarities across speaking tasks and correlations with stuttering frequency and speaking rate. Brain and Language, 122(1), 11–24. http://doi.org/10.1016/j.bandl.2012.04.002

Iverach, L., & Rapee, R. M. (2013). Social anxiety disorder and stuttering: Current status and future directions. Journal of Fluency Disorders, 40, 69–82. http://doi.org/10.1016/j.jfludis.2013.08.003

Jackson, E., Yaruss, J., Quesal, R., Terranova, V., & Whalen, D. (2015). Responses of adults who stutter to the anticipation of stuttering. Journal of Fluency Disorders.

Jayaram, M. (1984). Distribution of stuttering in sentences: Relationship to sentence length and clause position. Journal of Speech, Language, and Hearing Research, 27(3), 338–341.

Jenson, D., Thornton, D., Saltuklaroglu, T., & Harkrider, A. (2014). Speech perception, production, and the sensorimotor mu rhythm. Proceedings of the 2014 Biomedical Sciences and Engineering Conference, 1–4. http://doi.org/10.1109/BSEC.2014.6867736

Jiang, J., Lu, C., Peng, D., Zhu, C., & Howell, P. (2012). Classification of types of stuttering symptoms based on brain activity. PloS One, 7(6), e39747. http://doi.org/10.1371/journal.pone.0039747

Joos, K., De Ridder, D., Boey, R. A., & Vanneste, S. (2014). Functional connectivity changes in adults with developmental stuttering: a preliminary study using quantitative electroencephalography. Frontiers in Human Neuroscience, 8(October), 783. http://doi.org/10.3389/fnhum.2014.00783

Jurkiewicz, M. T., Gaetz, W. C., Bostan, A. C., & Cheyne, D. (2006). Post-movement beta rebound is generated in motor cortex: Evidence from neuromagnetic recordings. NeuroImage, 32(3), 1281–1289. http://doi.org/10.1016/j.neuroimage.2006.06.005

Kalveram, K. T. (2001). Neurobiology of speaking and stuttering. In Fluency Disorders: Theory, Research, Treatment and Self-help (pp. 59–65).

Kell, C. A, Neumann, K., von Kriegstein, K., Posenenske, C., von Gudenberg, A. W., Euler, H., & Giraud, A.-L. (2009). How the brain repairs stuttering. Brain : A Journal of Neurology, 132(Pt 10), 2747–60. http://doi.org/10.1093/brain/awp185

Kell, C., Morillon, B., Kouneiher, F., & Giraud, A.L. (2011). Lateralization of speech production starts in sensory cortices--a possible sensory origin of cerebral left dominance for speech. Cerebral Cortex (New York, N.Y. : 1991), 21(4), 932–7. http://doi.org/10.1093/cercor/bhq167

Logan, K. J., & Conture, E. G. (1995). Length, grammatical complexity, and rate differences in stuttered and fluent conversational utterances of children who stutter. Journal of Fluency Disorders, 20(1), 35–61.

Keuleers, E., Diependaele, K., & Brysbaert, M. (2010). Practice effects in large-scale visual word recognition studies: a lexical decision study on 14,000 dutch mono- and disyllabic words and nonwords. Frontiers in Psychology, 1(November), 174. http://doi.org/10.3389/fpsyg.2010.00174

Kilavik, B. E., Zaepffel, M., Brovelli, A., MacKay, W. A., & Riehle, A. (2013). The ups and downs of β oscillations in sensorimotor cortex. Experimental Neurology, 245, 15–26. http://doi.org/10.1016/j.expneurol.2012.09.014

Kilner, J., Bott, L., & Posada, A. (2005). Modulations in the degree of synchronization during ongoing oscillatory activity in the human brain. The European Journal of Neuroscience, 21(9), 2547–54. http://doi.org/10.1111/j.1460-9568.2005.04069.x

Klein, J., & Hood, S. (2004). The impact of stuttering on employment opportunities and job performance. Journal of Fluency Disorders, 29(4), 255–73. http://doi.org/10.1016/j.jfludis.2004.08.001

Kleinow, J., & Smith, A. (2000). Influences of length and syntactic complexity on the speech motor stability of the fluent speech of adults who stutter. Journal of Speech, Language, and Hearing Research, 43(2), 548–559.

Klimesch, W. (2012). Alpha-band oscillations, attention, and controlled access to stored information. Trends in Cognitive Sciences, 16(12), 606–617. http://doi.org/10.1016/j.tics.2012.10.007

Klimesch, W., Sauseng, P., & Hanslmayr, S. (2007). EEG alpha oscillations: the inhibition-timing hypothesis. Brain Research Reviews, 53(1), 63–88. http://doi.org/10.1016/j.brainresrev.2006.06.003

Klingberg, T., Hedehus, M., Temple, E., Salz, T., Gabrieli, J. D. E., Moseley, M. E., & Poldrack, R. A. (2000). Microstructure of temporo-parietal white matter as a basis for reading ability: Evidence from diffusion tensor magnetic resonance imaging. Neuron, 25(2), 493–500.

Kopčo, N., Huang, S., Belliveau, J. W., Raij, T., Tengshe, C., & Ahveninen, J. (2012). Neuronal representations of distance in human auditory cortex. Proceedings of the National Academy of Sciences of the United States of America, 109(27), 11019–24. http://doi.org/10.1073/pnas.1119496109

Krings, T., Töpper, R., Foltys, H., Erberich, S., Sparing, R., Willmes, K., & Thron, A. (2000). Cortical activation patterns during complex motor tasks in piano players and control subjects. A functional magnetic resonance imaging study. Neuroscience Letters, 278(3), 189–193. http://doi.org/10.1016/S0304-3940(99)00930-1

Van Lieshout, P., Ben-David, B., Lipski, M., & Namasivayam, A. (2014). The impact of threat and cognitive stress on speech motor control in people who stutter. Journal of Fluency Disorders, 40, 93–109. http://doi.org/10.1016/j.jfludis.2014.02.003

Liljeström, M., Kujala, J., Stevenson, C., & Salmelin, R. (2014). Dynamic reconfiguration of the language network preceding onset of speech in picture naming. Human Brain Mapping. http://doi.org/10.1002/hbm.22697

Logan, G. D. (1985). Skill and automaticity: Relations, implications, and future directions. Canadian Journal of Psychology/Revue Canadienne de Psychologie, 39(2), 367–386. http://doi.org/10.1037/h0080066

López-Barroso, D., Catani, M., Ripollés, P., Dell’Acqua, F., Rodríguez-Fornells, A., & de Diego-Balaguer, R. (2013). Word learning is mediated by the left arcuate fasciculus. Proceedings of the National Academy of Sciences of the United States of America, 110(32), 13168–73. http://doi.org/10.1073/pnas.1301696110

Loucks, T., & De Nil, L. (2006). Anomalous sensorimotor integration in adults who stutter: a tendon vibration study. Neuroscience Letters, 402(1-2), 195–200. http://doi.org/10.1016/j.neulet.2006.04.002

Loucks, T., & De Nil, L. (2012). Oral Sensorimotor Integration in Adults Who Stutter. Folia Phoniatrica et Logopaedica : Official Organ of the International Association of Logopedics and Phoniatrics (IALP), 64(3), 116–121. http://doi.org/10.1159/000338248

Loucks, T., De Nil, L., & Sasisekaran, J. (2007). Jaw-phonatory coordination in chronic developmental stuttering. Journal of Communication Disorders, 40(3), 257–72. http://doi.org/10.1016/j.jcomdis.2006.06.016

Loucks, T. M. J., & De Nil, L. F. (2006). Oral kinesthetic deficit in adults who stutter: a target-accuracy study. Journal of Motor Behavior, 38(3), 238–46. http://doi.org/10.3200/JMBR.38.3.238-247

Lu, C., Chen, C., Peng, D., You, W., Zhang, X., Ding, G., … Howell, P. (2012). Neural anomaly and reorganization in speakers who stutter: a short-term intervention study. Neurology, 79(7), 625–32. http://doi.org/10.1212/WNL.0b013e31826356d2

Ludlow, C. L., & Loucks, T. (2003). Stuttering: a dynamic motor control disorder. Journal of Fluency Disorders, 28(4), 273–295. http://doi.org/10.1016/j.jfludis.2003.07.001

Månsson, H. (2000). Childhood stuttering: Incidence and development. Journal of Fluency Disorders, 25(1), 47–57. http://doi.org/10.1016/S0094-730X(99)00023-6

Martikainen, M. H., Kaneko, K., & Hari, R. (2005). Suppressed responses to self-triggered sounds in the human auditory cortex. Cerebral Cortex (New York, N.Y. : 1991), 15(3), 299– 302. http://doi.org/10.1093/cercor/bhh131

Masaki, H., Tanaka, H., Takasawa, N., & Yamazaki, K. (2001). Error-related brain potentials elicited by vocal errors. Neuroreport, 12(9), 1851–5. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11435911

Mattay, V. S., Fera, F., Tessitore, A., Hariri, A. R., Das, S., Callicott, J. H., & Weinberger, D. R. (2002). Neurophysiological correlates of age-related changes in human motor function. Neurology, 58(4), 630–635. http://doi.org/10.1212/WNL.58.4.630

Max, L., & Baldwin, C. J. (2010). The role of motor learning in stuttering adaptation: Repeated versus novel utterances in a practice-retention paradigm. Journal of Fluency Disorders, 35(1), 33–43. http://doi.org/10.1016/j.jfludis.2009.12.003

Max, L., Caruso, A., & Gracco, V. (2003). Kinematic analyses of speech, orofacial nonspeech, and finger movements in stuttering and nonstuttering adults. Journal of Speech, Language, and Hearing Research, 46(1), 215–232.

Max, L., & Gracco, V. (2005). Coordination of oral and laryngeal movements in the perceptually fluent speech of adults who stutter. Journal of Speech, Language, and Hearing Research, 48(3), 524–542.

Max, L., Guenther, F., Gracco, V., Ghosh, S., & Wallace, M. (2004). Unstable or insufficiently activated internal models and feedback-biased motor control as sources of dysfluency: A theoretical model of stuttering. Contemporary Issues in Communication Science and Disorders, 31, 105–122.

Max, L., & Yudman, E. (2003). Accuracy and variability of isochronous rhythmic timing across motor systems in stuttering versus nonstuttering individuals. Journal of Speech, Language, and Hearing Research, 46(1), 146–163.

McClean, M., Goldsmith, H., & Cerf, A. (1984). Lower-lip EMG and displacement during bilabial disfluencies in adult stutterers. Journal of Speech, Language, and Hearing Research, 27(3), 342–349.

Meltzer, J. A., Wagage, S., Ryder, J., Solomon, B., & Braun, A. R. (2013). Adaptive significance of right hemisphere activation in aphasic language comprehension. Neuropsychologia, 51(7), 1248–1259. http://doi.org/10.1016/j.neuropsychologia.2013.03.007

Metten, C., Bosshardt, H.G., Jones, M., Eisenhuth, J., Block, S., Carey, B., … Menzies, R. (2011). Dual tasking and stuttering: from the laboratory to the clinic. Disability and Rehabilitation, 33(11), 933–44. http://doi.org/10.3109/09638288.2010.515701

Mier, W., & Mier, D. (2015). Advantages in functional imaging of the brain. Frontiers in Human Neuroscience, 9(May), 249. http://doi.org/10.3389/fnhum.2015.00249

Mock, J. R., Foundas, A. L., & Golob, E. J. (2011). Modulation of sensory and motor cortex activity during speech preparation. The European Journal of Neuroscience, 33(5), 1001–11. http://doi.org/10.1111/j.1460-9568.2010.07585.x

Möller, J., Jansma, B. M., Rodriguez-Fornells, A., & Münte, T. F. (2007). What the brain does before the tongue slips. Cerebral Cortex (New York, N.Y. : 1991), 17(5), 1173–8. http://doi.org/10.1093/cercor/bhl028

Möttönen, R., Dutton, R., & Watkins, K. E. (2013). Auditory-motor processing of speech sounds. Cerebral Cortex (New York, N.Y. : 1991), 23(5), 1190–7. http://doi.org/10.1093/cercor/bhs110

Naccarato, M., Calautti, C., Jones, P. S., Day, D. J., Carpenter, T. A., & Baron, J.-C. (2006). Does healthy aging affect the hemispheric activation balance during paced index-to-thumb opposition task? An fMRI study. NeuroImage, 32(3), 1250–6. http://doi.org/10.1016/j.neuroimage.2006.05.003

Nakayashiki, K., Saeki, M., Takata, Y., Hayashi, Y., & Kondo, T. (2014). Modulation of event-related desynchronization during kinematic and kinetic hand movements. Journal of NeuroEngineering and Rehabilitation, 11, 90, 1–9.

Namasivayam, A. K., & van Lieshout, P. (2008). Investigating speech motor practice and learning in people who stutter. Journal of Fluency Disorders, 33(1), 32–51. http://doi.org/10.1016/j.jfludis.2007.11.005

Namasivayam, A. K., & van Lieshout, P. (2011). Speech motor skill and stuttering. Journal of Motor Behavior, 43(6), 477–89. http://doi.org/10.1080/00222895.2011.628347

Neef, N. E., Hoang, T. N. L., Neef, A., Paulus, W., & Sommer, M. (2015). Speech dynamics are coded in the left motor cortex in fluent speakers but not in adults who stutter. Brain : A Journal of Neurology, 138(Pt 3), 712–25. http://doi.org/10.1093/brain/awu390

Neef, N. E., Jung, K., Rothkegel, H., Pollok, B., von Gudenberg, A. W., Paulus, W., & Sommer, M. (2011). Right-shift for non-speech motor processing in adults who stutter. Cortex; a Journal Devoted to the Study of the Nervous System and Behavior, 47(8), 945–54. http://doi.org/10.1016/j.cortex.2010.06.007

Neelley, J., & Timmons, R. (1967). Adaptation and consistency in the disfluent speech behavior of young stutterers and nonstutterers. Journal of Speech, Language, and Hearing Research, 10(2), 250–256.

Neumann, K., Euler, H. A., von Gudenberg, A. W., Giraud, A.-L., Lanfermann, H., Gall, V., & Preibisch, C. (2003). The nature and treatment of stuttering as revealed by fMRI. Journal of Fluency Disorders, 28(4), 381–410. http://doi.org/10.1016/j.jfludis.2003.07.003

Neumann, K., Preibisch, C., Euler, H. A., von Gudenberg, A. W., Lanfermann, H., Gall, V., & Giraud, A.-L. (2005). Cortical plasticity associated with stuttering therapy. Journal of Fluency Disorders, 30(1), 23–39. http://doi.org/10.1016/j.jfludis.2004.12.002

Nichols, T. E., & Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: a primer with examples. Human Brain Mapping, 15(1), 1–25. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11747097

Ocklenburg, S., Hugdahl, K., & Westerhausen, R. (n.d.). Structural white matter asymmetries in relation to functional asymmetries during speech perception and production. NeuroImage.

Okada, K., Smith, K. R., Humphries, C., & Hickok, G. (2003). Word length modulates neural activity in auditory cortex during covert object naming, 14(18). http://doi.org/10.1097/01.wnr.0000094104.16607

Oldfield, R. C. (1971). The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia, 9(1), 97–113. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/5146491

Oram Cardy, J. E., Flagg, E. J., Roberts, W., & Roberts, T. P. L. (2008). Auditory evoked fields predict language ability and impairment in children. International Journal of Psychophysiology, 68(2), 170–5. http://doi.org/10.1016/j.ijpsycho.2007.10.015

den Ouden, D. B., Adams, C., & Montgomery, A. (2014). Simulating the neural correlates of stuttering. Neurocase, 20(4), 434–45. http://doi.org/10.1080/13554794.2013.791863

Panizzon, M. S., Fennema-Notestine, C., Eyler, L. T., Jernigan, T. L., Prom-Wormley, E., Neale, M., … Kremen, W. S. (2009). Distinct genetic influences on cortical surface area and cortical thickness. Cerebral Cortex, 19(11), 2728–35. http://doi.org/10.1093/cercor/bhp026

Papoutsi, M., de Zwart, J. A., Jansma, J. M., Pickering, M. J., Bednar, J. A., & Horwitz, B. (2009). From phonemes to articulatory codes: an fMRI study of the role of Broca’s area in speech production. Cerebral Cortex, 19(9), 2156–65. http://doi.org/10.1093/cercor/bhn239

Pastötter, B., Berchtold, F., & Bäuml, K.-H. T. (2012). Oscillatory correlates of controlled speed-accuracy tradeoff in a response-conflict task. Human Brain Mapping, 33(8), 1834–49. http://doi.org/10.1002/hbm.21322

Paulin, M. G. (1993). The role of the cerebellum in motor control and perception. Brain, Behavior and Evolution, 41(1), 39–50. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/8431754

Paus, T., Koski, L., Caramanos, Z., & Westbury, C. (1998). Regional differences in the effects of task difficulty and motor output on blood flow response in the human anterior cingulate cortex: a review of 107 PET activation studies. NeuroReport, 9(9), R37–R47.

Paus, T., Perry, D., Zatorre, R., Worsley, K., & Evans, A. (1996). Modulation of cerebral blood flow in the human auditory cortex during speech: Role of motor-to-sensory discharges, 8(June), 2236–2246.

Perani, D., Cappa, S., Tettamanti, M., Rosa, M., Scifo, P., Miozzo, A., … Fazio, F. (2003). A fMRI study of word retrieval in aphasia. Brain and Language, 85(3), 357–368. http://doi.org/10.1016/S0093-934X(02)00561-8

Peters, H., Hulstijn, W., & Starkweather, C. (1989). Acoustic and physiological reaction times of stutterers and nonstutterers. Journal of Speech, Language, and Hearing Research, 32(3), 668–680.

Petrides, M., & Pandya, D. N. (2009). Distinct parietal and temporal pathways to the homologues of Broca’s area in the monkey. PLoS Biology, 7(8), e1000170. http://doi.org/10.1371/journal.pbio.1000170

Pfurtscheller, G., & Lopes da Silva, F. H. (1999). Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, 110(11), 1842–1857. http://doi.org/10.1016/S1388-2457(99)00141-8

Plummer, P., Perea, M., & Rayner, K. (2014). The influence of contextual diversity on eye movements in reading. Journal of Experimental Psychology. Learning, Memory, and Cognition, 40(1), 275–83. http://doi.org/10.1037/a0034058


Poeppel, D., Emmorey, K., Hickok, G., & Pylkkänen, L. (2012). Towards a new neurobiology of language. The Journal of Neuroscience, 32(41), 14125–31. http://doi.org/10.1523/JNEUROSCI.3244-12.2012

Postma, A. (2000). Detection of errors during speech production: a review of speech monitoring models. Cognition, 77(2), 97–132. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10986364

Preibisch, C., Neumann, K., Raab, P., Euler, H. A., von Gudenberg, A. W., Lanfermann, H., & Giraud, A.-L. (2003). Evidence for compensation for stuttering by the right frontal operculum. NeuroImage, 20(2), 1356–64. http://doi.org/10.1016/S1053-8119(03)00376-8

Price, C. (2010). The anatomy of language: a review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences, 1191, 62–88. http://doi.org/10.1111/j.1749-6632.2010.05444.x

Price, C., Wise, R., Warburton, E. A., Moore, C. J., Howard, D., Patterson, K., & Friston, K. J. (1996). Hearing and saying. The functional neuro-anatomy of auditory word processing. Brain, 119(3), 919–931. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/8673502

Rademacher, J., Morosan, P., Schormann, T., Schleicher, A., Werner, C., Freund, H. J., & Zilles, K. (2001). Probabilistic mapping and volume measurement of human primary auditory cortex. NeuroImage, 13(4), 669–83. http://doi.org/10.1006/nimg.2000.0714

Riecker, A., Ackermann, H., Wildgruber, D., Dogil, G., & Grodd, W. (2000). Opposite hemispheric lateralization effects during speaking and singing at motor cortex, insula and cerebellum, 11(9), 1997–2000.

Ries, M. L., Boop, F. A., Griebel, M. L., Zou, P., Phillips, N. S., Johnson, S. C., … Ogg, R. J. (2004). Functional MRI and Wada determination of language lateralization: A case of crossed dominance, 45(1), 85–89.

Riès, S., Janssen, N., Dufau, S., Alario, F., & Burle, B. (2011). General-purpose monitoring during speech production. Journal of Cognitive Neuroscience, 23(6), 1419–1436.

Riley, G. (1972). A stuttering severity instrument for children and adults. Journal of Speech and Hearing Disorders, 37(3), 314–322.

Riley, G., & Bakker, K. (2009). Stuttering Severity Instrument: SSI-4 (4th ed.). Austin: Pro-Ed.

Ritter, P., Moosmann, M., & Villringer, A. (2009). Rolandic alpha and beta EEG rhythms’ strengths are inversely related to fMRI-BOLD signal in primary somatosensory and motor cortex. Human Brain Mapping, 30(4), 1168–87. http://doi.org/10.1002/hbm.20585


Robinson, S. (1999). Functional neuroimaging by synthetic aperture magnetometry (SAM). Recent Advances in Biomagnetism, 302–305.

Saarinen, T., Laaksonen, H., Parviainen, T., & Salmelin, R. (2006). Motor cortex dynamics in visuomotor production of speech and non-speech mouth movements. Cerebral Cortex, 16(2), 212–22. http://doi.org/10.1093/cercor/bhi099

Sabbah, P., Chassoux, F., Leveque, C., Landre, E., Baudoin-Chial, S., Devaux, B., … Cordoliani, Y. (2003). Functional MR imaging in assessment of language dominance in epileptic patients. NeuroImage, 18(2), 460–467. http://doi.org/10.1016/S1053-8119(03)00025-9

Salmelin, R. (2007). Clinical neurophysiology of language: the MEG approach. Clinical Neurophysiology, 118(2), 237–54. http://doi.org/10.1016/j.clinph.2006.07.316

Salmelin, R., & Sams, M. (2002). Motor cortex involvement during verbal versus non-verbal lip and tongue movements. Human Brain Mapping, 16(2), 81–91. http://doi.org/10.1002/hbm.10031

Salmelin, R., Schnitzler, A., Schmitz, F., & Freund, H. (2000). Single word reading in developmental stutterers and fluent speakers. Brain, 123(Pt 6), 1184–1202. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10825357

Saporta, A. S. D., Kumar, A., Govindan, R. M., Sundaram, S. K., & Chugani, H. T. (2011). Arcuate fasciculus and speech in congenital bilateral perisylvian syndrome. Pediatric Neurology, 44(4), 270–4. http://doi.org/10.1016/j.pediatrneurol.2010.11.006

Sasisekaran, J., & Weisberg, S. (2014). Practice and retention of nonwords in adults who stutter. Journal of Fluency Disorders, 41, 55–71. http://doi.org/10.1016/j.jfludis.2014.02.004

Seyal, M., Mull, B., Bhullar, N., Ahmad, T., & Gage, B. (1999). Anticipation and execution of a simple reading task enhance corticospinal excitability. Clinical Neurophysiology, 110(3), 424–429. http://doi.org/10.1016/S1388-2457(98)00019-4

Shibasaki, H., Sadato, N., Lyshkow, H., Yonekura, Y., Honda, M., Nagamine, T., … Konishi, J. (1993). Both primary motor cortex and supplementary motor area play an important role in complex finger movement. Brain, 116(6), 1387–1398. http://doi.org/10.1093/brain/116.6.1387

Shum, M., Shiller, D. M., Baum, S. R., & Gracco, V. L. (2011). Sensorimotor integration for speech motor learning involves the inferior parietal cortex. The European Journal of Neuroscience, 34(11), 1817–22. http://doi.org/10.1111/j.1460-9568.2011.07889.x


Shuster, L. I., & Lemieux, S. K. (2005). An fMRI investigation of covertly and overtly produced mono- and multisyllabic words. Brain and Language, 93(1), 20–31. http://doi.org/10.1016/j.bandl.2004.07.007

Simmonds, A., Leech, R., Collins, C., Redjep, O., & Wise, R. (2014). Sensory-motor integration during speech production localizes to both left and right plana temporale. The Journal of Neuroscience, 34(39), 12963–72. http://doi.org/10.1523/JNEUROSCI.0336-14.2014

Simon, D. A., Lewis, G., & Marantz, A. (2012). Disambiguating form and lexical frequency effects in MEG responses using homonyms. Language and Cognitive Processes, 27(2), 275–287. http://doi.org/10.1080/01690965.2011.607712

Singh, K. D., Barnes, G. R., & Hillebrand, A. (2003). Group imaging of task-related changes in cortical synchronisation using nonparametric permutation testing. NeuroImage, 19(4), 1589–1601. http://doi.org/10.1016/S1053-8119(03)00249-0

Smith, A., Sadagopan, N., Walsh, B., & Weber-Fox, C. (2010). Increasing phonological complexity reveals heightened instability in inter-articulatory coordination in adults who stutter. Journal of Fluency Disorders, 35(1), 1–18. http://doi.org/10.1016/j.jfludis.2009.12.001

Smits-Bandstra, S., & De Nil, L. (2009). Speech skill learning of persons who stutter and fluent speakers under single and dual task conditions. Clinical Linguistics & Phonetics, 23(1), 38–57. http://doi.org/10.1080/02699200802394914

Smits-Bandstra, S., & De Nil, L. F. (2007). Sequence skill learning in persons who stutter: implications for cortico-striato-thalamo-cortical dysfunction. Journal of Fluency Disorders, 32(4), 251–78. http://doi.org/10.1016/j.jfludis.2007.06.001

Smits-Bandstra, S., & De Nil, L. F. (2013). Early-stage chunking of finger tapping sequences by persons who stutter and fluent speakers. Clinical Linguistics & Phonetics, 27(1), 72–84. http://doi.org/10.3109/02699206.2012.746397

Smits-Bandstra, S., De Nil, L., & Rochon, E. (2006). The transition to increased automaticity during finger sequence learning in adult males who stutter. Journal of Fluency Disorders, 31(1), 22–42. http://doi.org/10.1016/j.jfludis.2005.11.004

Sommer, M., Knappmeyer, K., Hunter, E. J., von Gudenberg, A. W., Spindler, N., & Paulus, W. (2002). Normal interhemispheric inhibition in persistent developmental stuttering.


Sowman, P. F., Crain, S., Harrison, E., & Johnson, B. W. (2012). Reduced activation of left orbitofrontal cortex precedes blocked vocalization: a magnetoencephalographic study. Journal of Fluency Disorders, 37(4), 359–65. http://doi.org/10.1016/j.jfludis.2012.05.001

Spielberger, C., & Gorsuch, R. (1983). State-Trait Anxiety Inventory for Adults: Manual and Sample: Manual, Instrument and Scoring Guide. Palo Alto, CA: Consulting Psychologists Press.

Stager, S. V., Jeffries, K. J., & Braun, A. R. (2003). Common features of fluency-evoking conditions studied in stuttering subjects and controls: a PET study. Journal of Fluency Disorders, 28(4), 319–336. http://doi.org/10.1016/j.jfludis.2003.08.004

Stančák, A., Riml, A., & Pfurtscheller, G. (1997). The effects of external load on movement-related changes of the sensorimotor EEG rhythms. Electroencephalography and Clinical Neurophysiology, 102(6), 495–504.

Suresh, R., Ambrose, N., Roe, C., Pluzhnikov, A., Wittke-Thompson, J. K., Ng, M. C., … Cox, N. J. (2006). New complexities in the genetics of stuttering: Significant sex-specific linkage signals, 78(April), 554–563.

Takai, O., Brown, S., & Liotti, M. (2010). Representation of the speech effectors in the human motor cortex: somatotopy or overlap? Brain and Language, 113(1), 39–44. http://doi.org/10.1016/j.bandl.2010.01.008

Tan, H., Maldjian, J. A., Pollock, J. M., Burdette, J. H., Yang, L. Y., Deibler, A. R., … (2012). Journal of Magnetic Resonance Imaging, 29(5), 1134–1139. http://doi.org/10.1002/jmri.21721

Tanji, J., & Evarts, E. V. (1976). Anticipatory activity of motor cortex neurons in relation to direction of an intended movement. Journal of Neurophysiology, 39(5), 1062–1068.

Terband, H., Maassen, B., Guenther, F. H., & Brumberg, J. (2014). Auditory-motor interactions in pediatric motor speech disorders: neurocomputational modeling of disordered development. Journal of Communication Disorders, 47, 17–33. http://doi.org/10.1016/j.jcomdis.2014.01.001

Thach, W. T., Goodkin, H. P., & Keating, J. G. (1992). The cerebellum and the adaptive coordination of movement. Annual Review of Neuroscience, 15, 403–42. http://doi.org/10.1146/annurev.ne.15.030192.002155

Tornick, G. B., & Bloodstein, O. (1976). Stuttering and sentence length. Journal of Speech, Language, and Hearing Research, 19(4), 651–654.

Tourville, J. A., Reilly, K. J., & Guenther, F. H. (2008). Neural mechanisms underlying auditory feedback control of speech. NeuroImage, 39(3), 1429–43. http://doi.org/10.1016/j.neuroimage.2007.09.054

Toyomura, A., Fujii, T., & Kuriki, S. (2011). Effect of external auditory pacing on the neural activity of stuttering speakers. NeuroImage, 57(4), 1507–16. http://doi.org/10.1016/j.neuroimage.2011.05.039

Tremblay, P., & Small, S. L. (2011). On the context-dependent nature of the contribution of the ventral premotor cortex to speech perception. NeuroImage, 57(4), 1561–71. http://doi.org/10.1016/j.neuroimage.2011.05.067

Tzagarakis, C., Ince, N. F., Leuthold, A. C., & Pellizzer, G. (2010). Beta-band activity during motor planning reflects response uncertainty. The Journal of Neuroscience, 30(34), 11270–7. http://doi.org/10.1523/JNEUROSCI.6026-09.2010

Ullsperger, M. (2006). Performance monitoring in neurological and psychiatric patients. International Journal of Psychophysiology, 59(1), 59–69. http://doi.org/10.1016/j.ijpsycho.2005.06.010

Uludağ, K., Dubowitz, D. J., Yoder, E. J., Restom, K., Liu, T. T., & Buxton, R. B. (2004). Coupling of cerebral blood flow and oxygen consumption during physiological activation and deactivation measured with fMRI. NeuroImage, 23(1), 148–55. http://doi.org/10.1016/j.neuroimage.2004.05.013

Van Geemen, K., Herbet, G., Moritz-Gasser, S., & Duffau, H. (2014). Limited plastic potential of the left ventral premotor cortex in speech articulation: Evidence from intraoperative awake mapping in glioma patients. Human Brain Mapping, 35(4), 1587–1596. http://doi.org/10.1002/hbm.22275

Van Lieshout, P., Hulstijn, W., & Peters, F. (1996). Speech production in people who stutter: Testing the motor plan assembly hypothesis. Journal of Speech, Language, and Hearing Research, 39(1), 76–92.

Van Lieshout, P., Hulstijn, W., & Peters, F. (2004). Speech motor control in normal and disordered speech.

Ventura, M. I., Nagarajan, S. S., & Houde, J. F. (2009). Speech target modulates speaking induced suppression in auditory cortex. BMC Neuroscience, 10, 58. http://doi.org/10.1186/1471-2202-10-58

Verstynen, T., Diedrichsen, J., Albert, N., Aparicio, P., & Ivry, R. B. (2005). Ipsilateral motor cortex activity during unimanual hand movements relates to task complexity. Journal of Neurophysiology, 93(3), 1209–22. http://doi.org/10.1152/jn.00720.2004


Wasserthal, C., Brechmann, A., Stadler, J., Fischl, B., & Engel, K. (2014). Localizing the human primary auditory cortex in vivo using structural MRI. NeuroImage, 93 Pt 2, 237–51. http://doi.org/10.1016/j.neuroimage.2013.07.046

Watkins, K. E., Strafella, A. P., & Paus, T. (2003). Seeing and hearing speech excites the motor system involved in speech production. Neuropsychologia, 41(8), 989–994. http://doi.org/10.1016/S0028-3932(02)00316-0

Watkins, K., Smith, S., Davis, S., & Howell, P. (2008). Structural and functional abnormalities of the motor system in developmental stuttering. Brain, 131(Pt 1), 50–9. http://doi.org/10.1093/brain/awm241

Wechsler, D. (1997). WAIS-III: Administration and scoring manual: Wechsler adult intelligence scale.

Weeks, R., Horwitz, B., Aziz-Sultan, A., Tian, B., Wessinger, C. M., Cohen, L. G., … Rauschecker, J. P. (2000). A positron emission tomographic study of auditory localization in the congenitally blind, 20(7), 2664–2672.

Wildgruber, D., Ackermann, H., & Grodd, W. (2001). Differential contributions of motor cortex, basal ganglia, and cerebellum to speech motor control: effects of syllable repetition rate evaluated by fMRI. NeuroImage, 13(1), 101–9. http://doi.org/10.1006/nimg.2000.0672

Williams, D., Silverman, F., & Kools, J. (1969). Disfluency behavior of elementary-school stutterers and nonstutterers: The consistency effect. Journal of Speech, Language, and Hearing Research, 12(2), 301–307.

Wilson, S. M., Saygin, A. P., Sereno, M. I., & Iacoboni, M. (2004). Listening to speech activates motor areas involved in speech production. Nature Neuroscience, 7(7), 701–2. http://doi.org/10.1038/nn1263

Wise, R. J. S., Greene, J., Büchel, C., & Scott, S. K. (1999). Brain regions involved in articulation. The Lancet, 353, 1057–1061.

Wu, T., & Hallett, M. (2005). The influence of normal human ageing on automatic movements. The Journal of Physiology, 562(Pt 2), 605–15. http://doi.org/10.1113/jphysiol.2004.076042

Wymbs, N. F., Ingham, R. J., Ingham, J. C., Paolini, K. E., & Grafton, S. T. (2013). Individual differences in neural regions functionally related to real and imagined stuttering. Brain and Language, 124(2), 153–64. http://doi.org/10.1016/j.bandl.2012.11.013

Yairi, E., & Ambrose, N. (1999). Early childhood stuttering: Persistency and recovery rates. Journal of Speech, Language, and Hearing Research, 42(5), 1097–1112.


Yap, M. J., & Balota, D. A. (2009). Visual word recognition of multisyllabic words. Journal of Memory and Language, 60(4), 502–529. http://doi.org/10.1016/j.jml.2009.02.001

Zimerman, M., Heise, K.-F., Gerloff, C., Cohen, L. G., & Hummel, F. C. (2014). Disrupting the ipsilateral motor cortex interferes with training of a complex motor task in older adults. Cerebral Cortex, 24(4), 1030–6. http://doi.org/10.1093/cercor/bhs385


APPENDIX A

TABLES

Table A

Final trial numbers in the EMG-locked and stimulus-locked data sets

AWS ID   EMG   STIM      FS ID    EMG   STIM
S02      201   202       F03      214   212
S06      180   181       F04      171   162
**S07    217   167       F05      204   204
S09      193   199       F06      214   213
S10      202   191       F07      138   142
**S15    195   170       **F08    163   134
S18      191   191       F09      208   208
*S21     178   191       F10      216   206
**S23    145   173       **F11    183   209
S24      198   195       *F12     193   211
S25      182   190       F13      210   215
S27      196   203       *F14     196   211

MAX      217   203       MAX      216   215
MIN      145   167       MIN      138   134

Note. Asterisks mark subjects with a noticeable difference between the EMG-locked and stimulus-locked trial counts.


Table B

Number of trials stuttered during the MEG task

AWS ID   Stuttered trials
S07        0
S10        0
S21        0
S27        6
S24       18
S09       19
S25       22
S23       34
S15       35
S18       40
S02       59
S06       81
TOTAL    314

Table C

Common phonemes in HLS and LLS categories

AWS    HIGH         LOW
S02    m-p-r-t-g    a-d-l-o-w
S06    d-l-r-w-p    a-h-i-o-u
S07    l-m-w-n-p    a-h-i-o-u
S09    b-d-m-p-w    a-c-h-i-o
S10    p-m-h-b      c-d-m-p
S15    b-d-p        m-n-c-a-l
S18    b-c-m-p-r    a-h-o-u-i
S21    r-p-m-d      b-c-m-p
S23    b-d-l-r-m    a-c-h-e-t
S24    b-m-p-r-w    u-o-i-h-a
S25    c-d-m-t-g    a-u-h-r-o
S27    b-d-m-p-g    a-h-w-o-i


Table D

Number of trials where EMG onset preceded the speech cue

AWS ID   # Trials   FS ID   # Trials
S02        28       F03       18
S06        13       F04        9
S07         6       F05       66
S09        32       F06        0
S10        39       F07        1
S15        67       F08       51
S18        23       F09       13
S21        67       F10       24
S23        16       F11       20
S24        49       F12       52
S25        22       F13        2
S27        13       F14       53

Table E

Single subject left-hemisphere Talairach coordinates of beta ERD.

Adults who stutter - LEFT

        STIM-locked                          EMG-locked
ID      LEFT    x    y    z   pseudo-T      LEFT    x    y    z   pseudo-T
S02     LBA6   -46   -2   31    -9.28       LBA6   -46   -6   32   -10.69
S06     LBA8   -53   10   42    -3.78       LBA6   -50    6   42    -3.81
S07     LBA6   -42  -10   32    -5.36       LBA6   -38  -10   32    -3.95
S09     LBA2   -42  -18   32    -9.16       LBA6   -42  -10   28    -8.54
S10     LBA9   -38    6   35    -6.26       LBA6   -50   -6   35    -5.15
S15     LBA6   -42   -7   24   -12.78       LBA6   -42   -7   24   -11.07
S18     LBA13  -42   -4    6    -2.86       LBA13  -38   -4    6    -2.99
S21     LBA6   -50    2   35   -10.07       LBA6   -46    2   39    -7.43
S23     LBA6   -38  -10   28    -3.53       LBA6   -38   -6   28    -3.48
S24     LBA2   -46  -18   32    -3.51       LBA2   -46  -14   32    -3.87
S25     LBA6   -50    1   28    -6.45       LBA6   -46    1   28    -6.44
S27     LBA6   -42  -10   35    -5.46       LBA6   -42  -14   32    -6.03
MEAN          -44.3  -5   30                MEAN   -44   -6   30
SD             4.8  8.8  8.8                SD       4    6    9
GRP     LBA6   -46   -2   28    -4.65       LBA6   -42   -6   28    -4.2

Fluent speakers - LEFT

        STIM-locked                          EMG-locked
ID      LEFT    x    y    z   pseudo-T      LEFT    x    y    z   pseudo-T
F03     LBA6   -50   -6   32    -8.51       LBA6   -50   -6   32    -7.08
F04
F05     LBA43  -46  -14   21    -2.87       LBA13  -38  -22   18    -3.36
F06     LBA6   -46   -2   31    -3.28       LBA6   -46    1   28    -4.39
F07     LBA3   -34  -25   47    -2.88
F08     LBA13  -34   -3   13    -2.66       LBA6   -30    2   46    -1.54
F09     LBA6   -46   -3   24    -2.6        LBA6   -38   -5   46    -2.72
F10     LBA6   -42   -2   28    -2.5        LBA6   -34    1   28    -2.85
F11     LBA6   -42    2   31    -4.33       LBA6   -42   -2   35    -5.49
F12     LBA6   -50   -3   24    -2.21
F13     LBA6   -50   -2   31    -6.07       LBA6   -46    2   31    -6.99
F14     LBA6   -50   -6   28    -5.81       LBA6   -50   -6   28    -5.2
MEAN          -44.5 -5.8 28.2               MEAN   -42   -4   32
SD             6    7.5  8.4                SD       7    8    9
GRP     LBA6   -50   -2   31    -2.74       LBA6   -46   -2   28    -2.75

Note. The MEAN and standard deviation (SD) across subject coordinates are shown, as well as the SAM-localized group average peak (GRP). Blank spaces correspond to subjects where a peak was not observed within reasonable proximity.


Table F

Single subject right-hemisphere Talairach coordinates of beta ERD

Adults who stutter - RIGHT

        STIM-locked                          EMG-locked
ID      RIGHT   x    y    z   pseudo-T      RIGHT   x    y    z   pseudo-T
S02     RBA6   50    1   28    -8.6         RBA6   53    2   31    -6.78
S06     RBA8   46   14   42    -1.11        RBA8   50   14   42    -1.37
S07     RBA6   46    2   35    -3.7         RBA6   46    2   31    -3.19
S09     RBA6   38    5   24    -5.37        RBA6   38    1   24    -4.6
S10     RBA9   50    5   31    -8.6         RBA9   53    5   31    -7.7
S15     RBA6   50    1   28    -1.9         RBA6
S18     RBA6   50   -6   32    -2.61        RBA9   50    9   24    -1.81
S21     RBA6   57    6   35    -7.18        RBA6   57    2   35    -6.5
S23     RBA6   38  -13   47    -5.92        RBA4   38  -13   47    -4.85
S24     RBA6   46   -2   35    -4.22        RBA6   50   -2   35    -6.17
S25     RBA6   57   -2   35    -6.34        RBA6   57   -2   35    -6.32
S27     RBA6   50   -2   31    -6.2         RBA6   50   -2   31    -7.65
MEAN           48.2  0.8 33.6               MEAN   49    1   33
SD              6   6.7  6.2                SD      6    7    7
GRP     RBA6   50    2   31    -3.87        RBA6   53   -2   35    -3.49

Fluent speakers - RIGHT

        STIM-locked                          EMG-locked
ID      RIGHT   x    y    z   pseudo-T      RIGHT   x    y    z   pseudo-T
F03     RBA6   50   -6   28    -5.28        RBA6   53   -6   28    -3.18
F04
F05     RBA13  42   -7   24    -3.66        RBA6   42    1   24    -3.81
F06
F07     RBA6   50    1   28    -2.07        RBA6   50   -2   28    -2.19
F08
F09     RBA6   38   -9   54    -5.95        RBA6   38   -9   54    -5.46
F10     RBA13  42  -14   25    -1.23
F11     RBA13  42  -14   21    -1.5
F12
F13
F14     RBA6   38   -2   28    -2.95        RBA6   42   -2   28    -3.37
MEAN           43.1 -7.3 29.7               MEAN   45   -4   32
SD              5   5.6  11                 SD      6    4   12
GRP     RBA6   42   -6   28    -1.35        RBA6   46   -3   24    -1.21

Note. The MEAN and standard deviation (SD) across subject coordinates are shown, as well as the SAM-localized group average peak (GRP). Blank spaces correspond to subjects where a peak was not observed within reasonable proximity.

Table G

Single subject left-hemisphere Talairach coordinates of alpha ERD

Adults who stutter - LEFT

        STIM-locked                          EMG-locked
ID      LEFT    x    y    z   pseudo-T      LEFT    x    y    z   pseudo-T
S02     LBA6   -46  -10   28    -5.92       LBA6   -46  -10   28    -4.26
S06     LBA13  -46  -38   18    -2.12       LBA13  -46  -38   15    -2.14
S07     LBA22  -46  -54   16    -3.79       LBA39  -38  -50   12    -1.91
S09     LBA41  -46  -38   11    -4.99       LBA41  -46  -38   11    -5.86
S10     LBA40  -50  -34   26    -2.77       LBA39  -42  -60   38    -2.35
S15     LBA41  -42  -23   10    -5.7        LBA41  -46  -31   11    -3.42
S18     LBA40  -42  -33   33    -0.98       LBA22  -50   -8    2    -1.38
S21     LBA40  -50  -37   30    -4.26
S23     LBA41  -50  -23    7    -1.84       LBA41  -50  -23   10    -1.47
S24     LBA41  -34  -34   11    -3.16       LBA22  -53  -18   29    -3.33
S25
S27     LBA13  -34  -19   18    -4.63       LBA13  -34  -19   18    -5.09
MEAN          -44   -31  18.9               MEAN   -45  -30   17
SD             5.8  11.8   9                SD       6   17   11
GROUP   LBA13  -42  -34   22    -2.09       LBA13  -42  -30   18    -1.76

Fluent speakers - LEFT

        STIM-locked                          EMG-locked
ID      LEFT    x    y    z   pseudo-T      LEFT    x    y    z   pseudo-T
F03     LBA22  -46  -11    6    -2.94       LBA13  -46  -11   10    -4.03
F04     LBA41  -50  -19   14    -1.44
F05     LBA22  -57  -25    4    -4.59       LBA22  -53  -39    4    -19
F06
F07
F08
F09     LBA13  -42   -7   17    -1.99       LBA13  -42  -11   17    -2.22
F10     LBA40  -57  -42   22    -2.74       LBA40  -57  -42   22    -2.3
F11
F12     LBA41  -57  -23   10    -4.57       LBA41  -57  -23   10    -3.7
F13     LBA40  -57  -30   18    -3.69       LBA40  -53  -26   18    -3.99
F14
MEAN          -53  -24.3 14.2               MEAN   -51  -25   14
SD             6.2  11.6  6.4               SD       6   13    7
GROUP   LBA41  -53  -23   10    -1.31       LBA41  -50  -23   10    -1.3

Note. The MEAN and standard deviation (SD) across subject coordinates are shown, as well as the SAM-localized group average peak (GRP). Blank spaces correspond to subjects where a peak was not observed within reasonable proximity.


Table H

Single subject right-hemisphere Talairach coordinates of alpha ERD

Adults who stutter - RIGHT

        STIM-locked                          EMG-locked
ID      RIGHT   x    y    z   pseudo-T      RIGHT   x    y    z   pseudo-T
S02     RBA40  53  -11   10    -6.53        RBA43  57  -11   13    -4.86
S06     RBA22  42  -57   16    -1.35        RBA39  46  -61   23    -2.15
S07     RBA41  50  -26   18    -2.35        RBA41  46  -38    7    -1.81
S09     RBA41  46  -26   14    -1.67        RBA41  46  -26   14    -1.67
S10     RBA13  42  -18   25    -2.97        RBA13  42  -46   15    -2.14
S15     RBA22  46   -4   -1    -2.68
S18     RBA13  42  -34   22    -1.01        RBA21  61  -24   -7    -0.82
S21     RBA13  53  -18   21    -2.12
S23     RBA13  38  -19   10    -3.45        RBA13  42  -19    6    -3.39
S24     RBA42  61  -26   14    -2.14        RBA40  61  -23   14    -1.67
S25
S27     RBA41  53  -23   10    -5.04        RBA41  53  -19   10    -4.74
MEAN           47.8 -24  14.5               MEAN   50  -30   11
SD              6.8 13.7  7.2               SD      8   16    8
GROUP   RBA41  50  -19   14    -1.55        RBA41  53  -19   14    -1.26

Fluent speakers - RIGHT

        STIM-locked                          EMG-locked
ID      RIGHT   x    y    z   pseudo-T      RIGHT   x    y    z   pseudo-T
F03     RBA41  53  -27   11    -3.5         RBA22  57   -4   -1    -2.78
F04     RBA22  57   -7    6    -3.08        RBA22  61   -7    6    -2.43
F05     RBA22  57  -19   -1    -1.72        RBA22  57  -19   -1    -2.05
F06     RBA22  46  -12    2    -1.9
F07     RBA41  57  -23   10    -1.69        RBA42  61  -23   10    -1.46
F08
F09     RBA41  57  -23   10    -1.13
F10     RBA13  46  -14   21    -1.69
F11     RBA22  57  -23   -1    -1.52        RBA22  57  -23    3    -2.89
F12
F13
F14
MEAN           54.8 -16.8 3.2               MEAN   57  -16    7
SD              4.9  7.1  4.8               SD      5    8    8
GROUP   RBA41  50  -23    7    -1.01        RBA41  53  -23    7    -1.02

Note. The MEAN and standard deviation (SD) across subject coordinates are shown, as well as the SAM-localized group average peak (GRP). Blank spaces correspond to subjects where a peak was not observed within reasonable proximity.


Table I

STIM-locked alpha ERD onset and peak latencies in the auditory cortex

               ERD ONSET LATENCIES              ERD PEAK LATENCIES
ID            AWS-L  AWS-R   FS-L   FS-R       AWS-L  AWS-R   FS-L   FS-R
S02/F03        235    103      0     73         450    378    616    443
S06/F04          0      0    251    387         480    583
S07/F05          0   -334      0    -38         293    487    485
S09/F06        196   -173    126   -189         449    404    406
S10/F07       -104   -381    151     43         892    509    301    419
S15/F08        165     82    167   -106         386    796    282
S18/F09          0      0    -56      0         519
S21/F10        188     63    344   -209         464    422    506    243
S23/F11       -231   -122    239    299         409    396    542    478
S24/F12        378    246    -33      0         463    469    479
S25/F13       -105    219    149      0         463    481    471
S27/F14       -189      0      0    123         364    519    444
MEAN            44    -25    112     32         463    447    520    420
SD             187    196    129    176         161     54    125    104


APPENDIX B

FIGURES

Figure A. SAM localization compared between the 15-25 Hz and 20-30 Hz bands. Time window: -400 to -200 ms prior to EMG onset. Images are scaled to the same maximum and the same threshold was applied.


Figure B. Time-courses of 15-25 Hz and 20-30 Hz modulation in the precentral gyrus (BA6) of AWS. No difference between the frequency bands was observed.


APPENDIX C

ABBREVIATIONS

ACC – anterior cingulate cortex

AWS – Adults who stutter

CWS – Children who stutter

DTI – Diffusion Tensor Imaging

ECD – Equivalent current dipole

EEG – Electroencephalography

ERD – Event related desynchronization

ERP – Event related potential

ERS – Event related synchronization

EXEC – speech execution stage

fMRI – functional Magnetic Resonance Imaging

FS – Fluent speakers

HLS – High likelihood of stuttering

IFG – Inferior Frontal Gyrus

LLS – Low likelihood of stuttering

MEG – Magnetoencephalography

PET – Positron Emission Tomography

PREP – speech preparation stage


SAM – Synthetic Aperture Magnetometry

TMS – Transcranial Magnetic Stimulation
