Stuttering Severity and Responses to Social-Communicative Challenge in Preschool-Age Children Who Stutter

Stuttering severity and responses to social-communicative challenge in preschool-age children who stutter

Danra Kazenski1, Barry Guitar1, Rebecca McCauley1,2, William Falls3, Lindsay Stallings Dutko1,4

1Department of Communication Sciences and Disorders, University of Vermont, Pomeroy Hall, 489 Main Street, Burlington, VT 05405, USA; 2Present address: Department of Speech and Hearing Science, The Ohio State University, 134a Pressey Hall, 1070 Carmack Road, Columbus, OH 43210-1002, USA; 3Department of Psychology, University of Vermont, John Dewey Hall, 2 Colchester Avenue, Burlington, VT 05405-0134, USA; 4Present address: Department of Speech Pathology & Audiology, Duke University Hospital, DUMC 3887, Durham, NC 27710, USA

Purpose: This study assessed indices of autonomic arousal and vocal tension during challenge in preschool-age children who stutter (CWS) and children who do not stutter (CWNS).

Method: Participants were preschool-age CWS (n = 10) and gender- and age-matched CWNS (n = 10) who performed in two speaking conditions: (1) 'low' challenge – naming age-appropriate pictures in a familiar room with the same examiner who administered the participant's speech-language prescreening test; (2) 'high' challenge – recalling the pictures named in condition (1) in a different room with an unfamiliar examiner while wearing acoustic startle electrodes on the face. Immediately following the 'high' challenge speaking task, the participants' acoustic startle eyeblink response (ASEB) was measured. Dependent variables were ASEB in the 'high' challenge condition and the acoustic measure of mean fundamental frequency (Fo) in both challenge conditions.

Results: Findings indicated no significant between-group (CWS vs. CWNS) differences in Fo or ASEB responses. However, CWS-severe (n = 5), when compared with CWS-mild/moderate (n = 5), exhibited a statistically significant increase in Fo in the 'high' challenge relative to the 'low' challenge condition.
Conclusions: Results were taken to suggest that preschool-age CWS-severe exhibit vocal tension while speaking in conditions of social (e.g. speaking to an unfamiliar examiner) and communicative (e.g. recalling from memory the names of pictures previously shown) challenge, and that such vocal behavior is possibly associated with this subgroup's difficulty establishing normally fluent speech.

Keywords: Preschool, Stuttering, Acoustic startle, Fundamental frequency, Challenge

Correspondence to: Danra Kazenski, Department of Communication Sciences and Disorders, University of Vermont, Pomeroy Hall, 489 Main Street, Burlington, VT 05405, USA. Email: [email protected]

© W. S. Maney & Son Ltd 2014. Speech, Language and Hearing, 2014, Vol. 17, No. 3, p. 142. DOI 10.1179/2050572813Y.0000000032

Autonomic arousal has long been speculated to be associated with stuttering (e.g. Brutten and Shoemaker, 1967; Conture and Walden, 2012; Guitar, 2013; Smith, 1999), with such speculation receiving some empirical support. For example, Weber and Smith (1990) reported that extreme autonomic arousal in a stressor condition was positively correlated with stuttering severity and frequency in adults who stutter (AWS), suggesting that such arousal may affect speech motor control. Furthermore, autonomic arousal has been shown to be associated with more variable speech motor control for both adults and children who do not stutter (Kleinow and Smith, 2006). A recent preliminary study utilizing skin conductance response as a primary measure (Arenas and Zebrowski, 2013) indicated that AWS had a significantly greater autonomic conditioned response to an aversive stimulus relative to adults who do not stutter (AWNS).

Unfortunately, less is known about these variables in preschool-age children who stutter (CWS) compared to children who do not stutter (CWNS) closer to the onset of stuttering. Lack of information regarding preschool-age CWS is salient because young CWS typically have minimal experience with stuttering and hence less opportunity to develop learned reactions to stuttering. One exception to this dearth of information relates to empirical studies employing behavioral measures of expressed emotion, the latter believed to be an indirect index of autonomic activity. Specifically, findings based on behavioral indices of preschool-age CWS's and CWNS's expressed emotional arousal (e.g. Arnold et al., 2011; Johnson et al., 2010; Ntourou et al., in press; Walden et al., 2012) appear to support the notion that increases in emotional arousal are associated with childhood stuttering. For example, preschool-age CWS's stuttering has been reported to increase when increased emotional reactivity is associated with decreased emotional regulation (Walden et al., 2012). As intriguing as these behavioral findings may be regarding preschool-age CWS, the implications might be more comprehensive if they were corroborated by converging lines of evidence, particularly in the form of (non-speech) physiological as well as speech indices of autonomic arousal. Given empirical evidence suggesting that autonomic arousal affects speech motor control in AWS (Kelly et al., 1995; Kleinow and Smith, 2006; Peters and Hulstijn, 1984; Weber and Smith, 1990), measuring physiological responses in preschool-age CWS and CWNS closer to the onset of stuttering should provide useful information about the association of speech motor control and autonomic arousal during the early stages of developmental stuttering.

It seems possible that the association between changes in autonomic or emotional variables and speech motor control activity is apt to be subtle as well as brief in nature, particularly in young children near the onset of stuttering. Thus, as a preliminary attempt to non-invasively measure such possibly subtle autonomic reactions in a preschool-age population, both speech (mean fundamental frequency) and non-speech (acoustic startle eyeblink response (ASEB)) indices of autonomic arousal were employed. Following is a brief discussion of the general background of each measure as well as each measure's specific application to stuttering.

Vocal measure: fundamental frequency

Vocal fundamental frequency (Fo), or pitch, is the vibratory rate of the vocal folds measured […] between-group differences in the overall variation in mean Fo. For instance, Hall and Yairi (1992) demonstrated a low positive correlation between stuttering severity and mean Fo in preschool-age boys who stutter. In contrast, other researchers have reported that the mean Fo of AWS does not significantly differ from that of AWNS (Healey, 1982; Sacco and Metz, 1989). In this study, Fo will be referred to as an indirect index of vocal tension.

Acoustic startle eyeblink response

ASEB is a robust non-speech measure of physiological reactivity (Grillon, 2002; Grillon and Baas, 2003; Mauss and Robinson, 2009). Startle is a defensive, reflexive response consisting of a rapid contraction of the orbicularis oculi muscle, which is innervated by the facial nerve. The ASEB response is a measure of the magnitude and latency of a reflexive eyeblink response to randomly administered brief pulses of high-intensity white noise. Latency (in ms) is the time from the onset of the startle stimulus to the first sign of an eyeblink response, with shorter latencies reflecting higher levels of neurophysiological reactivity. The amplitude of the startle response has been shown to increase when the person is in an aroused emotional state such as fear (Lang et al., 1990) or anticipatory anxiety (Grillon et al., 1991; Sabatinelli et al., 2001). To date, the only published studies of ASEB that these authors are aware of have involved AWS and AWNS. For example, Bradley and Lang (2000) found that AWNS exhibited an increase in magnitude of ASEB response when exposed to unpleasant compared to pleasant or neutral affective sounds. Further, Guitar (2003) reported a statistically significant difference (81% higher) in mean amplitude of startle response for 14 AWS compared to matched AWNS. In contrast, Alm and Risberg (2007) found that startle amplitude was not significantly associated with anxiety or reactive temperament, even though the 'trait anxiety' scores for the AWS group were significantly higher than for the controls in their study. Ellis et al. (2008) also measured the latency and peak […]
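The mean-Fo measure discussed above can be made concrete with a small sketch. The Python fragment below estimates mean fundamental frequency from a voiced signal using the classic autocorrelation method: each frame's Fo is taken as the sampling rate divided by the lag of the strongest autocorrelation peak within a plausible pitch range. The frame length, hop size, 75–600 Hz search range, and the synthetic test vowel are illustrative choices, not the analysis procedure or software used in the study.

```python
import numpy as np

def frame_f0(frame, sr, fmin=75.0, fmax=600.0):
    """Estimate one frame's F0 (Hz) from the autocorrelation peak lag."""
    frame = frame - frame.mean()
    # Keep non-negative lags of the full autocorrelation sequence.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag range for [fmin, fmax]
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def mean_f0(signal, sr, frame_len=0.04, hop=0.02):
    """Mean F0 (Hz) over fixed-length frames of a fully voiced signal."""
    n, h = int(frame_len * sr), int(hop * sr)
    f0s = [frame_f0(signal[i:i + n], sr)
           for i in range(0, len(signal) - n, h)]
    return float(np.mean(f0s))

# Synthetic 'vowel': 200 Hz fundamental plus two weaker harmonics.
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
vowel = (np.sin(2 * np.pi * 200 * t)
         + 0.5 * np.sin(2 * np.pi * 400 * t)
         + 0.25 * np.sin(2 * np.pi * 600 * t))
print(round(mean_f0(vowel, sr), 1))  # → 200.0
```

A real recording would additionally need voicing detection so that silent or unvoiced frames do not contaminate the mean; the sketch assumes continuous voicing.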
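Startle scoring as described (latency from probe onset to the first sign of the eyeblink, plus response amplitude) can likewise be sketched. In the fragment below, latency is the time at which rectified orbicularis-oculi EMG first exceeds the pre-probe baseline mean plus three standard deviations, and amplitude is the peak within a fixed post-probe window. The threshold rule, the 20–120 ms response window, the 1 kHz sampling rate, and the simulated trial are hypothetical conventions for illustration, not the study's actual recording or scoring parameters.

```python
import numpy as np

def score_startle(emg, sr, probe_idx, k=3.0, window_ms=(20, 120)):
    """Score one startle trial: (onset latency in ms, peak amplitude).

    Returns (None, 0.0) for a non-response trial in which rectified
    EMG never exceeds baseline mean + k standard deviations.
    """
    rect = np.abs(emg)                      # full-wave rectification
    baseline = rect[:probe_idx]             # pre-probe activity
    thresh = baseline.mean() + k * baseline.std()
    lo = probe_idx + int(window_ms[0] * sr / 1000)
    hi = probe_idx + int(window_ms[1] * sr / 1000)
    seg = rect[lo:hi]
    above = np.nonzero(seg > thresh)[0]
    if above.size == 0:
        return None, 0.0
    latency_ms = (lo + above[0] - probe_idx) * 1000.0 / sr
    return latency_ms, float(seg.max())

# Simulated trial: quiet baseline noise, white-noise probe at 500 ms,
# and a 60 ms blink burst beginning ~45 ms after the probe.
sr = 1000
rng = np.random.default_rng(0)
emg = rng.normal(0.0, 0.05, 1500)
probe = 500
emg[probe + 45: probe + 105] += np.hanning(60) * 2.0
lat, amp = score_startle(emg, sr, probe)
```

Averaging such per-trial latencies and amplitudes across probes would yield the kind of summary ASEB values compared between groups in the studies cited above.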