Master Class: Deep Learning for Signals


Abhijit Bhattacharjee, Senior Application Engineer, MathWorks
© 2015 The MathWorks, Inc.

Music Genre Classification

Agenda
▪ Why deep learning?
▪ Deep learning with signal data (Demo)
▪ Speech Command Recognition (Demo)
▪ LSTM Networks
▪ Enabling Features in MATLAB
▪ Deploying deep learning

What is Deep Learning?
Deep learning is a type of machine learning in which a model learns to perform tasks directly from image, time-series, or text data. Deep learning is usually implemented using a neural network architecture.
▪ Subset of machine learning with automatic feature extraction
  – Can learn features directly from data
  – More data = better model

Types of Datasets
▪ Tabular data – machine learning or LSTM
▪ Time series / signal data – LSTM or CNN
▪ Image data – CNN

Image Example: Object Recognition Using Deep Learning
▪ Training (GPU): millions of images from 1000 different categories
▪ Prediction: real-time object recognition using a webcam connected to a laptop

Signals Example: Analyzing Signal Data Using Deep Learning
▪ Signal classification using LSTMs
▪ Speech recognition using CNNs

Deep Learning Workflow
▪ Access and explore data: files, databases, sensors
▪ Label and preprocess data: data augmentation/transformation, labeling automation, import reference models
▪ Develop predictive models: hardware-accelerated training, hyperparameter tuning, network visualization
▪ Integrate models with systems: desktop apps, enterprise-scale systems, embedded devices and hardware

How Does a Convolutional Neural Network Work?
Successive layers learn increasingly complex features: first edges, then shapes, then objects.

Speech Command Recognition Using Convolutional Neural Networks
Recognized commands include "No", "Up", "Right", and "Off".

Example: Speech Command Recognition Using Deep Learning
A CNN network for audio classification, trained on the Google speech command dataset (labels include Stop, Up, Yes, and <none>). The two phases, with a prediction sketch after this list:
1. Train: audioDatastore → auditory spectrogram → CNN
2. Predict: audioDeviceReader → auditory spectrogram → CNN → "Up"
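The predict path above can be sketched as a streaming loop. This is a minimal sketch, not the demo's actual code: trainedNet is an assumed pre-trained network whose input size matches the spectrogram, and melSpectrogram (Audio System Toolbox) stands in for the auditorySpectrogram helper named in the slides.

% Minimal streaming-recognition sketch (assumptions: trainedNet exists,
% its input size matches the feature matrix, 16 kHz audio device).
deviceReader = audioDeviceReader('SampleRate',16000,'SamplesPerFrame',16000);
for k = 1:100                                  % listen for ~100 one-second frames
    audioIn = deviceReader();                  % capture one second of audio
    features = melSpectrogram(audioIn,16000);  % time-frequency features
    features = log10(features + 1e-6);         % compress dynamic range
    disp(classify(trainedNet,features))        % print the predicted command
end
release(deviceReader)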
Command Recognition Demo
[Slides 16 through 34 are demo screenshots and are not reproduced here; the demo references this paper:]
https://www.isca-speech.org/archive/interspeech_2015/papers/i15_1478.pdf

MATLAB-Based Algorithm Wins the 2017 PhysioNet/CinC Challenge to Automatically Detect Atrial Fibrillation
Challenge: design an algorithm that uses machine learning to detect atrial fibrillation and other abnormal heart rhythms in noisy, single-lead ECG recordings.
Solution: use MATLAB to analyze ECG data, extract features using signal processing and wavelet techniques, and evaluate different machine learning algorithms to train and implement a best-in-class classifier to detect AF. (Figure: block diagram for Black Swan's atrial fibrillation detection algorithm.)
Results:
▪ First place in the PhysioNet/CinC Challenge
▪ ECG data visualized in multiple domains
▪ Feature extraction accelerated with parallel processing
"I don't think MATLAB has any strong competitors for signal processing and wavelet analysis. When you add in its statistics and machine learning capabilities, it's easy to see why nonprogrammers enjoy using MATLAB, particularly for projects that require combining all these methods." - Ali Bahrami Rad, Aalto University (link to user story)

Agenda
▪ Why deep learning?
▪ Deep learning with signal data (Demo)
▪ Speech Command Recognition (Demo)
▪ LSTM Networks
▪ Enabling Features in MATLAB
▪ Deploying deep learning

I was born in France. I speak __________?

Recurrent Neural Networks
▪ Take previous data into account when making new predictions
▪ Suited to signals, text, and time series

I was born in France… [2000 words] … I speak __________?

Long Short-Term Memory Networks
▪ An RNN that carries a memory cell (c0, c1, …, ct) throughout the process
▪ Designed for sequence problems

LSTM Demo

Time Series Classification (Human Activity Recognition)
Long short-term memory networks:
▪ Dataset is accelerometer and gyroscope signals captured with a smartphone
▪ Data is a collection of time series with 9 channels
▪ Sequence-to-one deep learning

Agenda
▪ Why deep learning?
▪ Deep learning with signal data (Demo)
▪ Speech Command Recognition (Demo)
▪ LSTM Networks
▪ Enabling Features in MATLAB
▪ Deploying deep learning

Deep Learning on CPU, GPU, Multi-GPU and Clusters
How to target?
▪ Single CPU
▪ Single GPU
▪ Single CPU, multiple GPUs
▪ On-prem server with GPUs
▪ Cloud GPUs (AWS)

Audio Datastore
▪ Programmatic interface to large collections of audio files
▪ (Optional) auto-generation of categorical labels from folder names
▪ Label-based indexing and partitioning
▪ Random file sampling
▪ Automatic sequential file reading
▪ Parallel access for preprocessing, feature extraction, or data augmentation

ads = audioDatastore('.\Dataset', ...
    'IncludeSubfolders',true, ...
    'FileExtensions','.wav', ...
    'LabelSource','foldernames')

ads =
  Datastore with properties:
    Files: {
      ' ...\Temp\speech_commands_v0.01\_background_noise_\doing_the_dishes.wav';
      ' ...\Temp\speech_commands_v0.01\_background_noise_\dude_miaowing.wav';
      ' ...\Temp\speech_commands_v0.01\_background_noise_\exercise_bike.wav'
      ... and 64724 more
    }
    Labels: [_background_noise_; _background_noise_; _background_noise_ ... and 64724 more categorical]
    ReadMethod: 'File'
    OutputDataType: 'double'

ads = getSubsetDatastore(ads,isCommand|isUnknown);
countEachLabel(ads)

ans =
  11×2 table
    Label      Count
    _______    _____
    down        2359
    go          2372
    left        2353
    no          2375
    off         2357
    on          2367
    right       2367
    stop        2380
    unknown     4143
    up          2375
    yes         2377

[adsTrain,adsValidation,adsTest] = splitData(ads,datafolder);

"I love to label and preprocess my data" ~ said no engineer, ever.

Audio Labeler
▪ Work on collections of recordings or record new audio directly within the app
▪ Navigate the dataset and play back interactively
▪ Define and apply labels to entire files or to regions within files
▪ Import and export audio folders, label definitions, and datastores

Signal Analyzer App
Analyze multiple signals in time, frequency, and time-frequency domains:
▪ Navigate through signals
▪ Extract regions of interest
▪ Generate MATLAB scripts

Feature Extraction: Time-Frequency Analysis
▪ Reassigned spectrogram and synchrosqueezing
▪ Persistent spectrum
▪ Wavelet scalogram
▪ Constant-Q transform
▪ Empirical mode decomposition
▪ Kurtogram
▪ Instantaneous frequency and instantaneous bandwidth
▪ Hilbert-Huang transform

Enabling Features Summary
Workflow: record & store → read & partition → ground-truth labeling → extract features → process & augment → train → predict → prototype! (A training sketch follows.)
▪ For audio applications: Audio System Toolbox and Neural Network Toolbox
▪ For signal applications: Signal Processing Toolbox and Statistics & Machine Learning Toolbox
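The train and predict steps of this workflow can be made concrete with a short example. The following is a minimal sequence-to-one LSTM sketch in the spirit of the human-activity demo earlier; XTrain, YTrain, and XTest are hypothetical variables assumed to hold 9-channel sequences (a cell array of 9-by-T matrices) and categorical labels, and the 6-class output is an assumption.

% Sequence-to-one LSTM: 9 input channels, one label per sequence.
layers = [ ...
    sequenceInputLayer(9)                    % 9 sensor channels
    lstmLayer(100,'OutputMode','last')       % keep only the final hidden state
    fullyConnectedLayer(6)                   % assumption: 6 activity classes
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam', ...
    'MaxEpochs',30, ...
    'MiniBatchSize',64, ...
    'Plots','training-progress');

net = trainNetwork(XTrain,YTrain,layers,options);   % train
pred = classify(net,XTest);                         % predict on held-out data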
Agenda
▪ Why deep learning?
▪ Deep learning with signal data (Demo)
▪ Speech Command Recognition (Demo)
▪ LSTM Networks
▪ Enabling Features in MATLAB
▪ Deploying deep learning

Deploying Deep Learning Models for Inference
Generate inference code from a trained CNN for multiple targets:
▪ NVIDIA GPUs, via the TensorRT and cuDNN libraries
▪ ARM processors, via the ARM Compute Library
▪ Intel CPUs, via the Intel MKL-DNN library

GPU Coder Fills a Gap in the Deep Learning Solution
Training (access data → preprocess → select network → train) is served by tools such as Image Acquisition, the Parallel Computing Toolbox, Image Processing, Computer Vision, and the Neural Network Toolbox; GPU Coder fills the deployment step for inference.

With GPU Coder, MATLAB is fast
GPU Coder inference is faster than TensorFlow, MXNet, and PyTorch in MathWorks benchmarks.
Benchmark configuration: Intel Xeon CPU, 3.6 GHz; NVIDIA libraries: CUDA 9, cuDNN 7; frameworks: TensorFlow 1.8.0, MXNet 1.2.1, PyTorch 0.3.1.

MathWorks Can Help You Do Deep Learning
Free resources:
▪ Guided evaluations with a MathWorks deep learning engineer
▪ Proof-of-concept projects
▪ Deep learning hands-on workshop
▪ Seminars and technical deep dives
▪ Deep learning onramp course
More options:
▪ Consulting services
▪ Training courses
▪ Technical support
▪ Advanced customer support
▪ Installation, enterprise, and cloud deployment
▪ MATLAB for Deep Learning

Thank you!

Extra Slides

Machine Learning vs. Deep Learning
Deep learning performs end-to-end learning: it learns features, representations, and tasks directly from images, text, sound, and signals. It is a subset of machine learning with automatic feature extraction.

Feature Extraction: Spectral Analysis
▪ Welch periodogram
▪ Lomb periodogram
▪ Bandwidth measurements
▪ Octave spectrum
▪ Harmonic distortion
▪ Spectral statistics

Feature Extraction and Signal Segmentation for Machine and Deep Learning
▪ Mel-frequency cepstral coefficients (MFCC)
▪ Pitch
▪ Voice activity detection

Mel-Frequency Cepstral Coefficients (MFCC)
▪ Variation of "perceptually adjusted" spectrum content over time
▪ The most common voice and speech features for machine learning applications
▪ Available as the mfcc function, the cepstralFeatureExtractor System object, or the Cepstral Feature Extractor block (Simulink)
▪ Returns:
  – MFCC
  – "Delta" (= mfcc[n] − mfcc[n−1])
  – "Delta delta" (= delta[n] − delta[n−1])

Pitch Extraction
▪ Estimate audio pitch over time
▪ Popular feature for machine learning in voice and speech processing
▪ Choice of five popular algorithms:
  – 'NCF' – Normalized Correlation Function
  – 'PEF' – Pitch Estimation Filter
  – 'CEP' – Cepstrum Pitch Determination
  – 'LHS' – Log-Harmonic Summation
  – 'SRH' – Summation of Residual Harmonics

[ft, idx] = pitch(audioIn, fs);

Voice Activity Detection (VAD)
▪ Standard algorithm to segment voice and speech signals
▪ voiceActivityDetector System object for MATLAB (see the usage sketch below)
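As a closing illustration, here is a minimal usage sketch that runs all three extractors on one recording. This is an assumption-laden sketch, not slide code: 'speech.wav' is a hypothetical mono recording, and the 30 ms VAD frame length is an arbitrary choice.

% Load a recording (hypothetical file name).
[audioIn,fs] = audioread('speech.wav');

% MFCCs with delta and delta-delta features.
[coeffs,delta,deltaDelta] = mfcc(audioIn,fs);

% Pitch contour using the normalized correlation function.
[ft,idx] = pitch(audioIn,fs,'Method','NCF');

% Frame-based voice activity detection.
vad = voiceActivityDetector;
frameLength = round(0.03*fs);                 % 30 ms frames
numFrames = floor(numel(audioIn)/frameLength);
speechProb = zeros(numFrames,1);
for k = 1:numFrames
    frame = audioIn((k-1)*frameLength+1 : k*frameLength);
    speechProb(k) = vad(frame);               % probability the frame contains speech
end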