A CNN-Based Approach to Identification of Degradations in Speech Signals


Aalborg Universitet

A CNN-Based Approach to Identification of Degradations in Speech Signals
Saishu, Yuki; Poorjam, Amir Hossein; Christensen, Mads Græsbøll

Published in: EURASIP Journal on Audio, Speech, and Music Processing
DOI (link to publication from publisher): 10.1186/s13636-021-00198-4
Publication date: 2021
Document version: Accepted author manuscript, peer-reviewed version

Citation for published version (APA):
Saishu, Y., Poorjam, A. H., & Christensen, M. G. (2021). A CNN-Based Approach to Identification of Degradations in Speech Signals. EURASIP Journal on Audio, Speech, and Music Processing, 2021(1), 1-10. [9]. https://doi.org/10.1186/s13636-021-00198-4

Downloaded from vbn.aau.dk on: September 28, 2021
RESEARCH

A CNN-Based Approach to Identification of Degradations in Speech Signals

Yuki Saishu1, Amir Hossein Poorjam1,2* and Mads Græsbøll Christensen1

*Correspondence: [email protected]
1 Audio Analysis Lab, CREATE, Aalborg University, Rendsburggade 14, 9000 Aalborg, Denmark. Full list of author information is available at the end of the article.

Abstract

The presence of degradations in speech signals, which causes acoustic mismatch between training and operating conditions, deteriorates the performance of many speech-based systems. A variety of enhancement techniques have been developed to compensate for the acoustic mismatch in speech-based applications. To apply these signal enhancement techniques, however, it is necessary to have prior information about the presence and the type of degradations in speech signals. In this paper, we propose a new convolutional neural network (CNN)-based approach to automatically identify the major types of degradation commonly encountered in speech-based applications, namely additive noise, nonlinear distortion, and reverberation. In this approach, a set of parallel CNNs, each detecting a certain degradation type, is applied to the log-mel spectrogram of audio signals. Experimental results using two different speech types, namely pathological voice and normal running speech, show the effectiveness of the proposed method in detecting the presence and the type of degradations in speech signals, outperforming the state-of-the-art method. Using score-weighted class activation mapping, we provide a visual analysis of how the network makes decisions when identifying different types of degradation in speech signals, by highlighting the regions of the log-mel spectrogram that are most influential for the target degradation.

Keywords: Signal enhancement; Convolutional neural network; Identification of degradation; Quality control; Visualization

1 Introduction

Advances in portable devices such as smartphones and tablets, which are equipped with high-quality microphones, facilitate capturing and processing speech signals in a wide range of environments. However, the quality of the recordings is not necessarily as expected, as they might be subject to degradation. In practice, the presence of degradation at operating time can deteriorate the performance of speech-based systems, such as speech recognition [1], speaker identification [2], and pathological voice analysis (assessment of the voice signal of a speaker with a voice disorder) [3, 4], mainly due to acoustic mismatch between training and operating conditions. The most common types of degradation typically encountered in speech-based applications are background noise, reverberation, and nonlinear distortion.

A speech signal degraded by additive noise, reverberation, and nonlinear distortion can be, respectively, modeled as follows:

    x_n(t) = s(t) + e(t),      (1)
    x_r(t) = s(t) * h(t),      (2)
    x_d(t) = f(s(t)),          (3)

where t is the time index, s(t) is the clean speech signal recorded by a microphone in a noise-free and non-reverberant environment, e(t) is an additive noise, f(.) represents a nonlinear function, h(t) is a room impulse response (RIR), and * indicates the convolution operation. We note that, in reality, these degradations are even more complex; for example, they may be time-dependent. A variety of effective signal enhancement techniques have been developed to enhance a degraded speech signal, such as noise reduction [5, 6], dereverberation [7, 8], and restoration of some types of nonlinear distortion [9, 10]. Most of these enhancement algorithms have been designed to deal with a specific type of degradation in a signal, although recent research on comprehensive speech enhancement, dealing with both additive noise and reverberation, is promising [11, 12, 13].
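As a concrete illustration, the three degradation models in Eqs. (1)-(3) can be simulated on a synthetic signal. This is a minimal NumPy sketch; the test tone, noise level, exponentially decaying impulse response, and clipping threshold are illustrative assumptions, not the paper's data or recording conditions:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "clean" signal s(t): one second of a 100 Hz tone sampled at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
s = 0.5 * np.sin(2 * np.pi * 100 * t)

# Eq. (1): additive noise, x_n(t) = s(t) + e(t).
e = 0.05 * rng.standard_normal(len(s))
x_n = s + e

# Eq. (2): reverberation, x_r(t) = s(t) * h(t), convolution with an RIR.
# A hypothetical exponentially decaying noise tail stands in for a measured
# room impulse response; the result is truncated to the original length.
h = np.exp(-np.arange(2000) / 300.0) * rng.standard_normal(2000)
h[0] = 1.0  # direct path
x_r = np.convolve(s, h)[: len(s)]

# Eq. (3): nonlinear distortion, x_d(t) = f(s(t)); here f is hard clipping.
clip_level = 0.3
x_d = np.clip(s, -clip_level, clip_level)
```

In practice the degraded training signals for each detector would be generated along these lines, with recorded noise, measured RIRs, and realistic nonlinearities in place of the toy choices above.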
Nevertheless, to properly compensate for the effects of degradations, it is necessary to know or obtain information about the presence and the type of degradations in speech signals. Since manual inspection of the signals is very time-consuming, costly, and even impossible in many speech-based applications, an accurate degradation detection system would be useful to automatically identify the presence and type of degradations.

There are a variety of approaches to identifying different types of degradation in speech signals. For example, Ma et al. [14] proposed a hidden Markov model-based approach to distinguish different types of noise in speech signals. In another study, by Desmond et al. [15], reverberant signals are detected using a channel-specific statistical model. In [16, 17], clipping in speech signals, as an example of nonlinear distortion, is detected. Although effective, these approaches focus on detecting a single, specific type of degradation. Multiclass classification, on the other hand, can be used to detect different types of degradation. In [18, 19], Poorjam et al. proposed two generalized multiclass classification-based approaches for detecting various types of degradation, but these were investigated only on pathological voice signals, and their accuracy was still inadequate. Moreover, there is no control over the class assignment in these approaches when a new type of degradation is observed for which the classifier has not been trained. For example, clipping, packet loss, dynamic range compression, automatic gain control, and distortions due to low-quality or improperly configured equipment are all new types of degradation for a multiclass classifier trained only with noisy and reverberant signals.

To overcome the limitations of the multiclass-based approaches, one can use a multilabel classification approach in which more than one class label may be assigned to each sample. Compared to the multiclass-based methods, this approach can better deal with some challenging cases, such as the presence of a new degradation type or the coexistence of more than one degradation. In the former case, the sample may be classified as none of the target classes. In the latter case, more than one detector can accept a signal subject to a mixture of degradations. One possible solution is [...]. However, meeting all requirements at the same time might not be feasible in some cases.

As an alternative solution, Poorjam et al. proposed a data-driven approach which uses a set of parallel Gaussian mixture models (GMMs) to detect three types of degradation in pathological voice signals, namely background noise, reverberation, and nonlinear distortion [4]. All detectors in this approach are similar in terms of complexity, underlying assumptions, and acoustic features, except that they are trained on different degraded signals. This approach is, however, focused on pathological voices and, in particular, on sustained vowels.

In this paper, we propose a more accurate convolutional neural network (CNN)-based approach which can identify degradations not only in sustained vowels but also in normal running speech. CNNs are computationally efficient deep neural networks that are able to learn complex patterns in the spectrogram of a speech signal. In this approach, we apply a set of parallel CNNs to the log-mel spectrograms of the signals. Each CNN model, trained with signals corrupted by a specific degradation type, is responsible for detecting the corresponding degradation in a test signal. The prediction scores for an unseen test sample can be used to associate multiple degradation labels with an observation, and can be interpreted as the degree of contribution of each degradation in a degraded signal. Moreover, using the score class activation mapping (score-CAM) technique [20], we visually explain on what basis the CNN models make a specific decision in detecting different types of degradation, by finding the regions in the mel-scale spectrogram of a degraded signal that are most influential to the scores of the target class. In this technique, different activation maps are applied to the input spectrogram, each perturbing a region of the spectrogram. Then, the effect of each activation map on the prediction scores is observed. The importance of each activation map is determined by the prediction score on the target class. Finally, a saliency map is generated as a weighted linear combination of all activation maps to visualize the internal representation of the CNN [20]. Since this technique does not require any modifications to the architecture of the network, it can be applied to a wide variety of CNN models.

The rest of this paper is organized as follows.
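The perturbation-and-combine procedure behind score-CAM, as described above, can be sketched in a model-agnostic way: mask regions of the input spectrogram, score each masked input on the target class, and linearly combine the masks weighted by those scores. The following is a minimal NumPy illustration; the toy "detector" and the rectangular masks are stand-ins, not the paper's CNN or its learned activation maps:

```python
import numpy as np

def score_cam(spec, predict, masks):
    """Score-CAM-style saliency: weight each mask by the target-class score
    of the masked input, then combine the masks linearly and normalize."""
    weights = np.array([predict(spec * m) for m in masks])
    saliency = np.tensordot(weights, np.stack(masks), axes=1)
    saliency = np.maximum(saliency, 0.0)   # keep positive evidence only
    if saliency.max() > 0:
        saliency /= saliency.max()         # scale to [0, 1]
    return saliency

# Stand-in for a trained degradation detector: it simply scores the energy
# in the top frequency rows of an (mel-bins x frames) "spectrogram".
predict = lambda spec: float(spec[6:, :].sum())

spec = np.zeros((8, 10))
spec[7, :] = 1.0                           # energy in the top mel band

# Rectangular masks, each keeping one two-row band of the spectrogram.
masks = []
for r in range(0, 8, 2):
    m = np.zeros_like(spec)
    m[r : r + 2, :] = 1.0
    masks.append(m)

saliency = score_cam(spec, predict, masks)
# The band containing the energetic top row should dominate the map.
```

In the actual technique, the masks are upsampled activation maps taken from a convolutional layer of the trained network rather than fixed rectangles, but the weighting-and-combination step is the same.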