
Audio data analysis
CES Data Scientist
Slim ESSID, Audio Data Analysis and Signal Processing team
[email protected]
http://www.telecom-paristech.fr/~essid
Credits: O. GILLET, C. JODER, N. MOREAU, G. RICHARD, F. VALLET, …

About “audio”…
►Audio frequency: the range of audible frequencies (20 to 20,000 Hz)
[Figure: audible sound lies between the minimal audition threshold and the threshold of pain; frequency axis in kHz. CC Attribution 2.5 Generic]

►Audio content categories
. Speech
. Music
. Environmental sounds
[Figure: example waveforms for each category]

►An important distinction: speech vs non-speech
. Speech signals: a “simple” production model exists, the source-filter model
. Music & non-speech (environmental) signals: no generic production model; described through “timbre”, “pitch”, “loudness”, …
Image: Edward Flemming, course materials for 24.910 Topics in Linguistic Theory: Laboratory Phonology, Spring 2007. MIT OpenCourseWare (http://ocw.mit.edu/), Massachusetts Institute of Technology. Downloaded on 05 May 2012.

►Different research communities
. Music Information Research: music classification (genre, mood, …), transcription, rhythm analysis, …
. Speech: speech recognition, speaker recognition, speech enhancement, …
. Shared concerns: signal representations, audio coding, source separation, sound synthesis, …
. Together they form Machine Listening / Computer audition

►Research fields
Audio content analysis draws on: Acoustics, Linguistics, Psychology, Psychoacoustics, Musicology, Signal processing, Knowledge engineering, Machine learning, Databases, Statistics.

Why analyse audio data?
. For archive management, indexing
  » Broadcast content segmentation and classification: speech/music/jingles…, speakers
  » Music autotagging: genre (classical, jazz, rock, …), mood, usage…
  » Search engines
. For broadcasters
  » Music/effects/speech excerpt search
  » Playlist generation, DJing
. For designers and producers
  » Audio sample search
  » Music transcription (beat, rhythm, chords, notes)
  » Broadcast content monitoring, plagiarism detection, hit prediction
. For end-users
  » Content-based search (Shazam++)
  » Non-linear and interactive content consumption (“skip intro”, “replay the chorus”, karaoke: “remove the vocals”, …)
  » Recommendation
  » Personalised playlist generation

Audio-driven multimedia analysis
. Motivation for audio-driven content analysis
  » critical information is conveyed by the audio content
  » audio and visual information play complementary roles for the detection of key concepts/events
. Video examples

►Video examples
Use audio-based cues, e.g.:
• Laughter detection
• Applause detection
• Keyword spotting: “Goal!”
• Cheering detection
• Sound loudness
• Applause/cheering detection

►Key audio-based components
. Speech activity detection
. Jingle detection
. Emotion recognition
. Speech/Music/Applause/Laughter/… detection
. Speaker identification
. Social signal processing: roles, interactions, …
. Speaker diarization: “Who spoke when?”
. Speech-to-text transcription
. Natural language processing
. Music classification: genre, mood, …
At the heart of all components: a classification task (supervised or unsupervised).

General classification architecture
►Overview
. Training/development phase (at the lab.):
  development database → feature extraction (audio segments → feature vectors) → classifier training → decision functions
. Testing/evaluation phase (exploitation):
  testing database → feature extraction → classification (using the decision functions) → segment identified

Classification architecture
►Feature extraction process
Pipeline: Preprocessing → Feature computation → Temporal integration
. Motivation for preprocessing:
  » signal denoising/enhancement
  » information rate reduction, e.g. subsampling
  » normalisation, e.g. …
. Exercise, in Python (a code sketch is given at the end of this section):
  - load an audio file;
  - normalise it;
  - visualise it.
  Use librosa.
. Feature computation relies on audio signal processing techniques.

Audio signal analysis
►Short-term analysis windows
[Drawing by J. Laroche, modified]

Signal framing
[Figure: a note (attack/sustain/release) segmented into frames m, m+1, m+2 of length N with hop size H]
  » Static temporal segmentation
  » Dynamic temporal segmentation

Feature types
. Temporal features: extracted directly from the waveform samples
. Spectral features: extracted from a frequential representation of the signal
. Perceptual features: extracted using a perceptual representation based on psychoacoustic considerations

Temporal features - ZCR
. Zero Crossing Rate: characterises noisy and transient sections.

Spectral analysis
►Discrete Fourier Transform
X(k) = \sum_{n=0}^{N-1} x(n) e^{-2j\pi k n / N}, for k = 0, …, N − 1
In practice: computed using the Fast Fourier Transform (FFT).

Discrete Fourier Transform (DFT)
►Important properties
. Being a discrete-time Fourier transform, the DFT is periodic with period 1 in reduced frequency f/f_s (f_s: sampling frequency).
. For signals x(n) and y(n), n ∈ {0, …, N − 1}:

  Property           | Numerical series                    | DFT
  Linearity          | {a x(n) + b y(n)}                   | {a X(k) + b Y(k)}
  Hermitian symmetry | x(n) real                           | X(k) = X*(−k)
  Time translation   | x(n − n_0)                          | X(k) e^{−2jπ k n_0 / N}
  Convolution        | x(n) ⋆ y(n) ≜ Σ_k x(k) y(n − k)     | X(k) Y(k)
  Conjugation        | {x*(n)}                             | {X*(−k)}

Spectral analysis
►Spectral analysis by Short-Term Fourier Transform (STFT)
[Drawing by J. Laroche]
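A minimal sketch for the preprocessing exercise above (load, normalise, visualise an audio file with librosa), also computing the zero crossing rate mentioned among the temporal features. It assumes librosa, numpy and matplotlib are installed; "example.wav" is a placeholder file name, and the 20-ms frame length is an illustrative choice.

```python
import numpy as np
import matplotlib.pyplot as plt
import librosa
import librosa.display

# Load an audio file ("example.wav" is a placeholder path).
# sr=None keeps the native sampling rate instead of resampling.
y, sr = librosa.load("example.wav", sr=None)

# Peak-normalise the waveform to the range [-1, 1].
y = y / np.max(np.abs(y))

# Visualise the waveform (waveshow in recent librosa versions).
plt.figure(figsize=(10, 3))
librosa.display.waveshow(y, sr=sr)
plt.title("Normalised waveform")
plt.tight_layout()
plt.show()

# Zero crossing rate per frame: proportion of sign changes within
# each analysis window (here ~20-ms frames with 50% overlap).
frame_length = int(0.020 * sr)
hop_length = frame_length // 2
zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame_length,
                                         hop_length=hop_length)
print("ZCR shape:", zcr.shape)  # (1, number of frames)
```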
Spectral analysis
►Violin excerpt: 20-ms overlapping windows (sr = 44.1 kHz; N = 882 samples)

►Spectrogram
. C note (262 Hz) produced by a piano and a violin
[Figure: temporal signals and spectrograms. From M. Müller et al., “Signal Processing for Music Analysis”, IEEE Journal of Selected Topics in Signal Processing, October 2011.]

►Exercise
In Python (a code sketch is given at the end of this section):
. Compute short-term spectra of an audio signal using the FFT
. Compute and display the spectrogram
. Use
  » scipy.fftpack
  » librosa

►Limitations of the spectrogram representation
. Large representation
  » typically 512 coefficients every 10 ms
  » high dimensionality
. Much detail
  » redundant representation
  » high-level features (pitch, vibrato, timbre) are not highlighted
Still a low-level representation, not yet a model.

The source-filter model
. Distinction between:
  » source (excitation): fine spectral structure
  » filter (resonator): coarse spectral structure
[Diagram: source signal X(f) (vocal folds) → filter H(f) (vocal tract, resonator) → speech Y(f)]

Cepstrum
►Principle
. Source-filter model: y(n) = x(n) ⋆ h(n)
. In the frequency domain: Y(k) = X(k) H(k), hence log|Y(k)| = log|X(k)| + log|H(k)|
. By inverse DFT: c_y(n) = iDFT{log|Y(k)|} = c_x(n) + c_h(n), where c_y(n) ≜ iDFT{log|Y(k)|} is the real cepstrum definition
. Deconvolution is thus achieved: the filter is separated from the excitation
. First few cepstral coefficients
  » low quefrency: “slow iDFT waves”
  » represent the filter spectral envelope
. Next coefficients represent the source fine spectral structure

Cepstral representations
►MFCC: Mel Frequency Cepstral Coefficients
Pipeline: audio frame → DFT → magnitude spectrum → triangular filter bank in Mel scale → Log → DCT → 13 first coefficients (in general)
. Discrete Cosine Transform:
  » nice decorrelation properties (like PCA)
  » yields diagonal covariance matrices
. First and second derivatives: speed and acceleration
  » coarse temporal modelling
  » stacked with the 13 MFCCs, they give a 39-coefficient feature vector

About MFCCs
►… very popular!
. In speech applications:
  » well justified: the source-filter model makes sense
  » nice properties from a statistical modelling viewpoint: decorrelation
  » effective: state-of-the-art features for speaker and speech tasks
. In general audio classification:
  » the “source-filter” model does not always hold
  » still, MFCCs work well in practice! They are the default choice.

MFCC
►Exercise
(See the sketch below.)
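A possible sketch for the two remaining Python exercises, assuming the spectrogram exercise asks for an STFT-based spectrogram and the final (untitled) MFCC exercise asks for 13 MFCCs with first and second derivatives. It uses librosa.stft rather than scipy.fftpack directly (librosa computes the FFT internally); "example.wav", the 1024-sample FFT size and the 50% hop are illustrative choices, not taken from the slides.

```python
import numpy as np
import matplotlib.pyplot as plt
import librosa
import librosa.display

y, sr = librosa.load("example.wav", sr=None)  # placeholder file name

# Short-term spectra via the STFT (FFT over overlapping Hann windows).
n_fft = 1024              # illustrative window/FFT size
hop_length = n_fft // 2   # 50% overlap
D = librosa.stft(y, n_fft=n_fft, hop_length=hop_length, window="hann")

# Magnitude spectrogram, displayed on a dB scale.
S_db = librosa.amplitude_to_db(np.abs(D), ref=np.max)
plt.figure(figsize=(10, 4))
librosa.display.specshow(S_db, sr=sr, hop_length=hop_length,
                         x_axis="time", y_axis="hz")
plt.colorbar(format="%+2.0f dB")
plt.title("Spectrogram")
plt.tight_layout()
plt.show()

# MFCCs: 13 first coefficients, following the pipeline on the slides
# (DFT -> Mel filter bank -> log -> DCT).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=n_fft, hop_length=hop_length)

# First and second derivatives (speed and acceleration),
# stacked into the usual 39-dimensional feature vectors.
d1 = librosa.feature.delta(mfcc)
d2 = librosa.feature.delta(mfcc, order=2)
features = np.vstack([mfcc, d1, d2])
print("Feature matrix shape:", features.shape)  # (39, number of frames)
```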