WaveCRN: An Efficient Convolutional Recurrent Neural Network for End-to-end Speech Enhancement

SPL-29109-2020.R1

Tsun-An Hsieh, Hsin-Min Wang, Xugang Lu, and Yu Tsao, Member, IEEE

arXiv:2004.04098v3 [eess.AS] 26 Nov 2020

Abstract—Due to the simple design pipeline, end-to-end (E2E) neural models for speech enhancement (SE) have attracted great interest. To improve the performance of an E2E model, the local and sequential properties of speech should be efficiently taken into account in modeling. However, in most current E2E models for SE, these properties are either not fully considered or are too complex to realize. In this paper, we propose an efficient E2E SE model, termed WaveCRN. Compared with models based on convolutional neural networks (CNN) or long short-term memory (LSTM), WaveCRN uses a CNN module to capture the speech locality features and a stacked simple recurrent units (SRU) module to model the sequential property of the locality features. Unlike conventional recurrent neural networks and LSTM, SRU can be efficiently parallelized in calculation, with even fewer model parameters. To more effectively suppress noise components in the noisy speech, we derive a novel restricted feature masking approach, which performs enhancement on the feature maps in the hidden layers; this differs from the approaches that apply the estimated ratio mask to the noisy spectral features, as is commonly done in speech separation methods. Experimental results on speech denoising and compressed speech restoration tasks confirm that with the SRU and the restricted feature map, WaveCRN performs comparably to other state-of-the-art approaches with notably reduced model complexity and inference time.

Index Terms—Compressed speech restoration, simple recurrent unit, raw waveform speech enhancement, convolutional recurrent neural networks.

I. INTRODUCTION

Speech-related applications, such as automatic speech recognition (ASR), voice communication, and assistive hearing devices, play an important role in modern society. However, most of these applications are not robust when noise is involved. Therefore, speech enhancement (SE) [1]–[8], which aims to improve the quality and intelligibility of the original speech signal, has been widely used in these applications.

In recent years, deep learning algorithms have been widely used to build SE systems. One class of SE systems carries out enhancement on frequency-domain acoustic features; these are generally called spectral-mapping-based SE approaches. In these approaches, speech signals are analyzed and reconstructed using the short-time Fourier transform (STFT) and inverse STFT, respectively [9]–[13]. Deep learning models, such as fully connected deep denoising auto-encoders [3], convolutional neural networks (CNNs) [14], and recurrent neural networks (RNNs) and long short-term memory (LSTM) [15], [16], are then used as a transformation function to convert noisy spectral features to clean ones. Meanwhile, some approaches combine different types of deep learning models (e.g., CNN and RNN) to more effectively capture the local and sequential correlations [17]–[20]. More recently, an SE system built on stacked simple recurrent units (SRUs) [21], [22] has shown denoising performance comparable to that of an LSTM-based SE system, while requiring a much lower computational cost for training. Although the above-mentioned approaches already provide outstanding performance, the enhanced speech signal cannot reach perfection owing to the lack of accurate phase information. To tackle this problem, some SE approaches adopt complex ratio masking and complex spectral mapping to enhance distorted speech [23]–[25]. In [26], phase estimation was formulated as a classification problem and used in a source separation task.

Another class of SE methods directly performs enhancement on the raw waveform [27]–[31]; these are generally called waveform-mapping-based approaches. Among deep learning models, fully convolutional networks (FCNs) have been widely used to directly perform waveform mapping [28], [32]–[34]. The WaveNet model, originally proposed for text-to-speech tasks, has also been used in waveform-mapping-based SE systems [35], [36]. Compared to a fully connected architecture, fully convolutional layers retain better local information and thus can more accurately model the frequency characteristics of speech waveforms. More recently, a temporal convolutional neural network (TCNN) [29] was proposed to accurately model temporal features and perform SE in the time domain. In addition to point-to-point losses (such as the l1 and l2 norms) for optimization, some waveform-mapping-based SE approaches [37], [38] utilize an adversarial loss or a perceptual loss to capture high-level distinctions between predictions and their targets.

For the above waveform-mapping-based SE approaches, an effective characterization of sequential and local patterns is an important consideration for the final SE performance. Although the combination of CNN and RNN/LSTM may be a feasible solution, the computational cost and model size of RNN/LSTM are high, which may considerably limit its applicability. In this study, we propose an E2E waveform-mapping-based SE method using an alternative CRN, termed WaveCRN, which combines the advantages of CNN and SRU to attain improved efficiency. Compared to spectral-mapping-based CRNs [17]–[20], the proposed WaveCRN directly estimates feature masks from unprocessed waveforms through highly parallelizable recurrent units. Two tasks are used to test the proposed WaveCRN approach: (1) speech denoising and (2) compressed speech restoration. For speech denoising, we evaluate our method on an open-source dataset [39] and obtain high perceptual evaluation of speech quality (PESQ) scores [40], comparable to the state-of-the-art method, while using a relatively simple architecture and l1 loss function. For compressed speech restoration, unlike [41], [42], which used acoustic features, we simply pass the speech through a sign function for compression. This task is evaluated on the TIMIT database [43]. The proposed WaveCRN model recovers extremely compressed speech with a notable relative short-time objective intelligibility (STOI) [44] improvement of 75.51% (from 0.49 to 0.86).

Thanks to MOST of Taiwan for funding. The implementation of WaveCRN is available at https://github.com/aleXiehta/WaveCRN.

II. METHODOLOGY

In this section, we describe the details of our WaveCRN-based SE system. The architecture is a fully differentiable E2E neural network that requires no pre-processing or handcrafted features. Benefiting from the advantages of CNN and SRU, it jointly models local and sequential information. The overall architecture of WaveCRN is shown in Fig. 1.

[Fig. 1. The architecture of the proposed WaveCRN model (Conv1D → Bi-SRU → mask Mi → Deconv1D; Xi → Fi → Fi′ → Yi). Unlike spectral-mapping-based CRN, WaveCRN integrates 1D CNN with bidirectional SRU.]

A. 1D Convolutional Input Module

As mentioned in the previous section, spectral-mapping-based SE approaches first convert speech waveforms to the spectral domain by STFT. To implement waveform-mapping SE, WaveCRN uses a 1D CNN input module to replace the STFT processing. Benefiting from the nature of neural networks, the CNN module is fully trainable. For each batch, the input noisy audio X ∈ R^(N×1×L) is convolved with a two-dimensional tensor W ∈ R^(C×K) to extract the feature map F ∈ R^(N×C×T), where N, C, K, T, and L are the batch size, number of channels, kernel size, number of time steps, and audio length, respectively.

B. Temporal Encoder

A bidirectional SRU (Bi-SRU) captures the temporal correlations of the feature map extracted by the 1D CNN input module in both directions. For each batch, the feature map F ∈ R^(N×C×T) is passed to the SRU-based recurrent feature extractor. The hidden states extracted in both directions are concatenated to form the encoded features.

C. Restricted Feature Mask

The optimal ratio mask (ORM) has been widely used in SE and speech separation tasks [45]. As ORM is a time-frequency mask, it cannot be directly applied to waveform-mapping-based SE. In this study, an alternative mask, the restricted feature mask (RFM), with all elements in the range of −1 to 1, is applied to mask the feature map F:

F′ = M ◦ F,    (1)

where M ∈ R^(N×C×T) is the RFM, and F′ is the masked feature map estimated by element-wise multiplying the mask M and the feature map F. Note that the main difference between ORM and RFM is that the former is applied to spectral features, whereas the latter is used to transform feature maps.

D. Waveform Generation

As described in Section II-A, the sequence length is reduced from L (for the waveform) to T (for the feature map) due to the stride in the convolution process. Length restoration is essential for generating an output waveform of the same length as the input. Given the input length L_in, output length L_out, stride S, and padding P, the relationship between L_in and L_out for the transposed 1D convolution can be formulated as:

L_out = (L_in − 1) × S − 2 × P + (K − 1) + 1.    (2)

Letting L_in = T, S = K/2, and P = K/2, we have L_out = L. That is, the output waveform and the input waveform are guaranteed to have the same length.

E. Model Structure Overview

As shown in Fig. 1, our model leverages the benefits of CNN and SRU. Given the i-th noisy speech utterance Xi ∈ R^(1×L), i = 0, ..., N − 1, in a batch, a 1D CNN first maps Xi into a feature map Fi for local feature extraction. A Bi-SRU then computes an RFM Mi, which element-wisely multiplies Fi to generate a masked feature map Fi′. Finally, a transposed 1D convolution layer recovers the enhanced speech waveform Yi from the masked features Fi′.

In [21], SRU was shown to yield performance comparable to that of LSTM, but with better parallelism. The dependency between gates in LSTM leads to slow training and inference.
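As a minimal sketch of the 1D convolutional input module of Section II-A, the following PyTorch snippet shows the waveform-to-feature-map shape transformation. The hyperparameter values (C = 256, K = 96, L = 4800) are illustrative assumptions, not the paper's settings; the stride and padding follow the S = K/2, P = K/2 choice of Section II-D.

```python
import torch
import torch.nn as nn

# Sketch of the trainable 1D conv input module that replaces the STFT.
# C, K, and L are assumed values for illustration only.
N, L = 2, 4800         # batch size, waveform length
C, K = 256, 96         # number of channels, kernel size
S, P = K // 2, K // 2  # stride and padding, as in Section II-D

conv = nn.Conv1d(in_channels=1, out_channels=C,
                 kernel_size=K, stride=S, padding=P)

X = torch.randn(N, 1, L)  # noisy waveform batch, X in R^(N x 1 x L)
F = conv(X)               # feature map, F in R^(N x C x T)
T = F.shape[-1]           # with these settings, T = 2L/K + 1
```

With this stride and padding, each convolution step advances K/2 samples, so the time resolution is halved relative to the kernel size, which is what makes the length restoration of Section II-D exact.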
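The restricted feature masking of Eq. (1) is a plain element-wise product. In this sketch, a tanh of random values stands in for the Bi-SRU's mask prediction (an assumption for illustration); tanh is one simple way to keep every element in the RFM's required range of −1 to 1.

```python
import torch

# Sketch of restricted feature masking (Eq. 1, Section II-C).
# A Bi-SRU would normally predict M; a tanh-bounded random tensor
# stands in here, keeping all elements inside (-1, 1).
N, C, T = 2, 256, 101
F = torch.randn(N, C, T)              # feature map from the conv module
M = torch.tanh(torch.randn(N, C, T))  # RFM stand-in, elements in (-1, 1)

F_masked = M * F                      # F' = M o F (element-wise)
```

Unlike a spectral ratio mask, which scales magnitude bins, this mask acts on learned hidden feature maps, so negative mask values are meaningful and sign flips are allowed.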
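The length-restoration claim of Section II-D can be checked numerically. This sketch uses the same assumed sizes as above (L = 4800, K = 96, so T = 2L/K + 1 = 101) and verifies that the transposed 1D convolution of Eq. (2) maps T back to L.

```python
import torch
import torch.nn as nn

# Numerical check of Eq. (2): L_out = (L_in - 1)*S - 2*P + (K - 1) + 1.
# Sizes are assumed for illustration; with L_in = T, S = K/2, P = K/2,
# the formula gives L_out = L, restoring the original waveform length.
L, K = 4800, 96
S, P = K // 2, K // 2
T = 2 * L // K + 1  # feature-map length from the conv input module

deconv = nn.ConvTranspose1d(in_channels=256, out_channels=1,
                            kernel_size=K, stride=S, padding=P)
F_masked = torch.randn(1, 256, T)
Y = deconv(F_masked)  # recovered waveform, length L

L_out = (T - 1) * S - 2 * P + (K - 1) + 1  # Eq. (2)
```

Note that the exact round trip relies on L being divisible by K/2; for other lengths the output would differ from L by the remainder, which is why the stride/padding choice matters.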
