
INTERSPEECH 2020, October 25–29, 2020, Shanghai, China

Multi-task Learning for End-to-end Noise-robust Bandwidth Extension

Nana Hou1, Chenglin Xu1,4, Joey Tianyi Zhou3, Eng Siong Chng1,2, Haizhou Li4,5

1School of Computer Science and Engineering, Nanyang Technological University, Singapore
2Temasek Laboratories, Nanyang Technological University, Singapore
3Institute of High Performance Computing (IHPC), A*STAR, Singapore
4Department of Electrical and Computer Engineering, National University of Singapore, Singapore
5Machine Listening Lab, University of Bremen, Germany
[email protected]

Abstract

Bandwidth extension aims to reconstruct wideband speech signals from narrowband inputs to improve perceptual quality. Prior studies mostly perform bandwidth extension under the assumption that the narrowband signals are clean without noise. The use of such extension techniques is greatly limited in practice when signals are corrupted by noise. To alleviate this problem, we propose an end-to-end time-domain framework for noise-robust bandwidth extension that jointly optimizes a mask-based speech enhancement module and an ideal bandwidth extension module with multi-task learning. The proposed framework avoids decomposing the signals into magnitude and phase spectra, and therefore requires no phase estimation.

Figure 1: The workflow of noise-robust bandwidth extension. In Step 1, the noisy narrowband signal is enhanced to remove noise. In Step 2, the enhanced narrowband signal is bandwidth-extended to generate the clean wideband signal.
Experimental results show that the proposed method achieves 14.3% and 15.8% relative improvements over the best baseline in terms of perceptual evaluation of speech quality (PESQ) and log-spectral distortion (LSD), respectively. Furthermore, our method is 3 times more compact than the best baseline in terms of the number of parameters.

Index Terms: Noise-robust bandwidth extension, multi-task learning, time-domain masking, temporal convolutional network

1. Introduction

Speech signals with broader bandwidth provide higher perceptual quality and intelligibility. Bandwidth extension aims to recover the high-frequency information from narrowband signals, which is found useful in hearing aid design [1, 2], speech recognition [3–5] and speaker verification [6, 7].

Speech bandwidth extension methods, such as deep neural networks (DNN) [8, 9], fully convolutional networks [10, 11], generative adversarial networks (GAN) [12], and WaveNet [13], mostly perform extension under ideal conditions with clean narrowband signals as inputs. This is called ideal bandwidth extension. However, in practice, speech signals are often corrupted by channel or ambient noise, for example, the pilot speech received via ultra high frequency (UHF) radio for air traffic control. Without addressing the noise issue, ideal bandwidth extension techniques are greatly limited in real-world applications.

A typical way to address the noise problem is to perform speech enhancement on the noisy narrowband signal first (Step 1), and ideal bandwidth extension next (Step 2), as illustrated in Figure 1. For example, one study applied the iterative Vector Taylor Series (VTS) approximation algorithm [14] for feature enhancement, followed by Gaussian mixture models or maximum a posteriori models to reconstruct the wideband signals [15, 16].

With the advent of deep learning, recent studies suggest a unified approach that combines speech enhancement and bandwidth extension (UEE) in a jointly trained neural network [17]. As shown in Figure 2(a), the UEE approach first applies a bi-directional long short-term memory (BLSTM) layer as the speech enhancement module to map the noisy narrowband input to enhanced narrowband features. Then, another BLSTM layer is applied as the ideal bandwidth extension module [18] to recover the missing high-frequency information from the enhanced narrowband features. The speech enhancement and bandwidth extension modules are first trained separately as pre-training, and then fine-tuned with a single mean square error (MSE) loss between the clean wideband ground truth and the enhanced-plus-extended output. Overall, the UEE approach is implemented with a two-stage training scheme, and it also faces the phase estimation difficulty, just like other frequency-domain techniques.

In this paper, we propose an end-to-end time-domain framework for noise-robust bandwidth extension, achieved by jointly optimizing mask-based speech enhancement and ideal bandwidth extension modules with multi-task learning (MTL-MBE). As a time-domain technique, the proposed method inherently avoids phase estimation issues. Specifically, the noisy narrowband signal is first encoded into acoustic features instead of the short-time Fourier transform (STFT). The speech enhancement module takes the acoustic features, estimates a mask, and obtains the enhanced narrowband features for subsequent bandwidth extension. Two speech decoders are trained to reconstruct the enhanced narrowband and enhanced-plus-extended features into time-domain signals, in a similar way to what the inverse STFT (iSTFT) does. The network is optimized with multi-task learning [19–21] over both the narrowband and wideband signals. To the best of our knowledge, this is the first work to explore noise-robust bandwidth extension in the time domain.

Copyright © 2020 ISCA http://dx.doi.org/10.21437/Interspeech.2020-2022

Figure 2: Block diagrams of (a) frequency-domain noise-robust bandwidth extension, (b) time-domain noise-robust bandwidth extension, (c) time-domain mask-based noise-robust bandwidth extension (MBE), and (d) time-domain mask-based noise-robust bandwidth extension with multi-task learning (MTL-MBE). ⊗ is an operator that refers to element-wise multiplication.

Figure 3: Block diagram of the temporal convolutional network (TCN). "tcb-2^(b−1)" denotes a temporal convolutional block (TCB) with a dilation factor of 2^(b−1), where b is the total number of TCBs. "D-conv" is the dilated convolutional layer; such layers are stacked across several TCBs to exponentially increase the dilation factors. L is the residual connection.

2. Enhancement and Extension Multi-Task Learning

We now propose a time-domain masking approach for noise-robust bandwidth extension with multi-task learning (MTL-MBE), which is illustrated in Figure 2(d).

We first examine a noise-robust bandwidth extension network in the time domain, which consists of a 1-D convolutional encoder to extract acoustic features from the input speech, and a 1-D de-convolutional decoder to reconstruct waveforms from the enhanced-plus-extended features, as shown in Figure 2(b). Such a convolutional encoder-decoder structure is widely used in enhancement and separation tasks [22, 23]. The enhancement and extension are implemented as a pipeline of two similar regression, or mapping-based, neural networks. If trained jointly, the individual functions of the respective networks are not clearly separated. If trained separately, we face the same issue as other two-stage training schemes do.

2.1. Time-domain masking

To address the problem in the pipeline scheme of Figure 2(b), we propose a time-domain masking module to replace the mapping-based enhancement module, as shown in Figure 2(c). This module has a unique architecture different from the extension module, and the resulting system is called MBE.

The time-domain masking aims to reduce the additive noise in noisy narrowband signals prior to extension. As shown in Figure 2(c), the input narrowband signal x(t) ∈ R^(1×T) is encoded to a representation A ∈ R^(K×M) by a 1-D CNN.

This is not the first time that TCN has been explored for speech enhancement. Prior work [24] utilized TCN as a regression module to map noisy input to clean signals, but such a mapping-based framework is not suitable as an enhancement module here, because it still suffers from the same problem as two-stage training schemes do. Therefore, we utilize TCN as a mask estimation module, which is a unique architecture different from the extension module. As shown in Figure 3, the encoder representation A is first normalized by its mean and variance over the channel dimension, scaled by trainable bias and gain parameters [25]. Then, a 1 × 1 CNN with N(= 128) filters is applied to adjust the number of input channels. To capture the long-range temporal information of the speech with a manageable number of parameters, dilated convolutional layers are stacked in several temporal convolutional blocks (TCBs) with exponentially increasing dilation factors. Each TCB, as shown in the dotted box of Figure 3, consists of two 1 × 1 CNNs and one dilated convolutional layer, each with a parametric rectified linear unit (PReLU) [26] activation function and a normalization operation. The first 1 × 1 CNN (with 512 filters and 1 × 1 kernel size) determines the number of input channels, and the second 1 × 1 CNN (with 128 filters and 1 × 1 kernel size) adjusts the output channels from the dilated convolutional layer (with 512 filters and 1 × 3 kernel size). We form b(= 8) TCBs as a batch and repeat the batch r(= 3) times in the TCN of the mask estimation module. In each batch, the dilation factors of the depthwise convolutions in the b TCBs increase as [2^0, ..., 2^(b−1)]. To keep the estimated mask W in a dimension consistent with the encoder representation A, one 1 × 1 CNN (with 512 filters and 1 × 1 kernel size) is applied with a sigmoid activation function, ensuring that the estimated mask W ranges within [0, 1].
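The TCN mask estimator described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the block and builder names are hypothetical, the encoder channel count K is assumed to be 512 (it is not stated in this excerpt), and the channel-wise normalization with trainable gain and bias is approximated by `GroupNorm(1, C)`. The quoted sizes (128 bottleneck channels, 512-filter depthwise dilated convolutions with 1 × 3 kernels, b = 8 TCBs repeated r = 3 times, dilations 2^0 … 2^(b−1), sigmoid output) follow the text.

```python
import torch
import torch.nn as nn

class TemporalConvBlock(nn.Module):
    """One TCB: 1x1 CNN (128 -> 512), PReLU + norm, depthwise dilated
    conv (kernel 3), PReLU + norm, 1x1 CNN (512 -> 128), residual add."""
    def __init__(self, in_ch=128, hid_ch=512, dilation=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, hid_ch, 1),          # first 1x1 CNN, 512 filters
            nn.PReLU(),
            nn.GroupNorm(1, hid_ch),              # stand-in normalization
            nn.Conv1d(hid_ch, hid_ch, 3, padding=dilation,
                      dilation=dilation, groups=hid_ch),  # depthwise D-conv
            nn.PReLU(),
            nn.GroupNorm(1, hid_ch),
            nn.Conv1d(hid_ch, in_ch, 1),          # second 1x1 CNN, 128 filters
        )

    def forward(self, x):
        return x + self.net(x)  # residual connection around the block

def build_mask_estimator(enc_ch=512, bottleneck=128, b=8, r=3):
    """TCN mask estimator: normalize A, bottleneck to 128 channels,
    r repeats of b TCBs with dilations 2^0 .. 2^(b-1), then a 1x1 CNN
    with sigmoid so the mask W lies in [0, 1] and matches A's shape."""
    blocks = [TemporalConvBlock(bottleneck, 512, 2 ** i)
              for _ in range(r) for i in range(b)]
    return nn.Sequential(
        nn.GroupNorm(1, enc_ch),
        nn.Conv1d(enc_ch, bottleneck, 1),
        *blocks,
        nn.Conv1d(bottleneck, enc_ch, 1),
        nn.Sigmoid(),
    )
```

Exponentially increasing the dilation per block is what lets the stack cover a long temporal context at modest depth: each batch of 8 blocks spans a receptive field that doubles with every block.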
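Putting the pieces of Figure 2(d) together — waveform encoder in place of the STFT, element-wise masking (⊗), an extension module, and two decoders in place of the iSTFT, trained with a multi-task objective over narrowband and wideband targets — gives the following simplified sketch. Everything here is an assumption for illustration: the window/stride sizes, the 1 × 1-CNN stand-ins for the mask estimation and extension modules, the equal 0.5 loss weighting, and the shared output length (the paper's wideband decoder targets the higher-rate wideband signal).

```python
import torch
import torch.nn as nn

class MTLMBE(nn.Module):
    """Minimal MTL-MBE data-flow sketch with hypothetical sizes."""
    def __init__(self, enc_ch=512, win=20, stride=10):
        super().__init__()
        self.encoder = nn.Conv1d(1, enc_ch, win, stride=stride)  # replaces STFT
        # stand-in mask estimator; the paper uses the TCN of Figure 3
        self.mask_net = nn.Sequential(nn.Conv1d(enc_ch, enc_ch, 1), nn.Sigmoid())
        # stand-in ideal bandwidth extension module
        self.extend = nn.Conv1d(enc_ch, enc_ch, 1)
        # two decoders, replacing the iSTFT
        self.dec_nb = nn.ConvTranspose1d(enc_ch, 1, win, stride=stride)
        self.dec_wb = nn.ConvTranspose1d(enc_ch, 1, win, stride=stride)

    def forward(self, x):
        a = self.encoder(x)                  # encoder representation A
        enhanced = a * self.mask_net(a)      # element-wise masking (the ⊗ op)
        y_nb = self.dec_nb(enhanced)         # enhanced narrowband waveform
        y_wb = self.dec_wb(self.extend(enhanced))  # enhanced-plus-extended
        return y_nb, y_wb

def mtl_loss(y_nb, y_wb, ref_nb, ref_wb, alpha=0.5):
    """Multi-task objective: weighted sum of narrowband and wideband
    reconstruction losses (alpha = 0.5 is a hypothetical weighting)."""
    mse = nn.functional.mse_loss
    return alpha * mse(y_nb, ref_nb) + (1 - alpha) * mse(y_wb, ref_wb)
```

Because both losses back-propagate through the shared encoder and mask, the narrowband term keeps the enhancement branch honest while the wideband term drives the extension, which is the point of the multi-task formulation.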