INTERSPEECH 2020
October 25–29, 2020, Shanghai, China

Unsupervised Audio Source Separation using Generative Priors

Vivek Narayanaswamy1, Jayaraman J. Thiagarajan2, Rushil Anirudh2 and Andreas Spanias1

1SenSIP Center, School of ECEE, Arizona State University, Tempe, AZ
2Lawrence Livermore National Labs, 7000 East Avenue, Livermore, CA
[email protected],
[email protected],
[email protected],
[email protected]

Abstract

State-of-the-art under-determined audio source separation systems rely on supervised end-to-end training of carefully tailored neural network architectures operating either in the time or the spectral domain. However, these methods are severely challenged in terms of requiring access to expensive source-level labeled data and being specific to a given set of sources and the mixing process, which demands complete re-training when those assumptions change. This strongly emphasizes the need for unsupervised methods that can leverage the recent advances in data-driven modeling, and compensate for the lack of labeled data through meaningful priors.

given set of sources and the mixing process, consequently requiring complete re-training when those assumptions change. This motivates a strong need for the next generation of unsupervised separation methods that can leverage the recent advances in data-driven modeling, and compensate for the lack of labeled data through meaningful priors.

Utilizing appropriate priors for the unknown sources has been an effective approach to regularize the ill-conditioned nature of source separation. Examples include non-Gaussianity, statistical independence, and sparsity [13]. With the emergence of deep learning methods, it has been shown that the choice of the network architecture implicitly induces a structural prior for
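To make the role of such priors concrete, the following Python sketch recovers two sources from a single-channel mixture by imposing a sparsity prior on each source in a different basis (one sparse in time, one sparse in the DCT domain), solved with a simple iterative soft-thresholding scheme. This is only an illustrative toy example of prior-regularized separation; the signals, bases, penalty weights, and step size are assumptions made here and are not taken from the paper.

# Toy illustration: sparsity priors regularize the ill-posed problem of
# recovering two sources from one observed mixture.
# All signals and hyper-parameters below are illustrative assumptions.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 1024

# Source 1: sparse in time (a few spikes); Source 2: sparse in the DCT domain (a few tones).
s1_true = np.zeros(n)
s1_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
coeffs_true = np.zeros(n)
coeffs_true[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)
s2_true = idct(coeffs_true, norm='ortho')

m = s1_true + s2_true            # observed single-channel mixture

def soft(x, t):
    # Soft-thresholding: the proximal operator of the l1 (sparsity) prior.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# ISTA-style iterations: gradient step on the data-fit term, followed by
# proximal steps that enforce sparsity of each source in its own basis.
s1, c = np.zeros(n), np.zeros(n)
step, lam = 0.5, 0.01
for _ in range(500):
    resid = m - s1 - idct(c, norm='ortho')
    s1 = soft(s1 + step * resid, step * lam)
    c = soft(c + step * dct(resid, norm='ortho'), step * lam)

s2 = idct(c, norm='ortho')
print("relative error, source 1:", np.linalg.norm(s1 - s1_true) / np.linalg.norm(s1_true))
print("relative error, source 2:", np.linalg.norm(s2 - s2_true) / np.linalg.norm(s2_true))

The same template applies when the hand-crafted sparsity prior is replaced by a learned (e.g., generative) prior: only the regularization term and its proximal or gradient update change, while the mixture-consistency term stays the same.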