Conditioning Autoencoder Latent Spaces for Real-Time Timbre Interpolation and Synthesis


Joseph T Colonel (Queen Mary University of London, London, UK, [email protected]) and Sam Keene (Cooper Union, New York, USA, [email protected])

arXiv:2001.11296v1 [eess.AS] 30 Jan 2020

Abstract—We compare standard autoencoder topologies' performances for timbre generation. We demonstrate how different activation functions used in the autoencoder's bottleneck distribute a training corpus's embedding. We show that the choice of sigmoid activation in the bottleneck produces a more bounded and uniformly distributed embedding than a leaky rectified linear unit activation. We propose a one-hot encoded chroma feature vector for use in both input augmentation and latent space conditioning. We measure the performance of these networks, and characterize the latent embeddings that arise from the use of this chroma conditioning vector. An open source, real-time timbre synthesis algorithm in Python is outlined and shared.

Index Terms—neural network, autoencoder, timbre synthesis, real-time audio

I. INTRODUCTION

Timbre refers to the perceptual qualities of a musical sound distinct from its amplitude and pitch. It is timbre that allows a listener to distinguish between a guitar and a violin both producing a concert C note. Moreover, a musician's ability to create, control, and exploit new timbres has led, in part, to the surge in popularity of pop, electronic, and hip hop music.

New algorithms for timbre generation and sound synthesis have accompanied the rise to prominence of artificial neural networks. GANSynth [10], trained on the NSynth dataset [12], uses generative adversarial networks to output high-fidelity, locally coherent outputs. Furthermore, GANSynth's global latent conditioning allows for interpolations between two timbres. Other works have found success in using Variational Autoencoders (VAEs) [13] [5] [4], which combine autoencoders and probabilistic inference to generate new audio. Most recently, differentiable digital signal processing has shown promise by casting common modules used in signal processing into a differentiable form [11], where they can be trained with neural networks using stochastic optimization.

The complexity of these models requires powerful computing hardware to train, hardware often out of reach for musicians and creatives. When designing neural networks for creative purposes one must strike a three-way balance between the expressivity of the system, the freedom given to a user to train and interface with the network, and the computational overhead needed for sound synthesis. One successful example we point to in the field of music composition is MidiMe [9], which allows a composer to train a VAE with their own scores on a subspace of a larger, more powerful model. Moreover, these training computations take place in the end user's browser.

Our previous work has tried to strike this three-way balance as well [7] [8], by utilizing feed-forward neural network autoencoder architectures trained on Short-Time Fourier Transform (STFT) magnitude frames. This work demonstrated how the choice of activation functions, corpora, and augmentations to the autoencoder's input could improve performance for timbre generation. However, we found upon testing that the autoencoder's latent space proved difficult to control and characterize. Also, we found that our use of a five-octave MicroKORG corpus encouraged the autoencoder to produce high-pitched, often uncomfortable tones.

This paper introduces a chroma-based input augmentation and skip connection to help improve our autoencoder's reconstruction performance with little additional training time. A one-octave MicroKORG corpus as well as a violin-based corpus are used to train and compare various architectural tweaks. Moreover, we show how this skip connection conditions the autoencoder's latent space so that a musician can shape a timbre around a desired note class. A full characterization of the autoencoder's latent space is provided by sampling from meshes that span the latent space. Finally, a real-time, responsive implementation of this architecture is outlined and made available in Python.

II. AUTOENCODING NEURAL NETWORKS

An autoencoding neural network (i.e. autoencoder) is a machine learning algorithm that is typically used for unsupervised learning of an encoding scheme for a given input domain, and is comprised of an encoder and a decoder [25]. For the purposes of this work, the encoder is forced to shrink the dimension of an input into a latent space using a discrete number of values, or "neurons." The decoder then expands the dimension of the latent space to that of the input, in a manner that reconstructs the original input.

In a single-layer model, the encoder maps an input vector x ∈ R^d to the hidden layer y ∈ R^e, where d > e. Then, the decoder maps y to x̂ ∈ R^d. In this formulation, the encoder maps x → y via

    y = f(Wx + b)    (1)

where W ∈ R^(e×d), b ∈ R^e, and f(·) is an activation function that imposes a non-linearity in the neural network. The decoder has a similar formulation:

    x̂ = f(W_out y + b_out)    (2)

with W_out ∈ R^(d×e), b_out ∈ R^d.

A multi-layer autoencoder acts in much the same way as a single-layer autoencoder. The encoder contains n > 1 layers and the decoder contains m > 1 layers. Using Equation 1 for each mapping, the encoder maps x → x_1 → ... → x_n. Treating x_n as y in Equation 2, the decoder maps x_n → x_{n+1} → ... → x_{n+m} = x̂.

The autoencoder trains the weights of the W's and b's to minimize some cost function. This cost function should minimize the distance between input and output values. The choice of activation functions f(·) and cost functions depends on the domain of a given task.

III. NETWORK DESIGN AND TOPOLOGY

We build off of previous work to present our current network architecture.

A. Activations

The sigmoid function

    f(x) = 1 / (1 + e^(−x))    (3)

and rectified linear unit (ReLU)

    f(x) = { 0, x < 0
           { x, x ≥ 0    (4)

are often used to impose the nonlinearities f(·) in an autoencoding neural network. A hybrid autoencoder topology combining both sigmoid and ReLU activations was shown to outperform all-sigmoid and all-ReLU models in a timbre encoding task [7]. However, this hybrid model often would not converge for a deeper autoencoder [8].

More recently, the leaky rectified linear unit (LReLU) [21]

    f(x) = { αx, x < 0
           { x,  x ≥ 0    (5)

has been shown to avoid both the vanishing gradient problem introduced by using the sigmoid activation [16] and the dying neuron problem introduced by using the ReLU activation [19]. The hyperparameter α is typically small, and in this work fixed at 0.1.

B. Chroma Based Input Augmentation

The work presented in [8] showed how appending descriptive features to the input of an autoencoder can improve reconstruction performance, at the cost of increased training time. More specifically, appending the first-order difference of the training example to the input was shown to give the best reconstruction performance, at the cost of doubling the training time.

Chroma-based features capture the harmonic information of an input sound by projecting the input's frequency content onto a set of chroma values [22]. Assuming a twelve-interval equal temperament Western music scale, these chroma values form the set {C, C#, D, D#, E, F, F#, G, G#, A, A#, B}. A chromagram can be calculated by decomposing an input sound into 88 frequency bands corresponding to the musical notes A0 to C8. Summing the short-time mean-square power across N frames for each sub-band across each note (i.e. A0-A7) yields a 12 × N chromagram.

In this work, a one-hot encoded chroma representation is calculated for each training example by taking its chromagram, setting the maximum chroma value to 1, and setting every other chroma value to 0. While this reduces to note conditioning in the case of single-note audio, it generalizes to the dominant frequency of a chord or polyphonic mixture. Furthermore, this feature can be calculated on an arbitrary corpus, which eliminates the tedious process of annotating by hand.

C. Hidden Layers and Bottleneck Skip Connection

This work uses a slight modification of the geometrically decreasing/increasing autoencoder topology proposed in [8]. All layers aside from the bottleneck and output layers use the LReLU activation function. The output layer uses the ReLU, as all training examples in the corpus take strictly non-negative values. For the bottleneck layer, models are trained separately with all-LReLU and all-sigmoid activations to compare how each activation constructs a latent space.

The 2048-point first-order difference input augmentation is replaced with the 12-point one-hot encoded chroma feature explained above. Furthermore, in this work three separate topologies are explored by varying the bottleneck layer's width: one with two neurons, one with three neurons, and one with eight neurons.

Residual and skip connections are used in autoencoder design to help improve performance as network depth increases [14]. In this work, the 12-point one-hot encoded chroma feature input augmentation is passed directly to the autoencoder's bottleneck layer. Models with and without this skip connection are trained to compare how the skip connection affects the autoencoder's latent space. Figure 1 depicts our architecture with the chroma skip connection and eight-neuron latent space. Note that for our two-neuron model, the 8 × N latent embedding would become a 2 × N latent embedding, and similarly would become 3 × N for our three-neuron model.

IV. CORPORA

In this work a multi-layer neural network autoencoder is trained to learn representations of musical timbre. The aim is to train the autoencoder to contain high-level descriptive features in a low-dimensional latent space that can be easily manipulated by a musician.
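The single-layer encoder and decoder mappings of Equations (1) and (2) can be sketched in NumPy. The dimensions d = 2048 and e = 8 below echo the paper's STFT frame size and widest bottleneck, but the random, untrained weights make this an illustrative forward pass only, not the paper's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d, e = 2048, 8  # input and latent dimensions (illustrative)

# Encoder parameters: W in R^(e x d), b in R^e
W, b = rng.standard_normal((e, d)) * 0.01, np.zeros(e)
# Decoder parameters: W_out in R^(d x e), b_out in R^d
W_out, b_out = rng.standard_normal((d, e)) * 0.01, np.zeros(d)

x = rng.standard_normal(d)          # input vector
y = sigmoid(W @ x + b)              # Equation (1): encode
x_hat = sigmoid(W_out @ y + b_out)  # Equation (2): decode

print(y.shape, x_hat.shape)  # (8,) (2048,)
```

Training would then adjust W, b, W_out, and b_out to minimize a reconstruction cost between x and x̂, as described in Section II.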
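The three activations of Equations (3)-(5) are straightforward to express directly; the α = 0.1 slope matches the value fixed in Section III-A:

```python
import numpy as np

ALPHA = 0.1  # LReLU slope fixed in this work

def sigmoid(x):
    """Equation (3): squashes inputs into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    """Equation (4): zero for negative inputs, identity otherwise."""
    return np.maximum(0.0, x)

def lrelu(x, alpha=ALPHA):
    """Equation (5): small negative slope avoids dying neurons."""
    return np.where(x < 0, alpha * x, x)
```

Note how lrelu(-3.0) returns -0.3 rather than the 0.0 that relu would give, which is precisely what keeps gradients flowing through negative pre-activations.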
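The one-hot chroma feature of Section III-B can be sketched as follows. The 12 × N chromagram is assumed already computed (its filterbank details are outside this sketch), and summing energy across the N frames to pick the dominant chroma class is this sketch's interpretation of "setting the maximum chroma value to 1":

```python
import numpy as np

CHROMA = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def one_hot_chroma(chromagram):
    """Collapse a 12 x N chromagram into a 12-point one-hot vector:
    the dominant chroma class gets 1, all others 0."""
    energy = chromagram.sum(axis=1)  # total power per chroma class
    one_hot = np.zeros(12)
    one_hot[np.argmax(energy)] = 1.0
    return one_hot

# Toy chromagram over 4 frames: most energy in the A bin (index 9)
toy = np.zeros((12, 4))
toy[9, :] = 1.0
toy[2, :] = 0.2
vec = one_hot_chroma(toy)
print(CHROMA[int(np.argmax(vec))])  # A
```

Because the argmax is computed, not annotated, this feature extends to any unlabeled corpus, as the text notes.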
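The bottleneck skip connection of Section III-C amounts to concatenating the 12-point chroma vector onto the bottleneck activations before decoding. A minimal single-layer forward-pass sketch, with random untrained weights and without the intermediate geometrically sized hidden layers of the full architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def lrelu(x, alpha=0.1):
    return np.where(x < 0, alpha * x, x)

d, e = 2048, 8                            # STFT frame size, bottleneck width
x = np.abs(rng.standard_normal(d))        # non-negative magnitude frame
chroma = np.zeros(12)
chroma[0] = 1.0                           # one-hot chroma (C)

# Encoder: chroma-augmented input down to the bottleneck
W_enc = rng.standard_normal((e, d + 12)) * 0.01
z = lrelu(W_enc @ np.concatenate([x, chroma]))

# Skip connection: chroma re-enters directly at the bottleneck
z_cond = np.concatenate([z, chroma])      # (e + 12)-point conditioned latent

# Decoder: ReLU output keeps reconstructed magnitudes non-negative
W_dec = rng.standard_normal((d, e + 12)) * 0.01
x_hat = np.maximum(0.0, W_dec @ z_cond)

print(z.shape, z_cond.shape, x_hat.shape)  # (8,) (20,) (2048,)
```

At synthesis time, a musician can overwrite the chroma slice of z_cond to pin the output to a desired note class while steering timbre with the remaining latent neurons, which is the conditioning behavior the paper characterizes.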
