Recurrent neural networks learn robust representations by dynamically balancing compression and expansion
Matthew Farrell, Stefano Recanatesi, Timothy Moore, Guillaume Lajoie and Eric Shea-Brown
Supplemental Information (SI)
Additional model details We consider a recurrent neural network (RNN) model with one hidden layer trained to perform a delayed classification task. The equation for the hidden unit activations h_t is