Deep Learning · Recurrent Networks

Can SGD Learn Recurrent Neural Networks with Provable Generalization?
  Zeyuan Allen-Zhu · Yuanzhi Li

Input-Cell Attention Reduces Vanishing Saliency of Recurrent Neural Networks
  Aya Abdelsalam Ismail · Mohamed Gunady · Luiz Pessoa · Hector Corrada Bravo · Soheil Feizi

Input-Output Equivalence of Unitary and Contractive RNNs
  Melikasadat Emami · Mojtaba Sahraee Ardakan · Sundeep Rangan · Alyson Fletcher

Kernel-Based Approaches for Sequence Modeling: Connections to Neural Methods
  Kevin Liang · Guoyin Wang · Yitong Li · Ricardo Henao · Lawrence Carin

Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks
  Aaron Voelker · Ivana Kajić · Chris Eliasmith

Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics
  Giancarlo Kerg · Kyle Goyette · Maximilian Puelma Touzel · Gauthier Gidel · Eugene Vorontsov · Yoshua Bengio · Guillaume Lajoie

Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics
  Niru Maheswaranathan · Alex Williams · Matthew Golub · Surya Ganguli · David Sussillo

Root Mean Square Layer Normalization
  Biao Zhang · Rico Sennrich

Universal Approximation of Input-Output Maps by Temporal Convolutional Nets
  Joshua Hanson · Maxim Raginsky