[1801.00632] Character-level Recurrent Neural Networks in Practice: Comparing Training and Sampling Schemes
An advantage of single-loss training, however, is that it encourages the network to make predictions on a long-term basis, since we only backpropagate one loss, defined at the end of a sequence.

Abstract: Recurrent neural networks are nowadays successfully used in an abundance of
applications, ranging from text, speech, and image processing to recommender
systems. Backpropagation through time is the algorithm that is commonly used to
train these networks on specific tasks. Many deep learning frameworks have
their own implementation of training and sampling procedures for recurrent
neural networks, while there are in fact multiple other possibilities to choose
from and other parameters to tune. In existing literature this is very often
overlooked or ignored. In this paper we therefore give an overview of possible
training and sampling schemes for character-level recurrent neural networks to
solve the task of predicting the next token in a given sequence. We test these
different schemes on a variety of datasets, neural network architectures and
parameter settings, and formulate a number of take-home recommendations. The
choice of training and sampling scheme turns out to be subject to a number of
trade-offs, such as training stability, sampling time, model performance and
implementation effort, but is largely independent of the data. Perhaps the most
surprising result is that transferring hidden states for correctly initializing
the model on subsequences often leads to unstable training behavior depending
on the dataset.
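The hidden-state transfer mentioned above can be made concrete with a small sketch (pure Python with illustrative names, not the paper's code): a long sequence is split into subsequences, and the hidden state at the end of one subsequence either initializes the next subsequence (state transfer) or is reset to zeros (re-initialization).

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0):
    """One step of a toy scalar RNN cell: h' = tanh(w_h*h + w_x*x)."""
    return math.tanh(w_h * h + w_x * x)

def run_subsequences(sequence, subseq_len, transfer_hidden):
    """Process a sequence in chunks of subseq_len, returning the final
    hidden state of each chunk; optionally carry the state across chunks."""
    h = 0.0
    finals = []
    for start in range(0, len(sequence), subseq_len):
        if not transfer_hidden:
            h = 0.0  # re-initialize the hidden state on every subsequence
        for x in sequence[start:start + subseq_len]:
            h = rnn_step(h, x)
        finals.append(h)
    return finals

seq = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
stateful = run_subsequences(seq, 3, transfer_hidden=True)
stateless = run_subsequences(seq, 3, transfer_hidden=False)
# The first chunk is identical either way; from the second chunk on, the
# carried-over hidden state makes the two schemes diverge.
print(stateful, stateless)
```

This only sketches the forward pass; in training, that carried-over state is what correctly initializes the model on each subsequence, which the paper finds can destabilize training on some datasets.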

Fig. 1: Unrolling a recurrent neural network in time. Functions f(·) and g(·) and their parameters are shared across all time steps.
Fig. 2: Example of truncated backpropagation through time for k1 = 2 and k2 = 3. The thick arrows indicate one backpropagation through time update.
Fig. 3: Graphical visualization of scheme 1.
Fig. 4: Graphical visualization of scheme 2.
Fig. 5: Graphical visualization of scheme 3.
Fig. 6: Graphical visualization of scheme 4. The shaded circle is the remembered hidden state.
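The truncated backpropagation through time of Fig. 2 can be illustrated by its update schedule: under the common reading of the two parameters, an update is performed every k1 time steps, and each update backpropagates through at most the last k2 time steps. A minimal sketch (function name and structure are our own, not the paper's):

```python
def tbptt_updates(seq_len, k1, k2):
    """List (loss_step, backprop_window) pairs for truncated BPTT:
    every k1 time steps, gradients flow back over at most k2 steps."""
    updates = []
    for t in range(k1 - 1, seq_len, k1):
        window = list(range(max(0, t - k2 + 1), t + 1))
        updates.append((t, window))
    return updates

# For k1 = 2, k2 = 3 (the setting of Fig. 2): an update every 2 steps,
# each one backpropagating through up to 3 time steps.
for t, window in tbptt_updates(8, k1=2, k2=3):
    print(t, window)
```

Each printed window corresponds to one thick arrow in Fig. 2: the time steps whose activations participate in that gradient update.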