

Supplementary Materials

Neural Information Processing Systems

Finally, the data was subsampled by a factor of 2.

Data augmentation

TX features were augmented by adding two types of artificial noise. Each session day has its own affine transform layer.

RNN training hyperparameters

The hyperparameters for RNN training are listed in Table 1.

Table 1: RNN training hyperparameters

    Description                           Hyperparameter
    Learning rate                         0.01
    Batch size                            48
    Number of training batches            20000
    Number of hidden units in the GRU     512
    Number of GRU layers                  2
    Dropout rate in the GRU               0.4
    Optimizer                             Adam
    Learning rate decay schedule          Linear
    L2 weight regularization              1e-5
    Maximum gradient norm for clipping    10

1.2 Language model training details

Out-of-vocabulary words were mapped to a special token. In our case, T contains all 26 English letters, 5 punctuation marks, and the CTC blank symbol.
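The data augmentation step above adds two types of artificial noise to the TX features. The text does not specify which types; a minimal sketch, assuming the common choices of per-timestep white noise and a constant per-channel offset (both assumptions, as are the scale parameters):

```python
import numpy as np

def augment_tx(features, rng, white_sd=0.2, offset_sd=0.6):
    """Illustrative augmentation of TX features (time x channels) with two
    kinds of artificial noise. White noise perturbs every time step
    independently; the offset shifts each channel by a constant for the
    whole trial. Noise types and standard deviations are assumptions,
    not values from the paper."""
    white = rng.normal(0.0, white_sd, size=features.shape)
    offset = rng.normal(0.0, offset_sd, size=(1, features.shape[1]))
    return features + white + offset

# Example: augment a zero-valued trial of 10 time steps x 4 channels.
rng = np.random.default_rng(0)
trial = np.zeros((10, 4))
augmented = augment_tx(trial, rng)
```

Because the offset is drawn once per trial, it simulates session-to-session baseline shifts, which is also why a per-day affine input layer helps at decode time.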
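Table 1 specifies a linear learning-rate decay over 20000 training batches starting at 0.01. A sketch of that schedule, assuming the rate decays to zero by the final batch (the table only says the decay is linear):

```python
def linear_decay_lr(batch, total_batches=20000, lr0=0.01, lr_final=0.0):
    """Linearly interpolate the learning rate from lr0 at batch 0 to
    lr_final at total_batches, then hold it there. lr_final=0.0 is an
    assumption; Table 1 gives only the initial rate and batch count."""
    frac = min(batch / total_batches, 1.0)
    return lr0 + frac * (lr_final - lr0)

# Example: rate at the start, midpoint, and end of training.
start, mid, end = (linear_decay_lr(b) for b in (0, 10000, 20000))
```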
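The token set T described above (26 English letters, 5 punctuation marks, and the CTC blank) can be sketched as follows; the particular punctuation marks are an assumption, since the text does not list them:

```python
import string

LETTERS = list(string.ascii_lowercase)   # 26 English letters
PUNCTUATION = [",", ".", "?", "'", "-"]  # assumed choice of 5 punctuation marks
BLANK = "<blank>"                        # CTC blank symbol

# Blank is placed last here; CTC implementations differ on where the
# blank index goes (e.g. PyTorch's CTCLoss defaults to index 0).
tokens = LETTERS + PUNCTUATION + [BLANK]
token_to_id = {t: i for i, t in enumerate(tokens)}

assert len(tokens) == 32  # 26 letters + 5 punctuation + 1 blank
```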