sa-lstm
Semi-supervised Sequence Learning
We present two approaches to use unlabeled data to improve Sequence Learning with recurrent networks. The first approach is to predict what comes next in a sequence, which is a language model in NLP. The second approach is to use a sequence autoencoder, which reads the input sequence into a vector and predicts the input sequence again. These two algorithms can be used as a "pretraining" algorithm for a later supervised sequence learning algorithm. In other words, the parameters obtained from the pretraining step can then be used as a starting point for other supervised training models. In our experiments, we find that long short-term memory recurrent networks, after being pretrained with the two approaches, become more stable to train and generalize better. With pretraining, we were able to achieve strong performance in many classification tasks, such as text classification on IMDB and DBpedia, or image recognition on CIFAR-10.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
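The abstract above describes the sequence-autoencoder pretraining recipe: read the input into a vector, reconstruct it, then reuse the learned weights as the starting point for a supervised LSTM. Below is a minimal PyTorch sketch of that idea; the module names, layer sizes, and toy training step are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of sequence-autoencoder pretraining followed by supervised
# fine-tuning. Hyperparameters and the toy data are assumptions for illustration.
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """Reads the input sequence into a vector, then predicts the same sequence."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                     # tokens: (batch, seq_len)
        emb = self.embed(tokens)
        _, state = self.encoder(emb)               # compress the sequence into (h, c)
        dec_out, _ = self.decoder(emb, state)      # teacher-forced reconstruction
        return self.out(dec_out)                   # logits over the vocabulary

class Classifier(nn.Module):
    """Supervised model whose embedding/LSTM are warm-started from the autoencoder."""
    def __init__(self, vocab_size, num_classes, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.embed(tokens))
        return self.head(h[-1])                    # classify from the final hidden state

vocab, classes = 10_000, 2
ae = SeqAutoencoder(vocab)
unlabeled = torch.randint(0, vocab, (32, 20))      # stand-in for unlabeled text
logits = ae(unlabeled)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), unlabeled.reshape(-1))
loss.backward()                                    # pretraining step on unlabeled data

clf = Classifier(vocab, classes)
clf.embed.load_state_dict(ae.embed.state_dict())   # reuse pretrained parameters
clf.lstm.load_state_dict(ae.encoder.state_dict())  # as the supervised starting point
```

The only essential point is that the autoencoder's embedding and encoder weights become the initialization of the supervised model; the language-model variant differs only in predicting the next token instead of reconstructing the input.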
Mesoscale Traffic Forecasting for Real-Time Bottleneck and Shockwave Prediction
Chekroun, Raphael, Wang, Han, Lee, Jonathan, Toromanoff, Marin, Hornauer, Sascha, Moutarde, Fabien, Monache, Maria Laura Delle
Accurate real-time traffic state forecasting plays a pivotal role in traffic control research. In particular, the CIRCLES consortium project necessitates predictive techniques to mitigate the impact of data source delays. After the success of the MegaVanderTest experiment, this paper aims at overcoming the current system's limitations and developing a better-suited approach to improve real-time traffic state estimation for the next iterations of the experiment. In this paper, we introduce the SA-LSTM, a deep forecasting method integrating Self-Attention (SA) on the spatial dimension with Long Short-Term Memory (LSTM), yielding state-of-the-art results in real-time mesoscale traffic forecasting. We extend this approach to multi-step forecasting with the n-step SA-LSTM, which outperforms traditional multi-step forecasting methods in the trade-off between short-term and long-term predictions, all while operating in real time.
- North America > United States > Tennessee > Davidson County > Nashville (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > France (0.04)
- (2 more...)
- Transportation > Ground > Road (1.00)
- Transportation > Infrastructure & Services (0.93)
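A hedged sketch of how the combination described in this abstract could be wired up is given below: self-attention mixes information across road segments (the spatial dimension) at each time step, and an LSTM then models the temporal evolution. Reading "self-attention on the spatial dimension" this way, as well as all layer sizes and the single-step forecasting head, are assumptions for illustration, not the authors' exact architecture.

```python
# Illustrative SA-LSTM-style model: spatial self-attention per time step, then an
# LSTM over time. Shapes and hyperparameters are assumptions, not the paper's.
import torch
import torch.nn as nn

class SALSTM(nn.Module):
    def __init__(self, num_segments, num_features=1, d_model=64, heads=4, hidden=128):
        super().__init__()
        self.proj = nn.Linear(num_features, d_model)
        self.spatial_attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.lstm = nn.LSTM(num_segments * d_model, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_segments)  # one prediction per segment

    def forward(self, x):                            # x: (batch, time, segments, features)
        b, t, s, _ = x.shape
        z = self.proj(x).reshape(b * t, s, -1)       # attend over the spatial axis
        z, _ = self.spatial_attn(z, z, z)
        z = z.reshape(b, t, -1)                      # flatten segments for the LSTM
        out, _ = self.lstm(z)                        # temporal modelling
        return self.head(out[:, -1])                 # next-step traffic state

model = SALSTM(num_segments=30)
history = torch.randn(8, 12, 30, 1)                  # 12 past steps, 30 road segments
forecast = model(history)                            # (8, 30) next-step prediction
```

The n-step variant mentioned in the abstract would extend this by predicting several future steps instead of one; that extension is not shown here.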