st-lstm



Bayesian Optimization and Deep Learning for steering wheel angle prediction

Riboni, Alessandro, Ghioldi, Nicolò, Candelieri, Antonio, Borrotti, Matteo

arXiv.org Artificial Intelligence

Given the current momentum and progress, ADS can be expected to continue to advance as a variety of ADS products become commercially available in the space of a decade (Chan, 2017). It is envisioned that automated driving technology will lead to a paradigm shift in transportation systems in terms of user experience, mode choices and business models. Nowadays, a growing number of industrial players are increasing their investments in self-driving car technologies and, more generally, in the automotive sector. ADS research and an increasing number of industrial implementations have been catalyzed by the accumulated knowledge in vehicle dynamics, in the wake of breakthroughs in computer vision brought about by the advent of deep learning (Krizhevsky, Sutskever, and Hinton, 2012; Bojarski, Yeres, Choromanska, Choromanski, Firner, Jackel, and Muller, 2017; Kocić, Jovičić, and Drndarević, 2019; Li, Yang, Qu, Cao, and Li, 2021a) and by the availability of new sensor modalities such as lidar (Schwarz, 2010). Deep Learning (DL) has been widely used for the implementation of ADSs.


Memory In Memory: A Predictive Neural Network for Learning Higher-Order Non-Stationarity from Spatiotemporal Dynamics

Wang, Yunbo, Zhang, Jianjin, Zhu, Hongyu, Long, Mingsheng, Wang, Jianmin, Yu, Philip S

arXiv.org Machine Learning

Natural spatiotemporal processes can be highly non-stationary in many ways, e.g. low-level non-stationarity such as spatial correlations or temporal dependencies of local pixel values, and high-level variations such as the accumulation, deformation or dissipation of radar echoes in precipitation forecasting. By Cramér's decomposition, any non-stationary process can be decomposed into deterministic, time-variant polynomials plus a zero-mean stochastic term. By applying differencing operations appropriately, we may turn the time-variant polynomials into a constant, making the deterministic component predictable. However, most previous recurrent neural networks for spatiotemporal prediction do not use these differential signals effectively, and their relatively simple state transition functions prevent them from learning complex variations in spacetime. We propose the Memory In Memory (MIM) networks and corresponding recurrent blocks for this purpose. The MIM blocks exploit the differential signals between adjacent recurrent states to model the non-stationary and approximately stationary properties in spatiotemporal dynamics with two cascaded, self-renewed memory modules. By stacking multiple MIM blocks, we could potentially handle higher-order non-stationarity. The MIM networks achieve state-of-the-art results on three spatiotemporal prediction tasks across both synthetic and real-world datasets. We believe that the general idea of this work can potentially be applied to other time-series forecasting tasks.
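The differencing idea the abstract invokes can be illustrated on a one-dimensional series. This is a minimal sketch, not the MIM architecture itself: it assumes a synthetic series with a quadratic deterministic trend plus zero-mean noise (the two components of Cramér's decomposition), and shows that applying the difference operator twice reduces the degree-2 polynomial to a constant, leaving an approximately stationary residual.

```python
import numpy as np

# Synthetic non-stationary series: deterministic quadratic trend
# plus a zero-mean stochastic term (Cramér's decomposition).
rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
series = 0.5 * t**2 + 3.0 * t + 7.0 + rng.normal(0.0, 0.1, size=t.size)

# Each first-order difference lowers the polynomial degree by one:
d1 = np.diff(series)        # quadratic trend -> linear
d2 = np.diff(series, n=2)   # linear trend -> constant (here 2 * 0.5 = 1.0)

# After second-order differencing, the deterministic part is a constant
# and the remainder is zero-mean noise, i.e. approximately stationary.
print(d2.mean(), d2.std())
```

Stacked MIM blocks generalize this: each block consumes the differential signal between adjacent recurrent states, so a stack of depth k can, in principle, absorb order-k polynomial non-stationarity the way k rounds of `np.diff` do here.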