
A Proofs

Neural Information Processing Systems

We further take the usual assumption that X is compact. Let us start with Proposition 3, a central observation needed in Theorem 2. Now we can proceed to prove the universality part of Theorem 2. Since the task admits a smooth separator, by Fubini's theorem and Proposition 3 we obtain the claimed bound; the reader can think of λ as a uniform distribution over G (as in Theorem 2). The result follows directly from combining de Finetti's theorem with Kallenberg's noise transfer theorem, which gives that the weights are conditionally i.i.d. The invariance case assumes either i) Assumption 1 holds, or ii) the task is an inner-product decision graph problem as in Definition 3 with infinitely many inputs (as in Theorem 2). Finally, we follow Proposition 2's proof, simply replacing de Finetti's theorem with the Aldous-Hoover theorem. Define an RLC that samples the linear coefficients as follows.
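The proof's final construction defines an RLC by sampling linear coefficients. A minimal sketch of the idea in Python; the Gaussian coefficient distribution, the function names, and the majority-vote amplification are illustrative assumptions, not the paper's construction:

```python
import random

def sample_rlc_prediction(x, mu, sigma, rng=random):
    """Predict a binary label with a Randomized Linear Classifier:
    sample linear coefficients w_i ~ N(mu_i, sigma_i^2) independently,
    then return the sign of the random linear form <w, x>.
    The Gaussian coefficient distribution is an illustrative assumption."""
    w = [rng.gauss(m, s) for m, s in zip(mu, sigma)]
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if score >= 0 else -1

def majority_vote(x, mu, sigma, n_samples=101, seed=0):
    """Amplify the per-sample success probability by majority vote
    over independent draws of the coefficients."""
    rng = random.Random(seed)
    votes = sum(sample_rlc_prediction(x, mu, sigma, rng)
                for _ in range(n_samples))
    return 1 if votes >= 0 else -1
```

The key point is that the classifier itself is linear; all expressive power comes from the distribution the coefficients are drawn from, and repetition drives the failure probability down.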







Prediction of Lane Change Intentions of Human Drivers using an LSTM, a CNN and a Transformer

De Cristofaro, Francesco, Hofbaur, Felix, Yang, Aixi, Eichberger, Arno

arXiv.org Artificial Intelligence

Lane changes of preceding vehicles have a great impact on the motion planning of automated vehicles, especially in complex traffic situations. Predicting them would benefit the public in terms of safety and efficiency. While many research efforts have been made in this direction, few have concentrated on predicting maneuvers within a set time interval rather than at a set prediction time. In addition, there is a lack of comparisons between different architectures that would determine the best-performing one and show how to correctly choose the input for such models. In this paper, the structures of an LSTM, a CNN and a Transformer network are described and implemented to predict the intention of human drivers to perform a lane change. We show how the data was prepared starting from a publicly available dataset (highD), which features were used, and how the networks were designed, and finally we compare the results of the three networks with different configurations of input data. We found that Transformer networks performed better than the other networks and were less affected by overfitting. The accuracy of the method spanned from $82.79\%$ to $96.73\%$ for different input configurations and showed overall good performance, also considering precision and recall.
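The data-preparation step the abstract describes, framing maneuver prediction as classification over fixed-length feature windows with a labeling interval rather than a single prediction time, can be sketched as follows. The window length, horizon, and feature layout are illustrative assumptions; the paper's actual highD preprocessing differs in its details:

```python
def make_windows(features, labels, window=30, horizon=50):
    """Slice a per-frame feature sequence into fixed-length input windows.

    features: list of per-frame feature vectors for one vehicle track
              (e.g. lateral offset, velocities -- illustrative choices).
    labels:   per-frame flags, 1 if a lane change starts at that frame.

    A window ending at frame t is labeled positive when a lane change
    occurs within the next `horizon` frames, i.e. the network predicts
    the maneuver within a set time interval rather than at a set time.
    Window and horizon lengths here are illustrative, not the paper's.
    """
    X, y = [], []
    for t in range(window, len(features)):
        X.append(features[t - window:t])
        y.append(int(any(labels[t:t + horizon])))
    return X, y
```

The same windows can then be fed to any of the three compared architectures, which only differ in how they consume the sequence dimension.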


Probabilistic Invariant Learning with Randomized Linear Classifiers

Neural Information Processing Systems

Designing models that are both expressive and preserve known invariances of tasks is an increasingly hard problem. In this work, we show how to leverage randomness and design models that are both expressive and invariant while using fewer resources. Inspired by randomized algorithms, our key insight is that accepting probabilistic notions of universal approximation and invariance can reduce our resource requirements. More specifically, we propose a class of binary classification models called Randomized Linear Classifiers (RLCs). We give parameter and sample size conditions under which RLCs can, with high probability, approximate any (smooth) function while preserving invariance to compact group transformations.
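One way to see how randomness buys invariance: if the linear coefficients are sampled i.i.d., they are exchangeable, so the classifier's output distribution is invariant under permutations of the input coordinates. The sketch below is a hypothetical illustration of this distributional invariance, not the paper's construction:

```python
import random

def rlc_positive_rate(x, n_samples=20000, seed=1):
    """Estimate P(sign(<w, x>) = +1) for an RLC whose coefficients are
    sampled i.i.d. standard normal. Because i.i.d. coefficients are
    exchangeable, permuting x is the same in distribution as permuting w,
    so this probability is invariant under permutations of the input."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        score = sum(rng.gauss(0.0, 1.0) * xi for xi in x)
        hits += score >= 0
    return hits / n_samples

# For any x, <w, x> ~ N(0, ||x||^2), so the rate concentrates on 0.5
# for x and for any permutation of x alike.
```

The invariance here holds for the *distribution* of predictions, not for any single draw of the weights, which is exactly the probabilistic notion of invariance the abstract refers to.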


Review for NeurIPS paper: Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses

Neural Information Processing Systems

First, the main class of losses that the paper introduces, that of relative Lipschitz continuity (Def.), is closely related to Riemann-Lipschitz continuity (RLC). In particular, given that the losses are (RLC), one can recover relative Lipschitz continuity via a direct combination of convexity and the Cauchy-Schwarz inequality. Conversely, every relative Lipschitz continuous loss can be seen as (RLC) if one chooses the respective Riemannian metric accordingly. This becomes even more evident for the example that the paper presents: if $f(x) = x^{2}$ for $x \in \mathbb{R}$, one can straightforwardly choose the Riemannian metric in such a manner that the respective dual norm is $\|v\|_{x,\ast} = |v|/x$, and (RLC) follows. That said, this significantly weakens the contributions concerning FTRL and the like, since comparable results already appear in Antonakopoulos et al. On the other hand, concerning the most intriguing part, that of establishing logarithmic regret for the case where the loss functions are in addition relatively strongly convex, there is no obvious way to establish any relevant examples that simultaneously satisfy relative Lipschitz continuity and relative strong convexity, besides of course the Euclidean ones.
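The reviewer's example can be written out explicitly. Restricting to $x > 0$ so the stated dual norm is well defined (an assumption added here for the computation):

```latex
% f(x) = x^{2} on x > 0, with the Riemannian metric chosen so that
% the dual norm on covectors is \|v\|_{x,\ast} = |v| / x. Then
\|\nabla f(x)\|_{x,\ast} = \frac{|f'(x)|}{x} = \frac{2x}{x} = 2,
```

so the Riemannian gradient norm is uniformly bounded and (RLC) holds, even though $f'(x) = 2x$ is unbounded in the Euclidean norm.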

