





The Evolution of Learning Algorithms for Artificial Neural Networks

Baxter, Jonathan

arXiv.org Artificial Intelligence

In this paper we investigate a neural network model in which the weights between computational nodes are modified according to a local learning rule. To determine whether local learning rules are sufficient for learning, we encode the network architectures and learning dynamics genetically and then apply selection pressure to evolve networks capable of learning the four Boolean functions of one variable. The successful networks are analysed, and we show how learning behaviour emerges as a distributed property of the entire network. Finally, the utility of genetic algorithms as a tool of discovery is discussed.
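The evolutionary setup the abstract describes can be sketched in miniature. The paper's actual genetic encoding and rule space differ; this is a hypothetical stand-in in which a genome parameterizes a local learning rule (each weight update depends only on that weight's input, the unit's output, and a teaching signal), and a simple genetic algorithm selects genomes whose rule lets a one-input unit learn all four Boolean functions of one variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four Boolean functions of one input x: constant 0, identity,
# negation, constant 1.
TARGETS = [lambda x: 0, lambda x: x, lambda x: 1 - x, lambda x: 1]

def step(z):
    return 1 if z > 0 else 0

def train_and_score(genome, target, steps=40, eta=0.5):
    """Train a 1-input unit (weight + bias) with the local rule encoded
    by the genome, then score its accuracy on the target function."""
    w = np.zeros(2)                      # [input weight, bias weight]
    for _ in range(steps):
        x = rng.integers(0, 2)
        inputs = np.array([x, 1.0])      # bias behaves as a constant input
        y = step(w @ inputs)
        t = target(x)
        # Local rule: a genome-weighted sum of terms built only from the
        # weight's own input xi, the output y, and the teaching signal t.
        k = genome
        for i, xi in enumerate(inputs):
            w[i] += eta * (k[0] + k[1] * xi + k[2] * y + k[3] * t
                           + k[4] * xi * t + k[5] * xi * y + k[6] * y * t)
    return np.mean([step(w @ np.array([x, 1.0])) == target(x) for x in (0, 1)])

def fitness(genome):
    # A genome is fit if its rule can learn every target from scratch.
    return np.mean([train_and_score(genome, f) for f in TARGETS])

# Simple generational GA: truncation selection plus Gaussian mutation.
pop = rng.normal(size=(40, 7))
for gen in range(30):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the best quarter
    children = parents[rng.integers(0, 10, size=30)] \
        + rng.normal(scale=0.3, size=(30, 7))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best fitness:", fitness(best))
```

Note that the delta rule lies inside this search space (k[4] = 1, k[5] = -1, all other coefficients zero gives the update eta * xi * (t - y)), so selection has at least one genome family that solves the task to converge toward.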



Neural Processes with Stability

Neural Information Processing Systems

However, noisy context points pose a challenge to algorithmic stability: small changes in the training data may significantly change the model and degrade generalization performance.


Supplementary Material Table of Contents

Neural Information Processing Systems

Returning to the variational problem in Equation (A.5), we can now write D (by Lemma 2). Assume |A| < ∞ and that the MDP is ergodic. Parts of this proof are adapted from the proof given in Haarnoja et al. Convergence follows from Outcome-Driven Policy Evaluation above. We will use analogous notation for p. The result follows from Lemma 4, Equation (A.128), Equation (A.129), and the definition of f.



Egocentric Conformal Prediction for Safe and Efficient Navigation in Dynamic Cluttered Environments

Shin, Jaeuk, Lee, Jungjin, Yang, Insoon

arXiv.org Artificial Intelligence

Since the safe control of an ego-vehicle depends on accurately predicting the future states of surrounding dynamic agents, numerous motion forecasting models [1, 2] have been developed to predict an agent's future motion from historical data. Nevertheless, these predictions remain inherently prone to error, primarily because they lack information about hidden contexts or intents, such as agents' goals, velocity preferences, or even social relationships among human agents. To address these limitations, conformal prediction (CP) [3, 4] has been employed to reliably assess a model's predictive capability. The method offers a principled yet straightforward procedure for calibrating the model. At test time, the calibration results can be used to construct a confidence set that contains the true future states of the environment, assuming that the test and calibration data are exchangeable (i.e., their joint distribution is symmetric). Consequently, CP has been successfully applied to a variety of problems, including reinforcement learning [5, 6], linear

This work was supported in part by Information and Communications Technology Planning and Evaluation (IITP) grants funded by MSIT (No. 2022-0-00124, No. 2022-0-00480, and No. RS-2021-II211343, Artificial Intelligence Graduate School Program, Seoul National University). The authors are with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul 08826, South Korea, {sju5379, jungbbal, insoonyang}@snu.ac.kr. arXiv:2504.00447v1
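The calibrate-then-cover procedure the abstract describes can be sketched with split conformal prediction. The forecaster, the data, and the choice of Euclidean error as the nonconformity score are all hypothetical stand-ins, not the paper's method: a held-out calibration set yields an error quantile, which defines a ball-shaped confidence set around each new prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a point forecaster predicts an agent's next 2-D
# position; here a trivial constant-position model stands in for it.
def forecaster(x):
    return x

# Calibration data: (current state, true next state) pairs.
n_cal = 500
states = rng.normal(size=(n_cal, 2))
next_states = states + rng.normal(scale=0.1, size=(n_cal, 2))

# Nonconformity score: Euclidean prediction error on each calibration pair.
scores = np.linalg.norm(forecaster(states) - next_states, axis=1)

# For miscoverage level alpha, take the ceil((n+1)(1-alpha))/n empirical
# quantile of the scores -- the standard split-conformal quantile.
alpha = 0.1
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal,
                method="higher")

# At test time, the confidence set is a ball of radius q around the
# prediction; if test and calibration data are exchangeable, it contains
# the true next state with probability at least 1 - alpha.
x_test = rng.normal(size=2)
prediction = forecaster(x_test)
print(f"confidence set: ball of radius {q:.3f} around {prediction}")
```

The guarantee is distribution-free: nothing is assumed about the forecaster's quality, only exchangeability of the calibration and test pairs; a worse forecaster simply yields a larger radius q.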