Complexity of One-Dimensional ReLU DNNs

Kogan, Jonathan, Jananthan, Hayden, Kepner, Jeremy

arXiv.org Machine Learning

Abstract: We study the expressivity of one-dimensional (1D) ReLU deep neural networks through the lens of their linear regions. Deep Neural Networks (DNNs) with Rectified Linear Unit (ReLU) activation functions are piecewise-linear functions, so their expressive power can be studied via the number of linear regions they create [1]-[3]. However, accurately approximating complicated functions typically demands substantial computational resources. The Lottery Ticket Hypothesis states that many connections can often be removed while maintaining similar performance, motivating the study of sparse DNNs [7]. We also propose a function-adaptive notion of sparsity that compares the expected number of regions used by the network to the minimal number needed to approximate a target within a fixed tolerance.
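The linear-region count mentioned in the abstract can be estimated empirically for a small 1D ReLU network by detecting slope changes on a fine grid. The network below (sizes, weights, and tolerance are illustrative assumptions, not the paper's construction) is a minimal sketch of that idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny 1D ReLU network: input -> hidden(8) -> hidden(8) -> output.
W1 = rng.normal(size=(8, 1)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 8)); b2 = rng.normal(size=8)
W3 = rng.normal(size=(1, 8)); b3 = rng.normal(size=1)

def count_linear_regions(lo=-5.0, hi=5.0, n=200_001, tol=1e-6):
    """Estimate the number of linear regions on [lo, hi] by counting
    slope changes of the network's output on a fine grid."""
    xs = np.linspace(lo, hi, n)
    H1 = np.maximum(0.0, W1 @ xs[None, :] + b1[:, None])  # (8, n)
    H2 = np.maximum(0.0, W2 @ H1 + b2[:, None])           # (8, n)
    ys = (W3 @ H2 + b3[:, None])[0]                       # (n,)
    slopes = np.diff(ys) / np.diff(xs)
    # A new region starts wherever the numerical slope changes.
    changes = np.abs(np.diff(slopes)) > tol
    return int(changes.sum()) + 1

print(count_linear_regions())
```

For this 1-8-8-1 architecture, each first-layer neuron contributes at most one breakpoint and each second-layer neuron at most nine, so the count is bounded above by 81 regions; the grid-based estimate can only undercount (breakpoints closer than the grid spacing merge).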


A Fast Heuristic Search Approach for Energy-Optimal Profile Routing for Electric Vehicles

Ahmadi, Saman, Jalili, Mahdi

arXiv.org Artificial Intelligence

We study the energy-optimal shortest path problem for electric vehicles (EVs) in large-scale road networks, where recuperated energy along downhill segments introduces negative energy costs. While traditional point-to-point pathfinding algorithms for EVs assume a known initial energy level, many real-world scenarios involving uncertainty in available energy require planning optimal paths for all possible initial energy levels, a task known as energy-optimal profile search. Existing solutions typically rely on specialized profile-merging procedures within a label-correcting framework, which results in searches over complex profiles. In this paper, we propose a simple yet effective label-setting approach based on multi-objective A* search, which employs a novel profile dominance rule to avoid generating and handling complex profiles. We develop four variants of our method and evaluate them on real-world road networks enriched with realistic energy consumption data. Experimental results demonstrate that our energy profile A* search achieves performance comparable to energy-optimal A* with a known initial energy level.
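The basic operation behind this kind of EV routing is propagating the battery's state of charge along an edge, where negative costs model recuperation. The sketch below uses assumed names and a simple battery model (capacity clamping, infeasibility below zero), not the paper's exact formulation:

```python
CAPACITY = 40.0  # usable battery capacity in kWh (illustrative assumption)

def propagate_soc(soc, edge_cost):
    """Battery state of charge after traversing one edge.

    edge_cost > 0: energy consumed; edge_cost < 0: energy recuperated
    on a downhill segment.  Returns None if the edge is infeasible
    (the battery would be drained); overcharge is clamped at CAPACITY.
    """
    new_soc = soc - edge_cost
    if new_soc < 0.0:
        return None                 # not enough energy to traverse the edge
    return min(new_soc, CAPACITY)   # recuperation cannot exceed capacity

print(propagate_soc(10.0, 4.0))   # plain consumption -> 6.0
print(propagate_soc(39.0, -5.0))  # recuperation clamped -> 40.0
print(propagate_soc(3.0, 5.0))    # infeasible edge -> None
```

Because of the clamping, the cost of a path is not a simple sum and depends on the initial charge, which is exactly why profile search must plan for all possible initial energy levels rather than a single known one.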





Adaptive Proximal Gradient Method for Convex Optimization

Neural Information Processing Systems

In this paper, we explore two fundamental first-order algorithms in convex optimization, namely, gradient descent (GD) and proximal gradient method (ProxGD). Our focus is on making these algorithms entirely adaptive by leveraging local curvature information of smooth functions.
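One well-known way to make gradient descent adaptive via local curvature is the Malitsky-Mishchenko stepsize rule, which estimates inverse curvature from consecutive iterates and gradients. The sketch below illustrates that flavor of adaptivity; it is not necessarily this paper's exact update:

```python
import numpy as np

def adaptive_gd(grad, x0, steps=300, lam0=1e-6):
    """Gradient descent with a stepsize adapted to local curvature,
    in the spirit of the Malitsky-Mishchenko AdGD rule (a sketch)."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    lam, theta = lam0, np.inf
    x = x_prev - lam * g_prev
    for _ in range(steps):
        g = grad(x)
        diff_x = np.linalg.norm(x - x_prev)
        diff_g = np.linalg.norm(g - g_prev)
        # Local inverse-curvature estimate; guard against a zero denominator.
        local = diff_x / (2.0 * diff_g) if diff_g > 0 else np.inf
        lam_new = min(np.sqrt(1.0 + theta) * lam, local)
        theta = lam_new / lam
        x_prev, g_prev, lam = x, g, lam_new
        x = x - lam * g
    return x

# Quadratic test problem f(x) = 0.5 * x^T A x with minimizer at the origin.
A = np.diag([1.0, 2.0, 3.0])
sol = adaptive_gd(lambda x: A @ x, np.ones(3))
print(np.linalg.norm(sol))  # distance to the minimizer; near 0
```

No Lipschitz constant or line search is supplied: the stepsize is rebuilt each iteration from the observed change in gradients, which is the sense in which the method is "entirely adaptive".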