
Subdivision





OctField: Hierarchical Implicit Functions for 3D Modeling

Neural Information Processing Systems

Recent advances in localized implicit functions have enabled neural implicit representations to scale to large scenes. However, the regular subdivision of 3D space employed by these approaches fails to account for the sparsity of surface occupancy and the varying granularity of geometric detail. As a result, their memory footprint grows cubically with the input volume, leading to prohibitive computational cost even at a moderately dense decomposition. In this work, we present a learnable hierarchical implicit representation for 3D surfaces, called OctField, that allows high-precision encoding of intricate surfaces with a low memory and computational budget. The key to our approach is an adaptive decomposition of 3D scenes that distributes local implicit functions only around the surface of interest. We achieve this by introducing a hierarchical octree structure that adaptively subdivides the 3D space according to surface occupancy and the richness of part geometry. As the octree is discrete and non-differentiable, we further propose a novel hierarchical network that models the subdivision of octree cells as a probabilistic process and recursively encodes and decodes both the octree structure and the surface geometry in a differentiable manner. We demonstrate the value of OctField on a range of shape modeling and reconstruction tasks, showing superiority over alternative approaches.
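The occupancy-driven subdivision described in the abstract can be illustrated with a minimal sketch in plain Python/NumPy (this is not the OctField network; the function and parameter names such as `build_octree` and `min_points` are hypothetical). A cubic cell is split into eight children only where surface samples are present, and recursion stops once a cell is empty, deep enough, or sparsely occupied:

```python
import numpy as np

def build_octree(points, center, half, max_depth, min_points=32, depth=0):
    """Adaptively subdivide a cubic cell [center - half, center + half]^3,
    keeping only cells that the surface samples actually occupy."""
    mask = np.all(np.abs(points - center) <= half, axis=1)
    local = points[mask]
    if local.shape[0] == 0:
        return None  # empty space: no local implicit function needed here
    if depth == max_depth or local.shape[0] < min_points:
        return {"center": center, "half": half, "points": local}  # leaf cell
    children = []
    for dx in (-0.5, 0.5):
        for dy in (-0.5, 0.5):
            for dz in (-0.5, 0.5):
                child_center = center + half * np.array([dx, dy, dz])
                child = build_octree(local, child_center, half / 2,
                                     max_depth, min_points, depth + 1)
                if child is not None:  # drop unoccupied octants
                    children.append(child)
    return {"center": center, "half": half, "children": children}
```

Because empty octants are discarded at every level, the number of cells grows with the surface area rather than cubically with the volume, which is the memory argument the abstract makes.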


Better Neural Network Expressivity: Subdividing the Simplex

Bakaev, Egor, Brunck, Florestan, Hertrich, Christoph, Stade, Jack, Yehudayoff, Amir

arXiv.org Artificial Intelligence

This work studies the expressivity of ReLU neural networks with a focus on their depth. A sequence of previous works showed that $\lceil \log_2(n+1) \rceil$ hidden layers are sufficient to compute all continuous piecewise linear (CPWL) functions on $\mathbb{R}^n$. Hertrich, Basu, Di Summa, and Skutella (NeurIPS'21 / SIDMA'23) conjectured that this result is optimal in the sense that there are CPWL functions on $\mathbb{R}^n$, like the maximum function, that require this depth. We disprove the conjecture and show that $\lceil\log_3(n-1)\rceil+1$ hidden layers are sufficient to compute all CPWL functions on $\mathbb{R}^n$. A key step in the proof is that ReLU neural networks with two hidden layers can exactly represent the maximum function of five inputs. More generally, we show that $\lceil\log_3(n-2)\rceil+1$ hidden layers are sufficient to compute the maximum of $n\geq 4$ numbers. Our constructions almost match the $\lceil\log_3(n)\rceil$ lower bound of Averkov, Hojny, and Merkert (ICLR'25) in the special case of ReLU networks with weights that are decimal fractions. The constructions have a geometric interpretation via polyhedral subdivisions of the simplex into "easier" polytopes.
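For intuition, the classical baseline that this result improves on can be sketched directly: $\max(a, b) = b + \mathrm{ReLU}(a - b)$ needs a single hidden layer, and a pairwise tournament therefore computes the maximum of $n$ numbers in $\lceil\log_2 n\rceil$ hidden layers. The paper's two-hidden-layer gadget for the maximum of five inputs lowers the base of the logarithm from 2 to roughly 3; the sketch below (function names are illustrative) shows only the $\log_2$ baseline:

```python
import math

def relu(x):
    return max(x, 0.0)

def max2(a, b):
    # max(a, b) = b + relu(a - b): exact with one ReLU, i.e. one hidden layer
    return b + relu(a - b)

def relu_max(xs):
    """Tournament construction: pair inputs up each round, so the maximum
    of n numbers needs ceil(log2(n)) rounds (= hidden layers of ReLUs)."""
    xs = list(xs)
    depth = 0
    while len(xs) > 1:
        nxt = [max2(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:           # odd element passes through unchanged
            nxt.append(xs[-1])
        xs = nxt
        depth += 1
    return xs[0], depth
```

Each round halves the number of candidates, which is exactly where the $\lceil\log_2 n\rceil$ depth comes from; a max-of-5 gadget in two layers lets each pair of layers cut the candidate count by more than a factor of two.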


Accelerating ERM for data-driven algorithm design using output-sensitive techniques

Neural Information Processing Systems

Data-driven algorithm design is a promising, learning-based approach for beyond worst-case analysis of algorithms with tunable parameters. An important open problem is the design of computationally efficient data-driven algorithms for combinatorial algorithm families with multiple parameters.





Geometric Integration for Neural Control Variates

Meister, Daniel, Harada, Takahiro

arXiv.org Machine Learning

Figure: Thanks to our geometric subdivision, we can integrate the neural network analytically and use it as a control variate for Monte Carlo integration. The integral of the approximation provides a biased estimate (left), which is corrected by Monte Carlo integration of the residual integrand (center left) to obtain the final unbiased estimate (center right), which can achieve a lower error than vanilla Monte Carlo (right).

Abstract: Control variates are a variance-reduction technique for Monte Carlo integration. The principle is to approximate the integrand by a function that can be integrated analytically, and to integrate with the Monte Carlo method only the residual difference between the integrand and the approximation, obtaining an unbiased estimate. Neural networks are universal approximators that could potentially be used as a control variate. However, the challenge lies in the analytic integration, which is not possible in general. In this manuscript, we study one of the simplest neural network models, the multilayer perceptron (MLP) with continuous piecewise linear activation functions, and its analytic integration. We propose an integration method based on integration-domain subdivision, employing techniques from computational geometry to solve this problem in 2D. We demonstrate that an MLP can be used as a control variate in combination with our integration method, showing applications in light transport simulation.

1. Introduction. To synthesize photorealistic images, we need to solve notoriously complex integrals that model the underlying light transport. In general, these integrals have no analytic solution, so we employ numerical integration to solve them. Among numerical methods, Monte Carlo integration is prominent, providing a general and robust solution that efficiently handles, for example, the high dimensionality and discontinuities with which other numerical methods may struggle.
Monte Carlo converges to the correct solution as the number of samples increases; however, it may require a large number of samples to suppress the variance, which otherwise manifests as high-frequency noise in the rendered images, below an acceptable threshold.
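The control-variate principle described above can be sketched in a few lines, using a hand-picked linear approximation $g(x) = 1 + x$ to $f(x) = e^x$ on $[0, 1]$ in place of a trained MLP (the choice of $g$ and all names here are illustrative, not the paper's method):

```python
import numpy as np

def mc_estimate(f, n, rng):
    # vanilla Monte Carlo estimate of the integral of f over [0, 1]
    x = rng.random(n)
    return f(x).mean()

def cv_estimate(f, g, g_integral, n, rng):
    # analytic integral of the approximation g (a biased "estimate"),
    # corrected by Monte Carlo on the residual f - g -> unbiased overall
    x = rng.random(n)
    return g_integral + (f(x) - g(x)).mean()

f = np.exp
g = lambda x: 1.0 + x   # crude piecewise-linear-style stand-in for an MLP
g_integral = 1.5        # closed form: integral of (1 + x) over [0, 1]

rng = np.random.default_rng(0)
plain = [mc_estimate(f, 1000, rng) for _ in range(200)]
cv = [cv_estimate(f, g, g_integral, 1000, rng) for _ in range(200)]
```

Because the residual $f - g$ varies far less over the domain than $f$ itself, the corrected estimator typically reaches a noticeably lower variance at the same sample count while remaining unbiased, which is precisely the benefit claimed for the analytically integrable MLP.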