Tetrahedron Splatting for 3D Generation
Chun Gu, Zeyu Yang, Zijie Pan

Neural Information Processing Systems

As a flexible representation, NeRF was first adopted for 3D generation. With density-based volumetric rendering, however, it suffers from both intensive computational overhead and inaccurate mesh extraction. Using a signed distance field and Marching Tetrahedra, DMTet allows for precise mesh extraction and real-time rendering, but is limited in handling large topological changes in meshes, leading to optimization challenges. Alternatively, 3D Gaussian Splatting (3DGS) is favored for both training and rendering efficiency but falls short in mesh extraction. In this work, we introduce a novel 3D representation, Tetrahedron Splatting (TeT-Splatting), that simultaneously supports easy convergence during optimization, precise mesh extraction, and real-time rendering. This is achieved by integrating surface-based volumetric rendering within a structured tetrahedral grid, which preserves the desired ability of precise mesh extraction, together with a tile-based differentiable tetrahedron rasterizer.
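To make the mesh-extraction step concrete, here is a minimal sketch, not the paper's code, of the Marching Tetrahedra rule the abstract refers to: within a single tetrahedron, edges whose endpoint SDF values have opposite signs are linearly interpolated to zero crossings, which form one or two surface triangles. The function name and conventions below are illustrative assumptions.

```python
import numpy as np

def marching_tetrahedron(verts, sdf, iso=0.0):
    """Extract the iso-surface polygon from a single tetrahedron.

    verts: (4, 3) vertex positions; sdf: (4,) signed-distance values.
    Returns a list of triangles (each a (3, 3) array of surface points):
    0, 1, or 2 triangles depending on the sign configuration.
    """
    inside = sdf < iso
    # The six edges of a tetrahedron as vertex-index pairs.
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    # Interpolate a zero crossing on every edge with a sign change.
    crossings = []
    for i, j in edges:
        if inside[i] != inside[j]:
            t = (iso - sdf[i]) / (sdf[j] - sdf[i])  # linear interpolation
            crossings.append(verts[i] + t * (verts[j] - verts[i]))
    if len(crossings) == 3:   # one vertex separated: a single triangle
        return [np.stack(crossings)]
    if len(crossings) == 4:   # two-vs-two split: a quad, split into two
        # For this edge ordering the middle pair always lies on opposite
        # tetrahedron edges, so it is a valid diagonal of the quad.
        p = np.stack(crossings)
        return [p[[0, 1, 2]], p[[1, 2, 3]]]
    return []                 # no sign change: the surface misses this cell
```

In a tetrahedral grid, running this per cell and concatenating the triangles yields the extracted mesh; orientation and vertex deduplication are omitted for brevity.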


253f7b5d921338af34da817c00f42753-AuthorFeedback.pdf

Neural Information Processing Systems

Summary. We would like to thank the entire review team for their efforts and insightful comments. Among other points, our responses discuss [DZPS18] (arXiv:1810.02054), where the relevant rate approaches zero as the sample size n grows; since datasets of interest are large (ImageNet alone has 14 million images), a non-diminishing convergence rate is more desirable for those applications. We also respond to the concern on the fixed second layer.


Appendices. Table 1 explains the notation; e.g., {V_k}, the value (Q-)function at the beginning of the k-th episode.

Neural Information Processing Systems

For any fixed n, we apply Lemma 9 with y = iɛ and x = (2y log(…)). In the case α = 1, it holds that …

B.1 Proof of Proposition 4. Firstly, the conclusion holds when k = 1. Let (s, a, h) be fixed. We apply Azuma's inequality again to obtain that, with probability at least 1 − p, it holds that … The proof is then completed by (37). … (42). We now bound ∑ … For the second case, by Hoeffding's inequality, with probability 1 − p it holds that …

B.2 Proof of Lemma 5. First, by Hoeffding's inequality, for every k and h, we have that … Therefore, we only need to prove (48), and the rest of the proof is devoted to establishing (48). We now bound the first term of (53).
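The proof fragment above repeatedly invokes Azuma's and Hoeffding's inequalities. For reference, a standard statement of the Azuma–Hoeffding bound (not taken from the paper) is:

```latex
\Pr\!\left( \Big| \sum_{i=1}^{n} X_i \Big| \ge t \right)
  \le 2 \exp\!\left( - \frac{t^2}{2 \sum_{i=1}^{n} c_i^2} \right),
```

for a martingale difference sequence $(X_i)$ with $|X_i| \le c_i$ almost surely; Hoeffding's inequality is the special case of independent, mean-zero, bounded variables.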



ad71c82b22f4f65b9398f76d8be4c615-AuthorFeedback.pdf

Neural Information Processing Systems

We now respond to the major comments as follows. Take RL with the linear model as an example. More formally, we believe that the key is to prove an analogue of Lemma 5 for the linear model. We will also discuss the work on policy certificates (Dann et al., 2019) in the related work section, and will add this discussion to the next version of the paper.


The Option Keyboard: Combining Skills in Reinforcement Learning

Neural Information Processing Systems

The ability to combine known skills to create new ones may be crucial in the solution of complex reinforcement learning problems that unfold over extended periods. We argue that a robust way of combining skills is to define and manipulate them in the space of pseudo-rewards (or "cumulants"). Based on this premise, we propose a framework for combining skills using the formalism of options. We show that every deterministic option can be unambiguously represented as a cumulant defined in an extended domain. Building on this insight and on previous results on transfer learning, we show how to approximate options whose cumulants are linear combinations of the cumulants of known options. This means that, once we have learned options associated with a set of cumulants, we can instantaneously synthesise options induced by any linear combination of them, without any learning involved. We describe how this framework provides a hierarchical interface to the environment whose abstract actions correspond to combinations of basic skills. We demonstrate the practical benefits of our approach in a resource management problem and a navigation task involving a quadrupedal simulated robot.
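The linear-transfer step the abstract describes can be sketched in a few lines. This is a minimal numpy illustration of the underlying successor-features idea, not the paper's implementation: because returns are linear in the cumulant, the Q-function for any linear combination of cumulants is the same combination of the stored features, so a new skill needs no learning. The names `psi` and `synthesize_option` are illustrative, and random values stand in for learned features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_cumulants = 5, 3, 2

# Successor features psi[s, a, i]: expected discounted sum of cumulant i
# after taking action a in state s (random stand-ins for learned values).
psi = rng.normal(size=(n_states, n_actions, n_cumulants))

def synthesize_option(psi, w):
    """Q-values and greedy policy for the combined cumulant c_w = sum_i w_i c_i.

    Returns are linear in the cumulant, so Q_w = psi @ w: the option for
    any weight vector w is obtained instantly, with no further learning.
    """
    q = psi @ w                      # (n_states, n_actions)
    return q, q.argmax(axis=1)       # greedy action in each state

# Combine the two base skills with weights (0.7, 0.3).
q, policy = synthesize_option(psi, np.array([0.7, 0.3]))
```

Acting greedily with respect to `q` plays the role of the abstract "keyboard" action for that weight vector.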



Automated Label Unification for Multi-Dataset Semantic Segmentation with GNNs
Rong Ma

Neural Information Processing Systems

Deep supervised models possess significant capability to assimilate extensive training data, thereby presenting an opportunity to enhance model performance through training on multiple datasets. However, conflicts arising from different label spaces among datasets may adversely affect model performance. In this paper, we propose a novel approach to automatically construct a unified label space across multiple datasets using graph neural networks. This enables semantic segmentation models to be trained simultaneously on multiple datasets, resulting in performance improvements.
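The paper learns the unified label space with graph neural networks; as a rough illustration of the underlying label-graph idea only, the sketch below links labels across datasets whose embeddings are similar and takes connected components of that graph as unified classes. The function name, the cosine threshold, and the embeddings are all hypothetical.

```python
import numpy as np

def unify_labels(embeddings, names, threshold=0.9):
    """Merge per-dataset labels into one unified label space.

    embeddings: (n_labels, d) vectors, one per (dataset, label) pair;
    names: n_labels label names. Labels with cosine similarity above
    `threshold` are linked; connected components become unified classes.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = e @ e.T
    parent = list(range(len(names)))

    def find(i):                        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if sim[i, j] > threshold:
                parent[find(i)] = find(j)   # link the two labels

    groups = {}
    for i, name in enumerate(names):
        groups.setdefault(find(i), []).append(name)
    return list(groups.values())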


Nonasymptotic Guarantees for Spiked Matrix Recovery with Generative Priors

Neural Information Processing Systems

Many problems in statistics and machine learning require the reconstruction of a rank-one signal matrix from noisy data. Enforcing additional prior information on the rank-one component is often key to guaranteeing good recovery performance. One such prior on the low-rank component is sparsity, giving rise to the sparse principal component analysis problem. Unfortunately, there is strong evidence that this problem suffers from a computational-to-statistical gap, which may be fundamental. In this work, we study an alternative prior where the low-rank component is in the range of a trained generative network. We provide a nonasymptotic analysis with optimal sample complexity, up to logarithmic factors, for rank-one matrix recovery under an expansive-Gaussian network prior. Specifically, we establish a favorable global optimization landscape for a nonlinear least squares objective, provided the number of samples is on the order of the dimensionality of the input to the generative model. This result suggests that generative priors have no computational-to-statistical gap for structured rank-one matrix recovery in the finite data, nonasymptotic regime. We present this analysis in the case of both the Wishart and Wigner spiked matrix models.
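As a minimal sketch of the setting (with assumed shapes, and a random untrained network standing in for the trained generator), the snippet below builds a Wigner spiked matrix whose rank-one component lies in the range of an expansive ReLU network, and the nonlinear least squares objective over the latent code.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, lam = 4, 50, 5.0   # latent dim, ambient dim, signal strength

# A small expansive ReLU network x = G(z); random weights stand in
# for a trained generator.
W1 = rng.normal(size=(200, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, 200)) / np.sqrt(200)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

# Wigner spiked model: rank-one signal plus symmetric Gaussian noise.
z_star = rng.normal(size=k)
x = G(z_star)
noise = rng.normal(size=(n, n))
M = lam * np.outer(x, x) + (noise + noise.T) / np.sqrt(2 * n)

def objective(z):
    """Nonlinear least squares over the latent code z."""
    v = G(z)
    return np.linalg.norm(lam * np.outer(v, v) - M, ord="fro") ** 2
```

The paper's landscape result concerns this kind of objective: with enough samples, the true latent code is recoverable by direct optimization over z.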


Escaping from saddle points on Riemannian manifolds

Neural Information Processing Systems

We consider minimizing a nonconvex, smooth function f on a Riemannian manifold M. We show that a perturbed version of the Riemannian gradient descent algorithm converges to a second-order stationary point, and is hence able to escape saddle points on the manifold.
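The algorithm can be sketched in a few lines for the simplest manifold. This is a minimal illustration assuming the unit sphere, projection-based Riemannian gradients, and normalisation as the retraction; the paper's algorithm and guarantees are more general, and all names and parameters here are illustrative.

```python
import numpy as np

def perturbed_rgd_sphere(grad_f, x0, steps=500, eta=0.1,
                         g_tol=1e-3, radius=1e-2, seed=0):
    """Perturbed Riemannian gradient descent on the unit sphere.

    The Riemannian gradient is the Euclidean gradient projected onto the
    tangent space at x; each step is mapped back to the manifold by
    normalisation (a retraction). When the gradient is tiny -- a possible
    saddle -- a small random tangent perturbation is injected.
    """
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = grad_f(x)
        rg = g - (g @ x) * x             # project onto tangent space at x
        if np.linalg.norm(rg) < g_tol:   # near-stationary: perturb
            xi = rng.normal(size=x.shape)
            rg = rg + radius * (xi - (xi @ x) * x)
        x = x - eta * rg
        x = x / np.linalg.norm(x)        # retract back to the sphere
    return x

# Example: minimize f(x) = x^T A x on the sphere. The top eigenvector
# is a maximum, the middle one a saddle; starting near the maximum,
# the perturbations let the iterate escape toward the minimum.
A = np.diag([3.0, 2.0, 1.0])
x = perturbed_rgd_sphere(lambda v: 2 * A @ v, np.array([1.0, 1e-6, 1e-6]))
```

Here unperturbed gradient descent would stall at the starting eigenvector, since the Riemannian gradient vanishes there.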