Nonlinearity and Uncertainty Informed Moment-Matching Gaussian Mixture Splitting
Kulik, Jackson, LeGrand, Keith A.
Many problems in navigation and tracking require increasingly accurate characterizations of the evolution of uncertainty in nonlinear systems. Nonlinear uncertainty propagation approaches based on Gaussian mixture density approximations offer distinct advantages over sampling-based methods in their computational cost and continuous representation. State-of-the-art Gaussian mixture approaches are adaptive in that individual Gaussian mixands are selectively split into mixtures to yield better approximations of the true propagated distribution. Despite the importance of the splitting process to accuracy and computational efficiency, relatively little work has been devoted to mixand selection and splitting-direction optimization. The first part of this work presents splitting methods that preserve the mean and covariance of the original distribution. We then present and compare a number of novel heuristics for selecting the splitting direction. The choice of splitting direction is informed by the initial uncertainty distribution, by properties of the nonlinear function through which the original distribution is propagated, and by a whitening-based natural scaling method that avoids dependence of the splitting direction on the scaling of coordinates. We compare these novel heuristics to existing techniques in three distinct examples: Cartesian-to-polar coordinate transformation, Keplerian orbital element propagation, and uncertainty propagation in the circular restricted three-body problem.
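The moment-preserving splitting described above can be illustrated with a minimal sketch. This is not the paper's method; it is a hedged two-component symmetric split along a chosen unit direction (the function name, `spread` parameter, and parameterization are assumptions for illustration), constructed so the mixture's overall mean and covariance match those of the original Gaussian:

```python
import numpy as np

def split_gaussian(mean, cov, u, spread=0.5):
    """Hypothetical two-component symmetric split of N(mean, cov)
    along unit direction u, preserving overall mean and covariance."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    var_u = u @ cov @ u              # variance along the split direction
    a = spread * np.sqrt(var_u)      # component mean offset (spread < 1)
    m1, m2 = mean + a * u, mean - a * u
    # Shrink the component covariance along u so that
    # 0.5*(S + m1 m1^T) + 0.5*(S + m2 m2^T) - mean mean^T = cov.
    S = cov - a**2 * np.outer(u, u)
    return [(0.5, m1, S), (0.5, m2, S)]

# Verify moment preservation for an example 2D Gaussian.
mean = np.array([1.0, 2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
mix = split_gaussian(mean, cov, np.array([1.0, 0.0]))
mix_mean = sum(w * m for w, m, _ in mix)
mix_cov = sum(w * (S + np.outer(m - mix_mean, m - mix_mean)) for w, m, S in mix)
assert np.allclose(mix_mean, mean) and np.allclose(mix_cov, cov)
```

The equal weights and symmetric offsets make mean preservation automatic, and subtracting `a**2 * u u^T` from each component covariance cancels the between-component spread; `spread < 1` keeps the component covariance positive semidefinite.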
Adaptive Split Balancing for Optimal Random Forest
Zhang, Yuqian, Ji, Weijie, Bradic, Jelena
While random forests are commonly used for regression problems, existing methods often lack adaptability in complex situations or lose optimality under simple, smooth scenarios. In this study, we introduce the adaptive split balancing forest (ASBF), capable of learning tree representations from data while simultaneously achieving minimax optimality under the Lipschitz class. To exploit higher-order smoothness levels, we further propose a localized version that attains the minimax rate under the H\"older class $\mathcal{H}^{q,\beta}$ for any $q\in\mathbb{N}$ and $\beta\in(0,1]$. Rather than relying on the widely used random feature selection, we consider a balanced modification to existing approaches. Our results indicate that an over-reliance on auxiliary randomness may compromise the approximation power of tree models, leading to suboptimal results. Conversely, a less random, more balanced approach demonstrates optimality. Additionally, we establish uniform upper bounds and explore the application of random forests in average treatment effect estimation problems. Through simulation studies and real-data applications, we demonstrate the superior empirical performance of the proposed methods over existing random forests.
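To make the "less random, more balanced" idea concrete, here is a minimal sketch, not the ASBF algorithm itself: a regression tree that cycles through features in round-robin order and splits at the median, so that every feature is used and child nodes stay balanced, in place of random feature subsets (all function names and the `min_leaf` parameter are illustrative assumptions):

```python
import numpy as np

def build_balanced_tree(X, y, depth=0, min_leaf=5):
    """Hypothetical balanced regression tree: round-robin feature
    choice and median splits instead of random feature selection."""
    if len(y) <= min_leaf:
        return {"leaf": True, "value": float(np.mean(y))}
    d = depth % X.shape[1]            # cycle through features in order
    thr = float(np.median(X[:, d]))   # median split keeps children balanced
    left = X[:, d] <= thr
    if left.all() or (~left).all():   # degenerate split: stop here
        return {"leaf": True, "value": float(np.mean(y))}
    return {"leaf": False, "dim": d, "thr": thr,
            "l": build_balanced_tree(X[left], y[left], depth + 1, min_leaf),
            "r": build_balanced_tree(X[~left], y[~left], depth + 1, min_leaf)}

def predict(node, x):
    while not node["leaf"]:
        node = node["l"] if x[node["dim"]] <= node["thr"] else node["r"]
    return node["value"]
```

The contrast with standard random forests is that no auxiliary randomness enters the split choice: both the feature order and the threshold are deterministic functions of the data, which is the flavor of "balance" the abstract argues preserves approximation power.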
K-nearest Neighbor Search by Random Projection Forests
Yan, Donghui, Wang, Yingjie, Wang, Jin, Wang, Honggang, Li, Zhenpeng
K-nearest neighbor (kNN) search refers to the problem of finding the K points closest to a given data point under a distance metric of interest. It is an important task in a wide range of applications, including similarity search in data mining [15, 19], fast kernel methods in machine learning [17, 30, 38], nonparametric density estimation [5, 29, 31] and intrinsic dimension estimation [6, 26] in statistics, as well as anomaly detection algorithms [2, 10, 37]. Numerous algorithms have been proposed for kNN search; the reader is referred to [35, 46] and references therein. Our interest is kNN search in emerging applications. Two salient features of such applications are the expected scalability of the algorithms and their ability to handle data of high dimensionality. Additionally, such applications often demand more accurate kNN search. For example, robotic route planning [23] and face-based surveillance systems [34] require high accuracy for the robust execution of tasks. However, most existing work on kNN search [1, 4, 12, 15] has focused mainly on fast computation, with accuracy being of less concern.
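The random projection forest idea can be sketched as follows. This is a hedged illustration, not the paper's algorithm: each tree recursively partitions the data by the sign of a projection onto a random direction relative to the median projection, and a query pools candidates from its leaf in every tree before an exact distance ranking (function names, `min_leaf`, and `n_trees` are illustrative assumptions):

```python
import numpy as np

def build_rp_tree(X, idx, rng, min_leaf=10):
    """Hypothetical random projection tree: split points by their
    projection onto a random direction, at the median projection."""
    if len(idx) <= min_leaf:
        return {"leaf": True, "idx": idx}
    w = rng.normal(size=X.shape[1])       # random split direction
    proj = X[idx] @ w
    thr = float(np.median(proj))
    left = proj <= thr
    if left.all() or (~left).all():
        return {"leaf": True, "idx": idx}
    return {"leaf": False, "w": w, "thr": thr,
            "l": build_rp_tree(X, idx[left], rng, min_leaf),
            "r": build_rp_tree(X, idx[~left], rng, min_leaf)}

def knn_forest(X, q, k, n_trees=10, seed=0):
    """Pool the leaf of q from each tree, then rank candidates
    by exact Euclidean distance."""
    rng = np.random.default_rng(seed)
    cand = set()
    for _ in range(n_trees):
        node = build_rp_tree(X, np.arange(len(X)), rng)
        while not node["leaf"]:
            node = node["l"] if q @ node["w"] <= node["thr"] else node["r"]
        cand.update(node["idx"].tolist())
    cand = np.array(sorted(cand))
    d = np.linalg.norm(X[cand] - q, axis=1)
    return cand[np.argsort(d)[:k]]
```

A real implementation would build the forest once and reuse it across queries; rebuilding per query here just keeps the sketch short. Pooling candidates across many trees is what recovers accuracy: a true neighbor separated from the query by one tree's random hyperplane is likely to share a leaf in another tree.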