Fast Estimation


Fast Estimation of Causal Interactions using Wold Processes

Figueiredo, Flavio, Borges, Guilherme Resende, Melo, Pedro O.S. Vaz de, Assunção, Renato

Neural Information Processing Systems

Here we focus on the task of learning Granger causality matrices for multivariate point processes. Our work is the first to explore the use of Wold processes for this task, which allows us to develop asymptotically fast MCMC learning algorithms. With $N$ being the total number of events and $K$ the number of processes, our learning algorithm has an $O(N(\log N + \log K))$ cost per iteration, much faster than the $O(N^3 K^2)$ or $O(K^3)$ costs of the state of the art. Our approach, called GrangerBusca, is validated on nine datasets, an advance over most prior efforts, which focus mostly on subsets of the Memetracker data. In terms of accuracy, GrangerBusca is three times more accurate (in Precision@10) than the state of the art on the commonly explored Memetracker subsets. Due to its much lower training complexity, GrangerBusca is the only approach able to train models on the larger, full datasets.
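
The source of the speed-up is easy to sketch: in a Wold process, each process's intensity depends on the other processes only through their most recent inter-event times, so evaluating it at a time $t$ needs one binary search per process ($O(\log N)$) rather than a scan over the full event history, as a Hawkes kernel would. The Python below is a minimal illustrative sketch under that assumption; the Busca-style functional form and the names (wold_intensity, mu, alpha, beta) are ours for illustration, not the paper's code.

```python
# Toy sketch of a Busca-style Wold intensity (illustrative; not GrangerBusca itself).
# Process a's rate depends on each process b only through b's LAST inter-event
# time delta_b(t), which is what makes Wold-based inference cheap per event.
import numpy as np

def wold_intensity(t, events, mu, alpha, beta):
    """Intensity of every process at time t.

    events : list of K sorted numpy arrays of past event timestamps
    mu     : (K,) baseline rates
    alpha  : (K, K) influence matrix; alpha[b, a] ~ 'b excites a'
    beta   : (K,) shift parameters
    """
    K = len(events)
    delta = np.full(K, np.inf)
    for b in range(K):
        # Binary search for the last event of process b before t: O(log N_b).
        i = np.searchsorted(events[b], t) - 1
        if i >= 1:
            delta[b] = events[b][i] - events[b][i - 1]  # last inter-event gap
    lam = mu.astype(float).copy()
    for a in range(K):
        # Terms with delta = inf contribute 0 (processes with < 2 past events).
        lam[a] += np.sum(alpha[:, a] / (beta + delta))
    return lam

rng = np.random.default_rng(0)
K = 3
events = [np.sort(rng.uniform(0, 10, size=50)) for _ in range(K)]
alpha = rng.uniform(0, 1, size=(K, K))
print(wold_intensity(5.0, events, mu=np.ones(K), alpha=alpha, beta=np.ones(K)))
```

In a model of this shape, the estimated Granger causality matrix would be read off from alpha (suitably normalized); GrangerBusca's actual MCMC sampler, which assigns each event a latent influencing process, is omitted here.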


Fast Estimation of Relative Transformation Based on Fusion of Odometry and UWB Ranging Data

Fu, Yuan, Zhang, Zheng, Zeng, Guangyang, Liu, Chun, Wu, Junfeng, Ren, Xiaoqiang

arXiv.org Artificial Intelligence

In this paper, we investigate the problem of estimating the 4-DOF (three-dimensional position plus one orientation angle) robot-to-robot relative frame transformation using odometry and distance measurements between robots. First, we apply a two-step estimation method based on maximum likelihood estimation: a good initial value is obtained through unconstrained least squares and projection, and a more accurate estimate is then achieved through a one-step Gauss-Newton iteration. Additionally, the optimal installation positions of the Ultra-Wideband (UWB) devices are provided, and the minimum operating time under different quantities of UWB devices is determined. Simulation demonstrates that the two-step approach offers faster computation with guaranteed accuracy while effectively addressing the relative transformation estimation problem within limited space constraints. Furthermore, this method can be applied to real-time relative transformation estimation when a specific number of UWB devices is installed.
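
The two-step pattern (unconstrained linear least squares, projection onto the constraint set, then a single Gauss-Newton refinement) is easy to see on a toy 4-DOF problem. The sketch below is a schematic stand-in, not the paper's estimator: it recovers a yaw angle and 3-D translation from point correspondences rather than from fused odometry/UWB ranges, and the function names (two_step_estimate, rot_z) are invented for illustration.

```python
# Schematic two-step estimator on a toy 4-DOF (yaw + 3-D translation) problem.
# Step 1: unconstrained linear least squares + projection for an initial guess.
# Step 2: one Gauss-Newton iteration on the nonlinear residual.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def two_step_estimate(p, q):
    """Estimate (theta, t) such that q_i ~ R_z(theta) p_i + t."""
    n = len(p)
    # --- Step 1: solve for x = (cos, sin, tx, ty, tz), ignoring cos^2 + sin^2 = 1.
    A = np.zeros((3 * n, 5))
    b = q.flatten()
    for i, (px, py, pz) in enumerate(p):
        A[3 * i]     = [px, -py, 1, 0, 0]   # qx = c*px - s*py + tx
        A[3 * i + 1] = [py,  px, 0, 1, 0]   # qy = c*py + s*px + ty
        A[3 * i + 2] = [0.0, 0.0, 0, 0, 1]  # qz = pz + tz
        b[3 * i + 2] -= pz
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    theta = np.arctan2(x[1], x[0])          # projection onto the unit circle
    t = x[2:]
    # --- Step 2: one Gauss-Newton step on r(theta, t) = q - R p - t.
    R = rot_z(theta)
    dR = np.array([[-np.sin(theta), -np.cos(theta), 0.0],
                   [ np.cos(theta), -np.sin(theta), 0.0],
                   [0.0, 0.0, 0.0]])
    r = (q - p @ R.T - t).flatten()
    J = np.zeros((3 * n, 4))
    J[:, 0] = (-(p @ dR.T)).flatten()       # d r / d theta
    J[:, 1:] = np.tile(-np.eye(3), (n, 1))  # d r / d t
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return theta + delta[0], t + delta[1:]

rng = np.random.default_rng(1)
p = rng.normal(size=(20, 3))
q = p @ rot_z(0.7).T + np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=(20, 3))
print(two_step_estimate(p, q))  # should be close to (0.7, [1.0, -2.0, 0.5])
```

The design point the abstract emphasizes carries over: the linear solve plus projection lands close enough to the optimum that a single Gauss-Newton iteration suffices, avoiding a full iterative optimization loop.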


Learning-to-Rank with Partitioned Preference: Fast Estimation for the Plackett-Luce Model

Ma, Jiaqi, Yi, Xinyang, Tang, Weijing, Zhao, Zhe, Hong, Lichan, Chi, Ed H., Mei, Qiaozhu

arXiv.org Machine Learning

We investigate the Plackett-Luce (PL) model based listwise learning-to-rank (LTR) on data with partitioned preference, where a set of items are sliced into ordered and disjoint partitions, but the ranking of items within a partition is unknown. Given N items with M partitions, calculating the likelihood of data with partitioned preference under the PL model has a time complexity of O(N + S!), where S is the maximum size of the top M − 1 partitions.

The industry-scale ranking systems are typically applied to millions of items in a personalized way for billions of users. To meet the need of scalability and to exploit a huge amount of user feedback data, learning-to-rank (LTR) has been the most popular paradigm for building the ranking system. Existing LTR approaches can be categorized into three groups: pointwise (Gey, 1994), pairwise (Burges et al., 2005), and listwise (Cao et al., 2007; Taylor et al., 2008) methods. The pointwise and pairwise LTR methods convert the ranking problem into regression or classification tasks on single items or pairs of items, respectively.
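
To see where the factorial cost comes from, the brute-force likelihood below enumerates every ordering of the items inside each partition: the within-partition order is unobserved, so the PL probabilities of all consistent full rankings must be summed, and the sum factorizes across partitions because the pool of remaining items entering each partition is fixed. This is a minimal illustrative sketch, not the paper's fast estimator; the function and variable names are invented.

```python
# Brute-force PL likelihood under partitioned preference (illustrative only).
# Enumerating the s! orders inside a size-s partition is what the O(N + S!)
# complexity in the abstract refers to; the paper's contribution is avoiding it.
from itertools import permutations

def pl_partition_likelihood(scores, partitions):
    """scores: item -> positive PL score; partitions: ordered list of item lists."""
    remaining = sum(scores[i] for part in partitions for i in part)
    lik = 1.0
    for part in partitions:
        # Sum PL probabilities over all orderings of this partition's items;
        # items in later partitions stay in the choice pool (the denominator).
        block = 0.0
        for order in permutations(part):
            prob, denom = 1.0, remaining
            for item in order:
                prob *= scores[item] / denom
                denom -= scores[item]
            block += prob
        lik *= block  # the last partition's block sums to 1, as expected
        remaining -= sum(scores[i] for i in part)
    return lik

scores = {0: 2.0, 1: 1.0, 2: 0.5, 3: 0.25}
# Items {0, 1} are preferred over {2, 3}; order within each pair is unknown.
print(pl_partition_likelihood(scores, [[0, 1], [2, 3]]))
```

Only the top M − 1 partitions contribute nontrivial sums (the final partition always contributes a factor of 1), which matches the role of S in the stated complexity.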

