Optimal Schedules for Parallelizing Anytime Algorithms: The Case of Shared Resources

Journal of Artificial Intelligence Research

The performance of anytime algorithms can be improved by simultaneously solving several instances of algorithm-problem pairs. These pairs may include different instances of a problem (such as starting from a different initial state), different algorithms (if several alternatives exist), or several runs of the same algorithm (for non-deterministic algorithms). In this paper we present a methodology for designing an optimal scheduling policy based on the statistical characteristics of the algorithms involved. We formally analyze the case where the processes share resources (a single-processor model), and provide an algorithm for optimal scheduling. We analyze, theoretically and empirically, the behavior of our scheduling algorithm for various distribution types.
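
The optimal policy in the paper is derived from the statistical characteristics (runtime distributions) of the competing runs. As a toy illustration of why those statistics drive the choice of schedule, and not a rendering of the authors' algorithm, the Python sketch below compares three single-processor policies under an assumed bimodal runtime model: dedicating the processor to one run, fine-grained round-robin sharing, and a suspend-resume schedule that gives each run one quick-sized slice before committing.

    import random

    Q, QUICK, SLOW = 0.8, 1.0, 1000.0   # assumed bimodal runtime model

    def sample_runtime():
        """CPU time one run needs: usually quick, occasionally very slow."""
        return QUICK if random.random() < Q else SLOW

    def single(c1, c2):
        """Dedicate the processor to run 1; run 2 never executes."""
        return c1

    def round_robin(c1, c2):
        """Fine-grained time sharing: each run advances at half speed, so the
        first completion lands at twice the smaller CPU demand."""
        return 2.0 * min(c1, c2)

    def suspend_resume(c1, c2):
        """Give run 1 a QUICK-sized slice, then run 2 one, then resume run 1."""
        if c1 <= QUICK:
            return c1
        if c2 <= QUICK:
            return QUICK + c2
        return QUICK + QUICK + (c1 - QUICK)  # both slow: finish run 1

    def expected(policy, trials=100_000):
        """Monte Carlo estimate of the expected time to the first solution."""
        return sum(policy(sample_runtime(), sample_runtime())
                   for _ in range(trials)) / trials

    for policy in (single, round_robin, suspend_resume):
        print(f"{policy.__name__:>15}: {expected(policy):8.1f}")

With these assumed parameters the suspend-resume schedule has the lowest expected completion time, because a second quick-sized slice is cheap insurance against a slow first run; change the distribution and the ranking changes, which is the point of deriving the schedule from the statistics.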


Extending Gossip Algorithms to Distributed Estimation of U-statistics

Neural Information Processing Systems

Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems. Whereas distributed estimation of sample mean statistics has been the subject of a good deal of attention, computation of U-statistics, relying on more expensive averaging over pairs of observations, is a less investigated area. Yet, such data functionals are essential to describe global properties of a statistical population, with important examples including Area Under the Curve, empirical variance, Gini mean difference and within-cluster point scatter. This paper proposes new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds of O(1 / t) and O(log t / t) for the synchronous and asynchronous cases respectively, where t is the number of iterations, with explicit data and network dependent terms.
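
As a simplified sketch of the two ingredients in such schemes, data propagation and estimate averaging (this is not the paper's randomized algorithm; the ring topology and deterministic shift are assumptions made for brevity), the following estimates the variance U-statistic, h(x, y) = (x - y)^2 / 2, using only neighbor-to-neighbor communication:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    x = rng.normal(size=n)                  # one observation per node

    def h(a, b):                            # kernel of the variance U-statistic
        return 0.5 * (a - b) ** 2

    # Exact U-statistic for reference: the unbiased sample variance.
    exact = sum(h(x[i], x[j]) for i in range(n) for j in range(i + 1, n))
    exact *= 2.0 / (n * (n - 1))

    # Phase 1 (data propagation): auxiliary copies of the observations travel
    # around a ring, a deterministic stand-in for the paper's random-walk
    # propagation.  Each node folds every arriving sample into a running mean.
    y = x.copy()
    local = np.zeros(n)
    for t in range(1, n):                   # n-1 hops visit every other node
        y = np.roll(y, 1)                   # synchronous shift along the ring
        local += (h(x, y) - local) / t      # running average of h(x_v, y_v)

    # Phase 2 (estimate averaging): plain gossip averaging with a ring
    # neighbour preserves the mean of the estimates and drives them toward
    # that common mean, which equals the exact U-statistic.
    for _ in range(5000):
        local = 0.5 * (local + np.roll(local, 1))

    print(f"exact U-statistic : {exact:.4f}")
    print(f"gossip estimates  : mean {local.mean():.4f}, "
          f"spread {np.ptp(local):.2e}")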


A kNN algorithm with an unfixed k? • /r/MachineLearning

#artificialintelligence

I am wondering if there is any research out there on a kNN classifier with an optimized algorithm where a function is trained on the training data set that maps a point to a value of k. Then, when the algorithm needs to classify a new point, it first looks for the nearest training point and uses the value of k that the trained function assigns to it. Any thoughts or links to research like this?
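
One simple way to prototype the idea (a hypothetical construction, not a pointer to published work): choose a per-point k on the training set by a leave-one-out neighbourhood-agreement criterion, then let each query inherit the k stored at its nearest training point. A NumPy sketch, with the toy data and candidate k values assumed:

    import numpy as np

    rng = np.random.default_rng(1)
    # Toy 2-class data, assumed for illustration.
    X = np.vstack([rng.normal(0.0, 1.0, (60, 2)), rng.normal(2.5, 1.0, (60, 2))])
    y = np.array([0] * 60 + [1] * 60)

    CANDIDATE_KS = (1, 3, 5, 7, 9, 11)

    def neighbours(point, X):
        """Indices of training points sorted by distance to `point`."""
        return np.argsort(np.linalg.norm(X - point, axis=1))

    # "Training" the point -> k map: for each training point, keep the k whose
    # nearest neighbours (leaving the point itself out) agree most with its label.
    best_k = np.empty(len(X), dtype=int)
    for i in range(len(X)):
        order = neighbours(X[i], X)
        order = order[order != i]               # leave-one-out
        scores = [np.mean(y[order[:k]] == y[i]) for k in CANDIDATE_KS]
        best_k[i] = CANDIDATE_KS[int(np.argmax(scores))]

    def predict(point):
        """Look up k at the nearest training point, then run ordinary kNN."""
        order = neighbours(point, X)
        k = best_k[order[0]]
        return np.bincount(y[order[:k]]).argmax()

    print(predict(np.array([0.2, 0.1])), predict(np.array([2.4, 2.6])))

The agreement score is a deliberately crude stand-in; a cross-validated accuracy per point, or a regressor fit to (point, best k) pairs, would be the more serious version of the same idea.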


Wang

AAAI Conferences

We generally use hit rate to measure the performance of item recommendation algorithms. In addition to hit rate, we consider two other important factors that most previous works ignore. The first is whether users are satisfied with the recommended items: a user may have bought an item and yet dislike it, so a high hit rate does not necessarily reflect high customer satisfaction.
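
A concrete way to see the gap between the two measures (the toy records and the satisfaction-aware variant below are illustrative assumptions, not the paper's metric):

    # Toy records per recommendation: (was it bought/hit?, did the user like it?)
    records = [(True, True), (True, False), (True, True),
               (False, False), (True, False)]

    # Plain hit rate counts every purchase of a recommended item.
    hit_rate = sum(hit for hit, _ in records) / len(records)

    # A hypothetical satisfaction-aware variant only counts a hit when the
    # user also liked the item, so disliked purchases stop inflating the score.
    satisfied_rate = sum(hit and liked for hit, liked in records) / len(records)

    print(f"hit rate: {hit_rate:.2f}, satisfaction-aware: {satisfied_rate:.2f}")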


Lu

AAAI Conferences

This paper proposes the Proximal Iteratively REweighted (PIRE) algorithm for solving a general problem that covers a large body of nonconvex sparse and structured-sparse problems. Compared with previous iterative solvers for nonconvex sparse problems, PIRE is much more general and efficient: its per-iteration computational cost is usually as low as that of state-of-the-art convex solvers. We further propose PIRE with Parallel Splitting (PIRE-PS) and PIRE with Alternative Updating (PIRE-AU) to handle multi-variable problems. In theory, we prove that the proposed methods converge and that any limit point is a stationary point. Extensive experiments on both synthetic and real data sets demonstrate that our methods achieve comparable learning performance while being much more efficient than previous nonconvex solvers.
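
To make the flavor of such an iteration concrete, here is a minimal PIRE-style sketch for the single-variable l_p-regularized least-squares problem min_x 0.5*||Ax - b||^2 + lam * sum_i |x_i|^p with p < 1: each pass reweights the penalty by the local slope of the (smoothed) |t|^p term, then takes one proximal gradient step, which for a weighted l1 penalty is a weighted soft-threshold. The problem instance, step size, and smoothing constant are assumptions, and the PIRE-PS and PIRE-AU variants are not shown.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n, p, lam = 40, 100, 0.5, 0.1
    A = rng.normal(size=(m, n)) / np.sqrt(m)       # random sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = 3.0 * rng.normal(size=5)
    b = A @ x_true + 0.01 * rng.normal(size=m)

    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad f
    x, eps = np.zeros(n), 0.1                      # eps smooths weights near zero
    for _ in range(500):
        w = p * (np.abs(x) + eps) ** (p - 1)       # reweight: slope of (|t|+eps)^p
        z = x - A.T @ (A @ x - b) / L              # gradient step on smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # weighted prox

    # The problem is nonconvex, so the limit point depends on initialization,
    # matching the stationary-point (rather than global-optimum) guarantee.
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"relative error {err:.3f}, {np.count_nonzero(np.abs(x) > 1e-3)} nonzeros")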