Optimal Schedules for Parallelizing Anytime Algorithms: The Case of Shared Resources

Journal of Artificial Intelligence Research

The performance of anytime algorithms can be improved by simultaneously solving several instances of algorithm-problem pairs. These pairs may include different instances of a problem (such as starting from a different initial state), different algorithms (if several alternatives exist), or several runs of the same algorithm (for non-deterministic algorithms). In this paper we present a methodology for designing an optimal scheduling policy based on the statistical characteristics of the algorithms involved. We formally analyze the case where the processes share resources (a single-processor model), and provide an algorithm for optimal scheduling. We analyze, theoretically and empirically, the behavior of our scheduling algorithm for various distribution types.
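As a rough illustration of the single-processor setting, the sketch below searches for a static time allocation across independent processes that maximizes the probability that at least one succeeds within a fixed budget. It is a simplified toy, not the paper's scheduling policy: the performance profiles `cdfs`, the discretization into time slices, and the independence assumption are all hypothetical.

```python
import numpy as np

def best_schedule(cdfs, total_slices):
    """Minimize the joint failure probability prod_i (1 - F_i(t_i))
    subject to sum_i t_i = total_slices, by dynamic programming.
    cdfs[i][t] = (assumed) P(process i has a solution after t slices)."""
    n = len(cdfs)
    # fail[j][s]: minimal failure probability using the first j processes
    # and s time slices; with zero processes the failure probability is 1.
    fail = np.ones((n + 1, total_slices + 1))
    choice = np.zeros((n, total_slices + 1), dtype=int)
    for j in range(1, n + 1):
        for s in range(total_slices + 1):
            for t in range(s + 1):
                f = fail[j - 1][s - t] * (1.0 - cdfs[j - 1][t])
                if f < fail[j][s]:
                    fail[j][s], choice[j - 1][s] = f, t
    # Recover the per-process time allocation by backtracking.
    alloc, s = [], total_slices
    for j in range(n - 1, -1, -1):
        alloc.append(choice[j][s])
        s -= choice[j][s]
    return list(reversed(alloc)), 1.0 - fail[n][total_slices]

# Example: two geometric-like success profiles and a 10-slice budget
# (both made up for illustration).
cdfs = [1 - 0.8 ** np.arange(11), 1 - 0.6 ** np.arange(11)]
print(best_schedule(cdfs, 10))
```

The dynamic program runs in O(n T^2) time for n processes and T slices; the paper's setting is richer, since an optimal policy may interleave and reorder the processes rather than commit to a static allocation.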


Efficient Inverse-Free Algorithms for Extreme Learning Machine Based on the Recursive Matrix Inverse and the Inverse LDL' Factorization

arXiv.org Machine Learning

The inverse-free extreme learning machine (ELM) algorithm proposed in [4] was based on an inverse-free method for computing the regularized pseudo-inverse, which was deduced from an inverse-free recursive algorithm for updating the inverse of a Hermitian matrix. Before that recursive algorithm was applied in [4], an improved version of it had been utilized in the previous literature [9], [10]. Accordingly, from the improved recursive algorithm of [9], [10] we deduce a more efficient inverse-free algorithm for updating the regularized pseudo-inverse, from which we develop the proposed inverse-free ELM algorithm 1. The proposed ELM algorithm 2 further reduces the computational complexity by computing the output weights directly from the updated inverse, avoiding the computation of the regularized pseudo-inverse altogether. Lastly, instead of updating the inverse, the proposed ELM algorithm 3 updates the LDL^T factor of the inverse by the inverse LDL^T factorization [11], to avoid the numerical instabilities that can arise after a very large number of iterations [12]. With respect to the existing ELM algorithm, the proposed ELM algorithms 1, 2 and 3 are expected to require only (8+3)/M, (8+1)/M and (8+1)/M of the complexity, respectively, where M is the number of output nodes. In the numerical experiments, the standard ELM, the existing inverse-free ELM algorithm and the proposed ELM algorithms 1, 2 and 3 achieve the same performance in regression and classification, while all three proposed algorithms significantly accelerate the existing inverse-free ELM algorithm.
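For intuition, here is a minimal NumPy sketch of the kind of inverse-free recursive update the abstract refers to: when a hidden node is added to an ELM, the inverse of the regularized Gram matrix A = H^T H + lambda*I can be grown with the block-matrix inversion identity instead of being recomputed from scratch. The function name and interface are hypothetical, and this is not the exact recursion of [4], [9], [10].

```python
import numpy as np

def add_node_inverse(A_inv, H, h, lam):
    """Grow the inverse of A = H^T H + lam*I when a new hidden node
    (column h) is appended to H, via the block-matrix inversion identity.
    A_inv: (n, n) current inverse; H: (N, n) hidden-layer outputs;
    h: (N,) outputs of the new node; lam: regularization parameter."""
    b = H.T @ h                       # cross terms with existing nodes
    c = float(h @ h) + lam            # new regularized diagonal entry
    u = A_inv @ b
    t = c - b @ u                     # Schur complement (a scalar)
    top_left = A_inv + np.outer(u, u) / t
    return np.block([[top_left,        -u[:, None] / t],
                     [-u[None, :] / t, np.array([[1.0 / t]])]])
```

The output weights are then beta = A_inv @ H.T @ T for targets T, which is the quantity the proposed algorithm 2 computes directly from the updated inverse.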


A kNN algorithm with an unfixed k? • /r/MachineLearning

#artificialintelligence

I am wondering if there is any research out there about a kNN classifier with an optimized algorithm, where a function is trained on the training data set that maps a point to a value of k. Then, when the algorithm needs to classify a new point, it first queries this trained function with the new point to find what value of k it should use. Any thoughts or links to research like this?
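There is published work along these lines, often under names like "adaptive kNN" or "locally adaptive nearest neighbor classification". Below is a hedged scikit-learn sketch of the idea as posed above: score candidate k values per training point by how well its k nearest neighbours agree with its own label, fit a regressor mapping a point to a good k, then classify each new point with its own predicted k. The dataset, candidate grid and regressor choice are arbitrary illustrations.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

def best_k_per_point(X, y, ks):
    """For each training point, pick the candidate k whose k nearest
    neighbours (excluding the point itself) agree most with its own
    label; ties go to the smaller k."""
    nn = NearestNeighbors(n_neighbors=max(ks) + 1).fit(X)
    _, idx = nn.kneighbors(X)            # idx[:, 0] is the point itself
    best = np.empty(len(X))
    for i in range(len(X)):
        labels = y[idx[i, 1:]]
        scores = [(np.mean(labels[:k] == y[i]), -k) for k in ks]
        best[i] = -max(scores)[1]        # highest agreement, smallest k
    return best

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ks = [1, 3, 5, 7, 9, 11]

# The "function trained on the training set" that maps a point to a k.
k_model = RandomForestRegressor(random_state=0)
k_model.fit(X_tr, best_k_per_point(X_tr, y_tr, ks))

# Classify each test point with its own predicted (rounded, clipped) k.
nn = NearestNeighbors(n_neighbors=max(ks)).fit(X_tr)
_, idx = nn.kneighbors(X_te)
k_pred = np.clip(np.rint(k_model.predict(X_te)).astype(int), 1, max(ks))
y_hat = np.array([np.bincount(y_tr[idx[i, :k]]).argmax()
                  for i, k in enumerate(k_pred)])
print("accuracy:", np.mean(y_hat == y_te))
```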


Extending Gossip Algorithms to Distributed Estimation of U-statistics

Neural Information Processing Systems

Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems. Whereas distributed estimation of sample mean statistics has been the subject of a good deal of attention, computation of U-statistics, relying on more expensive averaging over pairs of observations, is a less investigated area. Yet, such data functionals are essential to describe global properties of a statistical population, with important examples including Area Under the Curve, empirical variance, Gini mean difference and within-cluster point scatter. This paper proposes new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds of O(1/t) and O(log t / t) for the synchronous and asynchronous cases, respectively, where t is the number of iterations, with explicit data- and network-dependent terms.
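To make the setting concrete, here is a toy gossip sketch for the empirical variance, whose U-statistic kernel is h(a, b) = (a - b)^2 / 2: each node's auxiliary datum random-walks over the edges while local running averages of the kernel are mixed between neighbours. This loosely imitates the propagate-data-and-average-estimates idea; it is not the paper's algorithm and carries none of its O(1/t) or O(log t / t) guarantees.

```python
import numpy as np

def gossip_variance(x, edges, n_iter=200_000, seed=0):
    """Toy gossip estimate of U = (2 / (n(n-1))) * sum_{i<j} h(x_i, x_j)
    with h(a, b) = 0.5 * (a - b)**2, i.e. the sample variance.
    Each node i keeps its datum x[i], an auxiliary datum y[i] that
    random-walks by swaps along edges, and a running average z[i] of
    h(x[i], y[i]); neighbouring estimates are averaged at each contact."""
    rng = np.random.default_rng(seed)
    n = len(x)
    y, z, cnt = x.copy(), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        i, j = edges[rng.integers(len(edges))]   # wake up one random edge
        y[i], y[j] = y[j], y[i]                  # propagate auxiliary data
        for k in (i, j):                         # update running averages
            cnt[k] += 1
            z[k] += (0.5 * (x[k] - y[k]) ** 2 - z[k]) / cnt[k]
        z[i] = z[j] = 0.5 * (z[i] + z[j])        # mix local estimates
    # The time average includes the diagonal pair (i, i), so rescale by
    # n / (n - 1) to match the off-diagonal U-statistic.
    return z.mean() * n / (n - 1)

x = np.random.default_rng(1).normal(size=30)
ring = [(i, (i + 1) % 30) for i in range(30)]
print(gossip_variance(x, ring), x.var(ddof=1))   # should be close
```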


Data Science Content Not Found on Google

@machinelearnbot

Here is some great content that you won't find on Google. I hope to add more in the future; feel free to email me at [email protected] if you want to add some of your links. It is easy to remember this page: the URL is BannedOnGoogle.com. It's not that the articles below are blacklisted by Google; most likely, Google's algorithms are not working properly. Either they can't find the page, or they can only find the mobile version (an issue with Google's indexing algorithm), or, when searching for the article's title, Google returns irrelevant articles or a copy of the article that was illegally stolen and hosted elsewhere (an issue with Google's page scoring, ranking and attribution algorithms). To learn more about these problems (how to design a good search engine or improve Google), click here, and here.