Search


Local search: It's all about mobile - Search Engine Land

#artificialintelligence

Optimizing for local search is important, but if you aren't optimizing for mobile, you're going to miss out on your most important source of local traffic. For years, Google has been improving the relevance of local search, from its "Pigeon Update" to Promoted Pins. And since there are more searches on mobile than desktop, it's no wonder that Google has put a big emphasis on mobile-friendliness in its ranking algorithms. The fact of the matter is, more and more local searches are taking place on mobile. More importantly, many of those local searches come with a high purchase intent, making local mobile searches an incredibly important opportunity for your business.


PhD Dissertation: Generalized Independent Components Analysis Over Finite Alphabets

arXiv.org Machine Learning

Independent component analysis (ICA) is a statistical method for transforming an observable multi-dimensional random vector into components that are as statistically independent as possible from each other. Usually the ICA framework assumes a model according to which the observations are generated (such as a linear transformation with additive noise). ICA over finite fields is a special case of ICA in which both the observations and the independent components are over a finite alphabet. In this thesis we consider a formulation of the finite-field case in which an observation vector is decomposed into its independent components (as much as possible) with no prior assumption on the way it was generated. This generalization is also known as Barlow's minimal redundancy representation and is considered an open problem. We propose several theorems and show that this hard problem can be accurately solved with a branch-and-bound search-tree algorithm, or tightly approximated with a series of linear problems. Moreover, we show that there exists a simple transformation (namely, order permutation) which provides a greedy yet very effective approximation of the optimal solution. We further show that while not every random vector can be efficiently decomposed into independent components, the vast majority of vectors do decompose very well (that is, within a small constant cost) as the dimension increases. In addition, we show that we may practically achieve this favorable constant cost with a complexity that is asymptotically linear in the alphabet size. Our contribution provides the first efficient set of solutions to Barlow's problem with theoretical and computational guarantees. Finally, we demonstrate our suggested framework in multiple source coding applications.
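
As a concrete illustration of Barlow's minimal-redundancy objective: an invertible relabelling of a finite-alphabet observation leaves the joint entropy unchanged, so the goal reduces to finding the relabelling whose components have the smallest sum of marginal entropies. The brute-force sketch below (hypothetical setup and names) only makes the combinatorial nature of the problem explicit; it stands in for, and does not implement, the branch-and-bound and order-permutation methods of the thesis.

```python
import itertools
import numpy as np

def marginal_entropy_sum(bits):
    """Sum of empirical marginal (per-bit) entropies, in bits."""
    total = 0.0
    for col in bits.T:
        p = np.bincount(col, minlength=2) / len(col)
        p = p[p > 0]
        total += -(p * np.log2(p)).sum()
    return total

def best_alphabet_permutation(symbols, d):
    """Brute-force search (illustrative only) over relabellings of the joint
    alphabet {0, ..., 2^d - 1}, minimizing the sum of marginal entropies."""
    best_cost, best_perm = np.inf, None
    for perm in itertools.permutations(range(2 ** d)):
        mapped = np.asarray(perm)[symbols]               # relabel observed symbols
        bits = (mapped[:, None] >> np.arange(d)) & 1     # decode each symbol into d bits
        cost = marginal_entropy_sum(bits)
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

# Toy example: 3 independent bits observed through an arbitrary relabelling.
rng = np.random.default_rng(0)
d = 3
source_bits = rng.integers(0, 2, size=(500, d))
symbols = source_bits @ (2 ** np.arange(d))              # joint-alphabet encoding
observed = rng.permutation(2 ** d)[symbols]              # scrambled observations
perm, cost = best_alphabet_permutation(observed, d)
print(f"minimal sum of marginal entropies: {cost:.3f} bits (about {d} for a well-decomposing source)")
```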


Minimax Learning of Ergodic Markov Chains

arXiv.org Machine Learning

We compute the finite-sample minimax (modulo logarithmic factors) sample complexity of learning the parameters of a finite Markov chain from a single long sequence of states. Our error metric is a natural variant of total variation. The sample complexity necessarily depends on the spectral gap and minimal stationary probability of the unknown chain, for which, at least in the reversible case, there are known finite-sample estimators with fully empirical confidence intervals. To our knowledge, this is the first PAC-type result with nearly matching (up to logs) upper and lower bounds for learning, in any metric, in the context of Markov chains.
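
For a sense of the setting, the sketch below (hypothetical chain and names) builds the plug-in transition-matrix estimate from one long trajectory and reports a worst-row total variation error. The paper's error metric is a variant of total variation and its contribution is the matching minimax sample-complexity bounds, which this toy does not reproduce.

```python
import numpy as np

def simulate_chain(P, pi0, n, rng):
    """Sample a single trajectory of length n from transition matrix P."""
    k = P.shape[0]
    x = np.empty(n, dtype=int)
    x[0] = rng.choice(k, p=pi0)
    for t in range(1, n):
        x[t] = rng.choice(k, p=P[x[t - 1]])
    return x

def empirical_transitions(traj, k):
    """Plug-in estimate of the transition matrix from a single trajectory."""
    counts = np.zeros((k, k))
    np.add.at(counts, (traj[:-1], traj[1:]), 1)
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1          # unvisited states keep an all-zero row
    return counts / row_sums

def max_row_tv(P, Q):
    """Worst-case total variation distance between corresponding rows."""
    return 0.5 * np.abs(P - Q).sum(axis=1).max()

rng = np.random.default_rng(1)
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
traj = simulate_chain(P, np.ones(3) / 3, n=20_000, rng=rng)
P_hat = empirical_transitions(traj, k=3)
print(f"max row-wise TV error with 20k transitions: {max_row_tv(P, P_hat):.4f}")
```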


MSc Dissertation: Exclusive Row Biclustering for Gene Expression Using a Combinatorial Auction Approach

arXiv.org Machine Learning

The availability of large microarray data has led to a growing interest in biclustering methods in the past decade. Several algorithms have been proposed to identify subsets of genes and conditions according to different similarity measures and under varying constraints. In this paper we focus on the exclusive row biclustering problem for gene expression data sets, in which each row can only be a member of a single bicluster while columns can participate in multiple ones. This type of biclustering may be adequate, for example, for clustering groups of cancer patients, where each patient (row) is expected to carry only a single type of cancer, while each cancer type is associated with multiple (and possibly overlapping) genes (columns). We present a novel method to identify these exclusive row biclusters through a combination of existing biclustering algorithms and combinatorial auction techniques. We devise an approach for tuning the threshold for our algorithm based on comparison to a null model in the spirit of the Gap statistic approach. We demonstrate our approach on both synthetic and real-world gene expression data and show its power in identifying large-span, non-overlapping row submatrices, while considering their unique nature. The Gap statistic approach succeeds in identifying appropriate thresholds in all our examples.
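
To illustrate just the exclusivity constraint, here is a hedged, auction-flavoured sketch: candidate biclusters (from any base method) bid with a score, and rows are awarded exclusively to the highest-scoring claimants while columns may repeat. The candidates, scores, and names are made up; this is not the dissertation's winner-determination procedure or its Gap-statistic threshold tuning.

```python
import numpy as np

def exclusive_row_assignment(candidates, scores):
    """Greedy, auction-style allocation: process candidate biclusters by score
    and grant each one only the rows no earlier winner has claimed.
    candidates: list of (row_indices, col_indices); scores: one value per candidate."""
    taken_rows = set()
    winners = []
    for idx in np.argsort(scores)[::-1]:              # highest bid first
        rows, cols = candidates[idx]
        free_rows = [r for r in rows if r not in taken_rows]
        if not free_rows:
            continue
        taken_rows.update(free_rows)
        winners.append((free_rows, list(cols)))       # columns may overlap across winners
    return winners

# Toy example with hypothetical candidates over a 6 x 4 expression matrix.
candidates = [([0, 1, 2], [0, 1]),
              ([1, 2, 3], [1, 2]),
              ([4, 5],    [0, 3])]
scores = [2.5, 1.8, 2.1]                              # e.g. mean within-bicluster coherence
for rows, cols in exclusive_row_assignment(candidates, scores):
    print("rows", rows, "cols", cols)
```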


Fast greedy algorithms for dictionary selection with generalized sparsity constraints

arXiv.org Machine Learning

In dictionary selection, several atoms are selected from a finite set of candidates so that the given data points are well approximated by sparse representations over the selected atoms. We propose a novel efficient greedy algorithm for dictionary selection. Not only does our algorithm run much faster than known methods, but it can also handle more complex sparsity constraints, such as average sparsity. Using numerical experiments, we show that our algorithm outperforms known methods for dictionary selection, achieving performance competitive with dictionary learning algorithms in a smaller running time.
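
As a baseline illustration of the problem (not the paper's faster algorithm or its generalized sparsity constraints), the sketch below greedily selects atoms from a candidate pool under the simplest per-sample sparsity, s = 1: each step adds the atom that most increases the energy captured when every data point keeps only its best selected atom. The names and toy data are assumptions.

```python
import numpy as np

def captured_energy(data, atoms):
    """Energy captured when each data point keeps its single best atom (s = 1)."""
    if not atoms:
        return 0.0
    D = np.stack(atoms, axis=1)                       # columns are unit-norm atoms
    return float(((data @ D) ** 2).max(axis=1).sum())

def greedy_dictionary_selection(data, candidates, k):
    """Pick k atoms from the candidate pool, each time adding the atom that
    most increases the captured energy. Illustrative baseline only."""
    selected, remaining = [], list(range(candidates.shape[1]))
    for _ in range(k):
        base = captured_energy(data, [candidates[:, j] for j in selected])
        gains = [captured_energy(data, [candidates[:, j] for j in selected + [c]]) - base
                 for c in remaining]
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(2)
candidates = rng.normal(size=(20, 50))
candidates /= np.linalg.norm(candidates, axis=0)      # normalize candidate atoms
data = rng.normal(size=(200, 20))
print("selected atoms:", greedy_dictionary_selection(data, candidates, k=5))
```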


How to Combine Tree-Search Methods in Reinforcement Learning

arXiv.org Artificial Intelligence

Finite-horizon lookahead policies are abundantly used in Reinforcement Learning and demonstrate impressive empirical success. Usually, the lookahead policies are implemented with specific planning methods such as Monte Carlo Tree Search (e.g. in AlphaZero). Referring to the planning problem as tree search, a reasonable practice in these implementations is to back up the value only at the leaves, while the information obtained at the root is not leveraged other than for updating the policy. Here, we question the potency of this approach. Namely, the latter procedure is non-contractive in general, and its convergence is not guaranteed. Our proposed enhancement is straightforward: use the return from the optimal tree path to back up the values at the descendants of the root. This leads to a $\gamma^h$-contracting procedure, where $\gamma$ is the discount factor and $h$ is the tree depth. To establish our results, we first introduce a notion called multiple-step greedy consistency. We then provide convergence rates for two algorithmic instantiations of the above enhancement in the presence of noise injected into both the tree-search stage and the value-estimation stage.
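
A minimal sketch of the enhancement's flavour on a toy deterministic MDP (hypothetical dynamics and names): expand an exhaustive depth-$h$ tree, score leaves with discounted rewards plus $\gamma^h$ times the current value estimate, and back the return from the optimal path up to the root's descendants rather than using it only at the root. This illustrates the idea, not the paper's algorithmic instantiations or their convergence analysis.

```python
import numpy as np
from itertools import product

# Hypothetical deterministic toy MDP: next_state[s, a] and reward[s, a] tables.
rng = np.random.default_rng(3)
n_states, n_actions, gamma, h = 6, 2, 0.9, 3
next_state = rng.integers(0, n_states, size=(n_states, n_actions))
reward = rng.normal(size=(n_states, n_actions))
V = np.zeros(n_states)                           # current value estimates (leaf bootstrap)

def best_path(root):
    """Exhaustive depth-h lookahead from `root`: return the best action sequence's
    return, visited states, and per-step rewards, with leaves scored by
    discounted rewards plus gamma**h * V(leaf)."""
    best = None
    for actions in product(range(n_actions), repeat=h):
        s, states, rews, ret = root, [root], [], 0.0
        for depth, a in enumerate(actions):
            rews.append(reward[s, a])
            ret += (gamma ** depth) * reward[s, a]
            s = next_state[s, a]
            states.append(s)
        ret += (gamma ** h) * V[s]
        if best is None or ret > best[0]:
            best = (ret, states, rews)
    return best

ret, states, rews = best_path(root=0)
leaf_value = V[states[h]]
# Back up the return of the optimal tree path to every descendant of the root on
# that path: each such node receives its tail return as a new value target.
for depth in range(1, h + 1):
    tail = sum((gamma ** (t - depth)) * rews[t] for t in range(depth, h))
    V[states[depth]] = tail + (gamma ** (h - depth)) * leaf_value
print("root lookahead return:", round(ret, 3))
print("updated values:", np.round(V, 3))
```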


Vulcan: A Monte Carlo Algorithm for Large Chance Constrained MDPs with Risk Bounding Functions

arXiv.org Artificial Intelligence

Chance Constrained Markov Decision Processes (CCMDPs) maximize reward subject to a bounded probability of failure, and have been frequently applied for planning with potentially dangerous outcomes or unknown environments. Solution algorithms have required strong heuristics or have been limited to relatively small problems with up to millions of states, because the optimal action to take from a given state depends on the probability of failure in the rest of the policy, leading to a coupled problem that is difficult to solve. In this paper we examine a generalization of a CCMDP that trades off probability of failure against reward through a functional relationship. We derive a constraint that can be applied to each state history in a policy individually, and which guarantees that the chance constraint will be satisfied. The approach decouples states in the CCMDP, so that large problems can be solved efficiently. We then introduce Vulcan, which uses our constraint in order to apply Monte Carlo Tree Search to CCMDPs. Vulcan can be applied to problems where it is infeasible to generate the entire state space and policies must be returned in an anytime manner. We show that Vulcan and its variants run tens to hundreds of times faster than linear programming methods, and over ten times faster than heuristic-based methods, all without the need for a heuristic, and return solutions with a mean suboptimality on the order of a few percent. Finally, we use Vulcan to solve for a chance constrained policy in a CCMDP with over $10^{13}$ states in 3 minutes.
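
The decoupling idea can be pictured at the level of a single sampled history: track its cumulative failure probability and prune it as soon as that risk exceeds a budget given by a risk-bounding function of its reward so far, so that histories can be checked independently inside a Monte Carlo search. The budget function, dynamics, and names below are assumptions for illustration only, not Vulcan's derived constraint.

```python
import random

def risk_budget(expected_reward):
    """Hypothetical risk-bounding function: allow more failure probability
    for histories that promise higher reward (a simple capped linear rule)."""
    return min(0.05 + 0.01 * max(expected_reward, 0.0), 0.2)

def rollout(step_fn, horizon, rng):
    """Sample one state history, pruning it as soon as its accumulated
    failure probability exceeds the budget implied by its reward so far."""
    reward, survive = 0.0, 1.0
    for t in range(horizon):
        r, p_fail = step_fn(t, rng)           # per-step reward and failure probability
        reward += r
        survive *= (1.0 - p_fail)
        if 1.0 - survive > risk_budget(reward):
            return None                        # history violates its own risk bound
    return reward, 1.0 - survive

def toy_step(t, rng):
    """Hypothetical dynamics: riskier steps tend to pay more."""
    p_fail = rng.uniform(0.0, 0.03)
    return 10.0 * p_fail + rng.uniform(0.0, 0.5), p_fail

rng = random.Random(4)
histories = [rollout(toy_step, horizon=10, rng=rng) for _ in range(1000)]
kept = [hist for hist in histories if hist is not None]
print(f"kept {len(kept)} of 1000 sampled histories within their risk budgets")
```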


ExIt-OOS: Towards Learning from Planning in Imperfect Information Games

arXiv.org Artificial Intelligence

The current state of the art in playing many important perfect information games, including Chess and Go, combines planning and deep reinforcement learning with self-play. We extend this approach to imperfect information games and present ExIt-OOS, a novel approach to playing imperfect information games within the Expert Iteration framework and inspired by AlphaZero. We use Online Outcome Sampling (OOS), an online search algorithm for imperfect information games, in place of MCTS. While training online, our neural strategy is used to improve the accuracy of playouts in OOS, allowing a learning and planning feedback loop for imperfect information games.
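
A skeleton of the Expert Iteration feedback loop the abstract describes: a search expert (a stand-in here for Online Outcome Sampling) produces improved action distributions from guided playouts, and an apprentice policy is trained to imitate them while also steering subsequent playouts. All classes, names, and the toy "game" are placeholders; this shows the loop's structure, not the ExIt-OOS algorithm.

```python
import random

class ApprenticePolicy:
    """Placeholder learned policy: a table of action preferences per information set."""
    def __init__(self):
        self.table = {}

    def probabilities(self, infoset, actions):
        prefs = self.table.get(infoset, {a: 1.0 for a in actions})
        z = sum(prefs.values())
        return {a: p / z for a, p in prefs.items()}

    def train(self, infoset, target):
        # Move preferences toward the expert's target distribution (crude imitation step).
        prefs = self.table.setdefault(infoset, {a: 1.0 for a in target})
        for a, p in target.items():
            prefs[a] = 0.9 * prefs[a] + 0.1 * p * len(target)

def search_expert(infoset, actions, policy, rng, n_playouts=32):
    """Stand-in for the search step (OOS in the paper): sample playouts guided by the
    apprentice and return an improved action distribution from playout outcomes."""
    wins = {a: 1e-6 for a in actions}
    probs = policy.probabilities(infoset, actions)
    for _ in range(n_playouts):
        a = rng.choices(actions, weights=[probs[x] for x in actions])[0]
        wins[a] += rng.random()               # placeholder playout outcome
    z = sum(wins.values())
    return {a: w / z for a, w in wins.items()}

def expert_iteration(n_iters=100):
    rng = random.Random(5)
    policy = ApprenticePolicy()
    for _ in range(n_iters):
        infoset, actions = "some_infoset", ["fold", "call", "raise"]   # placeholder game state
        target = search_expert(infoset, actions, policy, rng)          # expert improves on apprentice
        policy.train(infoset, target)                                  # apprentice imitates expert
    return policy

print(expert_iteration().probabilities("some_infoset", ["fold", "call", "raise"]))
```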


An Efficient Matheuristic for the Minimum-Weight Dominating Set Problem

arXiv.org Artificial Intelligence

A minimum dominating set in a graph is a minimum set of vertices such that every vertex of the graph either belongs to it or is adjacent to at least one vertex of this set. This mathematical object is of high relevance in a number of applications related to social network analysis, design of wireless networks, coding theory, and data mining, among many others. When vertex weights are given, minimizing the total weight of the dominating set gives rise to a problem variant known as the minimum-weight dominating set problem. To solve this problem, we introduce a hybrid matheuristic combining a tabu search with an integer programming solver. The latter is used to solve subproblems in which only a fraction of the decision variables, selected relative to the search history, are left free while the others are fixed. Moreover, we introduce an adaptive penalty to promote the exploration of intermediate infeasible solutions during the search, enhance the algorithm with perturbation and node elimination procedures, and exploit richer neighborhood classes. Extensive experimental analyses on a variety of instance classes demonstrate the good performance of the algorithm, and the contribution of each component to the success of the search is analyzed.
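
To make the objective concrete, here is a simple greedy construction for a weighted dominating set: repeatedly pick the vertex with the smallest weight per newly dominated vertex. The toy graph and names are illustrative, and this baseline is far simpler than the paper's tabu search plus integer-programming matheuristic.

```python
def greedy_mwds(adj, weights):
    """Greedy weighted dominating set: repeatedly pick the vertex with the
    smallest weight per newly dominated vertex. adj: {v: set of neighbours}."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        def ratio(v):
            newly = len(({v} | adj[v]) & undominated)
            return weights[v] / newly if newly else float("inf")
        v = min(adj, key=ratio)
        chosen.add(v)
        undominated -= {v} | adj[v]
    return chosen

# Toy weighted graph (hypothetical): a path 0-1-2-3-4 with two cheap hubs.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
weights = {0: 4.0, 1: 1.0, 2: 5.0, 3: 1.0, 4: 4.0}
ds = greedy_mwds(adj, weights)
print("dominating set:", ds, "total weight:", sum(weights[v] for v in ds))
```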


AlphaGo Zero demystified

#artificialintelligence

DeepMind has shaken the world of Reinforcement Learning and Go with its creation AlphaGo, and later AlphaGo Zero. It is the first computer program to beat a human professional Go player without handicap on a 19 x 19 board. It has also beaten the world champion Lee Sedol 4 games to 1 and, with the Zero version, Ke Jie (the number-one ranked player in the world at the time) and many other top-ranked players. The game of Go is a difficult environment because of its very large branching factor at every move, which makes classical techniques such as alpha-beta pruning and heuristic search unrealistic. I will present my work on reproducing the paper as closely as I could.
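
At the heart of AlphaGo Zero's search is a policy- and value-guided Monte Carlo Tree Search whose selection step maximizes a PUCT score: the mean action value plus a prior-weighted exploration bonus that shrinks with the child's visit count. Below is a minimal sketch of that selection rule with placeholder statistics, not the full self-play pipeline.

```python
import math

def puct_select(children, c_puct=1.5):
    """AlphaGo-Zero-style selection: each child carries a visit count N, total value W,
    and prior probability P from the policy network; pick the child maximizing
    Q + c_puct * P * sqrt(total visits) / (1 + N)."""
    total_visits = sum(child["N"] for child in children.values())
    def score(child):
        q = child["W"] / child["N"] if child["N"] > 0 else 0.0
        u = c_puct * child["P"] * math.sqrt(total_visits) / (1 + child["N"])
        return q + u
    return max(children, key=lambda a: score(children[a]))

# Placeholder statistics for three candidate moves at some position.
children = {
    "d4":  {"N": 40, "W": 22.0, "P": 0.50},
    "q16": {"N": 10, "W": 6.5,  "P": 0.30},
    "k10": {"N": 2,  "W": 0.8,  "P": 0.20},
}
print("selected move:", puct_select(children))
```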