Controlling Deliberation with the Success Probability in a Dynamic Environment

AAAI Conferences

Seiji Yamada, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, 4259 Nagatsuda, Midori-ku, Yokohama, Kanagawa 226, JAPAN. Email: yamada@ai.sanken.osaka-u.ac.jp

Abstract: This paper describes a novel method for interleaving planning with execution in a dynamic environment. Although controlling deliberation, that is, determining the timing for interleaving planning and execution, is very important in such settings, little research has addressed it. To cope with this problem, we propose a method that determines the interleaving timing using the success probability, SP, that a plan will be executed successfully in an environment. We also developed a method to compute SP efficiently with Bayesian networks and implemented the S²P system. The system stops planning when the locally optimal plan's SP falls below an execution threshold, and then executes the plan. Since SP depends on the dynamics of the environment, the system behaves reactively in a very dynamic environment and deliberatively in a static one. We conducted experiments in Tileworld while varying dynamics and observation cost. As a result, we found the optimal threshold between reactivity and deliberation in some problem classes. Furthermore, we found that the optimal threshold is robust against changes in dynamics and observation cost, and that one of the classes in which S²P works well is that in which the dynamics itself changes.
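The stopping rule can be sketched as a simple control loop. Everything below is a toy stand-in, not the author's implementation: the paper estimates SP with Bayesian networks, whereas here SP simply decays by a fixed rate per deliberation step to model environment dynamics.

```python
def interleave(plan_steps, threshold, decay_per_tick):
    """Toy interleaving loop: keep deliberating while the locally optimal
    plan's estimated success probability (SP) stays at or above the
    execution threshold; execute as soon as it falls below.

    decay_per_tick is a crude stand-in for environment dynamics: the more
    dynamic the world, the faster SP decays while the agent deliberates.
    """
    sp = 1.0
    ticks_deliberated = 0
    for _ in range(plan_steps):
        if sp < threshold:            # plan is getting stale:
            break                     # stop planning and execute
        ticks_deliberated += 1        # one more refinement step
        sp *= (1.0 - decay_per_tick)  # the world drifts while we think
    return ticks_deliberated, sp

# A very dynamic world forces reactive behavior (little deliberation)...
reactive_ticks, _ = interleave(plan_steps=50, threshold=0.8, decay_per_tick=0.3)
# ...while a near-static world permits long deliberation.
deliberative_ticks, _ = interleave(plan_steps=50, threshold=0.8, decay_per_tick=0.001)
print(reactive_ticks, deliberative_ticks)
```

With the same threshold, the loop yields reactive behavior under fast decay and deliberative behavior under slow decay, which is the trade-off the Tileworld experiments probe.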


Expectation-maximization for logistic regression

arXiv.org Machine Learning

We present a family of expectation-maximization (EM) algorithms for binary and negative-binomial logistic regression, drawing a sharp connection with the variational-Bayes algorithm of Jaakkola and Jordan (2000). Indeed, our results allow a version of this variational-Bayes approach to be re-interpreted as a true EM algorithm. We study several interesting features of the algorithm, and of this previously unrecognized connection with variational Bayes. We also generalize the approach to sparsity-promoting priors, and to an online method whose convergence properties are easily established. This latter method compares favorably with stochastic-gradient descent in situations with marked collinearity.
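The EM iteration implied by the Jaakkola–Jordan bound is short enough to sketch in numpy. This is a minimal illustration of the binary case with a flat prior (plus a small ridge term we add for numerical stability), not the paper's code: the E-step sets xi_i = |x_i'beta|, and the M-step solves a weighted least-squares system with weights 2*lambda(xi_i), where lambda(xi) = tanh(xi/2)/(4*xi).

```python
import numpy as np

def lam(xi):
    # Jaakkola-Jordan coefficient lambda(xi) = tanh(xi/2) / (4 xi),
    # with its limit 1/8 at xi = 0.
    out = np.full_like(xi, 0.125)
    nz = np.abs(xi) > 1e-10
    out[nz] = np.tanh(xi[nz] / 2.0) / (4.0 * xi[nz])
    return out

def em_logistic(X, y, iters=50, ridge=1e-6):
    """EM for binary logistic regression via the Jaakkola-Jordan bound.
    E-step: xi_i = |x_i' beta|.
    M-step: beta = (2 X' diag(lam(xi)) X + ridge*I)^{-1} X' (y - 1/2)."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(iters):
        xi = np.abs(X @ beta)                        # E-step
        W = 2.0 * lam(xi)                            # working weights
        A = X.T @ (W[:, None] * X) + ridge * np.eye(d)
        beta = np.linalg.solve(A, X.T @ (y - 0.5))   # M-step
    return beta

def nll(X, y, beta):
    # negative log-likelihood: sum_i log(1 + e^{z_i}) - y_i z_i
    z = X @ beta
    return float(np.sum(np.logaddexp(0.0, z) - y * z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_beta = np.array([1.5, -2.0, 0.5])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

beta = em_logistic(X, y)
print(nll(X, y, np.zeros(3)), nll(X, y, beta))
```

Each sweep is a closed-form weighted least-squares solve, which is the practical appeal of this EM view over generic gradient methods when the design has marked collinearity.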


Model Selection Through Sparse Maximum Likelihood Estimation

arXiv.org Artificial Intelligence

We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and senate voting records data.
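The penalized objective itself is easy to state and experiment with. The sketch below maximizes log det(Theta) - tr(S Theta) - rho*||Theta||_1 by plain proximal-gradient ascent with eigenvalue clipping; this is deliberately NOT the paper's block coordinate descent or Nesterov first-order method, just a toy solver for the same objective on a small problem.

```python
import numpy as np

def penalized_loglik(theta, S, rho):
    # l1-penalized Gaussian log-likelihood (entrywise penalty)
    sign, logdet = np.linalg.slogdet(theta)
    return logdet - np.trace(S @ theta) - rho * np.abs(theta).sum()

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def sparse_mle(S, rho, step=0.05, iters=400):
    """Toy proximal-gradient ascent: gradient of the smooth part is
    Theta^{-1} - S, the l1 penalty is handled by soft-thresholding,
    and eigenvalues are clipped to keep Theta positive definite."""
    d = S.shape[0]
    theta = np.eye(d)
    for _ in range(iters):
        grad = np.linalg.inv(theta) - S
        theta = soft_threshold(theta + step * grad, step * rho)
        theta = (theta + theta.T) / 2.0            # re-symmetrize
        w, V = np.linalg.eigh(theta)               # clip to stay PD
        theta = V @ np.diag(np.clip(w, 1e-3, None)) @ V.T
    return theta

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
X[:, 1] += X[:, 0]                  # variables 0 and 1 are dependent
S = np.cov(X, rowvar=False)
theta = sparse_mle(S, rho=0.2)
print(np.round(theta, 3))
```

The estimated precision matrix keeps a large (negative) entry linking the two dependent variables while the penalty drives the entries between independent variables toward zero; the paper's algorithms solve the same problem at scales of a thousand nodes and beyond.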


High-Accuracy Population-Based Image Search - DZone AI

#artificialintelligence

Established in 2018, the Machine Intelligence Technology Laboratory comprises a group of outstanding scientists and engineers, with research centers located in Hangzhou, Beijing, Seattle, Silicon Valley, and Singapore. The Machine Intelligence Technology Laboratory is Alibaba's core team responsible for the research and development of artificial intelligence technologies. Drawing on Alibaba's massive data and machine learning/deep learning technologies, the lab has developed image recognition, speech interaction, natural language understanding, intelligent decision-making, and other core artificial intelligence technologies. It powers Alibaba Group's major businesses such as e-commerce, finance, logistics, social interaction, and entertainment, and also provides these capabilities to ecosystem partners to jointly build a smart future. Image Search is an intelligent product that enables search by image, combining image recognition and search functions built on deep learning and large-scale machine learning technologies.


MAD-Bayes: MAP-based Asymptotic Derivations from Bayes

arXiv.org Machine Learning

The classical mixture of Gaussians model is related to K-means via small-variance asymptotics: as the covariances of the Gaussians tend to zero, the negative log-likelihood of the mixture of Gaussians model approaches the K-means objective, and the EM algorithm approaches the K-means algorithm. Kulis & Jordan (2012) used this observation to obtain a novel K-means-like algorithm from a Gibbs sampler for the Dirichlet process (DP) mixture. We instead consider applying small-variance asymptotics directly to the posterior in Bayesian nonparametric models. This framework is independent of any specific Bayesian inference algorithm, and it has the major advantage that it generalizes immediately to a range of models beyond the DP mixture. To illustrate, we apply our framework to the feature learning setting, where the beta process and Indian buffet process provide an appropriate Bayesian nonparametric prior. We obtain a novel objective function that goes beyond clustering to learn (and penalize new) groupings for which we relax the mutual exclusivity and exhaustivity assumptions of clustering. We demonstrate several other algorithms, all of which are scalable and simple to implement. Empirical results demonstrate the benefits of the new framework.
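The classical small-variance limit that motivates the paper is easy to see numerically. In the sketch below (a minimal illustration with equal weights and a shared isotropic variance, not the paper's code), the E-step responsibilities of a Gaussian mixture collapse to one-hot nearest-center assignments as sigma^2 tends to zero, which is exactly the K-means assignment step.

```python
import numpy as np

def responsibilities(X, centers, sigma2):
    """Gaussian-mixture responsibilities with equal mixing weights and a
    shared isotropic variance sigma2."""
    # squared distances, shape (n_points, n_centers)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    logits = -d2 / (2.0 * sigma2)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    r = np.exp(logits)
    return r / r.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.9, 5.2]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])

soft = responsibilities(X, centers, sigma2=10.0)   # large variance: soft
hard = responsibilities(X, centers, sigma2=1e-4)   # sigma^2 -> 0: one-hot
print(np.round(soft, 3))
print(np.round(hard, 3))
```

MAD-Bayes applies the same limiting argument directly to the posterior of Bayesian nonparametric models, so the DP mixture, beta process, and Indian buffet process each yield their own K-means-like objective rather than requiring a model-specific sampler.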