Bootstrapping Intrinsically Motivated Learning with Human Demonstrations

arXiv.org Artificial Intelligence

I. APPROACHES FOR ADAPTIVE PERSONAL ROBOTS

The promise of personal robots operating in human environments to interact with people on a daily basis points out the importance of adaptivity of the machine to its environment and users. The robot can no longer simply be all-programmed in advance by engineers, and reproduce only actions predesigned in factories. It needs to match its behaviour and learn new skills as the environment and users' needs change.

The term intrinsic motivation was first used in psychology to describe the capability of humans to be attracted toward different activities for the pleasure that they experience intrinsically. These mechanisms have been shown to be crucial for humans to autonomously learn and discover new capabilities [14]-[16]. This inspired the creation of fully autonomous robots [17]-[22] with meta-exploration mechanisms monitoring the evolution of the learning performance of the robot, in order to maximise informational gain, and with heuristics


Entropy Search for Information-Efficient Global Optimization

arXiv.org Machine Learning

Contemporary global optimization algorithms are based on local measures of utility, rather than a probability measure over location and value of the optimum. They thus attempt to collect low function values, not to learn about the optimum. The reason for the absence of probabilistic global optimizers is that the corresponding inference problem is intractable in several ways. This paper develops desiderata for probabilistic optimization algorithms, then presents a concrete algorithm which addresses each of the computational intractabilities with a sequence of approximations and explicitly addresses the decision problem of maximizing information gain from each evaluation.
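
To make that decision problem concrete, here is a minimal, heavily simplified sketch in Python: maintain a Gaussian process posterior over the objective, estimate p_min (the distribution of the minimizer's location over a candidate grid) by Monte Carlo, and pick the next evaluation to maximize the expected reduction in the entropy of p_min. The kernel, grid, fantasy-sampling scheme, and all constants are illustrative assumptions, not the paper's actual approximations.

    import numpy as np

    rng = np.random.default_rng(0)

    def rbf_kernel(A, B, ls=0.3):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / ls) ** 2)

    def gp_posterior(X, y, Xs, noise=1e-4):
        # Standard GP regression equations on a 1-D domain.
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        Ks = rbf_kernel(X, Xs)
        L = np.linalg.cholesky(K)
        v = np.linalg.solve(L, Ks)
        mu = Ks.T @ np.linalg.solve(L.T, np.linalg.solve(L, y))
        cov = rbf_kernel(Xs, Xs) - v.T @ v
        return mu, cov

    def pmin_entropy(mu, cov, n_samples=500):
        # Monte Carlo estimate of the entropy of p_min: where posterior
        # samples of the function attain their minimum on the grid.
        f = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(mu)), n_samples)
        p = np.bincount(f.argmin(axis=1), minlength=len(mu)) / n_samples
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def next_point(X, y, Xs, n_fantasies=20):
        # Pick the candidate with the largest expected reduction in the
        # entropy of p_min, estimated by fantasizing outcomes at it.
        h_now = pmin_entropy(*gp_posterior(X, y, Xs))
        best_x, best_gain = None, -np.inf
        for x in Xs:
            m, c = gp_posterior(X, y, np.array([x]))
            h = 0.0
            for _ in range(n_fantasies):
                yf = rng.normal(m[0], np.sqrt(max(c[0, 0], 1e-12)))
                h += pmin_entropy(*gp_posterior(np.append(X, x),
                                                np.append(y, yf), Xs))
            gain = h_now - h / n_fantasies
            if gain > best_gain:
                best_x, best_gain = x, gain
        return best_x

    f = lambda x: np.sin(3 * x) + 0.5 * x   # toy objective
    X = np.array([0.1, 0.9]); y = f(X)      # evaluations so far
    Xs = np.linspace(0, 1, 25)              # candidate grid
    print("evaluate next at x =", next_point(X, y, Xs))

The paper replaces the brute-force fantasy loop with a sequence of analytic approximations; this sketch only mirrors the decision criterion.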


Classification with Invariant Scattering Representations

arXiv.org Machine Learning

A scattering transform defines a signal representation which is invariant to translations and Lipschitz continuous relative to deformations. It is implemented with a non-linear convolution network that iterates over wavelet and modulus operators. Lipschitz continuity locally linearizes deformations. Complex classes of signals and textures can be modeled with low-dimensional affine spaces, computed with a PCA in the scattering domain. Classification is performed with a penalized model selection. State-of-the-art results are obtained for handwritten digit recognition over small training sets, and for texture classification.
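
As a toy illustration of the cascade just described, the following sketch alternates complex wavelet convolution and modulus in 1-D, averaging each layer's output to obtain translation-invariant coefficients. The Gabor-style filter bank and the global averaging are simplifying assumptions; the actual transform uses a proper wavelet family and local low-pass filtering.

    import numpy as np

    def gabor_bank(n, scales=(4, 8, 16)):
        # Approximately zero-mean complex band-pass filters, one per scale,
        # returned in the frequency domain for fast circular convolution.
        t = np.arange(n) - n // 2
        bank = []
        for s in scales:
            psi = np.exp(-0.5 * (t / s) ** 2) * np.exp(1j * 2 * np.pi * t / s)
            psi -= psi.mean()
            bank.append(np.fft.fft(np.fft.ifftshift(psi)))
        return bank

    def scattering(x, bank):
        # S0: global average; S1, S2: averaged moduli of one and two
        # wavelet-modulus layers.
        X = np.fft.fft(x)
        feats = [x.mean()]
        for P1 in bank:
            u1 = np.abs(np.fft.ifft(X * P1))       # |x * psi_1|
            feats.append(u1.mean())
            U1 = np.fft.fft(u1)
            for P2 in bank:
                u2 = np.abs(np.fft.ifft(U1 * P2))  # ||x * psi_1| * psi_2|
                feats.append(u2.mean())
        return np.array(feats)

    n = 256
    x = np.sin(2 * np.pi * 8 * np.arange(n) / n)
    bank = gabor_bank(n)
    print(scattering(x, bank).round(4))  # unchanged under circular shifts of x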


Strong Equivalence of Qualitative Optimization Problems

arXiv.org Artificial Intelligence

We introduce the framework of qualitative optimization problems (or, simply, optimization problems) to represent preference theories. The formalism uses separate modules to describe the space of outcomes to be compared (the generator) and the preferences on outcomes (the selector). We consider two types of optimization problems. They differ in the way the generator, which we model by a propositional theory, is interpreted: by the standard propositional logic semantics, and by the equilibrium-model (answer-set) semantics. Under the latter interpretation of generators, optimization problems directly generalize answer-set optimization programs proposed previously. We study strong equivalence of optimization problems, which guarantees their interchangeability within any larger context. We characterize several versions of strong equivalence obtained by restricting the class of optimization problems that can be used as extensions and establish the complexity of associated reasoning tasks. Understanding strong equivalence is essential for modular representation of optimization problems and rewriting techniques to simplify them without changing their inherent properties.
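
To fix intuition about the generator/selector split under the classical semantics, here is a small executable sketch: the generator is a propositional theory whose models are the admissible outcomes, and the selector is a ranked list of preference formulas. The clause encoding and ranking rule are illustrative assumptions, not the paper's formalism (which also covers the equilibrium-model semantics and the strong-equivalence results).

    from itertools import product

    def models(variables, clauses):
        # A clause is a set of literals; a literal is (var, polarity).
        for bits in product([False, True], repeat=len(variables)):
            outcome = dict(zip(variables, bits))
            if all(any(outcome[v] == pol for v, pol in cl) for cl in clauses):
                yield outcome

    def optimal(variables, clauses, preferences):
        # preferences: list of clauses, most important first; an outcome's
        # rank is the index of the first preference it violates (higher is
        # better, len(preferences) if it satisfies them all).
        def rank(o):
            for i, cl in enumerate(preferences):
                if not any(o[v] == pol for v, pol in cl):
                    return i
            return len(preferences)
        outs = list(models(variables, clauses))
        best = max(rank(o) for o in outs)
        return [o for o in outs if rank(o) == best]

    # Generator: (a or b) and not (a and b); selector: prefer a, then b.
    V = ["a", "b"]
    G = [{("a", True), ("b", True)}, {("a", False), ("b", False)}]
    P = [{("a", True)}, {("b", True)}]
    print(optimal(V, G, P))  # -> [{'a': True, 'b': False}]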


Dimension adaptability of Gaussian process models with variable selection and projection

arXiv.org Machine Learning

It is now known that an extended Gaussian process model equipped with rescaling can adapt to different smoothness levels of a function-valued parameter in many nonparametric Bayesian analyses, offering a posterior convergence rate that is optimal (up to logarithmic factors) for the smoothness class the true function belongs to. This optimal rate also depends on the dimension of the function's domain, and one could potentially obtain a faster rate of convergence by casting the analysis in a lower-dimensional subspace that does not amount to any loss of information about the true function. In general such a subspace is not known a priori but can be explored by equipping the model with variable selection or linear projection. We demonstrate that for nonparametric regression, classification, density estimation and density regression, a rescaled Gaussian process model equipped with variable selection or linear projection offers a posterior convergence rate that is optimal (up to logarithmic factors) for the lowest dimension in which the analysis could be cast without any loss of information about the true function. Theoretical exploration of such dimension reduction features appears novel for Bayesian nonparametric models with or without Gaussian processes.
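
A small sketch of the modeling device the rate theory applies to: place a rescaled Gaussian process on linearly projected inputs, so the regression effectively lives in a lower-dimensional subspace. The oracle projection, kernel, and toy data below are illustrative assumptions; in practice the projection would be learned or averaged over under the posterior.

    import numpy as np

    rng = np.random.default_rng(1)

    def projected_rbf(X1, X2, P, rescale=1.0):
        # Kernel evaluated on P @ x: the GP only sees the projected inputs,
        # and the rescale parameter plays the role of the adaptive bandwidth.
        Z1, Z2 = X1 @ P.T, X2 @ P.T
        d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * rescale ** 2 * d2)

    def gp_predict(X, y, Xs, P, rescale, noise=1e-2):
        K = projected_rbf(X, X, P, rescale) + noise * np.eye(len(X))
        Ks = projected_rbf(X, Xs, P, rescale)
        return Ks.T @ np.linalg.solve(K, y)

    # True function depends on x only through one direction a: ambient
    # dimension 5, effective dimension 1.
    d, n = 5, 200
    a = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
    X = rng.normal(size=(n, d))
    y = np.sin(3 * X @ a) + 0.1 * rng.normal(size=n)
    Xs = rng.normal(size=(50, d))
    P = a[None, :]                     # oracle 1-D projection
    pred = gp_predict(X, y, Xs, P, rescale=2.0)
    print("RMSE:", np.sqrt(((pred - np.sin(3 * Xs @ a)) ** 2).mean()))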


Automatic Vehicle Checking Agent (VCA)

arXiv.org Artificial Intelligence

A definition of intelligence is given in terms of performance that can be quantitatively measured. In this study, we present a conceptual model of an Intelligent Agent System for an Automatic Vehicle Checking Agent (VCA). To achieve this goal, we introduce several kinds of agents that exhibit intelligent features: the Management Agent, Internal Agent, External Agent, Watcher Agent and Report Agent. Metrics and measurements are suggested for evaluating the performance of the Automatic Vehicle Checking Agent (VCA). Calibrated data and test facilities are suggested to facilitate the development of intelligent systems.
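
Since the paper gives a conceptual model only, the following skeleton merely illustrates how the named agent roles could be wired together; every method name, field, and message flow here is a hypothetical assumption.

    from dataclasses import dataclass, field

    @dataclass
    class Report:
        vehicle_id: str
        findings: list = field(default_factory=list)

    class InternalAgent:
        def check(self, vehicle):        # inspect on-board state
            return [f"engine status: {vehicle.get('engine', 'unknown')}"]

    class ExternalAgent:
        def check(self, vehicle):        # inspect externally visible state
            return [f"plate: {vehicle.get('plate', 'missing')}"]

    class WatcherAgent:
        def monitor(self, findings):     # flag anomalies in findings
            return [f for f in findings if "missing" in f or "unknown" in f]

    class ReportAgent:
        def compile(self, vehicle_id, findings, alerts):
            return Report(vehicle_id, findings + [f"ALERT: {a}" for a in alerts])

    class ManagementAgent:
        # Coordinates the other agents for a single vehicle check.
        def __init__(self):
            self.internal, self.external = InternalAgent(), ExternalAgent()
            self.watcher, self.reporter = WatcherAgent(), ReportAgent()

        def run(self, vehicle):
            findings = self.internal.check(vehicle) + self.external.check(vehicle)
            alerts = self.watcher.monitor(findings)
            return self.reporter.compile(vehicle["id"], findings, alerts)

    print(ManagementAgent().run({"id": "V-001", "engine": "ok"}))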


Information-Maximization Clustering based on Squared-Loss Mutual Information

arXiv.org Machine Learning

Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized. A notable advantage of this approach is that it only involves continuous optimization of model parameters, which is substantially easier to solve than discrete optimization of cluster assignments. However, existing methods still involve non-convex optimization problems, and therefore finding a good local optimum is not straightforward in practice. In this paper, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information. This novel approach gives a clustering solution analytically in a computationally efficient way via kernel eigenvalue decomposition. Furthermore, we provide a practical model selection procedure that allows us to objectively optimize tuning parameters included in the kernel function. Through experiments, we demonstrate the usefulness of the proposed approach.
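
A crude sketch of the analytic route described above: build a kernel matrix over the data, take its top-c eigenvectors, and read cluster assignments off the embedding. The paper's actual procedure (SMIC) additionally post-processes the eigenvectors and tunes the kernel by maximizing squared-loss mutual information; the argmax assignment below is a simplification.

    import numpy as np

    rng = np.random.default_rng(2)

    def rbf_kernel(X, sigma=1.0):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def kernel_eig_cluster(X, c, sigma=1.0):
        K = rbf_kernel(X, sigma)
        w, V = np.linalg.eigh(K)            # eigenvalues in ascending order
        top = V[:, -c:]                     # top-c eigenvectors
        top *= np.sign(top.sum(axis=0))     # fix sign indeterminacy
        return top.argmax(axis=1)           # simplified assignment rule

    # Two well-separated Gaussian blobs; expect a roughly 50/50 split.
    X = np.vstack([rng.normal(loc, 0.3, size=(50, 2))
                   for loc in ([0, 0], [3, 3])])
    print(np.bincount(kernel_eig_cluster(X, c=2)))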


Mask Iterative Hard Thresholding Algorithms for Sparse Image Reconstruction of Objects with Known Contour

arXiv.org Machine Learning

We develop mask iterative hard thresholding algorithms (mask IHT and mask DORE) for sparse image reconstruction of objects with known contour. The measurements follow a noisy underdetermined linear model common in the compressive sampling literature. Assuming that the contour of the object that we wish to reconstruct is known and that the signal outside the contour is zero, we formulate a constrained residual squared error minimization problem that incorporates both the geometric information (i.e. the knowledge of the object's contour) and the signal sparsity constraint. We first introduce a mask IHT method that aims at solving this minimization problem and guarantees monotonically non-increasing residual squared error for a given signal sparsity level. We then propose a double overrelaxation scheme for accelerating the convergence of the mask IHT algorithm. We also apply convex mask reconstruction approaches that employ a convex relaxation of the signal sparsity constraint. In X-ray computed tomography (CT), we propose an automatic scheme for extracting the convex hull of the inspected object from the measured sinograms; the obtained convex hull is used to capture the object contour information. We compare the proposed mask reconstruction schemes with the existing large-scale sparse signal reconstruction methods via numerical simulations and demonstrate that, by exploiting both the geometric contour information of the underlying image and sparsity of its wavelet coefficients, we can reconstruct this image using a significantly smaller number of measurements than the existing methods.
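
The core mask IHT iteration lends itself to a compact sketch: a gradient step on the residual squared error, projection onto the known support mask (zero outside the object's contour), then hard thresholding to the s largest magnitudes. The step size, iteration count, and toy 1-D "contour" below are illustrative assumptions; the paper's DORE acceleration and CT-specific convex-hull extraction are omitted.

    import numpy as np

    rng = np.random.default_rng(3)

    def mask_iht(y, A, mask, s, n_iter=200, step=None):
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + step * A.T @ (y - A @ x)        # gradient step
            x = x * mask                            # zero outside the contour
            keep = np.argsort(np.abs(x))[-s:]       # s largest magnitudes
            xt = np.zeros_like(x)
            xt[keep] = x[keep]
            x = xt
        return x

    # Toy setup: signal supported inside a known "contour" (first half).
    n, m, s = 100, 40, 5
    mask = np.zeros(n); mask[:50] = 1
    x_true = np.zeros(n)
    x_true[rng.choice(50, s, replace=False)] = rng.normal(size=s)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    y = A @ x_true + 0.01 * rng.normal(size=m)
    x_hat = mask_iht(y, A, mask, s)
    print("recovery error:", np.linalg.norm(x_hat - x_true))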


Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations

arXiv.org Machine Learning

This paper establishes information-theoretic limits in estimating a finite field low-rank matrix given random linear measurements of it. These linear measurements are obtained by taking inner products of the low-rank matrix with random sensing matrices. Necessary and sufficient conditions on the number of measurements required are provided. It is shown that these conditions are sharp and the minimum-rank decoder is asymptotically optimal. The reliability function of this decoder is also derived by appealing to de Caen's lower bound on the probability of a union. The sufficient condition also holds when the sensing matrices are sparse - a scenario that may be amenable to efficient decoding. More precisely, it is shown that if the n × n sensing matrices contain, on average, Ω(n log n) nonzero entries, the number of measurements required is the same as that when the sensing matrices are dense and contain entries drawn uniformly at random from the field. Analogies are drawn between the above results and rank-metric codes in the coding theory literature. In fact, we are also strongly motivated by understanding when minimum rank distance decoding of random rank-metric codes succeeds. To this end, we derive distance properties of equiprobable and sparse rank-metric codes. These distance properties provide a precise geometric interpretation of the fact that the sparse ensemble requires as few measurements as the dense one. Finally, we provide a non-exhaustive procedure to search for the unknown low-rank matrix.
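
For intuition about the minimum-rank decoder, here is a brute-force sketch over GF(2): among all matrices consistent with the linear measurements, return one of smallest rank. Exhaustive enumeration is only feasible for tiny n, and decoding can be ambiguous when too few measurements are taken; the problem sizes and random instances below are illustrative assumptions.

    import numpy as np
    from itertools import product

    def gf2_rank(M):
        # Gaussian elimination over GF(2).
        M = M.copy() % 2
        r = 0
        for c in range(M.shape[1]):
            rows = np.nonzero(M[r:, c])[0]
            if len(rows) == 0:
                continue
            M[[r, r + rows[0]]] = M[[r + rows[0], r]]   # swap pivot row up
            for i in range(M.shape[0]):
                if i != r and M[i, c]:
                    M[i] = (M[i] + M[r]) % 2            # eliminate column c
            r += 1
            if r == M.shape[0]:
                break
        return r

    def min_rank_decode(y, sensing):
        # Return a minimum-rank matrix consistent with all measurements
        # <X, A_k> mod 2; brute force over all 2^(n^2) candidates.
        n = sensing[0].shape[0]
        best, best_rank = None, n + 1
        for bits in product([0, 1], repeat=n * n):
            X = np.array(bits).reshape(n, n)
            if all((X * A).sum() % 2 == yk for A, yk in zip(sensing, y)):
                rk = gf2_rank(X)
                if rk < best_rank:
                    best, best_rank = X, rk
        return best

    rng = np.random.default_rng(4)
    n, m = 3, 7
    u, v = rng.integers(0, 2, n), rng.integers(0, 2, n)
    X_true = np.outer(u, v) % 2                         # rank <= 1 matrix
    sensing = [rng.integers(0, 2, (n, n)) for _ in range(m)]
    y = [(X_true * A).sum() % 2 for A in sensing]
    print(min_rank_decode(y, sensing))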


Joint estimation of linear non-Gaussian acyclic models

arXiv.org Machine Learning

A linear non-Gaussian structural equation model called LiNGAM is an identifiable model for exploratory causal analysis. Previous methods estimate a causal ordering of variables and their connection strengths based on a single dataset. However, in many application domains, data are obtained under different conditions, that is, multiple datasets are obtained rather than a single dataset. In this paper, we present a new method to jointly estimate multiple LiNGAMs under the assumption that the models share a causal ordering but may have different connection strengths and differently distributed variables. In simulations, the new method estimates the models more accurately than estimating them separately.
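
A simplified sketch of the joint-estimation idea: recover a causal ordering shared by all datasets by repeatedly selecting the most exogenous variable, scoring each candidate by the dependence between it and the regression residuals of the remaining variables, summed over datasets. The tanh-based dependence proxy below is a crude stand-in for the paper's estimator and is purely an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(5)

    def dependence(x, r):
        # Crude proxy for (non-Gaussian) dependence between x and residual r.
        x = (x - x.mean()) / x.std()
        r = (r - r.mean()) / r.std()
        return abs(np.mean(np.tanh(x) * r)) + abs(np.mean(x * np.tanh(r)))

    def joint_causal_order(datasets):
        remaining = list(range(datasets[0].shape[1]))
        order = []
        Ds = [D.copy() for D in datasets]
        while len(remaining) > 1:
            scores = []
            for j in remaining:
                s = 0.0
                for D in Ds:
                    x = D[:, j]
                    for k in remaining:
                        if k == j:
                            continue
                        b = np.dot(x, D[:, k]) / np.dot(x, x)
                        s += dependence(x, D[:, k] - b * x)  # residual of k on j
                scores.append(s)
            j = remaining[int(np.argmin(scores))]            # most exogenous
            order.append(j)
            for D in Ds:                                     # remove j's effect
                x = D[:, j]
                for k in remaining:
                    if k != j:
                        D[:, k] -= (np.dot(x, D[:, k]) / np.dot(x, x)) * x
            remaining.remove(j)
        return order + remaining

    # Two datasets sharing the ordering 0 -> 1 but with different strengths.
    def make(n, b):
        e = rng.uniform(-1, 1, size=(n, 2)) ** 3             # non-Gaussian noise
        x0 = e[:, 0]
        x1 = b * x0 + e[:, 1]
        return np.column_stack([x0, x1])

    print(joint_causal_order([make(2000, 0.8), make(2000, -1.5)]))  # expect [0, 1]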