Bayesian Approach


Automating WhatsApp with NLP: Complete guide

#artificialintelligence

We all know that coding is a superpower and that you can achieve numerous things with it. In this article we briefly review one of its applications by building a chatbot using Python. Well, actually not only a chatbot: you can do a lot of cool stuff with it. All you need is basic knowledge of Python programming. This tutorial is for educational purposes only, demonstrating another application of Natural Language Processing (NLP) in Machine Learning using Python.
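
As a taste of what such a tutorial covers, here is a minimal sketch of a rule-based chatbot built with NLTK's `Chat` utility; the patterns and replies are invented for illustration, and wiring the bot up to WhatsApp (for example through a messaging API or browser automation) is a separate step not shown here.

```python
# Minimal rule-based chatbot sketch using NLTK's Chat utility.
# The regex patterns and canned replies below are illustrative only.
from nltk.chat.util import Chat, reflections

pairs = [
    (r"hi|hello|hey", ["Hello! How can I help you today?"]),
    (r"my name is (.*)", ["Nice to meet you, %1."]),
    (r"what can you do\??", ["I answer simple questions matched by regex patterns."]),
    (r"quit", ["Goodbye!"]),
]

chatbot = Chat(pairs, reflections)

if __name__ == "__main__":
    # Respond to a single message; chatbot.converse() would start an interactive loop.
    print(chatbot.respond("my name is Ada"))
```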


Generative AI: A Key to Machine Intelligence?

#artificialintelligence

We're living in the age of the next industrial revolution: the first three freed most humans from hard labor. This one aims to take over the last domain of human dominance on this planet: our intelligence. In this article, we will put aside the ethical, political and social effects of such a revolution and concentrate a bit more on its technical side. What we see in the media today looks a bit different from real dominance of machines over humans… or does it? The most rapidly growing areas of artificial intelligence in the last few years have been computer vision, natural language processing, speech processing and, of course, various customer analytics applications like recommender systems (you may not like it, but targeted advertisements are accurate enough to grow companies' revenues).


4 Free Math Courses to do and Level up your Data Science Skills - KDnuggets

#artificialintelligence

For a lot of higher-level courses in Machine Learning and Data Science, you find you need to freshen up on the basics of mathematics -- material you may have studied before in school or university, but which was taught in another context, or not very intuitively, so that you struggle to relate it to how it's used in Computer Science. This specialization aims to bridge that gap, getting you up to speed on the underlying mathematics, building an intuitive understanding, and relating it to Machine Learning and Data Science. TIP: most of Coursera's courses and specializations have the option to audit them. You won't get a certificate, but you'll have access to most of the course's resources -- something I personally found more than enough. When enrolling, just select the option to audit the course.


Protecting Classifiers From Attacks. A Bayesian Approach

arXiv.org Machine Learning

Classification problems in security settings are usually modeled as confrontations in which an adversary tries to fool a classifier by manipulating the covariates of instances to obtain a benefit. Most approaches to such problems have focused on game-theoretic ideas with strong underlying common-knowledge assumptions, which are not realistic in the security realm. We provide an alternative Bayesian framework that accounts for the lack of precise knowledge about the attacker's behavior using adversarial risk analysis. A key ingredient required by our framework is the ability to sample from the distribution of originating instances given the possibly attacked observed one. We propose a sampling procedure based on approximate Bayesian computation, in which we simulate the attacker's problem while taking into account our uncertainty about his elements. For large-scale problems, we propose an alternative, scalable approach that can be used with differentiable classifiers. Within it, we move the computational load to the training phase, simulating attacks from an adversary and adapting the framework to obtain a classifier robustified against attacks.
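
A minimal sketch of the rejection-ABC idea described above, under assumed toy choices for the prior over originating instances, the attacker model, and the tolerance: candidates whose simulated attack lands close to the observed instance are kept as samples from the desired conditional distribution.

```python
# Hedged rejection-ABC sketch: draw candidate "clean" instances, simulate an
# attack under our uncertainty about the attacker, and keep candidates whose
# attacked version lands close to the observed (possibly attacked) instance.
# The prior, attacker model, and tolerance below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_clean_instance():
    # Assumed prior over originating covariates (2-D Gaussian for illustration).
    return rng.normal(loc=0.0, scale=1.0, size=2)

def simulate_attack(x):
    # Assumed attacker: a bounded shift whose magnitude we are uncertain about.
    strength = rng.uniform(0.0, 0.5)            # uncertainty about the attacker's effort
    direction = rng.normal(size=x.shape)
    direction /= np.linalg.norm(direction) + 1e-12
    return x + strength * direction

def abc_sample_origins(x_observed, n_samples=500, tol=0.3):
    """Rejection-ABC: return clean candidates consistent with x_observed."""
    accepted = []
    for _ in range(n_samples):
        x = sample_clean_instance()
        if np.linalg.norm(simulate_attack(x) - x_observed) < tol:
            accepted.append(x)
    return np.array(accepted)

origins = abc_sample_origins(x_observed=np.array([0.8, -0.2]))
print(f"accepted {len(origins)} candidate originating instances")
```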


The covariance matrix of Green's functions and its application to machine learning

arXiv.org Machine Learning

In this paper, a regression algorithm based on Green's function theory is proposed and implemented. We first survey the Green's function for the Dirichlet boundary value problem of a second-order linear ordinary differential equation, which is a reproducing kernel of a suitable Hilbert space. We next consider a covariance matrix composed of the normalized Green's function, which is regarded as a probability density function. By adopting a Bayesian approach, the covariance matrix gives a predictive distribution with predictive mean $\mu$ and confidence interval $[\mu - 2s, \mu + 2s]$, where $s$ stands for the standard deviation.
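
A hedged sketch of the construction, using as a concrete stand-in the Green's function $G(x,s) = \min(x,s) - xs$ of $-u'' = f$ on $[0,1]$ with Dirichlet boundary conditions as the covariance kernel; the paper's normalization may differ, so this only illustrates how the predictive mean $\mu$ and the band $[\mu - 2s, \mu + 2s]$ are obtained.

```python
# Bayesian regression with a covariance built from a Green's function.
# The kernel G(x, s) = min(x, s) - x*s (Dirichlet problem on [0, 1]) is an
# illustrative stand-in for the paper's normalized Green's function.
import numpy as np

def greens_kernel(x, s):
    # Covariance matrix K[i, j] = G(x_i, s_j).
    X, S = np.meshgrid(x, s, indexing="ij")
    return np.minimum(X, S) - X * S

# Noisy training data on (0, 1).
x_train = np.array([0.2, 0.4, 0.6, 0.8])
y_train = np.sin(np.pi * x_train) + 0.05 * np.random.default_rng(1).normal(size=4)
x_test = np.linspace(0.05, 0.95, 50)

noise = 1e-2
K = greens_kernel(x_train, x_train) + noise * np.eye(len(x_train))
K_star = greens_kernel(x_test, x_train)
K_ss = greens_kernel(x_test, x_test)

alpha = np.linalg.solve(K, y_train)
mu = K_star @ alpha                                   # predictive mean
cov = K_ss - K_star @ np.linalg.solve(K, K_star.T)
s = np.sqrt(np.clip(np.diag(cov), 0.0, None))         # predictive standard deviation
lower, upper = mu - 2 * s, mu + 2 * s                 # the [mu - 2s, mu + 2s] band
print(lower[:3], upper[:3])
```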


Bayesian System ID: Optimal management of parameter, model, and measurement uncertainty

arXiv.org Machine Learning

We evaluate the robustness of a probabilistic formulation of system identification (ID) to sparse, noisy, and indirect data. Specifically, we compare estimators of future system behavior derived from the Bayesian posterior of a learning problem to several commonly used least squares-based optimization objectives used in system ID. Our comparisons indicate that the log posterior has improved geometric properties compared with the objective function surfaces of traditional methods, which include differentially constrained least squares and least squares reconstructions of discrete time steppers like dynamic mode decomposition (DMD). These properties allow it to be both more sensitive to new data and less affected by multiple minima, overall yielding a more robust approach. Our theoretical results indicate that least squares and regularized least squares methods like dynamic mode decomposition and sparse identification of nonlinear dynamics (SINDy) can be derived from the probabilistic formulation by assuming noiseless measurements. We also analyze the computational complexity of a Gaussian filter-based approximate marginal Markov chain Monte Carlo scheme that we use to obtain the Bayesian posterior for both linear and nonlinear problems. We then empirically demonstrate that obtaining the marginal posterior of the parameter dynamics and making predictions by extracting optimal estimators (e.g., mean, median, mode) yields orders-of-magnitude improvement over the aforementioned approaches. We attribute this performance to the fact that the Bayesian approach captures parameter, model, and measurement uncertainties, whereas the other methods typically neglect at least one type of uncertainty.
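
The sketch below contrasts the two kinds of objective for a toy scalar linear system: a marginal log posterior computed with a Kalman filter (a simple Gaussian filter, in the spirit of the scheme mentioned above) versus a naive least-squares fit applied directly to the noisy measurements. All model settings are illustrative assumptions, not the paper's experiments.

```python
# Toy system: x_{k+1} = a * x_k + process noise, y_k = x_k + measurement noise.
import numpy as np

rng = np.random.default_rng(0)
a_true, q, r, T = 0.9, 0.01, 0.1, 100

# Simulate data.
x = np.zeros(T)
for k in range(1, T):
    x[k] = a_true * x[k - 1] + np.sqrt(q) * rng.normal()
y = x + np.sqrt(r) * rng.normal(size=T)

def log_marginal_likelihood(a):
    """Kalman-filter log p(y | a), marginalizing out the latent states."""
    m, P, ll = 0.0, 1.0, 0.0
    for k in range(T):
        if k > 0:                      # predict
            m, P = a * m, a * a * P + q
        S = P + r                      # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (y[k] - m) ** 2 / S)
        K = P / S                      # update
        m, P = m + K * (y[k] - m), (1 - K) * P
    return ll

def neg_log_posterior(a):
    # Gaussian prior a ~ N(0, 1); negative log posterior up to a constant.
    return -(log_marginal_likelihood(a) - 0.5 * a ** 2)

def least_squares_objective(a):
    # Naive one-step least squares on the noisy measurements (ignores noise structure).
    return np.sum((y[1:] - a * y[:-1]) ** 2)

grid = np.linspace(0.0, 1.1, 56)
print("posterior argmin:", grid[np.argmin([neg_log_posterior(a) for a in grid])])
print("least-squares argmin:", grid[np.argmin([least_squares_objective(a) for a in grid])])
```

Because the least-squares objective treats the noisy measurements as if they were the true states, its estimate of the dynamics is attenuated, whereas the marginal posterior accounts for the measurement noise explicitly.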


Architecture as a Graph

#artificialintelligence

The Bayesian approach used in this article demonstrates the relevance of stochasticity for the design process. On one hand, statistical inference allows us to model and replicate complicated phenomena, here the complexity found among floorplans. On the other hand, it allows us to generate a wide variety of options that will inspire the creative process. At the same time, we provide evidence for the importance of unpacking design into nested steps and levels of abstraction. We have addressed the underlying structure of floorplans by tackling their adjacency.
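
For readers unfamiliar with the representation, here is a minimal sketch of a floorplan encoded as an adjacency graph with `networkx`; the rooms and edges are invented, and the article's generative Bayesian model over such graphs is far richer than this.

```python
# A floorplan as a graph: rooms become nodes, shared walls/doors become edges.
# The room names and connections below are invented for illustration.
import networkx as nx

floorplan = nx.Graph()
floorplan.add_edges_from([
    ("living_room", "kitchen"),
    ("living_room", "hallway"),
    ("hallway", "bedroom"),
    ("hallway", "bathroom"),
])

# The adjacency matrix is the structure a generative model could place a prior over.
print(nx.to_numpy_array(floorplan, nodelist=sorted(floorplan.nodes)))
```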


A Bayesian Approach to Concept Drift

Neural Information Processing Systems

To cope with concept drift, we placed a probability distribution over the location of the most-recent drift point. We used Bayesian model comparison to update this distribution from the predictions of models trained on blocks of consecutive observations and pruned potential drift points with low probability. We compare our approach to a non-probabilistic method for drift and a probabilistic method for change-point detection. In our experiments, our approach generally yielded improved accuracy and/or speed over these other methods.
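
A hedged sketch of the mechanism described in the abstract, with a conjugate Beta-Bernoulli predictor standing in for the per-block models: a distribution over the most-recent drift point is updated with each model's Bayesian evidence and low-probability candidates are pruned. The hazard rate, prior, and pruning threshold are illustrative assumptions.

```python
# Bayesian tracking of the most-recent drift point on a binary data stream.
import numpy as np

def predictive_prob(block, x, a0=1.0, b0=1.0):
    """Posterior predictive P(x | block) under a Beta(a0, b0)-Bernoulli model."""
    a = a0 + sum(block)
    b = b0 + len(block) - sum(block)
    p1 = a / (a + b)
    return p1 if x == 1 else 1.0 - p1

def update_drift_posterior(data, drift_probs, x_new, hazard=0.05, prune_tol=1e-3):
    """drift_probs[t] = P(most recent drift happened at time t)."""
    new_probs = {}
    for t, p in drift_probs.items():
        # Evidence that x_new comes from the model trained on observations since t.
        new_probs[t] = p * (1 - hazard) * predictive_prob(data[t:], x_new)
    # A fresh drift right now: the new observation starts a new block.
    t_new = len(data)
    new_probs[t_new] = new_probs.get(t_new, 0.0) + \
        sum(drift_probs.values()) * hazard * predictive_prob([], x_new)
    # Normalize, then prune candidates with low probability and renormalize.
    z = sum(new_probs.values())
    new_probs = {t: p / z for t, p in new_probs.items() if p / z > prune_tol}
    z = sum(new_probs.values())
    return {t: p / z for t, p in new_probs.items()}

# Stream with a drift at t = 30: P(x = 1) jumps from 0.2 to 0.8.
rng = np.random.default_rng(0)
stream = list((rng.random(30) < 0.2).astype(int)) + list((rng.random(30) < 0.8).astype(int))

data, drift_probs = [], {0: 1.0}
for x in stream:
    drift_probs = update_drift_posterior(data, drift_probs, x)
    data.append(x)
print("most probable drift point:", max(drift_probs, key=drift_probs.get))
```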


A Bayesian Approach for Policy Learning from Trajectory Preference Queries

Neural Information Processing Systems

We consider the problem of learning control policies via trajectory preference queries to an expert. In particular, the learning agent can present an expert with short runs of a pair of policies originating from the same state and the expert then indicates the preferred trajectory. The agent's goal is to elicit a latent target policy from the expert with as few queries as possible. To tackle this problem we propose a novel Bayesian model of the querying process and introduce two methods that exploit this model to actively select expert queries. Experimental results on four benchmark problems indicate that our model can effectively learn policies from trajectory preference queries and that active query selection can be substantially more efficient than random selection.
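
A minimal sketch of Bayesian learning from pairwise trajectory preferences, not the paper's exact model: the expert is assumed to prefer the trajectory with higher return under latent reward weights (a Bradley-Terry-style likelihood), a posterior is maintained over a grid of candidate weights, and the active strategy queries the pair whose outcome is currently most uncertain.

```python
# Active preference learning over a grid of candidate reward directions.
# Feature dimension, grid, and noise model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.8, -0.6])                      # latent expert preference weights

# Candidate weight vectors (unit directions) with a uniform prior.
angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
candidates = np.stack([np.cos(angles), np.sin(angles)], axis=1)
posterior = np.full(len(candidates), 1.0 / len(candidates))

def pref_prob(w, fa, fb, beta=5.0):
    """Bradley-Terry style probability that trajectory A (features fa) is preferred."""
    return 1.0 / (1.0 + np.exp(-beta * (fa - fb) @ w))

for query in range(20):
    # Pool of random candidate queries: each is a pair of trajectory feature vectors.
    pool = rng.normal(size=(50, 2, 2))
    # Active selection: pick the pair whose predicted preference is most uncertain.
    preds = np.array([posterior @ pref_prob(candidates.T, fa, fb) for fa, fb in pool])
    fa, fb = pool[np.argmin(np.abs(preds - 0.5))]
    # Simulated (noisy) expert answer, then Bayesian update of the posterior.
    a_preferred = rng.random() < pref_prob(w_true, fa, fb)
    like = pref_prob(candidates.T, fa, fb)
    posterior *= like if a_preferred else 1.0 - like
    posterior /= posterior.sum()

best = candidates[np.argmax(posterior)]
print("estimated direction:", best, "true direction:", w_true)
```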


A Bayesian Approach to Generative Adversarial Imitation Learning

Neural Information Processing Systems

Generative adversarial training for imitation learning has shown promising results on high-dimensional and continuous control tasks. This paradigm is based on reducing the imitation learning problem to a density matching problem, where the agent iteratively refines the policy to match the empirical state-action visitation frequency of the expert demonstration. Although this approach has been shown to robustly learn to imitate even from scarce demonstrations, one must still address the inherent challenge that collecting trajectory samples in each iteration is a costly operation. To address this issue, we first propose a Bayesian formulation of generative adversarial imitation learning (GAIL), where the imitation policy and the cost function are represented as stochastic neural networks. Then, we show, on an extensive set of imitation learning tasks with high-dimensional states and actions, that we can significantly enhance the sample efficiency of GAIL by leveraging the predictive density of the cost.
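
To illustrate the key ingredient, the sketch below obtains a predictive density over the GAIL cost with a small bootstrap ensemble of logistic "cost" models over state-action features; this is an assumed stand-in for the stochastic neural networks in the paper, and plugging the resulting mean and uncertainty into the policy update (not shown) is where the claimed sample-efficiency gain would come from.

```python
# Predictive density over a discriminator-style cost via a bootstrap ensemble.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                               # state-action feature dimension

# Fake expert vs. agent state-action features for the discriminator to separate.
expert = rng.normal(loc=1.0, size=(200, d))
agent = rng.normal(loc=-1.0, size=(200, d))
X = np.vstack([expert, agent])
y = np.concatenate([np.ones(200), np.zeros(200)])   # 1 = expert, 0 = agent

def fit_logistic(X, y, steps=500, lr=0.1):
    """Plain gradient-ascent logistic regression (the 'cost' model)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Bootstrap ensemble approximates a posterior over cost parameters.
ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(fit_logistic(X[idx], y[idx]))

def cost_predictive(sa):
    """Predictive mean and std of a cost c(s, a) = -log D(s, a) under the ensemble
    (one common GAIL convention for turning discriminator output into a cost)."""
    probs = np.array([1.0 / (1.0 + np.exp(-sa @ w)) for w in ensemble])
    costs = -np.log(np.clip(probs, 1e-8, 1.0))
    return costs.mean(), costs.std()

mean_c, std_c = cost_predictive(rng.normal(size=d))
print(f"predictive cost: {mean_c:.3f} +/- {std_c:.3f}")
```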