Bayesian Inference

Machine Learning Trick of the Day (7): Density Ratio Trick


A probability on its own is often an uninteresting thing. But when we can compare probabilities, that is when their full splendour is revealed. By comparing probabilities we are able to form judgements; by comparing probabilities we can exploit the elements of our world that are probable; by comparing probabilities we can see the value of objects that are rare. In their own ways, all machine learning tricks help us make better probabilistic comparisons. Comparison, a theme not discussed in this series before, is the subject of this post and the right start to this second sprint of machine learning tricks.
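The density ratio trick of the title can be sketched with a probabilistic classifier: train a discriminator to tell samples from a density p apart from samples from a density q, and its odds D(x)/(1 - D(x)) estimate the ratio p(x)/q(x). A minimal numpy sketch, assuming two Gaussians as p and q (the distributions and the hand-rolled logistic regression are illustrative choices, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Equal-sized samples from the two densities we want to compare:
# p = N(0, 1) (label 1) and q = N(1, 1) (label 0).
xp = rng.normal(0.0, 1.0, 5000)
xq = rng.normal(1.0, 1.0, 5000)

x = np.concatenate([xp, xq])
y = np.concatenate([np.ones_like(xp), np.zeros_like(xq)])

# Logistic regression D(x) = sigmoid(w*x + b), fit by gradient descent
# on the cross-entropy loss.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    d = 1.0 / (1.0 + np.exp(-(w * x + b)))  # P(label = 1 | x)
    w -= lr * np.mean((d - y) * x)
    b -= lr * np.mean(d - y)

def density_ratio(x0):
    """Estimate p(x0)/q(x0) as D/(1-D): the density ratio trick."""
    d = 1.0 / (1.0 + np.exp(-(w * x0 + b)))
    return d / (1.0 - d)

# For these Gaussians the true ratio at x=0 is exp(0.5), about 1.65,
# and points nearer q's mean should give ratios below 1.
print(density_ratio(0.0), density_ratio(2.0))
```

For equal-variance Gaussians the log ratio is linear in x, so this classifier is well specified; in general the same recipe works with any discriminator that outputs calibrated probabilities.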

The 10 Algorithms Machine Learning Engineers Need to Know


There is no doubt that the field of machine learning / artificial intelligence has gained popularity in the past couple of years. With Big Data the hottest trend in the tech industry at the moment, machine learning is incredibly powerful for making predictions or calculated suggestions from large amounts of data. Some of the most common examples are Netflix's algorithms, which suggest movies based on ones you have watched in the past, and Amazon's algorithms, which recommend books based on ones you have bought before. So if you want to learn more about machine learning, how do you start? For me, the first introduction came when I took an Artificial Intelligence class while studying abroad in Copenhagen.

Bayesian Methods for Machine Learning Coursera


About this course: Bayesian methods are used in many fields, from game development to drug discovery. They give superpowers to many machine learning algorithms: handling missing data, extracting much more information from small datasets. Bayesian methods also allow us to estimate uncertainty in predictions, which is a highly desirable feature in fields like medicine. When Bayesian methods are applied to deep learning, it turns out that they allow you to compress your models 100-fold and automatically tune hyperparameters, saving you time and money. Over six weeks we will discuss the basics of Bayesian methods: from how to define a probabilistic model to how to make predictions from it.

Probability concepts explained: Maximum likelihood estimation


In this post I'll explain what the maximum likelihood method for parameter estimation is and go through a simple example to demonstrate it. Some of the content requires knowledge of fundamental probability concepts, such as the definition of joint probability and independence of events. I've written a blog post covering these prerequisites, so feel free to read it if you think you need a refresher. Often in machine learning we use a model to describe the process that generates the observed data. For example, we may use a random forest model to classify whether customers will cancel a subscription to a service (known as churn modelling), or we may use a linear model to predict the revenue a company will generate depending on how much it spends on advertising (an example of linear regression).
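The method described above can be sketched for the simplest case: picking the parameter value that maximizes the probability of the observed data. A toy numpy example (the Gaussian model, the data, and the grid search are illustrative assumptions, not taken from the post), exploiting independence to turn the joint probability into a sum of log densities:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed data, assumed to come from a Gaussian with unknown mean.
data = rng.normal(5.0, 2.0, 1000)

def log_likelihood(mu, sigma, x):
    """Log joint probability of the data under N(mu, sigma^2);
    independence turns the product of densities into a sum of logs."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# Grid search over candidate means. The maximizer should match the
# closed-form MLE, which for a Gaussian mean is the sample average.
mus = np.linspace(3.0, 7.0, 401)
ll = [log_likelihood(m, 2.0, data) for m in mus]
mu_hat = mus[int(np.argmax(ll))]

print(mu_hat, data.mean())
```

The grid search is only for illustration; in practice the maximum is found analytically (set the derivative of the log likelihood to zero) or with a numerical optimizer.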

The 2002 Trading Agent Competition

AI Magazine

This article summarizes 16 agent strategies that were designed for the 2002 Trading Agent Competition. Agent architects use numerous general-purpose AI techniques, including machine learning, planning, partially observable Markov decision processes, Monte Carlo simulations, and multiagent systems. Ultimately, the most successful agents were primarily heuristic based and domain specific. It would be quite a daunting task to manually monitor prices and make bidding decisions at all web sites currently offering the camera--especially if accessories such as a flash and a tripod are sometimes bundled with the camera and sometimes auctioned separately. However, for the next generation of trading agents, autonomous bidding in simultaneous auctions will be a routine task.

Thinking Backward for Knowledge Acquisition

AI Magazine

This article examines the direction in which knowledge bases are constructed for diagnosis and decision making. When building an expert system, it is traditional to elicit knowledge from an expert in the direction in which the knowledge is to be applied, namely, from observable evidence toward unobservable hypotheses. However, experts usually find it simpler to reason in the opposite direction, from hypotheses to observable evidence, because this direction reflects causal relationships. Therefore, we argue that a knowledge base be constructed following the expert's natural reasoning direction, and then the direction be reversed for use. This choice of representation direction facilitates knowledge acquisition in deterministic domains and is essential when a problem involves uncertainty. We illustrate this concept with influence diagrams, a methodology for graphically representing a joint probability distribution. Influence diagrams provide a practical means by which an expert can characterize the qualitative and quantitative relationships among evidence and hypotheses in the appropriate direction. Once constructed, the relationships can easily be reversed into the less intuitive direction in order to perform inference and diagnosis. In this way, knowledge acquisition is made cognitively simple; the machine carries the burden of translating the representation. "OK," we replied, "if the tiger were present, what is the probability that you would see that image? On the other hand, if the tiger were not present, what is the probability you would see it?" Before we could say, "What is the probability there is a tiger in the first place?" Since then, we have pondered this question: why is it that we want to look at problems of evidential reasoning backward?
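The reversal the authors describe is, at its core, Bayes' rule: elicit the probabilities in the causal direction (hypothesis to evidence) and a prior, then invert them for diagnosis (evidence to hypothesis). A small sketch of the tiger example, with made-up numbers (the prior and likelihoods here are illustrative assumptions, not from the article):

```python
# Elicited in the expert's natural, causal direction (hypothesis -> evidence):
p_image_given_tiger = 0.8      # P(see that image | tiger present), assumed
p_image_given_no_tiger = 0.05  # P(see that image | no tiger), assumed
p_tiger = 0.01                 # prior P(tiger present), assumed

# Reversed with Bayes' rule into the diagnostic direction
# (evidence -> hypothesis), the machine's burden:
p_image = (p_image_given_tiger * p_tiger
           + p_image_given_no_tiger * (1 - p_tiger))
p_tiger_given_image = p_image_given_tiger * p_tiger / p_image

print(p_tiger_given_image)  # posterior probability of a tiger, about 0.14
```

Note how the rare prior dominates: even strong evidence leaves the posterior well below the 0.8 likelihood the expert quoted, which is exactly why the prior question matters.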

PAGODA: A Model for

AI Magazine

The system consists of an overall agent architecture and five components within it. The five components are (1) goal-directed learning (GDL), a decision-theoretic method for selecting learning goals; (2) probabilistic bias evaluation (PBE), a technique for using probabilistic background knowledge to select learning biases for the learning goals; (3) uniquely predictive theories (UPTs) and probability computation using independence (PCI), a probabilistic representation and Bayesian inference method for the agent's theories; (4) a probabilistic learning component, consisting of a heuristic search algorithm and a Bayesian method for evaluating proposed theories; and (5) a decision-theoretic probabilistic planner, which searches through the probability space defined by the agent's current theory to select the best action. An autonomous agent must be able to select biases (Mitchell 1980) for new learning tasks as they arise. PBE uses probabilistic background knowledge and a model of the system's expected learning performance to compute the expected value of learning biases for each learning goal. The resulting expected discounted future accuracy is used as the expected value of the bias.

Review of Artificial Intelligence and Mobile Robotics: Case Studies of Successful Robot Systems

AI Magazine

Today, mobile robotics is an increasingly important bridge between the two areas. It is advancing the theory and practice of cooperative cognition, perception, and action and serving to reunite planning techniques with sensing and real-world performance. Further, developments in mobile robotics can have important practical economic and military consequences. For some time now, amateurs, hobbyists, students, and researchers have had access to how-to books on the low-level mechanical and electronic aspects of mobile-robot construction (Everett 1995; McComb 1987). The famous Massachusetts Institute of Technology (MIT) 6.270 robot-building course has contributed course notes and hardware kits that are now available commercially and in the form of an influential book (Jones 1998; Jones and Flynn 1993).

Cognitive Robotics

AI Magazine

The American Association for Artificial Intelligence (AAAI) held its 1998 Fall Symposium Series on 23 to 25 October at the Omni Rosen Hotel in Orlando, Florida. This article contains summaries of seven of the symposia that were conducted: (1) Cognitive Robotics; (2) Distributed, Continual Planning; (3) Emotional and Intelligent: The Tangled Knot of Cognition; (4) Integrated Planning for Autonomous Agent Architectures; (5) Planning with Partially Observable Markov Decision Processes; (6) Reasoning with Visual and Diagrammatic Representations; and (7) Robotics and Biology: Developing Connections. Research in cognitive robotics is concerned with the theory and implementation of robots that reason, act, and perceive in changing, incompletely known, unpredictable environments. Such robots must have higher-level cognitive functions that involve, for example, reasoning about goals, actions, the cognitive states of other agents, and time, as well as when to perceive and what to look for.

The Multi-Agent Programming Contest

AI Magazine

It was started in 2005 and is an annual event that attracts between 5 and 10 teams. It has since been organized by the AI group at Clausthal University of Technology. MAPC is not collocated with any other event. Using our MASSim platform, the participants run their own systems locally and interact with the tournament server only over the Internet. A steering committee oversees the whole process and determines the organization committee. The scenario changes every other year; the current one is "Agents on Mars." The goal is to implement a team of heterogeneous, cooperating agents that occupy zones on planet Mars. The infrastructure on Mars is given by a directed graph (300 nodes). Agents can take on roles (explorer, sentinel, saboteur, repairer, inspector) and need to cooperate in an environment with incomplete knowledge so as to win against a competing team: the graph is not known, and each action comes at a price. Conquered terrain brings in money to improve the agents.