Uncertainty


ANZ OnePath using AI and fuzzy logic to avoid 'the dreaded other'

#artificialintelligence

Applying for life insurance is a long and often frustrating process. Thousands of questions on seemingly every medical condition ever suffered – except yours. "We've had multiple occurrences where people answer no to all the [medical] questions, then they come to the 'other' box at the end and they'll go – 'oh yeah I've had X'. And that question is actually back there, but they didn't understand it so they defaulted to 'other' and started writing chapter and verse about their medical condition," explains ANZ OnePath's chief underwriter Peter Tilocca. Whenever answers are given in free form, the application will typically require the scrutiny of an underwriter.


Keynotes – BNAIC/BENELEARN 2018

#artificialintelligence

Information-rich representations of text often decrease sample complexity when a natural language processing (NLP) system is trained on a task. One effective way of producing such representations is the traditional NLP pipeline: tokenization, tagging, parsing, etc. An alternative is so-called embeddings, which represent text in a high-dimensional real-valued space that is smooth and thereby supports generalization. Most commonly, words are represented as embeddings, but more recently contextualized embeddings like ELMo have been proposed. I will address two challenges for embeddings in this talk.
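A toy sketch of the contrast (illustrative vectors only, not from the talk): one-hot representations make all distinct words equally dissimilar, while dense embeddings place related words near each other, which is what supports generalization.

# Toy comparison of one-hot vectors vs. dense embeddings.
# The embedding values below are invented for illustration.
import numpy as np

vocab = {"king": 0, "queen": 1, "banana": 2}

# One-hot vectors: every pair of distinct words is equally dissimilar.
one_hot = np.eye(len(vocab))

# Toy dense embeddings: related words sit close together in the space.
embeddings = np.array([
    [0.90, 0.80, 0.10],   # king
    [0.85, 0.82, 0.12],   # queen
    [0.10, 0.05, 0.95],   # banana
])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(one_hot[0], one_hot[1]))        # 0.0 -- no generalization possible
print(cosine(embeddings[0], embeddings[1]))  # near 1 -- smooth, related words close
print(cosine(embeddings[0], embeddings[2]))  # small -- unrelated words far apart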


When Bayes, Ockham, and Shannon come together to define machine learning

#artificialintelligence

It is somewhat surprising that among all the high-flying buzzwords of machine learning, we don't hear much about the one phrase which fuses some of the core concepts of statistical learning, information theory, and natural philosophy into a single three-word combo. Moreover, it is not just an obscure and pedantic phrase meant for machine learning (ML) Ph.D.s and theoreticians. It has a precise and easily accessible meaning for anyone interested in exploring it, and a practical pay-off for practitioners of ML and data science. I am talking about Minimum Description Length. Let's peel the layers off and see how useful it is… We start (not chronologically) with Reverend Thomas Bayes, who, by the way, never published his idea about how to do statistical inference, but was later immortalized by the eponymous theorem.
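For readers who want the punchline spelled out, here is the standard textbook form of the connection the article builds up to (general MDL identities, not the author's exact notation):

% Bayes: posterior is proportional to prior times likelihood
P(H \mid D) \propto P(H)\, P(D \mid H)

% Take negative logs: maximizing the posterior = minimizing a total code length
-\log P(H \mid D) = \underbrace{-\log P(H)}_{\text{model cost (Ockham)}}
                  + \underbrace{-\log P(D \mid H)}_{\text{data cost (Shannon)}} + \text{const.}

% Minimum Description Length: pick the hypothesis with the shortest total code
H^{*} = \arg\min_{H} \left[ L(H) + L(D \mid H) \right]

Taking negative logarithms of Bayes' rule turns the prior into a model-complexity cost (Ockham's razor) and the likelihood into a data-encoding cost (Shannon's code length); minimizing their sum is exactly the Minimum Description Length principle.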


Learning under Misspecified Objective Spaces

arXiv.org Artificial Intelligence

Learning robot objective functions from human input has become increasingly important, but state-of-the-art techniques assume that the human's desired objective lies within the robot's hypothesis space. When this is not true, even methods that keep track of uncertainty over the objective fail because they reason about which hypothesis might be correct, and not whether any of the hypotheses are correct. We focus specifically on learning from physical human corrections during the robot's task execution, where not having a rich enough hypothesis space leads to the robot updating its objective in ways that the person did not actually intend. We observe that such corrections appear irrelevant to the robot, because they are not the best way of achieving any of the candidate objectives. Instead of naively trusting and learning from every human interaction, we propose robots learn conservatively by reasoning in real time about how relevant the human's correction is for the robot's hypothesis space. We test our inference method in an experiment with human interaction data, and demonstrate that this alleviates unintended learning in an in-person user study with a 7DoF robot manipulator.
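A hedged sketch of the core idea as described in the abstract (the function names, the Boltzmann-style likelihood, and the threshold are illustrative assumptions, not the authors' code): estimate how well the best candidate objective explains a correction, and skip the belief update when no hypothesis explains it well.

# Sketch: conservative learning from corrections under possible misspecification.
import numpy as np

def correction_likelihood(correction, objective_weights, features):
    """Boltzmann-style likelihood of a correction under one objective."""
    return np.exp(objective_weights @ features(correction))

def relevance(correction, hypotheses, candidate_corrections, features):
    """How well the best hypothesis explains this correction, relative to
    the alternative corrections the human could have given instead."""
    scores = []
    for w in hypotheses:
        num = correction_likelihood(correction, w, features)
        den = sum(correction_likelihood(c, w, features)
                  for c in candidate_corrections)
        scores.append(num / den)
    return max(scores)  # low for every hypothesis => likely misspecified

def conservative_update(belief, correction, hypotheses,
                        candidate_corrections, features, threshold=0.1):
    if relevance(correction, hypotheses, candidate_corrections, features) < threshold:
        return belief  # ignore corrections no hypothesis can explain
    likelihoods = np.array([
        correction_likelihood(correction, w, features) for w in hypotheses])
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Toy demo: 2-D corrections, two candidate objectives.
feats = lambda c: np.asarray(c)
hyps = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
cands = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]
belief = np.array([0.5, 0.5])
print(conservative_update(belief, (1.0, 0.0), hyps, cands, feats))    # updates
print(conservative_update(belief, (-1.0, -1.0), hyps, cands, feats))  # ignored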


Toward Human-Understandable, Explainable AI

IEEE Computer

Recent increases in computing power, coupled with rapid growth in the availability and quantity of data, have rekindled our interest in the theory and applications of artificial intelligence (AI). However, for AI to be confidently rolled out by industries and governments, users want greater transparency through explainable AI (XAI) systems. The author introduces XAI concepts and gives an overview of areas in need of further exploration, such as type-2 fuzzy logic systems, to ensure such systems can be fully understood and analyzed by the lay user.


Compositional planning in Markov decision processes: Temporal abstraction meets generalized logic composition

arXiv.org Artificial Intelligence

Abstract: In hierarchical planning for Markov decision processes (MDPs), temporal abstraction allows planning with macro-actions that take place at different time scales in the form of sequential composition. In this paper, we propose a novel approach to compositional reasoning and hierarchical planning for MDPs under temporal logic constraints. In addition to sequential composition, we introduce a composition of policies based on generalized logic composition: given sub-policies for sub-tasks and a new task expressed as a logic composition of sub-tasks, a semi-optimal policy, which is optimal in planning with only sub-policies, can be obtained by simply composing sub-policies. Thus, a synthesis algorithm is developed to compute optimal policies efficiently by planning with primitive actions, policies for sub-tasks, and the compositions of sub-policies, for maximizing the probability of satisfying temporal logic specifications. We demonstrate the correctness and efficiency of the proposed method in stochastic planning examples with a single agent and multiple task specifications. I. INTRODUCTION Temporal logic is an expressive language for describing desired system properties: safety, reachability, obligation, stability, and liveness [18]. Algorithms for planning and probabilistic verification with temporal logic constraints have been developed, with both centralized [2], [7], [17] and distributed methods [10]. Yet, there are two main barriers to practical applications: 1) The issue of scalability: in temporal logic constrained control problems, it is often necessary to introduce additional memory states for keeping track of the evolution of state variables with respect to these temporal logic constraints. The number of additional memory states grows exponentially (or doubly exponentially, depending on the class of temporal logic) in the length of a specification [11], making synthesis computationally expensive.
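As a rough illustration of the disjunctive case (my assumptions, not the paper's algorithm): given optimal sub-policies and their value functions for two reachability sub-tasks, a semi-optimal policy for "sub-task 1 OR sub-task 2" can be obtained by following whichever sub-policy promises the higher value from the current state.

# Sketch: composing sub-policies for a disjunction of reachability tasks.
import numpy as np

def compose_or(value_fns, policies):
    """value_fns: list of arrays V[s]; policies: list of arrays pi[s].
    Acting greedily w.r.t. max(V1, V2) only ever follows one sub-policy,
    hence 'semi-optimal': optimal among plans restricted to sub-policies."""
    V = np.stack(value_fns)           # shape (num_tasks, num_states)
    best_task = V.argmax(axis=0)      # easiest sub-task from each state
    composed_policy = np.choose(best_task, np.stack(policies))
    composed_value = V.max(axis=0)    # value bound for the disjunctive task
    return composed_policy, composed_value

# Example with 4 states and two sub-tasks (numbers invented):
V1 = np.array([0.2, 0.9, 0.1, 0.0])
V2 = np.array([0.5, 0.1, 0.8, 0.0])
pi1 = np.array([0, 1, 0, 1])
pi2 = np.array([1, 0, 1, 0])
print(compose_or([V1, V2], [pi1, pi2]))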


Probabilistic Meta-Representations Of Neural Networks

arXiv.org Artificial Intelligence

Existing Bayesian treatments of neural networks are typically characterized by weak prior and approximate posterior distributions according to which all the weights are drawn independently. Here, we consider a richer prior distribution in which units in the network are represented by latent variables, and the weights between units are drawn conditionally on the values of the collection of those variables. This allows rich correlations between related weights, and can be seen as realizing a function prior with a Bayesian complexity regularizer ensuring simple solutions. We illustrate the resulting meta-representations and representations, elucidating the power of this prior.
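A toy sketch of that conditional structure (the sizes, the dot-product mean, and the Gaussian noise are illustrative assumptions; the paper would use a learned meta-model in place of the dot product):

# Sketch: weights drawn conditionally on per-unit latent variables,
# inducing correlations between weights that share a unit.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, latent_dim = 4, 3, 2

# One latent code per unit.
z_in = rng.normal(size=(n_in, latent_dim))
z_out = rng.normal(size=(n_out, latent_dim))

# The mean of each weight is a function of the two unit codes; weights
# are conditionally independent given the latents, but marginally correlated.
mean = z_in @ z_out.T                       # shape (n_in, n_out)
weights = rng.normal(loc=mean, scale=0.1)

print(weights.round(2))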


Counterfactually Fair Prediction Using Multiple Causal Models

arXiv.org Artificial Intelligence

In this paper we study the problem of making predictions using multiple structural causal models defined by different agents, under the constraint that the prediction satisfies the criterion of counterfactual fairness. Relying on the frameworks of causality, fairness and opinion pooling, we build upon and extend previous work focusing on the qualitative aggregation of causal Bayesian networks and causal models. In order to complement previous qualitative results, we devise a method based on Monte Carlo simulations. This method enables a decision-maker to aggregate the outputs of the causal models provided by different experts while guaranteeing the counterfactual fairness of the result. We demonstrate our approach on a simple, yet illustrative, toy case study.
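A minimal sketch of the Monte Carlo pooling step in isolation (the placeholder models and linear-pool weights are my assumptions; the paper's counterfactual fairness guarantee is not reproduced here):

# Sketch: Monte Carlo linear opinion pooling over experts' causal models.
import numpy as np

rng = np.random.default_rng(1)

def expert_a(x):  # placeholder for expert A's structural causal model
    return 0.7 * x + rng.normal(scale=0.1)

def expert_b(x):  # placeholder for expert B's structural causal model
    return 0.5 * x + 0.2 + rng.normal(scale=0.1)

def pooled_prediction(x, experts, weights, n_samples=10_000):
    """Average each expert's simulated output, then take a weighted pool."""
    draws = np.array([[m(x) for m in experts] for _ in range(n_samples)])
    return float(draws.mean(axis=0) @ weights)  # linear opinion pool

print(pooled_prediction(1.0, [expert_a, expert_b], np.array([0.6, 0.4])))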


Fuzzy logic makes a comeback – in picking where Earth sticks its probes into alien worlds

#artificialintelligence

MIT boffins reckon they can use old-school artificial intelligence to do much of the grunt work in the tricky task of picking suitable landing spots for spacecraft. The software uses fuzzy logic algorithms, which were introduced in the 1960s and were rather trendy in the 1990s. "Traditionally this idea comes from mathematics, where instead of saying an element belongs to a set, yes or no, fuzzy logic says it belongs with a certain probability, thus reflecting incomplete or imprecise information," Victor Pankratius, co-author of the paper and a research scientist and principal investigator on NASA and National Science Foundation projects at MIT, explained this week. NASA and other space agencies have slowly amassed troves of geographical data on Mars. The researchers reckon that NASA has over 100 terabits from all the different orbiters, landers, and rovers sent to the Red Planet, but it's still not enough to completely determine the exact conditions on the ground there.
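A tiny illustration of the fuzzy-set idea from the quote (toy membership functions and numbers, nothing to do with the MIT tool itself): each terrain property gets a membership degree in [0, 1] rather than a crisp yes/no, and a landing-site score combines them with a fuzzy AND.

# Sketch: graded set membership and a fuzzy AND for landing-site suitability.
def triangular(x, lo, peak, hi):
    """Triangular membership function: 0 outside [lo, hi], 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def landing_suitability(slope_deg, roughness):
    flat_enough = triangular(slope_deg, -1.0, 0.0, 10.0)
    smooth_enough = triangular(roughness, -0.1, 0.0, 0.5)
    return min(flat_enough, smooth_enough)  # fuzzy AND (min operator)

print(landing_suitability(slope_deg=2.0, roughness=0.10))   # 0.8 -- good site
print(landing_suitability(slope_deg=9.0, roughness=0.45))   # 0.1 -- poor site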


How to Optimise Ad CTR with Reinforcement Learning – Codementor

#artificialintelligence

In this blog we will try to get the basic idea behind reinforcement learning and understand what a multi-armed bandit problem is. We will also try to maximise the CTR (click-through rate) of advertisements for an advertising agency. The article covers:
1. Basics of reinforcement learning
2. Types of problems in reinforcement learning
3. Understanding the multi-armed bandit problem
4. Basics of conditional probability and Thompson sampling
5. Optimising ad CTR using Thompson sampling in R
Reinforcement Learning Basics: Reinforcement learning refers to goal-oriented algorithms, which learn how to attain a complex objective (goal) or maximise along a particular dimension over many steps; for example, maximise the points won in a game over many moves. They can start from a blank slate and, under the right conditions, achieve superhuman performance. Like a child incentivized by spankings and candy, these algorithms are penalized when they make the wrong decisions and rewarded when they make the right ones: this is reinforcement.
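The article's implementation is in R; as a rough equivalent, here is a minimal Python sketch of Thompson sampling on simulated click data (the ad CTRs below are invented purely for the simulation):

# Sketch: Thompson sampling for ad selection with Beta-Bernoulli posteriors.
import numpy as np

rng = np.random.default_rng(42)
true_ctr = [0.05, 0.13, 0.09]   # unknown in practice; used here only to simulate clicks
n_ads = len(true_ctr)
wins = np.zeros(n_ads)          # clicks observed per ad
losses = np.zeros(n_ads)        # non-clicks observed per ad

for _ in range(10_000):
    # Sample a plausible CTR for each ad from its Beta posterior...
    samples = rng.beta(wins + 1, losses + 1)
    ad = int(samples.argmax())  # ...and show the ad that currently looks best
    clicked = rng.random() < true_ctr[ad]
    wins[ad] += clicked
    losses[ad] += not clicked

print("plays per ad:", (wins + losses).astype(int))
print("posterior mean CTRs:", ((wins + 1) / (wins + losses + 2)).round(3))

Each ad's CTR gets a Beta posterior; sampling from the posteriors and playing the argmax naturally balances exploration against exploitation, so plays concentrate on the best ad as evidence accumulates.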