Bayesian Inference


@Radiology_AI

#artificialintelligence

Interpretable, highly accurate segmentation models have the potential to provide substantial benefit for automated clinical workflows. Estimating the uncertainty in a model's prediction (predictive uncertainty) can help clinicians quantify, visualize, and communicate model performance. Variational inference, Monte Carlo dropout, and ensembles are reliable methods for estimating predictive uncertainty. Interpretable artificial intelligence is key to the clinical translation of this technology. Artificial intelligence (AI) has seen a resurgence in popularity since the development of deep learning (DL), a method for learning representations within data at multiple levels of abstraction (1). DL frameworks have been widely successful in a variety of applications, including image object recognition and detection tasks, and there is particular interest in applying this technology to interpret complex medical images (2). Because modern DL frameworks are structured as multiple hidden layers of network weights, these networks are commonly described as black-box models.
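The sampling-based uncertainty estimates mentioned above (Monte Carlo dropout in particular) amount to running many stochastic forward passes and reading off the spread of the predictions. The sketch below is a minimal illustration on a toy linear model; the weights, dropout rate, and function names are assumptions for the example, not the article's implementation.

```python
import random
import statistics

def forward_pass(x, weights, p_drop=0.5, rng=random):
    """One stochastic forward pass: each weight is dropped with
    probability p_drop and survivors are rescaled (inverted dropout)."""
    total = 0.0
    for w in weights:
        if rng.random() >= p_drop:
            total += (w / (1.0 - p_drop)) * x
    return total

def mc_dropout_predict(x, weights, n_samples=1000, seed=0):
    """Monte Carlo dropout: keep dropout active at inference time, run
    many passes, and report the sample mean (prediction) and sample
    standard deviation (predictive uncertainty)."""
    rng = random.Random(seed)
    samples = [forward_pass(x, weights, rng=rng) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, std = mc_dropout_predict(1.0, [0.2, -0.1, 0.4])
```

A deep ensemble would follow the same recipe, except the samples would come from independently trained networks rather than from dropout masks.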



Parametric Bayesian Inference: Implementation of Numerical Sampling Techniques with Proofs

#artificialintelligence

Parametric Bayesian Inference: Implementation of Numerical Sampling Techniques with Proofs. Acceptance/Rejection Sampling & MCMC Metropolis-Hastings Sampling with full Computational Simulation.
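The two sampling techniques named in the title can each be sketched in a few lines. The code below targets an unnormalized standard normal density; the proposal distributions, step size, and envelope are arbitrary choices for the illustration, not the article's full simulation.

```python
import math
import random

def target_pdf(x):
    """Unnormalized standard normal density."""
    return math.exp(-0.5 * x * x)

def rejection_sample(n, rng):
    """Acceptance/rejection: propose from Uniform(-5, 5) and accept with
    probability target(x) / envelope(x); a constant envelope of height 1
    dominates the target everywhere on the interval."""
    out = []
    while len(out) < n:
        x = rng.uniform(-5.0, 5.0)
        if rng.random() < target_pdf(x):
            out.append(x)
    return out

def metropolis_hastings(n, rng, step=1.0):
    """Random-walk Metropolis-Hastings: symmetric Gaussian proposal,
    accept with probability min(1, target(x') / target(x))."""
    x, chain = 0.0, []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)
        if rng.random() < target_pdf(proposal) / target_pdf(x):
            x = proposal
        chain.append(x)
    return chain

rng = random.Random(42)
mh_samples = metropolis_hastings(20000, rng)
rs_samples = rejection_sample(2000, rng)
```

Both chains should recover a mean near 0 and a variance near 1; the Metropolis-Hastings version only ever needs the target up to a normalizing constant, which is the point of the method.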


Naive Bayes Algorithm

#artificialintelligence

Have you ever noticed emails being categorized into different buckets and automatically being marked as important, spam, promotions, etc? And if you have, has it really piqued your curiosity as to…
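The email-sorting behaviour described above is classically implemented with a multinomial naive Bayes classifier: pick the class maximizing log P(class) plus the sum of per-word log-likelihoods. The toy corpus and token lists below are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(docs):
    """docs: list of (tokens, label). Returns log-priors and per-class
    word log-likelihoods with Laplace (add-one) smoothing."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    priors = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    likelihoods = {}
    for c in class_counts:
        total = sum(word_counts[c].values()) + len(vocab)
        likelihoods[c] = {w: math.log((word_counts[c][w] + 1) / total)
                          for w in vocab}
    return priors, likelihoods, vocab

def classify(tokens, priors, likelihoods, vocab):
    """Maximize log P(c) + sum of log P(w | c) over known words."""
    scores = {c: priors[c] + sum(likelihoods[c][w] for w in tokens if w in vocab)
              for c in priors}
    return max(scores, key=scores.get)

train = [
    (["win", "money", "now"], "spam"),
    (["free", "money", "offer"], "spam"),
    (["meeting", "schedule", "today"], "ham"),
    (["project", "status", "meeting"], "ham"),
]
priors, likelihoods, vocab = train_naive_bayes(train)
```

The "naive" part is the conditional-independence assumption between words given the class, which is what lets the score factor into a simple sum.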


Machine Learning Algorithm and Deep Learning

#artificialintelligence

There are four main types of machine learning algorithm, and the choice of algorithm depends on the data type in the use case. Linear regression is an equation describing a line that represents the relationship between input (x) and output (y) variables, found by estimating specific weights for the input variables, called coefficients (b). Predictive modelling is primarily concerned with minimizing system error and making the most accurate predictions possible, sometimes at the expense of explainability. A decision tree is a graphical representation of all possible solutions to a decision based on a set of conditions; it uses predictive models to reach results, is drawn upside down with its root at the top, and splits into branches at each condition (internal node). The end of a branch that does not split further is a decision leaf.
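The linear regression described above, with coefficients (b) relating input x to output y, can be written as a short ordinary-least-squares fit. The sample data are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x:
    b1 = cov(x, y) / var(x), b0 = mean(y) - b1 * mean(x)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

# Data lying exactly on y = 1 + 2x, so the fit recovers b0 = 1, b1 = 2.
b0, b1 = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

Once b0 and b1 are found, prediction is just evaluating the line at a new x, which is why linear regression is the standard example of a model that trades flexibility for explainability.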


Logical Credal Networks

arXiv.org Artificial Intelligence

This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability. Given imprecise information represented by probability bounds and conditional probability bounds of logic formulas, this logic specifies a set of probability distributions over all interpretations. On the one hand, our approach allows propositional and first-order logic formulas with few restrictions, e.g., without requiring acyclicity. On the other hand, it has a Markov condition similar to Bayesian networks and Markov random fields that is critical in real-world applications. Having both these properties makes this logic unique, and we investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud. The results show that the proposed method outperforms existing approaches, and its advantage lies in aggregating multiple sources of imprecise information.


A survey of Bayesian Network structure learning

arXiv.org Artificial Intelligence

Bayesian Networks (BNs) have become increasingly popular over the last few decades as a tool for reasoning under uncertainty in fields as diverse as medicine, biology, epidemiology, economics and the social sciences. This is especially true in real-world settings where we seek to answer complex questions based on hypothetical evidence in order to determine actions for intervention. However, determining the graphical structure of a BN remains a major challenge, especially when modelling a problem under causal assumptions. Solutions to this problem include the automated discovery of BN graphs from data, constructing them from expert knowledge, or a combination of the two. This paper provides a comprehensive review of combinatorial algorithms proposed for learning BN structure from data, describing 61 algorithms including prototypical, well-established and state-of-the-art approaches. The basic approach of each algorithm is described in consistent terms, and the similarities and differences between them are highlighted. Methods of evaluating algorithms and their comparative performance are discussed, including the consistency of claims made in the literature. Approaches for dealing with data noise in real-world datasets and for incorporating expert knowledge into the learning process are also covered.
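One of the algorithm families such a survey covers is score-based structure learning, where candidate graphs are ranked by a penalized likelihood such as BIC. The sketch below compares two candidate structures over two binary variables on synthetic data; the variable names, smoothing, and data-generating process are assumptions for the example, not drawn from the survey.

```python
import math
import random

def bic_score(data, structure):
    """BIC-style score: log-likelihood minus (k/2) * log(N), where k
    counts one Bernoulli parameter per parent configuration.
    structure maps each variable to its list of parents."""
    n = len(data)
    score, n_params = 0.0, 0
    for var, parents in structure.items():
        n_params += 2 ** len(parents)
        groups = {}  # parent configuration -> (count of 1s, total)
        for row in data:
            key = tuple(row[p] for p in parents)
            ones, total = groups.get(key, (0, 0))
            groups[key] = (ones + row[var], total + 1)
        for ones, total in groups.values():
            p = (ones + 1) / (total + 2)  # smoothed estimate, avoids log(0)
            score += ones * math.log(p) + (total - ones) * math.log(1 - p)
    return score - 0.5 * n_params * math.log(n)

rng = random.Random(0)
data = []
for _ in range(500):
    x = rng.randint(0, 1)
    y = x if rng.random() < 0.9 else 1 - x  # Y strongly depends on X
    data.append({"X": x, "Y": y})

independent = {"X": [], "Y": []}
x_causes_y = {"X": [], "Y": ["X"]}
```

On this data the extra edge pays for its penalty, so the score prefers the structure with Y depending on X; full structure-learning algorithms search the space of DAGs using a score like this (or use conditional-independence tests instead).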


A Latent Restoring Force Approach to Nonlinear System Identification

arXiv.org Machine Learning

Identification of nonlinear dynamic systems remains a significant challenge across engineering. This work suggests an approach based on Bayesian filtering to extract and identify the contribution of an unknown nonlinear term in the system, which can be seen as an alternative viewpoint on restoring-force-surface approaches. To achieve this identification, the contribution of the unknown term, the nonlinear restoring force, is initially modelled as a Gaussian process in time. That Gaussian process is converted into a state-space model and combined with the linear dynamic component of the system. Then, by inference of the filtering and smoothing distributions, the internal states of the system and the nonlinear restoring force can be extracted. With these states in hand, a nonlinear model can be constructed. The approach is demonstrated to be effective in both a simulated case study and on an experimental benchmark dataset.
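The filtering step of such a pipeline can be illustrated, in heavily simplified form, with a scalar Kalman filter recovering a latent state from noisy observations. This is a generic stand-in for the paper's GP-based state-space model, and all noise variances below are invented for the example.

```python
import random

def kalman_filter(measurements, a=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a * x_{t-1} + w_t, y_t = x_t + v_t,
    with process noise variance q and measurement noise variance r."""
    x, p = x0, p0
    estimates = []
    for y in measurements:
        # Predict step
        x = a * x
        p = a * a * p + q
        # Update step
        k = p / (p + r)          # Kalman gain
        x = x + k * (y - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

rng = random.Random(1)
true_value = 2.0
measurements = [true_value + rng.gauss(0.0, 0.5) for _ in range(200)]
estimates = kalman_filter(measurements)
```

After a short burn-in the filtered estimates track the latent state much more closely than the raw measurements do; smoothing (a backward pass) would tighten them further, which is the "filtering and smoothing distributions" inference the abstract refers to.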


A Robust Asymmetric Kernel Function for Bayesian Optimization, with Application to Image Defect Detection in Manufacturing Systems

arXiv.org Machine Learning

Response surface functions in complex engineering systems are often highly nonlinear, lacking a closed form, and expensive to evaluate. To tackle this challenge, Bayesian optimization, which conducts sequential design via a posterior distribution over the objective function, is a critical method for finding the global optimum of black-box functions. Kernel functions play an important role in shaping the posterior distribution of the estimated function. The widely used kernel functions, e.g., the radial basis function (RBF), are vulnerable to outliers; their presence can cause the Gaussian process surrogate model to behave erratically. In this paper, we propose a robust kernel function, the Asymmetric Elastic Net Radial Basis Function (AEN-RBF). Its validity as a kernel function and its computational complexity are evaluated. Compared to the baseline RBF kernel, we prove theoretically that AEN-RBF achieves a smaller mean squared prediction error under mild conditions. The proposed AEN-RBF kernel function can also achieve faster convergence to the global optimum. We also show that the AEN-RBF kernel function is less sensitive to outliers, and hence improves the robustness of the corresponding Bayesian optimization with Gaussian processes. Through extensive evaluations on synthetic and real-world optimization problems, we show that AEN-RBF outperforms existing benchmark kernel functions.
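The AEN-RBF formula itself is not reproduced here, but the baseline it is compared against can be sketched: a plain RBF kernel inside a two-point Gaussian process posterior mean, showing how a single corrupted observation shifts the prediction. The closed-form 2x2 inverse and all numeric values are illustrative assumptions.

```python
import math

def rbf(x1, x2, length_scale=1.0):
    """Radial basis function (squared exponential) kernel."""
    return math.exp(-0.5 * ((x1 - x2) / length_scale) ** 2)

def gp_posterior_mean(x_star, xs, ys, noise=1e-6):
    """GP posterior mean k(x*, X) @ (K + noise*I)^{-1} y for exactly two
    training points, using the closed-form inverse of the 2x2 matrix."""
    a = rbf(xs[0], xs[0]) + noise
    b = rbf(xs[0], xs[1])
    c = rbf(xs[1], xs[1]) + noise
    det = a * c - b * b
    # alpha = (K + noise*I)^{-1} y, written out element-wise
    alpha0 = (c * ys[0] - b * ys[1]) / det
    alpha1 = (a * ys[1] - b * ys[0]) / det
    return rbf(x_star, xs[0]) * alpha0 + rbf(x_star, xs[1]) * alpha1

clean = gp_posterior_mean(0.5, [0.0, 1.0], [0.0, 1.0])
with_outlier = gp_posterior_mean(0.5, [0.0, 1.0], [0.0, 5.0])  # y1 corrupted
```

Corrupting one observation drags the prediction at the midpoint far from the clean estimate, which is the sensitivity a robust kernel such as AEN-RBF is designed to reduce.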


Deep Bayesian Estimation for Dynamic Treatment Regimes with a Long Follow-up Time

arXiv.org Artificial Intelligence

Causal effect estimation for dynamic treatment regimes (DTRs) contributes to sequential decision making. However, censoring and time-dependent confounding under DTRs are challenging because the amount of observational data declines over time as the sample size shrinks, while the feature dimension grows. Long-term follow-up compounds these challenges. A further challenge is the highly complex relationship between confounders, treatments, and outcomes, which causes traditional and commonly used linear methods to fail. We combine outcome regression models with treatment models for high-dimensional features, using the small number of uncensored subjects, and we fit deep Bayesian models for outcome regression to capture the complex relationships between confounders, treatments, and outcomes. The developed deep Bayesian models can also model uncertainty and output the prediction variance, which is essential for safety-aware applications such as self-driving cars and medical treatment design. Experimental results on medical simulations of HIV treatment show that the proposed method obtains stable and accurate dynamic causal effect estimates from observational data, especially with long-term follow-up. Our technique provides practical guidance for sequential decision making and policy making.