
Utilizing variational autoencoders in the Bayesian inverse problem of photoacoustic tomography

#artificialintelligence

Photoacoustic tomography (PAT) is a hybrid biomedical imaging modality based on the photoacoustic effect [6, 44, 32]. In PAT, the imaged target is illuminated with a short pulse of light. Absorption of light creates localized areas of thermal expansion, resulting in localized pressure increases within the imaged target. This pressure distribution, called the initial pressure, relaxes as broadband ultrasound waves that are measured on the boundary of the imaged target. In the inverse problem of PAT, the initial pressure distribution is estimated from a set of measured ultrasound data.
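
As a worked summary of the Bayesian formulation the abstract refers to (the notation below, with forward operator K, initial pressure p_0, measured data y, and noise e, is our own, not the paper's):

% Observation model and posterior for the PAT inverse problem (notation ours)
\[
  y = K(p_0) + e, \qquad \pi(p_0 \mid y) \;\propto\; \pi(y \mid p_0)\,\pi(p_0),
\]
% where K is the acoustic forward operator mapping the initial pressure p_0
% to boundary ultrasound data, and e is measurement noise.

Estimating p_0 then amounts to summarizing this posterior, for example via its mean and credible intervals.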


The Application of Machine Learning Techniques for Predicting Match Results in Team Sport: A Review

Journal of Artificial Intelligence Research

Predicting the results of matches in sport is a challenging and interesting task. In this paper, we review a selection of studies from 1996 to 2019 that used machine learning to predict match results in team sport. Considering both invasion sports and striking/fielding sports, we discuss commonly applied machine learning algorithms, as well as common approaches to data and evaluation. Our study considers the accuracies achieved across different sports and explores whether evidence exists that the outcomes of some sports are inherently more difficult to predict. We also uncover common themes in proposed future research directions and offer recommendations for future researchers. Although benchmark datasets remain scarce (apart from in soccer), and differences between sports, datasets, and features make between-study comparisons difficult, we discuss other ways in which predictive accuracy can be evaluated. Artificial Neural Networks were commonly applied in early studies; however, our findings suggest that a range of models should instead be compared. Selecting and engineering an appropriate feature set appears to be more important than having a large number of instances. For feature selection, we see potential for greater interdisciplinary collaboration between sport performance analysis, a sub-discipline of sport science, and machine learning.
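
As a hedged illustration of the review's recommendation to compare a range of models rather than defaulting to one, here is a minimal scikit-learn sketch; the synthetic features stand in for engineered match statistics and are our assumption, not the review's data:

# Compare a range of classifiers on a match-outcome task, as the review
# recommends, rather than committing to a single model up front.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for engineered match features (form, home advantage, ...).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "mlp": MLPClassifier(max_iter=2000, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")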


A Hybrid Feature Extraction Method for Nepali COVID-19-Related Tweets Classification

#artificialintelligence

COVID-19 is one of the deadliest viruses, having killed millions of people around the world to date. Deaths are linked not only to the infection itself but also to people's mental states and the sentiments triggered by fear of the virus. People's sentiments, which are predominantly available in the form of posts/tweets on social media, can be interpreted using two kinds of information: syntactical and semantic. Herein, we propose to analyze people's sentiment using both kinds of information on a COVID-19-related Twitter dataset in the Nepali language. To do so, we first use two widely used text representation methods, TF-IDF and FastText, and then combine them into hybrid features that capture highly discriminating information. Second, we implement nine widely used machine learning classifiers (Logistic Regression, Support Vector Machine, Naive Bayes, K-Nearest Neighbor, Decision Trees, Random Forest, Extreme Tree classifier, AdaBoost, and Multilayer Perceptron) on each of the three feature representations: TF-IDF, FastText, and Hybrid. To evaluate our methods, we use a publicly available Nepali COVID-19 tweets dataset, NepCOV19Tweets, which consists of Nepali tweets categorized into three classes (Positive, Negative, and Neutral). The evaluation results on NepCOV19Tweets show that the hybrid feature extraction method not only outperforms the two individual feature extraction methods across all nine machine learning algorithms but also performs excellently when compared with state-of-the-art methods. Natural language processing (NLP) techniques have been developed to assess people's sentiments on various topics.
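
A minimal sketch of the hybrid feature idea described above, concatenating TF-IDF vectors with averaged FastText embeddings before classification; training FastText on the corpus itself and the toy tweets are our assumptions, and the paper's Nepali preprocessing is not reproduced:

# Hybrid features: TF-IDF (syntactical) concatenated with averaged
# FastText word vectors (semantic), then a classifier on top.
import numpy as np
from gensim.models import FastText
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["example tweet one", "another example tweet", "a third tweet"]
labels = [0, 1, 2]  # stand-ins for Positive / Negative / Neutral

# Syntactical view: TF-IDF vectors.
tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(tweets).toarray()

# Semantic view: mean of FastText word vectors per tweet.
tokens = [t.split() for t in tweets]
ft = FastText(sentences=tokens, vector_size=50, min_count=1, epochs=10)
X_ft = np.array([np.mean([ft.wv[w] for w in toks], axis=0) for toks in tokens])

# Hybrid view: simple concatenation of both feature sets.
X_hybrid = np.hstack([X_tfidf, X_ft])

clf = LogisticRegression(max_iter=1000).fit(X_hybrid, labels)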


Mathematics for Deep Learning (Part 7)

#artificialintelligence

In the road so far, we have talked about MLP, CNN, and RNN architectures. These are discriminative models, that is, models that make predictions. Discriminative models essentially learn to estimate a conditional probability distribution p(y | x): given an input x, they try to predict the outcome y based on what they have learned about that distribution. Generative models, by contrast, are neural network architectures that learn the probability distribution of the data itself and how to generate new data that appears to come from that distribution. Creating synthetic data is one use of generative models, but it is not the only one.
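
In symbols (standard notation, not taken from the article), the contrast is:

% Discriminative vs. generative models (standard notation)
\[
  \text{discriminative: } p(y \mid x),
  \qquad
  \text{generative: } p(x) \ \text{or} \ p(x, y).
\]
% A discriminative model maps an input x to a label y; a generative model
% can be sampled to produce new data resembling the training set.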


DeepMind Believes These are the Key Pillars of Robust Machine Learning Systems

#artificialintelligence

Specification Testing, Robust Training, and Formal Verification are the three elements that the AI powerhouse believes hold the essence of robust….


Uncertainty In Deep Learning-Bayesian CNN

#artificialintelligence

Now that we have seen the parameters of a Reparameterization layer, we can start writing the models. First, let's look at how we would create a normal CNN; we will then convert this model to a Bayesian Convolutional Neural Network. Note that this model has 98,442 parameters in total. Since Reparameterization layers differ from DenseVariational layers in their method parameters, we need to take this into account when writing a custom prior and posterior.
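
A minimal sketch of such a conversion using TensorFlow Probability's Convolution2DReparameterization and DenseReparameterization layers; the layer sizes and the per-example KL scaling are illustrative assumptions, not the article's exact model:

# Sketch of a Bayesian CNN built from TFP reparameterization layers.
import tensorflow as tf
import tensorflow_probability as tfp

N_TRAIN = 60000  # number of training examples, used to scale the KL term

def scaled_kl(q, p, _):
    # Weight the KL divergence by 1/N so the ELBO is averaged per example.
    return tfp.distributions.kl_divergence(q, p) / N_TRAIN

model = tf.keras.Sequential([
    tfp.layers.Convolution2DReparameterization(
        16, 3, activation="relu", kernel_divergence_fn=scaled_kl,
        input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tfp.layers.DenseReparameterization(10, kernel_divergence_fn=scaled_kl),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

Scaling the KL term by the number of training examples is a common choice so that the prior does not dominate the per-batch loss.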


Deep Learning Roadmap 2022- Step-by-Step Career Path

#artificialintelligence

The first skill you need for deep learning is mathematics. It helps you understand how deep learning and machine learning algorithms work. Now, let's see how knowledge of all these subjects will help you in machine learning and deep learning. But before that, let me make one thing clear: don't think you can jump directly into deep learning without learning machine learning first. That's why I am discussing all the skills required for deep learning as well as machine learning.


Machine Learning, Deep Learning and Bayesian Learning

#artificialintelligence

This is a course on Machine Learning, Deep Learning (TensorFlow and PyTorch), and Bayesian Learning (yes, all 3 topics in one place!!!). We start off by analysing data using pandas and implementing some algorithms from scratch using NumPy. These algorithms include linear regression, Classification and Regression Trees (CART), Random Forest, and Gradient Boosted Trees. We then start the Deep Learning lessons with TensorFlow.
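
As a flavour of the from-scratch portion, here is a minimal NumPy linear regression fitted by gradient descent; this is our illustration, not the course's actual code:

# Linear regression from scratch: fit y = w*x + b by gradient descent
# on mean squared error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * X[:, 0] + b
    err = pred - y
    w -= lr * (2 * err @ X[:, 0]) / len(y)  # gradient of MSE w.r.t. w
    b -= lr * 2 * err.mean()                # gradient of MSE w.r.t. b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # expect roughly 3.0 and 1.0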


Bernstein Flows for Flexible Posteriors in Variational Bayes

arXiv.org Machine Learning

Variational inference (VI) is a technique for approximating difficult-to-compute posteriors by optimization. In contrast to MCMC, VI scales to many observations. In the case of complex posteriors, however, state-of-the-art VI approaches often yield unsatisfactory posterior approximations. This paper presents Bernstein flow variational inference (BF-VI), a robust and easy-to-use method flexible enough to approximate complex multivariate posteriors. BF-VI combines ideas from normalizing flows and Bernstein-polynomial-based transformation models. In benchmark experiments, we compare BF-VI solutions with exact posteriors, MCMC solutions, and state-of-the-art VI methods, including normalizing-flow-based VI. We show for low-dimensional models that BF-VI accurately approximates the true posterior; in higher-dimensional models, BF-VI outperforms other VI methods. Further, we use BF-VI to develop a Bayesian model for the semi-structured Melanoma challenge data, combining a CNN model part for image data with an interpretable model part for tabular data, and demonstrate for the first time the use of VI in semi-structured models.
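
For reference, the optimization objective underlying VI is the standard evidence lower bound (ELBO); this formula is generic, not specific to BF-VI:

% Evidence lower bound (ELBO) maximized in variational inference
\[
  \mathrm{ELBO}(q)
  = \mathbb{E}_{q(\theta)}\big[\log p(y \mid \theta)\big]
  - \mathrm{KL}\big(q(\theta) \,\|\, p(\theta)\big)
  \;\le\; \log p(y).
\]
% Maximizing the ELBO over a family of distributions q yields an
% approximation q(theta) to the posterior p(theta | y).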


Model Architecture Adaption for Bayesian Neural Networks

arXiv.org Artificial Intelligence

Bayesian Neural Networks (BNNs) offer a mathematically grounded framework to quantify the uncertainty of model predictions but come with a prohibitive computation cost for both training and inference. In this work, we present a novel network architecture search (NAS) that optimizes BNNs for both accuracy and uncertainty while keeping inference latency low. Unlike canonical NAS, which optimizes solely for in-distribution likelihood, the proposed scheme searches for uncertainty performance using both in- and out-of-distribution data. Our method is able to search for the correct placement of Bayesian layer(s) in a network. In our experiments, the searched models show uncertainty quantification ability and accuracy comparable to the state-of-the-art (deep ensemble), while using only a fraction of the runtime of many popular BNN baselines: on the CIFAR10 dataset, the inference runtime cost is reduced by $2.98 \times$ and $2.92 \times$ compared to MCDropout and deep ensemble, respectively.