Directed Networks


Uncertainty Estimation in Deep Learning

#artificialintelligence

Slides: "Uncertainty in Deep Learning" by Christian S. Perone (2019, Twitter @tarantulae), covering Uncertainties, Bayesian Inference, Deep Learning, Variational Inference, Ensembles, and Q&A. Section I, Uncertainties, opens with "Knowing what you don't know": It is correct, somebody might say, that (...) Socrates did not know anything; and it was indeed wisdom that they recognized their own lack of knowledge, (...).



New superomniphobic glass soars high on butterfly wings using machine learning: Engineers develop new superclear, supertransparent, stain-resistant, anti-fogging nanostructured glass based on butterfly wings

#artificialintelligence

The team recently published a paper detailing their findings: "Creating Glasswing-Butterfly Inspired Durable Antifogging Omniphobic Supertransmissive, Superclear Nanostructured Glass Through Bayesian Learning and Optimization" in Materials Horizons (doi:10.1039/C9MH00589G). They recently presented this work at the ICML conference in the "Climate Change: How Can AI Help?" workshop. The nanostructured glass has random nanostructures, like the glasswing butterfly wing, that are smaller than the wavelengths of visible light. This allows the glass to have a very high transparency of 99.5% when the random nanostructures are on both sides of the glass. This high transparency can reduce the brightness and power demands of displays, which could, for example, extend battery life.
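For ML readers, the interesting part is the optimization loop. Below is a hedged, toy sketch of what Bayesian optimization over a small design space looks like, using scikit-optimize's gp_minimize; the design variables (feature height and pitch) and the objective function are hypothetical stand-ins, not the paper's actual simulation or figure of merit.

```python
# Hedged sketch: Bayesian optimization over a hypothetical nanostructure design
# space. The objective is a placeholder; the paper's real figure of merit
# (transparency / omniphobicity from simulation or experiment) is not reproduced.
from skopt import gp_minimize
from skopt.space import Real

# Hypothetical design variables: feature height and pitch in nanometres.
space = [Real(50, 400, name="height_nm"), Real(50, 300, name="pitch_nm")]

def objective(params):
    height_nm, pitch_nm = params
    # Placeholder surrogate: penalize deviation from an (assumed) sweet spot.
    return (height_nm - 200) ** 2 / 1e4 + (pitch_nm - 150) ** 2 / 1e4

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best design:", result.x, "objective:", result.fun)
```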


Birth of Error Functions in Artificial Neural Networks – ML-DAWN

#artificialintelligence

In this talk we learn what Artificial Neural Networks (ANNs) are, and find out how, in general, Maximum Likelihood Estimation and Bayes' Rule help us derive the error functions used in ANNs, namely the cross-entropy error function! We will derive the binary cross-entropy from scratch, step by step. Below you can see the video of this talk; the slides and some code are also available. I would highly recommend following the talk through these slides. The slides are available here! The link to the post regarding the demo is available here!
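As a small companion to the talk, here is a minimal sketch of where that maximum-likelihood argument ends up: for binary labels and predicted probabilities, the negative mean Bernoulli log-likelihood is exactly the binary cross-entropy error function.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Negative mean Bernoulli log-likelihood: the error function the talk derives."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Example: confident correct predictions cost little; a confident wrong one costs a lot.
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.1, 0.2])
print(binary_cross_entropy(y_true, y_pred))
```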


MercurialMonkey/Harvard-University-Capstone-Project-Data-Science

#artificialintelligence

I have submitted my own project using a dataset of my choosing. My project has been reviewed both by my peers and the professor. I chose to work with Credit Card Fraud Detection: it is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items that they did not purchase. The dataset contains transactions made by credit cards in September 2013 by European cardholders. Due to the imbalanced nature of the data, many observations could be predicted as false negatives, in this case fraudulent transactions classified as legitimate ones.
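A minimal sketch of one standard way to address that imbalance is shown below, assuming the usual Kaggle release of this dataset (a "Class" label column) and a placeholder file path; the capstone project itself may use different models or resampling strategies.

```python
# Hedged sketch: weighting classes to reduce false negatives on the highly
# imbalanced credit card fraud data. The "Class" column name follows the usual
# Kaggle release of this dataset; the file path is a placeholder.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

df = pd.read_csv("creditcard.csv")  # placeholder path
X, y = df.drop(columns=["Class"]), df["Class"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# class_weight="balanced" up-weights the rare fraud class so the model is not
# rewarded for predicting "legitimate" everywhere.
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(max_iter=1000, class_weight="balanced"))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```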


Comparing Classifiers: Decision Trees, K-NN & Naive Bayes

#artificialintelligence

A myriad of options exist for classification. That said, three popular classification methods (Decision Trees, k-NN, and Naive Bayes) can be tweaked for practically every situation. Naive Bayes and k-NN are both examples of supervised learning (where the data comes already labeled). Decision trees are easy to use for small numbers of classes. If you're trying to decide between the three, your best option is to take all three for a test drive on your data and see which produces the best results.
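A hedged sketch of that test drive with scikit-learn is below; the built-in breast cancer dataset is only a stand-in for your own data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset for illustration

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```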


Bayesian inverse regression for supervised dimension reduction with small datasets

arXiv.org Machine Learning

We consider supervised dimension reduction problems, namely to identify a low-dimensional projection of the predictors $\mathbf{x}$ which can retain the statistical relationship between $\mathbf{x}$ and the response variable $y$. We follow the idea of the sliced inverse regression (SIR) class of methods, which is to use the statistical information of the conditional distribution $\pi(\mathbf{x}|y)$ to identify the dimension reduction (DR) space, and in particular we focus on the task of computing this conditional distribution. We propose a Bayesian framework to compute the conditional distribution where the likelihood function is obtained using the Gaussian process regression model. The conditional distribution $\pi(\mathbf{x}|y)$ can then be obtained directly by assigning weights to the original data points. We can then perform DR by considering certain moment functions (e.g. the first moment) of the samples of the posterior distribution. With numerical examples, we demonstrate that the proposed method is especially effective for small data problems.
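For readers unfamiliar with the SIR family the paper builds on, here is a minimal sketch of classical sliced inverse regression, which slices the response and uses the first conditional moment of the standardized predictors; it is not the paper's Bayesian/GP-based estimator, and the toy data is hypothetical.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_components=1):
    """Classical sliced inverse regression: DR directions from first conditional moments."""
    n, p = X.shape
    # Standardize (whiten) the predictors.
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    whitener = np.linalg.inv(np.linalg.cholesky(cov)).T
    Z = (X - mu) @ whitener
    # Slice the response and take the mean of Z within each slice (first moment).
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Top eigenvectors of the between-slice covariance span the DR space.
    eigvals, eigvecs = np.linalg.eigh(M)
    directions = eigvecs[:, ::-1][:, :n_components]
    return whitener @ directions  # map back to the original predictor scale

# Toy example: y depends on a single linear projection of X.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X @ np.array([1.0, -1.0, 0.0, 0.0, 0.0])) + 0.1 * rng.normal(size=500)
print(sir_directions(X, y).ravel())
```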


Learning Directed Graphical Models from Gaussian Data

arXiv.org Machine Learning

In this paper, we introduce two new directed graphical models from Gaussian data: the Gaussian graphical interaction model (GGIM) and the Gaussian graphical conditional expectation model (GGCEM). The development of these models comes from considering stationary Gaussian processes on graphs, and leveraging the equations relating the resulting steady-state covariance matrix to the Laplacian matrix representing the interaction graph. Through the presentation of conceptually straightforward theory, we develop the new models and provide interpretations of the edges in each graphical model in terms of statistical measures. We show that when restricted to undirected graphs, the Laplacian matrix representing a GGIM is equivalent to the standard inverse covariance matrix that encodes conditional dependence relationships. We demonstrate that the problem of learning sparse GGIMs and GGCEMs for a given observation set can be framed as a LASSO problem. By comparison with the problem of inverse covariance estimation, we prove a bound on the difference between the covariance matrix corresponding to a sparse GGIM and the covariance matrix corresponding to the $l_1$-norm penalized maximum log-likelihood estimate. In all, the new models present a novel perspective on directed relationships between variables and significantly expand on the state of the art in Gaussian graphical modeling.
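For context on the baseline the paper compares against, here is a minimal sketch of $l_1$-penalized precision-matrix estimation with scikit-learn's GraphicalLasso; it recovers an undirected conditional-dependence graph, not the directed GGIM/GGCEM models introduced in the paper, and the simulated data is purely illustrative.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Simulate Gaussian data with a known sparse precision (inverse covariance) matrix.
rng = np.random.default_rng(0)
precision = np.array([[2.0, 0.6, 0.0],
                      [0.6, 2.0, 0.6],
                      [0.0, 0.6, 2.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(precision), size=2000)

# l1-penalized maximum-likelihood estimate of the precision matrix: nonzero
# off-diagonal entries encode conditional-dependence edges in the graph.
model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))
```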


Uncovering Probabilistic Implications in Typological Knowledge Bases

arXiv.org Artificial Intelligence

The study of linguistic typology is rooted in the implications we find between linguistic features, such as the fact that languages with object-verb word ordering tend to have post-positions. Uncovering such implications typically amounts to time-consuming manual processing by trained and experienced linguists, which potentially leaves key linguistic universals unexplored. In this paper, we present a computational model which successfully identifies known universals, including Greenberg universals, but also uncovers new ones, worthy of further linguistic investigation. Our approach outperforms baselines previously used for this problem, as well as a strong baseline from knowledge base population.
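As a toy illustration of what a probabilistic implication between binary typological features means (this is not the paper's model, which is considerably more sophisticated), one can estimate P(feature B | feature A) from a language-by-feature matrix; the features and counts below are hypothetical.

```python
import numpy as np

# Toy language-by-feature matrix (rows: languages, columns: binary features).
# Columns are hypothetical: 0 = object-verb word order, 1 = has postpositions.
features = np.array([
    [1, 1],
    [1, 1],
    [1, 0],
    [0, 0],
    [0, 1],
    [1, 1],
])

def implication_strength(F, a, b):
    """Estimate P(feature b | feature a) as a crude measure of 'a implies b'."""
    has_a = F[:, a] == 1
    return (F[has_a, b] == 1).mean()

print("P(postpositions | OV order) =", implication_strength(features, 0, 1))
```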


Introduction to Bayesian Modeling with PyMC3 - Dr. Juan Camilo Orduz

#artificialintelligence

We can also see this visually. We can verify the convergence of the chains formally using the Gelman-Rubin test. Values close to 1.0 mean convergence. We can also test for correlation between samples in the chains. We are aiming for zero autocorrelation to get "random" samples from the posterior distribution. From these plots we see that the autocorrelation is not problematic.
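A minimal sketch of those diagnostics with PyMC3 and ArviZ, using a toy model in place of the one from the post: r_hat values close to 1.0 indicate convergence, and the autocorrelation plot should decay quickly toward zero.

```python
import numpy as np
import pymc3 as pm
import arviz as az

# Toy data; the post's actual model is not reproduced here.
data = np.random.normal(loc=1.0, scale=2.0, size=200)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal("sigma", sigma=10.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    trace = pm.sample(2000, tune=1000, chains=4, return_inferencedata=True)

# Gelman-Rubin diagnostic: r_hat close to 1.0 means the chains have converged.
print(az.summary(trace, var_names=["mu", "sigma"])[["mean", "r_hat"]])
# Autocorrelation within each chain: we want it to decay quickly toward zero.
az.plot_autocorr(trace, var_names=["mu"])
```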