Learning Graphical Models

Uncertainty Estimation in Deep Learning


Slides from Christian S. Perone's 2019 talk "Uncertainty in Deep Learning" (Twitter: @tarantulae), covering uncertainties, Bayesian inference, deep learning, variational inference, and ensembles, followed by a Q&A. The opening section, "Knowing what you don't know," quotes the Socratic insight: "It is correct, somebody might say, that (...) Socrates did not know anything; and it was indeed wisdom that they recognized their own lack of knowledge, (...)."

Restricted Boltzmann Machine with Multivalued Hidden Variables


Generalization is one of the most important goals in statistical machine learning [1]. In many standard machine learning techniques, given a particular data set, we fit a probabilistic learning model to the empirical distribution (the data distribution) of that data set. When the learning model is sufficiently flexible, an appropriate learning method can make it fit the empirical distribution exactly. However, a model that hews too closely to the empirical distribution frequently gives poor results on new data points. This situation is known as over-fitting.
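The effect described above is easy to reproduce on toy data. A minimal sketch (the sinusoidal target and noise level are arbitrary choices for illustration): a degree-9 polynomial can interpolate 10 noisy training points exactly, yet generalizes worse than a modest degree-3 fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: noisy samples of a smooth underlying function.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=10)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)  # noise-free targets for evaluation

def fit_and_errors(degree):
    # Least-squares polynomial fit of the given degree.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (3, 9):
    tr, te = fit_and_errors(degree)
    print(f"degree {degree}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

With 10 coefficients and 10 points, the degree-9 model drives the training error to (numerically) zero, but its wild oscillations between the points inflate the test error: the model has memorized the noise.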

New superomniphobic glass soars high on butterfly wings using machine learning: Engineers develop new superclear, supertransparent, stain-resistant, anti-fogging nanostructured glass based on butterfly wing


The team recently published a paper detailing their findings, "Creating Glasswing-Butterfly Inspired Durable Antifogging Omniphobic Supertransmissive, Superclear Nanostructured Glass Through Bayesian Learning and Optimization," in Materials Horizons (doi:10.1039/C9MH00589G), and presented the work at the ICML conference in the "Climate Change: How Can AI Help?" workshop. Like the glasswing butterfly's wing, the glass has random nanostructures that are smaller than the wavelengths of visible light. This gives the glass a very high transparency of 99.5% when the random nanostructures are on both sides. The high transparency can reduce the brightness and power demands of displays, which could, for example, extend battery life.

Birth of Error Functions in Artificial Neural Networks – ML-DAWN


In this talk we learn what Artificial Neural Networks (ANNs) are, and how Maximum Likelihood Estimation and Bayes' rule help us derive the error functions used in ANNs, in particular the cross-entropy error function. We derive binary cross-entropy from scratch, step by step. Below you can see the video of the talk; the slides and some code are also available. I would highly recommend following the talk through the slides, which are available here. The link to the post about the demo is available here as well.
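The end point of that derivation is compact enough to sketch directly: treating each label as a Bernoulli outcome with predicted probability p, maximizing the likelihood is the same as minimizing the average negative log-likelihood, which is exactly the binary cross-entropy. A minimal illustration (the clipping constant is an implementation detail to avoid log(0), not part of the derivation):

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Average negative log-likelihood of a Bernoulli model.

    BCE = -mean( y*log(p) + (1-y)*log(1-p) )
    """
    p = np.clip(p_pred, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.1, 0.8, 0.7])
print(binary_cross_entropy(y, p))  # small loss: predictions agree with labels
```

Confident, correct predictions drive the loss toward zero, while confident wrong ones are penalized heavily, which is the behavior the maximum-likelihood view predicts.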

LSTM-based Handwriting Recognition by Google


Handwriting recognition is one of the challenging tasks in NLP, because handwriting varies widely from person to person. Sometimes "O" is written like "0", yet a human being can tell which one is meant from contextual information: "0" appears in phone numbers, while "O" appears as part of an English word. Another useful technique is lexicon search.

Unifying Logical and Statistical AI with Markov Logic

Communications of the ACM

For many years, the two dominant paradigms in artificial intelligence (AI) have been logical AI and statistical AI. Logical AI uses first-order logic and related representations to capture complex relationships and knowledge about the world. However, logic-based approaches are often too brittle to handle the uncertainty and noise present in many applications. Statistical AI uses probabilistic representations such as probabilistic graphical models to capture uncertainty. However, graphical models only represent distributions over propositional universes and must be customized to handle relational domains.



I have submitted my own project using a dataset of my choosing, and it has been reviewed by both my peers and the professor. I chose to work on Credit Card Fraud Detection: it is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items they did not purchase. The dataset contains transactions made with credit cards in September 2013 by European cardholders. Due to the imbalanced nature of the data, many observations could be predicted as false negatives, that is, classified as legal transactions when they are in fact fraudulent.
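The pitfall of imbalanced data is worth making concrete. A minimal sketch with synthetic labels (the 1% fraud rate is a hypothetical stand-in for the real dataset's imbalance): a classifier that always predicts "legitimate" scores near-perfect accuracy while catching zero fraud, which is why recall on the minority class matters here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical imbalanced labels: ~1% fraud (1), ~99% legitimate (0).
y_true = (rng.random(10_000) < 0.01).astype(int)

# A useless baseline that always predicts "legitimate".
y_pred = np.zeros_like(y_true)

accuracy = np.mean(y_pred == y_true)

# Recall on the fraud class: fraction of actual frauds that were caught.
frauds = y_true == 1
recall = np.mean(y_pred[frauds] == 1) if frauds.any() else 0.0

print(f"accuracy: {accuracy:.3f}, fraud recall: {recall:.3f}")
```

The near-99% accuracy is an artifact of the class imbalance; metrics such as recall, precision, or the area under the precision-recall curve give a far more honest picture on this kind of data.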

Comparing Classifiers: Decision Trees, K-NN & Naive Bayes


A myriad of options exist for classification. That said, three popular classification methods (Decision Trees, k-NN, and Naive Bayes) can be tweaked for practically every situation. Naive Bayes and k-NN are both examples of supervised learning (where the data comes already labeled). Decision trees are easy to use when the number of classes is small. If you're trying to decide between the three, your best option is to take all three for a test drive on your data and see which produces the best results.
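That "test drive" is only a few lines with scikit-learn. A minimal sketch, using the built-in Iris dataset as a stand-in for your own data (dataset, hyperparameters, and the 5-fold split are illustrative choices, not a recommendation):

```python
# Compare the three classifiers with cross-validated accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "naive Bayes": GaussianNB(),
}

scores = {}
for name, model in models.items():
    # 5-fold cross-validation gives a fairer comparison than one split.
    scores[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {scores[name]:.3f}")
```

On a different dataset the ranking can easily flip, which is exactly the point: run the comparison on your own data before committing to one method.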

Free Book: Foundations of Data Science (from Microsoft Research Lab)


Computer science as an academic discipline began in the 1960s. Emphasis was on programming languages, compilers, operating systems, and the mathematical theory that supported these areas. Courses in theoretical computer science covered finite automata, regular expressions, context-free languages, and computability. In the 1970s, the study of algorithms was added as an important component of theory. The emphasis was on making computers useful.