Machine Learning: Overviews

A Guide to AWS


Even those new to IT have probably heard that everyone is "moving to the cloud." This transition from standard infrastructure is thanks in large part to Amazon Web Services. Currently, AWS offers "over 90 fully featured services for computing, storage, networking, analytics, application services, d...

Who Killed Albert Einstein? From Open Data to Murder Mystery Games Artificial Intelligence

This paper presents a framework for generating adventure games from open data. Focusing on the murder mystery type of adventure games, the generator is able to transform open data from Wikipedia articles, OpenStreetMap and images from Wikimedia Commons into WikiMysteries. Every WikiMystery game revolves around the murder of a person with a Wikipedia article and populates the game with suspects who must be arrested by the player if guilty of the murder or absolved if innocent. Starting from only one person as the victim, an extensive generative pipeline finds suspects, their alibis, and paths connecting them from open data, transforms open data into cities, buildings, non-player characters, locks and keys and dialog options. The paper describes in detail each generative step, provides a specific playthrough of one WikiMystery where Albert Einstein is murdered, and evaluates the outcomes of games generated for the 100 most influential people of the 20th century.

Isolating Sources of Disentanglement in Variational Autoencoders Artificial Intelligence

We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder), a refinement of the state-of-the-art $\beta$-VAE objective for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement when the latent variable model is trained using our framework.
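For readers who want the shape of the decomposition the abstract refers to, here is a sketch of the standard three-way split (notation assumed, not quoted from the abstract: $n$ indexes training examples, $z_j$ the individual latent dimensions, $q(z)$ the aggregate posterior):

$$
\mathbb{E}_{p(n)}\!\left[\mathrm{KL}\big(q(z \mid n)\,\|\,p(z)\big)\right]
= I_q(z;n)
+ \mathrm{KL}\Big(q(z)\,\Big\|\,\prod_j q(z_j)\Big)
+ \sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)
$$

The first term is the index-code mutual information, the middle term is the total correlation that $\beta$-TCVAE upweights by $\beta$, and the last is the dimension-wise KL to the prior.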

Quantum Machine Learning: An Overview


At a recent conference in 2017, Microsoft CEO Satya Nadella used the analogy of a corn maze to explain the difference in approach between a classical computer and a quantum computer. In trying to find a path through the maze, a classical computer would start down a path, hit an obstruction, backtrac...

A guide to receptive field arithmetic for Convolutional Neural Networks


The receptive field is perhaps one of the most important concepts in Convolutional Neural Networks (CNNs), and it deserves more attention from the literature. All of the state-of-the-art object recognition methods design their model architectures around this idea. However, to the best of my knowledge, current...
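As a hedged illustration of the arithmetic such a guide derives, the receptive field of a stack of convolutions can be tracked layer by layer with two standard recurrences (these are textbook formulas, not taken from the article itself):

```python
def receptive_field(layers):
    """Receptive field of a conv stack, in input pixels.

    layers: list of (kernel_size, stride) tuples, input-to-output order.
    Tracks two quantities per layer:
      jump -- distance in input pixels between adjacent output features
      rf   -- receptive field size of one output feature
    Recurrences: rf_out = rf_in + (k - 1) * jump_in;  jump_out = jump_in * s
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Two stacked 3x3 stride-1 convs see a 5x5 input patch...
print(receptive_field([(3, 1), (3, 1)]))          # → 5
# ...and adding a third 3x3 conv (stride 2) grows it to 7x7.
print(receptive_field([(3, 1), (3, 1), (3, 2)]))  # → 7
```

Note that stride only affects the receptive field of *later* layers, via the growing `jump`; this is why early downsampling enlarges receptive fields so quickly.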

A Primer on Artificial Intelligence for Financial Advisors


Artificial intelligence will continue to be buzzing in wealth management in 2018. But there's a short list of professionals who actually understand AI and can clearly explain how advisors and wealth management firms will benefit from it now and in the future. To help break it down, we asked Fritz to unpack AI in a way anyone in the industry can understand and act on. Prior to founding F2 Strategy, Fritz was the CTO for First Republic Private Wealth Management.

Survey of DeepLearning4j Examples - Deeplearning4j: Open-source, Distributed Deep Learning for the JVM


Deeplearning4j's GitHub repository has many examples to cover its functionality. The Quick Start Guide shows you how to set up IntelliJ and clone the repository. This page provides an overview of some of those examples. Most of the examples make use of DataVec, a toolkit for preprocessing and cleaning data through normalization, standardization, search and replace, column shuffles, and vectorization. Reading raw data and transforming it into a DataSet object for your neural network is often the first step toward training that network.

Locally Weighted Regression


A couple of weeks back, I started a review of the linear models I've used over the years and realized that I never really understood how the locally weighted regression algorithm works. This, and the fact that sklearn has no support for it, encouraged me to investigate the working principles of the algorithm. In this post, I will attempt to provide an overview of the algorithm using mathematical inference and list some of the implementations available in Python. Regression is the estimation of a continuous response variable based on the values of some other variable(s). The variable to be estimated is dependent on the other variable(s) in the function space.
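The core idea the post works toward can be sketched in a few lines of NumPy: at each query point, fit an ordinary least-squares line, but weight every training point by a Gaussian kernel of its distance to the query. This is a minimal one-dimensional sketch with an assumed bandwidth parameter `tau`, not the post's own implementation:

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.5):
    """Locally weighted linear regression at a single query point.

    Weights each training point by a Gaussian kernel of bandwidth tau,
    then solves the weighted normal equations
        theta = (A^T W A)^{-1} A^T W y
    for a locally fitted intercept and slope.
    """
    A = np.column_stack([np.ones_like(X), X])        # design matrix with bias
    w = np.exp(-((X - x_query) ** 2) / (2 * tau**2))  # kernel weights
    W = np.diag(w)
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return theta[0] + theta[1] * x_query

# Usage: on data that is locally near-linear, predictions track the trend.
X = np.linspace(0, 1, 20)
y = 2 * X + 1
print(lwr_predict(0.5, X, y))  # → 2.0 (exact for perfectly linear data)
```

Because a fresh fit is solved per query point, the method is non-parametric: there is no single global `theta`, which is also why it is expensive at prediction time compared to ordinary linear regression.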