3 Main Approaches to Machine Learning Models - KDnuggets

#artificialintelligence

In September 2018, I published a blog post about my forthcoming book on The Mathematical Foundations of Data Science. The central question we address is: how can we bridge the gap between the mathematics needed for Artificial Intelligence (Deep Learning and Machine Learning) and the mathematics taught in high schools (up to ages 17/18)? In this post, we present a chapter from the book called "A Taxonomy of Machine Learning Models." The book is being released in chapters and is currently available at an early bird discount. If you are interested in getting early discounted copies, please contact ajit.jaokar at feynlabs.ai.


A Logical Neural Network Structure With More Direct Mapping From Logical Relations

arXiv.org Artificial Intelligence

Logical relations are ubiquitous in human activities. Humans use them to make judgements and decisions according to various conditions, embodied in the form of if-then rules. As an important kind of cognitive intelligence, correctly representing and storing logical relations in computer systems is a prerequisite for automatic judgement and decision making, especially in high-risk domains such as medical diagnosis. However, current numeric ANN (Artificial Neural Network) models excel at perceptual intelligence such as image recognition but are weak at cognitive intelligence such as logical representation, which blocks the further application of ANNs. To address this, researchers have tried to design logical ANN models that represent and store logical relations. Although there have been some advances in this area, the structures of existing logical ANN models still do not map directly onto logical relations, so the corresponding logical relations cannot be read out from their network structures. Therefore, in order to represent logical relations more clearly in the neural network structure and to read them out from it, this paper proposes a novel logical ANN model with new logical neurons and links designed for logical representation. Compared with recent logical ANN models, the proposed model corresponds more clearly to logical relations through a more direct mapping, so logical relations can be read out by following the connection patterns of the network structure. Additionally, fewer neurons are used.
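The abstract does not spell out the neuron and link definitions, so the following is only a minimal illustrative sketch of the general idea: if-then rules encoded as a network whose connection pattern can be read back as logical relations, with each rule acting as an AND neuron over its antecedents and each conclusion acting as an OR neuron over its supporting rules. The `RuleNetwork` class and the example rules are hypothetical, not the paper's model.

```python
# Illustrative sketch (not the paper's model): if-then rules as a small network
# whose structure maps directly onto logical relations and can be read back out.
import numpy as np

def step(x):
    """Heaviside activation: a neuron fires (1.0) once its threshold is met."""
    return (np.asarray(x) >= 0).astype(float)

class RuleNetwork:
    def __init__(self, propositions, rules):
        # propositions: atom names; rules: list of (antecedents, conclusion)
        self.props = {p: i for i, p in enumerate(propositions)}
        self.rules = rules

    def forward(self, facts):
        x = np.zeros(len(self.props))
        for f in facts:
            x[self.props[f]] = 1.0
        changed = True
        while changed:  # apply the rule layer until a fixpoint is reached
            changed = False
            for antecedents, conclusion in self.rules:
                idx = [self.props[a] for a in antecedents]
                # Rule neuron: AND over antecedents (threshold = their number).
                fires = step(x[idx].sum() - len(idx))
                c = self.props[conclusion]
                # Conclusion neuron: OR over the rule neurons that support it.
                if fires and x[c] == 0.0:
                    x[c] = 1.0
                    changed = True
        return {p for p, i in self.props.items() if x[i] == 1.0}

rules = [(["fever", "cough"], "flu"), (["flu"], "rest_recommended")]
net = RuleNetwork(["fever", "cough", "flu", "rest_recommended"], rules)
print(net.forward({"fever", "cough"}))  # derives flu, then rest_recommended
```

Because each rule is a single AND neuron wired to one conclusion neuron, the original if-then rules can be recovered simply by listing each rule neuron's incoming and outgoing links, which is the readability property the abstract emphasizes.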


Some Notes on the Factorization of Probabilistic Logical Models under Maximum Entropy Semantics

AAAI Conferences

Probabilistic conditional logics offer a rich and well-founded framework for designing expert systems. The factorization of their maximum entropy models has several interesting applications. In this paper, a general factorization is derived, providing a more rigorous proof than in previous work. It yields an approach to extending Iterative Scaling variants to deterministic knowledge bases. Subsequently, the connection to Markov Random Fields is revisited.
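The paper's factorization and Iterative Scaling extensions are not reproduced here, but a small numerical sketch may help fix ideas: for a toy knowledge base with one probabilistic conditional, the maximum entropy distribution can be found by writing the conditional as a zero-expectation feature and running a plain gradient iteration on the convex dual (rather than an Iterative Scaling variant). The world, atoms, and learning rate below are assumptions for illustration only.

```python
# Toy sketch: maximum entropy model of the conditional (flies | bird)[0.9]
# over two binary atoms, fitted by gradient descent on the dual objective.
import itertools
import numpy as np

atoms = ["bird", "flies"]
worlds = list(itertools.product([0, 1], repeat=len(atoms)))  # all 4 possible worlds

def feature(world, antecedent, consequent, p):
    """Conditional (consequent | antecedent)[p] as a zero-expectation feature:
    f(w) = 1[antecedent & consequent](w) - p * 1[antecedent](w)."""
    a = world[atoms.index(antecedent)]
    c = world[atoms.index(consequent)]
    return a * c - p * a

conditionals = [("bird", "flies", 0.9)]
F = np.array([[feature(w, a, c, p) for (a, c, p) in conditionals] for w in worlds])

lam = np.zeros(len(conditionals))
for _ in range(5000):
    logits = F @ lam
    prob = np.exp(logits - logits.max())
    prob /= prob.sum()            # current Gibbs distribution p_lambda
    grad = F.T @ prob             # E_p[f_i]; the MaxEnt solution needs this to be 0
    lam -= 1.0 * grad             # gradient step on the dual (log partition function)

# Check: P(flies | bird) should be close to 0.9 under the fitted distribution.
bird_idx = [i for i, w in enumerate(worlds) if w[0] == 1]
both_idx = [i for i, w in enumerate(worlds) if w[0] == 1 and w[1] == 1]
print(prob[both_idx].sum() / prob[bird_idx].sum())
```

The fitted distribution has the exponential-family form p(w) ∝ exp(Σᵢ λᵢ fᵢ(w)), which is exactly the product (factorized) form whose structure the paper analyzes and relates to Markov Random Fields.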


Paying down metadata debt: learning the representation of concepts using topic models

arXiv.org Artificial Intelligence

We introduce a data management problem called metadata debt, to identify the mapping between data concepts and their logical representations. We describe how this mapping can be learned using semisupervised topic models based on low-rank matrix factorizations that account for missing and noisy labels, coupled with sparsity penalties to improve localization and interpretability. We introduce a gauge transformation approach that allows us to construct explicit associations between topics and concept labels, and thus assign meaning to topics. We also show how to use this topic model for semisupervised learning tasks like extrapolating from known labels, evaluating possible errors in existing labels, and predicting missing features. We show results from this topic model in predicting subject tags on over 25,000 datasets from Kaggle.com, demonstrating the ability to learn semantically meaningful features.
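The paper's exact formulation (including the gauge transformation that ties topics to concept labels) is not reproduced here. As a rough sketch of the general ingredients, a low-rank factorization of a document-term matrix with an L1 sparsity penalty, where a few documents' topic loadings are softly pinned to known labels, might look like the following; all sizes, the pinning scheme, and the label dictionary are assumptions for illustration.

```python
# Minimal illustrative sketch (not the paper's model): sparse NMF topic model
# with a handful of known concept labels seeding the loadings of labeled rows.
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_terms, n_topics = 30, 50, 4
V = rng.random((n_docs, n_terms))            # document-term matrix (e.g. tf-idf)

W = rng.random((n_docs, n_topics)) + 0.1     # document-topic loadings
H = rng.random((n_topics, n_terms)) + 0.1    # topic-term weights
l1 = 0.1                                     # sparsity penalty on W
labels = {0: 2, 1: 2, 5: 0}                  # doc index -> known concept/topic id
eps = 1e-9

for _ in range(200):
    # Multiplicative updates for ||V - W H||^2 with an L1 penalty on W.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + l1 + eps)
    # Semi-supervision: softly pull labeled documents toward their known topic.
    for doc, topic in labels.items():
        target = np.full(n_topics, 0.05)
        target[topic] = 1.0
        W[doc] = 0.5 * W[doc] + 0.5 * target * W[doc].sum()

top_terms = np.argsort(-H, axis=1)[:, :5]    # top 5 terms per learned topic
print(top_terms)
```

The pinned rows give each seeded topic an explicit concept label, which is the spirit (though not the mechanics) of the gauge-transformation step described in the abstract for assigning meaning to topics.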


Generating new concepts with hybrid neuro-symbolic models

arXiv.org Artificial Intelligence

Human conceptual knowledge supports the ability to generate novel yet highly structured concepts, and the form of this conceptual knowledge is of great interest to cognitive scientists. One tradition has emphasized structured knowledge, viewing concepts as embedded in intuitive theories or organized in complex symbolic knowledge structures. A second tradition has emphasized statistical knowledge, viewing conceptual knowledge as emerging from the rich correlational structure captured by training neural networks and other statistical models. In this paper, we explore a synthesis of these two traditions through a novel neuro-symbolic model for generating new concepts. Using simple visual concepts as a testbed, we bring together neural networks and symbolic probabilistic programs to learn a generative model of novel handwritten characters. Two alternative models with more generic neural network architectures are also explored. We compare the three models on their likelihoods for held-out character classes and on the quality of their productions, finding that our hybrid model learns the most convincing representation and generalizes furthest from the training observations.
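The abstract does not detail the architecture, so the following is only a toy illustration of the general neuro-symbolic pattern it describes: a symbolic probabilistic program proposes the discrete structure of a character (how many strokes, which types, where), and a neural decoder renders each part onto a canvas. `StrokeRenderer`, the stroke vocabulary, and all sizes are hypothetical names introduced for this sketch, not the paper's model.

```python
# Toy neuro-symbolic generator: symbolic program over character structure,
# neural renderer for each stroke. Untrained here; shown for structure only.
import torch
import torch.nn as nn

N_STROKE_TYPES, CANVAS = 16, 28

class StrokeRenderer(nn.Module):
    """Neural half: maps a stroke token plus a 2-D offset to a 28x28 image."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_STROKE_TYPES, 32)
        self.decode = nn.Sequential(
            nn.Linear(32 + 2, 128), nn.ReLU(),
            nn.Linear(128, CANVAS * CANVAS), nn.Sigmoid(),
        )

    def forward(self, stroke_id, offset):
        z = torch.cat([self.embed(stroke_id), offset], dim=-1)
        return self.decode(z).view(CANVAS, CANVAS)

def sample_character(renderer):
    """Symbolic half: a simple probabilistic program over character structure."""
    n_strokes = int(torch.randint(1, 4, (1,)))               # how many parts
    canvas = torch.zeros(CANVAS, CANVAS)
    for _ in range(n_strokes):
        stroke_id = torch.randint(0, N_STROKE_TYPES, (1,))   # which part type
        offset = torch.rand(1, 2)                            # where to place it
        part = renderer(stroke_id, offset)
        canvas = torch.maximum(canvas, part)                 # compose parts on canvas
    return canvas

renderer = StrokeRenderer()  # in practice, fit to handwritten character data
print(sample_character(renderer).shape)  # torch.Size([28, 28])
```

The division of labor is the point: the discrete, compositional choices live in the symbolic program, while the neural module absorbs the low-level statistical variability of how parts look, mirroring the synthesis of the two traditions described above.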