Lifted Relational Neural Networks: Efficient Learning of Latent Relational Structures

Journal of Artificial Intelligence Research

We propose a method to combine the interpretability and expressive power of first-order logic with the effectiveness of neural network learning. In particular, we introduce a lifted framework in which first-order rules are used to describe the structure of a given problem setting. These rules are then used as a template for constructing a number of neural networks, one for each training and testing example. As the different networks corresponding to different examples share their weights, these weights can be efficiently learned using stochastic gradient descent. Our framework provides a flexible way to implement and combine a wide variety of modelling constructs.
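As a concrete illustration of the template idea, here is a minimal sketch in Python/PyTorch under our own assumptions: each template rule owns one shared weight, each example grounds a subset of the rules into its own small network, and stochastic gradient descent updates the weights shared by all examples. The class name, the max aggregation, and the toy data are hypothetical illustrations, not the authors' actual framework.

```python
# Minimal sketch (our assumptions, not the paper's API): one shared
# weight per template rule; each example unrolls the rules it grounds
# into its own tiny network, reusing the shared weights.
import torch
import torch.nn as nn

class SharedRuleWeights(nn.Module):
    """One learnable weight per template rule, shared by every example."""
    def __init__(self, num_rules):
        super().__init__()
        self.w = nn.Parameter(torch.randn(num_rules))

    def forward(self, ground_rule_ids):
        # ground_rule_ids: indices of the template rules instantiated by
        # this particular example; each grounding reuses the shared weight.
        activations = torch.sigmoid(self.w[ground_rule_ids])
        # Aggregate the groundings into one prediction (max is one common
        # choice, acting as an "exists" aggregation).
        return activations.max()

shared = SharedRuleWeights(num_rules=5)
opt = torch.optim.SGD(shared.parameters(), lr=0.1)

# Two examples that ground different subsets of the same rule set:
examples = [(torch.tensor([0, 2, 3]), 1.0), (torch.tensor([1, 4]), 0.0)]
for ground_ids, label in examples:
    opt.zero_grad()
    pred = shared(ground_ids)
    loss = (pred - label) ** 2
    loss.backward()
    opt.step()  # gradients flow into the weights shared by all examples
```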


Deep Learning Neural Networks Simplified

#artificialintelligence

Deep learning is not as complex a concept as people outside science often assume. Scientific progress has reached a stage where much exploratory and applied research depends on the assistance of artificial intelligence. Because machines can be fed a particular set of algorithms to understand and react to various tasks within seconds, working with them broadens the scope for scientific breakthroughs, leading to techniques and procedures that make human life simpler and richer. For machines to be useful in this way, however, they must be able to understand and recognize things much as the human brain does. For example, we recognize an apple through its shape and colour.


A History of Deep Learning - Import.io

#artificialintelligence

These days, you hear a lot about machine learning (or ML) and artificial intelligence (or AI) – both good and bad, depending on your source. Many of us immediately conjure up images of HAL from 2001: A Space Odyssey, the Terminator cyborgs, C-3PO, or Samantha from Her when the subject turns to AI. And many may not even be familiar with machine learning as a separate subject. The phrases are often tossed around interchangeably, but they're not exactly the same thing. In the most general sense, machine learning has evolved from AI. In the Google Trends graph above, you can see that AI was the more popular search term until machine learning passed it for good around September 2015.


End-to-End Kernel Learning with Supervised Convolutional Kernel Networks

Neural Information Processing Systems

In this paper, we introduce a new image representation based on a multilayer kernel machine. Unlike traditional kernel methods where data representation is decoupled from the prediction task, we learn how to shape the kernel with supervision. We proceed by first proposing improvements of the recently-introduced convolutional kernel networks (CKNs) in the context of unsupervised learning; then, we derive backpropagation rules to take advantage of labeled training data. The resulting model is a new type of convolutional neural network, where optimizing the filters at each layer is equivalent to learning a linear subspace in a reproducing kernel Hilbert space (RKHS). We show that our method achieves reasonably competitive performance for image classification on standard "deep learning" datasets such as CIFAR-10 and SVHN, and also for image super-resolution, demonstrating the applicability of our approach to a wide variety of image-related tasks.
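For intuition, here is a minimal PyTorch sketch of the supervised end-to-end training idea: convolutional filters optimized by backpropagation under a classification loss. The real CKN layer additionally involves a kernel approximation and a subspace projection in the RKHS, which this toy version omits; all layer sizes and names below are illustrative assumptions, not the paper's architecture.

```python
# Toy stand-in for a supervised CKN-style layer: filters are trained
# end-to-end by backprop, so supervision shapes the learned features.
import torch
import torch.nn as nn

class ToyCKNLikeLayer(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Filters play the role of anchor points spanning a subspace.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.pool = nn.AvgPool2d(2)  # CKNs use (Gaussian) pooling

    def forward(self, x):
        # A smooth nonlinearity standing in for the kernel mapping.
        return self.pool(torch.tanh(self.conv(x)))

model = nn.Sequential(ToyCKNLikeLayer(3, 16), nn.Flatten(),
                      nn.Linear(16 * 16 * 16, 10))  # CIFAR-10-sized input
x = torch.randn(4, 3, 32, 32)              # batch of 32x32 RGB images
logits = model(x)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (4,)))
loss.backward()  # supervision flows into the filters, i.e. the subspace
```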


Network-size independent covering number bounds for deep networks

arXiv.org Machine Learning

We give a covering number bound for deep networks that is independent of the size of the network. The key to the simple analysis is that, for linear classifiers, rotating the data does not affect the covering number. We can therefore ignore the rotation part of each layer's linear transformation and obtain the covering number bound by concentrating on the scaling part.
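To make the argument concrete, here is a hedged sketch of the rotation-invariance step in LaTeX; the SVD factorization and the notation ($W_i$, $U_i$, $\Sigma_i$, $V_i$, $\mathcal{N}$) are our own formalization of the abstract's "rotation part" and "scaling part", not necessarily the paper's.

```latex
% Sketch only: formalizing "rotation" and "scaling" via the SVD.
% Notation is ours, not necessarily the paper's.
\[
  W_i \;=\; U_i \,\Sigma_i\, V_i^{\top},
  \qquad U_i,\ V_i \ \text{orthogonal (rotations)},\quad
  \Sigma_i \ \text{diagonal (scaling)}.
\]
% Orthogonal maps preserve distances, so for any set $S$ and scale
% $\epsilon$, the covering number $\mathcal{N}$ is unchanged:
\[
  \mathcal{N}(U_i S,\ \epsilon) \;=\; \mathcal{N}(S,\ \epsilon).
\]
% Hence a bound on the covering number of each layer's image may
% discard $U_i$ and $V_i$ and depend only on the scaling $\Sigma_i$
% (its singular values), independently of the layer's width.
```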