Lifted Relational Neural Networks: Efficient Learning of Latent Relational Structures

Journal of Artificial Intelligence Research

We propose a method to combine the interpretability and expressive power of first-order logic with the effectiveness of neural network learning. In particular, we introduce a lifted framework in which first-order rules are used to describe the structure of a given problem setting. These rules are then used as a template for constructing a number of neural networks, one for each training and testing example. As the different networks corresponding to different examples share their weights, these weights can be efficiently learned using stochastic gradient descent. Our framework provides a flexible way of implementing and combining a wide variety of modelling constructs.
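The abstract only sketches the lifted construction at a high level. The following is a minimal toy sketch in PyTorch of the core idea, shared rule weights reused to build one small ground network per example and trained jointly with SGD; the relation names, rule groundings, and feature vectors are illustrative assumptions, not the authors' implementation or data.

```python
# Minimal sketch (not the authors' code) of the "lifted" idea from the abstract:
# a shared set of rule weights is reused to build a small ground network for
# each example, and the shared weights are trained with stochastic gradient
# descent. The two hypothetical relations and the random features are assumptions.
import torch

torch.manual_seed(0)

# Shared "template" weights: one weight vector per first-order rule.
rule_weights = torch.nn.ParameterDict({
    "friends_smokes": torch.nn.Parameter(torch.randn(4)),
    "smokes_cancer":  torch.nn.Parameter(torch.randn(4)),
})

def ground_network(example, weights):
    """Build the example-specific network: every grounding of a rule
    contributes one unit that reuses that rule's shared weights."""
    score = torch.tensor(0.0)
    for rule, groundings in example["groundings"].items():
        for features in groundings:                      # one unit per grounding
            score = score + torch.sigmoid(weights[rule] @ features)
    return torch.sigmoid(score)

# Toy training set: each example lists feature vectors for its rule groundings.
examples = [
    {"groundings": {"friends_smokes": [torch.randn(4), torch.randn(4)],
                    "smokes_cancer":  [torch.randn(4)]},
     "label": 1.0},
    {"groundings": {"friends_smokes": [torch.randn(4)],
                    "smokes_cancer":  [torch.randn(4), torch.randn(4)]},
     "label": 0.0},
]

opt = torch.optim.SGD(rule_weights.parameters(), lr=0.1)
loss_fn = torch.nn.BCELoss()

for epoch in range(100):
    for ex in examples:                 # one ground network per example,
        opt.zero_grad()                 # all of them sharing rule_weights
        pred = ground_network(ex, rule_weights)
        loss = loss_fn(pred, torch.tensor(ex["label"]))
        loss.backward()
        opt.step()
```

Because every per-example network is built from the same parameter dictionary, gradients from all examples accumulate into the same shared weights, which is what makes the per-example construction cheap to train.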



A History of Deep Learning - Import.io

#artificialintelligence

These days, you hear a lot about machine learning (or ML) and artificial intelligence (or AI) – both good and bad, depending on your source. Many of us immediately conjure up images of HAL from 2001: A Space Odyssey, the Terminator cyborgs, C-3PO, or Samantha from Her when the subject turns to AI. And many may not even be familiar with machine learning as a separate subject. The phrases are often tossed around interchangeably, but they're not exactly the same thing. In the most general sense, machine learning has evolved from AI. In the Google Trends graph above, you can see that AI was the more popular search term until machine learning passed it for good around September 2015.


Top 20 Deep Learning Papers, 2018 Edition

@machinelearnbot

Deep Learning, one of the subfields of Machine Learning and Statistical Learning, has been advancing at an impressive pace in recent years. Cloud computing, robust open-source tools, and vast amounts of available data have been some of the levers behind these breakthroughs. The criterion used to select the top 20 papers is citation count from academic.microsoft.com. It is important to mention that these metrics change rapidly, so the citation counts should be read as the numbers at the time this article was published. More than 75% of the papers in this list refer to deep learning and neural networks, specifically Convolutional Neural Networks (CNNs).


Neurocomputing

#artificialintelligence

Neural networks (NNs) and deep learning (DL) currently provide the best solutions to many problems in image recognition, speech recognition, natural language processing, control, and precision health. NNs and DL bring artificial intelligence (AI) much closer to human modes of thinking. However, many open problems related to DL in NNs remain, e.g. convergence, learning efficiency, optimality, multi-dimensional learning, and on-line adaptation. Addressing them requires new algorithms and analysis methods. Practical applications both require and stimulate this development.