Lifted Relational Neural Networks: Efficient Learning of Latent Relational Structures

Journal of Artificial Intelligence Research

We propose a method to combine the interpretability and expressive power of first-order logic with the effectiveness of neural network learning. In particular, we introduce a lifted framework in which first-order rules are used to describe the structure of a given problem setting. These rules are then used as a template for constructing a number of neural networks, one for each training and testing example. As the different networks corresponding to different examples share their weights, these weights can be efficiently learned using stochastic gradient descent. Our framework provides a flexible way to implement and combine a wide variety of modelling constructs.
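To make the weight-sharing idea concrete, here is a minimal sketch (not the paper's implementation) of a lifted template whose shared parameters are reused to build a separate small network for every example. The toy task, the `LiftedTemplate` class, and the max-aggregation over rule groundings are all illustrative assumptions; PyTorch is used only as a convenient autodiff backend.

```python
# Hypothetical toy setting: predict a label for a person from a variable-size
# set of neighbour features, mimicking a rule like label(X) :- link(X, Y), feature(Y).
import torch
import torch.nn as nn

class LiftedTemplate(nn.Module):
    """One set of weights, reused by every example-specific ground network."""
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.body = nn.Linear(feat_dim, hidden_dim)   # weights of the rule body
        self.head = nn.Linear(hidden_dim, 1)          # weights of the rule head

    def forward(self, neighbour_feats):
        # neighbour_feats: (num_neighbours, feat_dim). The number of neighbours
        # differs between examples, so each example unrolls into a different
        # network, but self.body and self.head are shared across all of them.
        grounded = torch.sigmoid(self.body(neighbour_feats))  # one unit per ground body
        aggregated = grounded.max(dim=0).values               # aggregate over groundings
        return torch.sigmoid(self.head(aggregated))

# Toy examples with different numbers of neighbours (made-up data).
examples = [
    (torch.randn(3, 4), torch.tensor([1.0])),
    (torch.randn(5, 4), torch.tensor([0.0])),
    (torch.randn(2, 4), torch.tensor([1.0])),
]

template = LiftedTemplate(feat_dim=4, hidden_dim=8)
opt = torch.optim.SGD(template.parameters(), lr=0.1)
loss_fn = nn.BCELoss()

for epoch in range(20):
    for feats, target in examples:   # stochastic gradient descent over examples
        opt.zero_grad()
        loss = loss_fn(template(feats), target)
        loss.backward()              # gradients flow into the shared weights
        opt.step()
```

Because every example-specific network is built from the same template parameters, a gradient step on any single example updates the one shared weight set, which is what makes the lifted approach efficient to train.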



Deep Learning Neural Networks Simplified

#artificialintelligence

Deep learning is not as complex a concept as people outside science often assume. Scientific progress has reached a stage where much exploratory and applied research needs the assistance of artificial intelligence. Since machines are fed particular sets of algorithms that let them understand and react to tasks within seconds, working with them broadens the scope for scientific breakthroughs, leading to techniques and procedures that make human life simpler and richer. However, to work alongside us, machines need to understand and recognize things much the way the human brain does. For example, we may recognize an apple through its shape and colour.


A History of Deep Learning - Import.io

#artificialintelligence

These days, you hear a lot about machine learning (or ML) and artificial intelligence (or AI), both good and bad depending on your source. Many of us immediately conjure up images of HAL from 2001: A Space Odyssey, the Terminator cyborgs, C-3PO, or Samantha from Her when the subject turns to AI. And many may not even be familiar with machine learning as a separate subject. The phrases are often tossed around interchangeably, but they're not exactly the same thing. In the most general sense, machine learning has evolved from AI. In the Google Trends graph above, you can see that AI was the more popular search term until machine learning passed it for good around September 2015.


Narodytska

AAAI Conferences

Understanding properties of deep neural networks is an important challenge in deep learning. In this paper, we take a step in this direction by proposing a rigorous way of verifying properties of a popular class of neural networks, Binarized Neural Networks, using the well-developed means of Boolean satisfiability. Our main contribution is a construction that creates a representation of a binarized neural network as a Boolean formula. Our encoding is the first exact Boolean representation of a deep neural network. Using this encoding, we leverage the power of modern SAT solvers along with a proposed counterexample-guided search procedure to verify various properties of these networks. A particular focus will be on the critical property of robustness to adversarial perturbations. For this property, our experimental results demonstrate that our approach scales to medium-size deep neural networks used in image classification tasks. To the best of our knowledge, this is the first work on verifying properties of deep neural networks using an exact Boolean encoding of the network.
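As a rough illustration of how a binarized neuron reduces to Boolean reasoning, the sketch below encodes a single neuron with {-1,+1} weights and inputs as a cardinality constraint and asks a SAT solver whether any input drives the output to -1. The specific weights, the pysat/Glucose3 dependency, and the single-neuron scope are assumptions for illustration; the paper's full encoding and its counterexample-guided search cover whole networks and are not reproduced here.

```python
import math
from pysat.card import CardEnc, EncType
from pysat.solvers import Glucose3

# Toy binarized neuron: weights in {-1,+1}, bias b, inputs in {-1,+1}.
weights = [+1, -1, +1]
bias = 0
n = len(weights)

# Boolean variable i+1 is True  <=>  input x_i = +1.
in_vars = list(range(1, n + 1))

# Literal l_i is True exactly when w_i * x_i = +1.
lits = [v if w > 0 else -v for v, w in zip(in_vars, weights)]

# With m literals true, sum_i w_i x_i = 2m - n, so the neuron outputs +1
# (taking sign(0) = +1) iff m >= ceil((n - bias) / 2).
k = math.ceil((n - bias) / 2)

# Property to check: "the neuron outputs +1 on every input".  A counterexample
# is an assignment where fewer than k literals hold, i.e. an atmost-(k-1) constraint.
cnf = CardEnc.atmost(lits=lits, bound=k - 1, top_id=n, encoding=EncType.seqcounter)

with Glucose3(bootstrap_with=cnf.clauses) as solver:
    if solver.solve():
        model = solver.get_model()
        counterexample = [+1 if v in model else -1 for v in in_vars]
        print("Property violated; input driving the output to -1:", counterexample)
    else:
        print("UNSAT: the neuron outputs +1 on every input.")
```

In the paper's setting, such per-neuron constraints are composed into one formula for the whole network, and robustness queries (e.g. "no input within a small perturbation of x changes the prediction") become satisfiability checks over that formula.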