Differential equations as models of deep neural networks

arXiv.org Machine Learning

In this work we systematically analyze general properties of differential equations used as machine learning models. We demonstrate that the gradient of the loss function with respect to the hidden state can be considered as a generalized momentum conjugate to the hidden state, allowing application of the tools of classical mechanics. In addition, we show that not only residual networks, but also feedforward neural networks with small nonlinearities and weight matrices that deviate only slightly from identity matrices, can be related to differential equations. We propose a differential equation describing such networks and investigate its properties.

1 Introduction. Deep learning is a form of machine learning that uses neural networks with many hidden layers [1, 2]. Deep learning models have dramatically improved speech recognition, visual object recognition, object detection and many other domains [2]. Since the number of layers in deep neural networks has become large, it is possible to consider the layer number as a continuous variable [3] and represent the neural network by a differential equation. The connection between neural networks and differential equations first appeared with an additive model for continuous-time recurrent neural networks, described by the differential equation [4] $\tau_i \frac{dx_i}{dt} = -x_i + \sum_{j=1}^{n} w_{j,i}\,\sigma(x_j - \theta_j) + I_i(t)$. Hopfield's work [6] pioneered analog computation with continuous-time recurrent neural networks, in place of digital computation using complex numerical algorithms on a digital computer. A Hopfield network has a quadratic form as a Lyapunov function for the activity dynamics. As a consequence, the state of the network evolves to a final state that is a minimum of the Lyapunov function when started from any initial state [7].
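As a minimal illustration of the ODE view quoted above (this is not code from the paper), the sketch below integrates the continuous-time RNN equation with forward Euler; each Euler step x ← x + h·f(x) has the same form as a residual-network update. The names ctrnn_rhs, euler_trajectory, W, theta, tau and I are hypothetical.

```python
# Minimal sketch (not from the paper): forward-Euler integration of the
# continuous-time RNN equation
#     tau_i dx_i/dt = -x_i + sum_j w_{j,i} sigma(x_j - theta_j) + I_i(t),
# showing that each Euler step is a residual-style update x <- x + h * f(x).
import numpy as np

def ctrnn_rhs(x, W, theta, tau, I):
    """Right-hand side f(x) of the continuous-time RNN ODE (constant input I)."""
    sigma = np.tanh                                  # a smooth nonlinearity
    return (-x + W.T @ sigma(x - theta) + I) / tau

def euler_trajectory(x0, W, theta, tau, I, h=0.1, steps=100):
    """Integrate the ODE with step size h; every step has residual form."""
    x = x0.copy()
    for _ in range(steps):
        x = x + h * ctrnn_rhs(x, W, theta, tau, I)   # cf. a residual block
    return x

# Toy usage with random, purely illustrative parameters.
rng = np.random.default_rng(0)
n = 4
x_T = euler_trajectory(rng.normal(size=n),
                       0.1 * rng.normal(size=(n, n)),
                       np.zeros(n), np.ones(n), np.zeros(n))
```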


The Complexity of Integer Bound Propagation

Journal of Artificial Intelligence Research

Bound propagation is an important Artificial Intelligence technique used in Constraint Programming tools to deal with numerical constraints. It is typically embedded within a search procedure (branch and prune) and used at every node of the search tree to narrow down the search space, so it is critical that it be fast. The procedure invokes constraint propagators until a common fixpoint is reached, but the known algorithms for this have a pseudo-polynomial worst-case time complexity: they are indeed fast when the variables have a small numerical range, but they have the well-known problem of being prohibitively slow when these ranges are large. An important question is therefore whether strongly polynomial algorithms exist that compute the common bound-consistent fixpoint of a set of constraints.
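As a rough illustration of the fixpoint procedure described above (not code from the article), the sketch below repeatedly invokes an interval propagator for constraints of the form x = y + z until no bound changes; the number of passes can grow with the size of the numerical ranges, which is the pseudo-polynomial behaviour the abstract refers to. The names prop_sum, fixpoint and Domains are hypothetical.

```python
# Minimal sketch (illustrative only): a fixpoint loop over interval
# propagators for constraints of the form x = y + z.
from typing import Dict, List, Tuple

Domains = Dict[str, Tuple[int, int]]          # variable -> (lower, upper)

def prop_sum(x: str, y: str, z: str, d: Domains) -> bool:
    """Narrow the bounds for x = y + z; return True if any bound changed."""
    (xl, xu), (yl, yu), (zl, zu) = d[x], d[y], d[z]
    new = {
        x: (max(xl, yl + zl), min(xu, yu + zu)),
        y: (max(yl, xl - zu), min(yu, xu - zl)),
        z: (max(zl, xl - yu), min(zu, xu - yl)),
    }
    changed = any(new[v] != d[v] for v in new)
    d.update(new)
    return changed

def fixpoint(constraints: List[Tuple[str, str, str]], d: Domains) -> Domains:
    """Invoke the propagators until a common fixpoint is reached.
    Worst-case running time grows with the numerical ranges."""
    changed = True
    while changed:
        changed = any(prop_sum(x, y, z, d) for (x, y, z) in constraints)
    return d

# Example: x = y + z with x in [0, 10], y in [4, 9], z in [3, 8].
print(fixpoint([("x", "y", "z")], {"x": (0, 10), "y": (4, 9), "z": (3, 8)}))
```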


Understanding Backpropagation in Neural Networks

#artificialintelligence

Let me start with a question: when a newborn baby is born, is it able to think and recognize things on day one? The answer is no, because the baby has to go through a training process every second that teaches it who its mother, father, brothers and sisters are. Once this training is complete, the connections between the neurons become so strong that the baby easily starts recognizing its family members. But what happens if someone shows the baby a face that resembles a familiar one, such as the mother's sister, who is not the mother but looks very much like her? The baby tries to relate the new image to its older images of the mother and figures out that this is not its mother, even though she looks exactly like her.


Network Learning with Local Propagation

arXiv.org Artificial Intelligence

This paper presents locally decoupled network parameter learning with local propagation. Three elements are taken into account: (i) sets of nonlinear transforms that describe the representations at all nodes, (ii) a local objective at each node related to the corresponding local representation goal, and (iii) a local propagation model that relates the nonlinear error vectors at each node to the goal error vectors from the directly connected nodes. The modeling concepts (i), (ii) and (iii) offer several advantages, including (a) a unified learning principle for any network that can be represented as a graph, (b) understanding and interpretation of the local and global learning dynamics, (c) decoupled and parallel parameter learning, and (d) the possibility of learning in infinitely long, multi-path and multi-goal networks. Numerical experiments validate the potential of the learning principle. The preliminary results show advantages over state-of-the-art methods with respect to learning time and network size, while achieving comparable recognition accuracy.
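Purely as a toy illustration of the flavour of elements (i)-(iii) above, the sketch below shows per-node updates driven by an error vector handed over by the directly connected node. This is not the paper's model: the update rule here is just a gradient-style step, and all names are assumptions.

```python
# Toy sketch (not the paper's algorithm): each node applies its own nonlinear
# transform, keeps its own weights, and updates them from a local error built
# out of the error vector received from the directly connected node.
import numpy as np

def f(W, x):
    return np.tanh(W @ x)                        # (i) nonlinear transform at a node

def local_step(W, x, e_goal, lr=0.05):
    """(ii) a local objective at the node, (iii) driven by the goal error
    e_goal coming from the connected node. Returns the updated weights and
    the error vector to hand to the previous node."""
    y = f(W, x)
    delta = e_goal * (1.0 - y ** 2)              # local nonlinear error vector
    W_new = W - lr * np.outer(delta, x)          # decoupled parameter update
    e_back = W.T @ delta                         # error passed to the neighbour
    return W_new, e_back

# Two-node chain, one training example, a target at the last node.
rng = np.random.default_rng(0)
W1, W2 = 0.1 * rng.normal(size=(3, 4)), 0.1 * rng.normal(size=(2, 3))
x, target = rng.normal(size=4), rng.normal(size=2)

h = f(W1, x)
e2 = f(W2, h) - target                           # goal error at the output node
W2, e1 = local_step(W2, h, e2)                   # each node uses only local info
W1, _  = local_step(W1, x, e1)
```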


A Graph Convolutional Network Composition Framework for Semi-supervised Classification

arXiv.org Machine Learning

Graph convolutional networks (GCNs) have gained popularity due to the high performance achievable on several downstream tasks, including node classification. Several architectural variants of these networks have been proposed and investigated in experimental studies in the literature. Motivated by recent work on simplifying GCNs, we study the problem of designing other variants and propose a framework to compose networks from GCN building blocks. The framework offers the flexibility to compose and evaluate different networks using feature and/or label propagation networks and linear or nonlinear networks, with each composition having a different computational complexity. We conduct a detailed experimental study on several benchmark datasets with many variants and present observations from our evaluation. Our empirical results suggest that several newly composed variants are useful alternatives to consider, as they are as competitive as, or better than, the original GCN.
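As a minimal sketch of composing a model from such building blocks (this is not the paper's framework, and all names are illustrative), one can pair a feature-propagation block built from the normalized adjacency with either a linear or a nonlinear classification head, in the spirit of the simplified-GCN line of work the abstract mentions.

```python
# Minimal sketch (illustrative only): two GCN-style building blocks, a feature
# propagation step with the normalized adjacency S and a linear or nonlinear
# classification head, composed into one model.
import numpy as np

def normalized_adjacency(A):
    """S = D^{-1/2} (A + I) D^{-1/2}, the usual GCN propagation matrix."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate(S, X, k=2):
    """Feature-propagation block: apply S to the node features k times."""
    for _ in range(k):
        X = S @ X
    return X

def linear_head(X, W):
    """Linear block (as in simplified GCNs): logits = X W."""
    return X @ W

def nonlinear_head(X, W1, W2):
    """Nonlinear block: one hidden ReLU layer before the output weights."""
    return np.maximum(X @ W1, 0.0) @ W2

# Example composition: propagate features twice, then apply a linear head.
rng = np.random.default_rng(0)
A = np.triu((rng.random((5, 5)) < 0.4).astype(float), 1); A = A + A.T
X, W = rng.normal(size=(5, 8)), rng.normal(size=(8, 3))
logits = linear_head(propagate(normalized_adjacency(A), X, k=2), W)
```

Other orderings, such as label propagation applied after a linear model, fit the same plug-together pattern with different computational cost.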