Input-Output Equivalence of Unitary and Contractive RNNs

Neural Information Processing Systems

Unitary recurrent neural networks (URNNs) have been proposed as a method to overcome the vanishing and exploding gradient problems in modeling data with long-term dependencies. A basic question is how restrictive the unitary constraint is on the possible input-output mappings of such a network. This work shows that for any contractive RNN with ReLU activations, there is a URNN with at most twice the number of hidden states and the identical input-output mapping. Hence, with ReLU activations, URNNs are as expressive as general RNNs. In contrast, for certain smooth activations, it is shown that the input-output mapping of an RNN cannot be matched by a URNN, even with an arbitrary number of states.
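The core linear-algebra step behind such a construction can be illustrated directly. The sketch below (a minimal illustration, not the paper's full construction, which must also account for the ReLU nonlinearity, inputs, and biases) builds the classical unitary dilation of a contractive transition matrix A: a 2n x 2n orthogonal matrix whose top-left block is A, which is why doubling the hidden dimension suffices. All variable names are ours.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
A *= 0.9 / np.linalg.norm(A, 2)  # rescale so the spectral norm is < 1 (contractive)

# Defect operators (I - A A^T)^{1/2} and (I - A^T A)^{1/2}
D_left = sqrtm(np.eye(n) - A @ A.T).real
D_right = sqrtm(np.eye(n) - A.T @ A).real

# Halmos dilation: a 2n x 2n matrix with A as its top-left block
U = np.block([[A, D_left], [D_right, -A.T]])

print(np.allclose(U @ U.T, np.eye(2 * n), atol=1e-8))  # True: U is orthogonal (unitary)
```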


What is Machine Learning?

#artificialintelligence

Machine learning is not an exact science; it encompasses a broad range of tools, techniques, and ideas. Here are the most common types of machine learning techniques and algorithms, along with a brief summary of how each can be used to solve problems. Some of the simplest tasks fall under supervised learning. For example, a handwriting recognition algorithm would typically be framed as a supervised learning task.
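To make the handwriting example concrete, here is a minimal supervised-learning sketch using scikit-learn's bundled 8x8 digit images; the choice of logistic regression is just one convenient option among many.

```python
# Supervised learning on handwritten digits: fit on labeled examples,
# then measure accuracy on held-out data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 digit images, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```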


Multivariate Conditional Outlier Detection: Identifying Unusual Input-Output Associations in Data

AAAI Conferences

We study multivariate conditional outlier detection, a special type of the conditional outlier detection problem, where data instances consist of continuous input (context) and binary output (response) vectors. We present a novel outlier detection framework that identifies abnormal input-output associations in data using a decomposable conditional probabilistic model. Since the components of this model can vary in quality, we combine them with the help of weights reflecting their reliability in the assessment of outliers. We propose two ways of calculating the component weights: a global one that relies on all of the data, and a local one that relies only on the instances similar to the target instance. Experimental results on data from various domains demonstrate the ability of our framework to successfully identify multivariate conditional outliers.
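A schematic sketch of the decomposable idea (not the authors' exact model): one probabilistic component per binary response scores how surprising that response is given the input, and the per-component scores are combined with reliability weights. The function name and the logistic-regression components below are our assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def conditional_outlier_scores(X, Y, w=None):
    """Outlier score = weighted sum over responses of -log P(y_j | x)."""
    n, d_out = Y.shape
    w = np.ones(d_out) if w is None else w
    scores = np.zeros(n)
    for j in range(d_out):
        clf = LogisticRegression(max_iter=1000).fit(X, Y[:, j])
        proba = clf.predict_proba(X)
        idx = np.searchsorted(clf.classes_, Y[:, j])  # column of the observed label
        p = np.clip(proba[np.arange(n), idx], 1e-12, 1.0)
        scores += w[j] * -np.log(p)
    return scores  # larger = more unusual input-output association

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
Y = (X @ rng.standard_normal((3, 2)) + 0.1 * rng.standard_normal((200, 2)) > 0).astype(int)
print(conditional_outlier_scores(X, Y)[:5])
```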


Controllability, Multiplexing, and Transfer Learning in Networks using Evolutionary Learning

arXiv.org Artificial Intelligence

Networks are fundamental building blocks for representing data and computation. Remarkable progress in learning in structurally defined (shallow or deep) networks has recently been achieved. Here we introduce an evolutionary exploratory search and learning method for topologically flexible networks under the constraint of producing elementary computational steady-state input-output operations. Our results include: (1) the identification of networks, spanning four orders of magnitude in size, that implement computation of steady-state input-output functions such as a band-pass filter, a threshold function, and an inverse band-pass function. Next, (2) the learned networks are technically controllable, as only a small number of driver nodes are required to move the system to a new state. Furthermore, we find that the fraction of required driver nodes remains constant during evolutionary learning, suggesting a stable system design. (3) Our framework allows multiplexing of different computations using the same network; for example, using a binary representation of the inputs, the network can readily compute three different input-output functions. Finally, (4) the proposed evolutionary learning demonstrates transfer learning: if the system learns one function A, then learning B requires fewer steps on average than learning B from tabula rasa. We conclude that the constrained evolutionary learning produces large, robust, controllable circuits capable of multiplexing and transfer learning. Our study suggests that network-based computation of steady-state functions, representing either cellular modules of cell-to-cell communication networks or internal molecular circuits communicating within a cell, could be a powerful model for biologically inspired computing. This complements conceptualizations such as attractor-based models or reservoir computing.
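As a toy illustration of this recipe (not the paper's algorithm or network model), the sketch below runs a (1+1) evolutionary search over the weights of a small recurrent network, keeping mutations that better match a band-pass-like target steady-state input-output function. All parameters and node assignments are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # network size

def steady_state_output(W, b, u, steps=100):
    """Iterate x <- tanh(W x + b + u) and read the (approximate) steady state."""
    x = np.zeros(n)
    for _ in range(steps):
        x = np.tanh(W @ x + b + u)
    return x[0]  # output read from node 0

inputs = np.linspace(-2.0, 2.0, 11)
target = np.exp(-inputs**2)  # band-pass-like target: strong response mid-range

def loss(W, b):
    out = []
    for u in inputs:
        drive = np.zeros(n)
        drive[1] = u  # input fed to node 1
        out.append(steady_state_output(W, b, drive))
    return float(np.mean((np.array(out) - target) ** 2))

W, b = 0.1 * rng.standard_normal((n, n)), np.zeros(n)
best = loss(W, b)
for _ in range(500):
    W_new = W + 0.05 * rng.standard_normal((n, n))  # mutate
    b_new = b + 0.05 * rng.standard_normal(n)
    l_new = loss(W_new, b_new)
    if l_new < best:                                 # select
        W, b, best = W_new, b_new, l_new
print(f"final loss: {best:.4f}")
```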


Safe Predictors for Enforcing Input-Output Specifications

arXiv.org Machine Learning

We present an approach for designing correct-by-construction neural networks (and other machine learning models) that are guaranteed to be consistent with a collection of input-output specifications before, during, and after training. Our method involves designing a constrained predictor for each set of compatible constraints and combining them safely via a convex combination of their predictions. We demonstrate our approach on synthetic datasets and an aircraft collision avoidance problem.
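The convexity argument behind the combination step can be checked in a few lines: if every constrained predictor satisfies an interval-valued output specification, then any convex combination of their predictions does too. The toy predictors below are our own illustrations, not the paper's models.

```python
import numpy as np

def predictor_a(x):  # toy constrained predictor: output clipped into [0, 1]
    return np.clip(0.8 * x + 0.1, 0.0, 1.0)

def predictor_b(x):  # a second predictor obeying the same [0, 1] spec
    return np.clip(np.sin(x) ** 2, 0.0, 1.0)

def safe_predictor(x, w=0.6):
    """Convex combination (0 <= w <= 1) preserves the [0, 1] output spec."""
    return w * predictor_a(x) + (1.0 - w) * predictor_b(x)

x = np.linspace(-3.0, 3.0, 7)
y = safe_predictor(x)
assert np.all((0.0 <= y) & (y <= 1.0))  # the spec holds by convexity
print(y)
```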