Target Output

#artificialintelligence

A target output is the true output, or label, for a given input in a dataset. The function that maps each input to its correct label is called the target function, so the underlying goal of many machine learning methods is to produce a function that matches the target function as closely as possible without giving up generalizability. The target output can be compared against a model's predictions to determine its accuracy. Consider a neural network that classifies images: its accuracy is the fraction of images for which the predicted label matches the target label.
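
As a minimal sketch of that comparison (the label arrays below are hypothetical), accuracy is just the fraction of predictions that match the target outputs:

    import numpy as np

    # Target outputs: the true class labels for a batch of five images.
    targets = np.array([2, 0, 1, 1, 2])

    # The model's predicted labels for the same batch.
    predictions = np.array([2, 0, 1, 2, 2])

    # Accuracy is the fraction of predictions matching the target output.
    accuracy = np.mean(predictions == targets)
    print(f"Accuracy: {accuracy:.2f}")  # prints 0.80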


5 Essential Neural Network Algorithms

#artificialintelligence

Data scientists use many different algorithms to train neural networks, and there are many variations of each. In this article, I will outline five algorithms that will give you a rounded understanding of how neural networks operate. I will start with an overview of how a neural network works, noting at what stage each algorithm is used. Neural networks are loosely modeled on the human brain: they are made up of artificial neurons, each of which takes in multiple inputs and produces a single output.
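
As a rough sketch of that picture (the weights, bias, and sigmoid activation below are arbitrary illustrative choices, not ones the article prescribes), a single artificial neuron reduces several inputs to one output:

    import numpy as np

    def neuron(inputs, weights, bias):
        # Weighted sum of the inputs plus a bias, squashed by a sigmoid
        # activation into a single output value in (0, 1).
        z = np.dot(weights, inputs) + bias
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical values: three inputs feeding one neuron.
    x = np.array([0.5, -1.2, 3.0])
    w = np.array([0.4, 0.1, -0.6])
    print(neuron(x, w, bias=0.2))  # a single scalar output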


Deep Learning for Sampling from Arbitrary Probability Distributions

arXiv.org Machine Learning

This paper proposes a fully connected neural network model to map samples from a uniform distribution to samples of any explicitly known probability density function. During training, the Jensen-Shannon divergence between the distribution of the model's output and the target distribution is minimized. We experimentally demonstrate that our model converges towards the desired state. It provides an alternative to existing sampling methods such as inversion sampling, rejection sampling, Gaussian mixture models and Markov chain Monte Carlo. Our model has high sampling efficiency and is easily applied to any probability distribution, without the need for further analytical or numerical calculations. It can produce correlated samples, such that the output distribution converges faster towards the target than it would for independent samples. It is also able to produce independent samples if single values are fed into the network and the input values are themselves independent. We focus on one-dimensional sampling, but additionally illustrate a two-dimensional example with a target distribution of dependent variables.
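
A highly simplified PyTorch sketch of the idea, not the authors' implementation (the network size, kernel bandwidth, grid, and standard-normal target are all assumptions): uniform samples are pushed through a fully connected network, a differentiable kernel density estimate of the outputs is compared to the target density on a grid, and the Jensen-Shannon divergence between the two is minimized:

    import torch

    # Map Uniform(0, 1) samples to an (assumed) standard normal target.
    model = torch.nn.Sequential(
        torch.nn.Linear(1, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    grid = torch.linspace(-4.0, 4.0, 81)       # evaluation grid
    target = torch.exp(-grid ** 2 / 2)         # unnormalized N(0, 1) pdf
    target = target / target.sum()             # discretized target distribution

    def soft_histogram(samples, grid, bandwidth=0.1):
        # Differentiable density estimate: a Gaussian kernel per grid point.
        d = samples.view(-1, 1) - grid.view(1, -1)
        k = torch.exp(-0.5 * (d / bandwidth) ** 2).mean(dim=0)
        return k / k.sum()

    def js_divergence(p, q, eps=1e-12):
        m = 0.5 * (p + q)
        kl = lambda a, b: (a * ((a + eps) / (b + eps)).log()).sum()
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    for step in range(2000):
        u = torch.rand(1024, 1)                # uniform input samples
        out = model(u).squeeze(-1)             # candidate target samples
        loss = js_divergence(soft_histogram(out, grid), target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, model(torch.rand(n, 1)) yields approximately
    # normally distributed samples.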


Exact Inference of Hidden Structure from Sample Data in Noisy-OR Networks

arXiv.org Artificial Intelligence

In the literature on graphical models, there has been increased attention paid to the problems of learning hidden structure (see Heckerman [H96] for a survey) and causal mechanisms from sample data [H96, P88, S93, P95, F98]. In most settings we should expect the former to be difficult, and the latter potentially impossible without experimental intervention. In this work, we examine some restricted settings in which the hidden structure can be perfectly reconstructed solely on the basis of observed sample data.
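
For context, the noisy-OR conditional distribution underlying these networks is simple to state: each active parent independently fails to trigger the child with some inhibition probability, and the child fires unless the leak stays silent and every active parent fails. A small sketch with hypothetical parameters:

    def noisy_or(parent_states, inhibitions, leak=0.0):
        # P(child = 1 | parents): each active parent i independently fails
        # to trigger the child with probability inhibitions[i]; the child
        # stays off only if the leak is silent and all active parents fail.
        fail = 1.0 - leak
        for x, q in zip(parent_states, inhibitions):
            if x:
                fail *= q
        return 1.0 - fail

    # Two active parents with inhibition probabilities 0.3 and 0.5:
    print(noisy_or([1, 1, 0], [0.3, 0.5, 0.2]))  # 1 - 0.3 * 0.5 = 0.85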


Sequence Transduction with Recurrent Neural Networks

arXiv.org Machine Learning

Many machine learning tasks can be expressed as the transformation---or \emph{transduction}---of input sequences into output sequences: speech recognition, machine translation, protein secondary structure prediction and text-to-speech to name but a few. One of the key challenges in sequence transduction is learning to represent both the input and output sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating. Recurrent neural networks (RNNs) are a powerful sequence learning architecture that has proven capable of learning such representations. However, RNNs traditionally require a pre-defined alignment between the input and output sequences to perform transduction. This is a severe limitation, since \emph{finding} the alignment is the most difficult aspect of many sequence transduction problems. Indeed, even determining the length of the output sequence is often challenging. This paper introduces an end-to-end, probabilistic sequence transduction system, based entirely on RNNs, that is in principle able to transform any input sequence into any finite, discrete output sequence. Experimental results for phoneme recognition are provided on the TIMIT speech corpus.
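
A minimal PyTorch sketch of the model's central construction (the dimensions and feature sizes are assumptions, and the transducer loss, which marginalizes over alignment paths, is omitted): a transcription RNN encodes the input sequence, a prediction RNN encodes the output history, and their projections are combined at every input-output position, so no pre-defined alignment is required:

    import torch

    num_labels = 40                     # output alphabet plus a blank symbol
    encoder = torch.nn.LSTM(input_size=26, hidden_size=128, batch_first=True)
    predictor = torch.nn.LSTM(input_size=num_labels, hidden_size=128,
                              batch_first=True)
    enc_proj = torch.nn.Linear(128, num_labels)
    pred_proj = torch.nn.Linear(128, num_labels)

    x = torch.randn(1, 50, 26)          # T = 50 acoustic frames (hypothetical)
    y = torch.randn(1, 10, num_labels)  # U = 10 encoded previous output symbols

    f, _ = encoder(x)                   # transcription network: (1, T, 128)
    g, _ = predictor(y)                 # prediction network:    (1, U, 128)

    # One output distribution per (t, u) pair; training sums over all
    # alignment paths through this T x U lattice, so the alignment itself
    # never has to be specified in advance.
    logits = enc_proj(f).unsqueeze(2) + pred_proj(g).unsqueeze(1)  # (1, T, U, K)
    probs = torch.softmax(logits, dim=-1)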