Kirchoff Law Markov Fields for Analog Circuit Design

Neural Information Processing Systems

This paper provides three contributions toward an algorithm for assisting engineers in designing analog circuits. First, a method for representing highly nonlinear and non-continuous analog circuits using Kirchoff current law potential functions within the context of a Markov field is described. Second, a relatively efficient algorithm for optimizing the Markov field objective function is briefly described, and its convergence proof is sketched. Third, empirical results illustrating the strengths and limitations of the approach are provided in the context of a JFET transistor design problem. The proposed algorithm generated a set of circuit components for the JFET circuit model that accurately reproduced the desired characteristic curves.


Manifold Stochastic Dynamics for Bayesian Learning

Neural Information Processing Systems

We propose a new Markov Chain Monte Carlo algorithm which is a generalization of the stochastic dynamics method. The algorithm performs exploration of the state space using its intrinsic geometric structure, facilitating efficient sampling of complex distributions. Applied to Bayesian learning in neural networks, our algorithm was found to perform at least as well as the best state-of-the-art method while consuming considerably less time.
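The stochastic dynamics baseline that this algorithm generalizes can be illustrated with a minimal, one-step (Langevin) sketch that ignores the geometric structure entirely; the standard-normal target, step size, and iteration count below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=0.3, n_steps=20000, seed=0):
    """Unadjusted Langevin dynamics:
        x <- x + (step^2 / 2) * grad log p(x) + step * noise
    A flat-geometry baseline; geometric MCMC methods precondition
    both the drift and the noise with the local metric."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    samples = np.empty(n_steps)
    for t in range(n_steps):
        x = x + 0.5 * step**2 * grad_log_p(x) + step * rng.standard_normal()
        samples[t] = x
    return samples

# Illustrative target: standard normal, so grad log p(x) = -x.
samples = langevin_sample(lambda x: -x, x0=3.0)
```

Discarding the first quarter of the chain as burn-in, the remaining samples should have mean near 0 and variance near 1.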



The Relevance Vector Machine

Neural Information Processing Systems

The support vector machine (SVM) is a state-of-the-art technique for regression and classification, combining excellent generalisation properties with a sparse kernel representation. However, it does suffer from a number of disadvantages, notably the absence of probabilistic outputs, the requirement to estimate a tradeoff parameter and the need to utilise 'Mercer' kernel functions. In this paper we introduce the Relevance Vector Machine (RVM), a Bayesian treatment of a generalised linear model of identical functional form to the SVM. The RVM suffers from none of the above disadvantages, and examples demonstrate that for comparable generalisation performance, the RVM requires dramatically fewer kernel functions.


Model Selection for Support Vector Machines

Neural Information Processing Systems

New functionals for parameter (model) selection of Support Vector Machines are introduced based on the concepts of the span of support vectors and rescaling of the feature space. It is shown that using these functionals, one can both predict the best choice of parameters of the model and the relative quality of performance for any value of the parameters.


Learning Factored Representations for Partially Observable Markov Decision Processes

Neural Information Processing Systems

The problem of reinforcement learning in a non-Markov environment is explored using a dynamic Bayesian network, where conditional independence assumptions between random variables are compactly represented by network parameters. The parameters are learned online, and approximations are used to perform inference and to compute the optimal value function. The relative effects of inference and value function approximations on the quality of the final policy are investigated, by learning to solve a moderately difficult driving task. The two value function approximations, linear and quadratic, were found to perform similarly, but the quadratic model was more sensitive to initialization. Both performed below the level of human performance on the task.


State Abstraction in MAXQ Hierarchical Reinforcement Learning

Neural Information Processing Systems

For example, in the Options framework [1,2], the programmer defines a set of macro actions ("options") and provides a policy for each. Learning algorithms (such as semi-Markov Q learning) can then treat these temporally abstract actions as if they were primitives and learn a policy for selecting among them. Closely related is the HAM framework, in which the programmer constructs a hierarchy of finite-state controllers [3]. Each controller can include non-deterministic states (where the programmer was not sure what action to perform). The HAMQ learning algorithm can then be applied to learn a policy for making choices in the non-deterministic states.
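The semi-Markov Q-learning update used to select among temporally abstract options can be sketched as follows; the tabular setup, state indexing, and parameter values are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def smdp_q_update(Q, s, o, reward, k, s_next, alpha=0.1, gamma=0.9):
    """Semi-Markov Q-learning update for an option o that ran for k
    primitive steps from state s, accumulated a (discounted) reward,
    and terminated in s_next:
        Q(s,o) <- Q(s,o) + alpha * (reward + gamma^k * max_o' Q(s',o') - Q(s,o))
    The gamma^k factor is what distinguishes this from ordinary
    one-step Q-learning: the option is treated as a single decision
    even though it lasted k time steps."""
    target = reward + gamma**k * np.max(Q[s_next])
    Q[s, o] += alpha * (target - Q[s, o])
    return Q
```

For instance, with two states and two options, an option that runs for k=3 steps is discounted by gamma cubed when bootstrapping from its terminal state.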


Scale Mixtures of Gaussians and the Statistics of Natural Images

Neural Information Processing Systems

The statistics of photographic images, when represented using multiscale (wavelet) bases, exhibit two striking types of non-Gaussian behavior. First, the marginal densities of the coefficients have extended heavy tails. Second, the joint densities exhibit variance dependencies not captured by second-order models. We examine properties of the class of Gaussian scale mixtures, and show that these densities can accurately characterize both the marginal and joint distributions of natural image wavelet coefficients. This class of model suggests a Markov structure, in which wavelet coefficients are linked by hidden scaling variables corresponding to local image structure.
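The heavy-tailed marginals that a Gaussian scale mixture produces can be demonstrated with a small sketch; the lognormal choice for the hidden scale variable is an assumption made here for illustration, not the model fitted in the paper.

```python
import numpy as np

def sample_gsm(n, seed=0):
    """Draw from a Gaussian scale mixture x = sqrt(z) * u, where
    u ~ N(0,1) and z is a positive hidden scale variable
    (a lognormal here, as an illustrative choice)."""
    rng = np.random.default_rng(seed)
    z = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # hidden multiplier
    u = rng.standard_normal(n)
    return np.sqrt(z) * u

def excess_kurtosis(x):
    """Fourth standardized moment minus 3; zero for a Gaussian,
    positive for heavy-tailed densities."""
    c = x - x.mean()
    return (c**4).mean() / (c**2).mean()**2 - 3.0

x = sample_gsm(200_000)
```

Even though each conditional x | z is Gaussian, the mixture over scales is symmetric, zero-mean, and markedly leptokurtic, which is exactly the marginal behavior the abstract describes for wavelet coefficients.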


A Generative Model for Attractor Dynamics

Neural Information Processing Systems

However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. Attractor networks map an input space, usually continuous, to a sparse output space composed of a discrete set of alternatives.
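The localist idea can be sketched as a responsibility-weighted pull of the state toward an explicitly stored set of attractors; the fixed Gaussian width and the example attractor coordinates below are illustrative assumptions (the paper's generative formulation is richer).

```python
import numpy as np

def localist_attractor_step(y, W, sigma):
    """One update of a localist attractor net: each attractor (a row
    of W) claims the current state y with a responsibility q_i
    proportional to a Gaussian centered on that attractor, and the
    state moves to the responsibility-weighted mean of the attractors.
    Because each attractor is stored explicitly, adding or removing
    one cannot corrupt the others."""
    d2 = ((W - y)**2).sum(axis=1)
    q = np.exp(-d2 / (2.0 * sigma**2))
    q /= q.sum()
    return q @ W

W = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])  # three stored attractors
y = np.array([0.8, 0.1])  # initial state near the first attractor
for _ in range(50):
    y = localist_attractor_step(y, W, sigma=0.3)
```

Starting near the first attractor, the iteration settles onto it: a continuous input is mapped to one of a discrete set of alternatives, as the abstract describes.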


Bayesian Reconstruction of 3D Human Motion from Single-Camera Video

Neural Information Processing Systems

The three-dimensional motion of humans is underdetermined when the observation is limited to a single camera, due to the inherent 3D ambiguity of 2D video. We present a system that reconstructs the 3D motion of human subjects from single-camera video, relying on prior knowledge about human motion, learned from training data, to resolve those ambiguities. After initialization in 2D, the tracking and 3D reconstruction are automatic; we show results for several video sequences. The results show the power of treating 3D body tracking as an inference problem.