

Non-Gaussian Component Analysis: a Semi-parametric Framework for Linear Dimension Reduction

Neural Information Processing Systems

We propose a new linear method for dimension reduction to identify non-Gaussian components in high-dimensional data. Our method, NGCA (non-Gaussian component analysis), uses a very general semi-parametric framework. In contrast to existing projection methods, we define what is uninteresting (Gaussian): by projecting out uninterestingness, we can estimate the relevant non-Gaussian subspace. We show that the estimation error of finding the non-Gaussian components tends to zero at a parametric rate. Once NGCA components are identified and extracted, various tasks can be applied in the data analysis process, like data visualization, clustering, denoising or classification. A numerical study demonstrates the usefulness of our method.
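The contrast between "uninteresting" (Gaussian) and non-Gaussian directions can be illustrated with a minimal sketch (this is not the NGCA estimator itself, just the underlying intuition): score candidate directions by the absolute excess kurtosis of the projected data, which vanishes for Gaussian projections.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Synthetic data: axis 0 is uniform (non-Gaussian), axes 1-2 are Gaussian.
X = np.column_stack([
    rng.uniform(-1, 1, n),   # non-Gaussian component (excess kurtosis ~ -1.2)
    rng.normal(0, 1, n),
    rng.normal(0, 1, n),
])

def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return np.mean(z ** 4) - 3.0

# Score each coordinate direction by |excess kurtosis|; Gaussian directions
# score near zero, so the largest score flags the non-Gaussian direction.
scores = [abs(excess_kurtosis(X[:, j])) for j in range(X.shape[1])]
print(np.argmax(scores))  # → 0, the uniform axis
```

In the full method the search runs over arbitrary projections of high-dimensional data, not just coordinate axes, and uses a much richer family of non-Gaussianity indices.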


Group and Topic Discovery from Relations and Their Attributes

Neural Information Processing Systems

We present a probabilistic generative model of entity relationships and their attributes that simultaneously discovers groups among the entities and topics among the corresponding textual attributes. Block-models of relationship data have been studied in social network analysis for some time. Here we simultaneously cluster in several modalities at once, incorporating the attributes (here, words) associated with certain relationships. Significantly, joint inference allows the discovery of topics to be guided by the emerging groups, and vice-versa. We present experimental results on two large data sets: sixteen years of bills put before the U.S. Senate, comprising their corresponding text and voting records, and thirteen years of similar data from the United Nations. We show that in comparison with traditional, separate latent-variable models for words, or Block-structures for votes, the Group-Topic model's joint inference discovers more cohesive groups and improved topics.


Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity

Neural Information Processing Systems

We present a simple yet effective nonlinear feature subset selection method for regression and use it in analyzing cortical neural activity. Our algorithm involves a feature-weighted version of the k-nearest-neighbor algorithm. It is able to capture complex dependency of the target function on its input and makes use of the leave-one-out error as a natural regularization. We explain the characteristics of our algorithm on synthetic problems and use it in the context of predicting hand velocity from spikes recorded in the motor cortex of a behaving monkey. By applying feature selection we are able to improve prediction quality and suggest a novel way of exploring neural data.
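A minimal sketch of the core idea, with a toy dataset and hand-picked weights rather than the paper's learned, leave-one-out-optimized weights: feature weights rescale the k-NN distance metric, and the leave-one-out error measures how useful a given weighting is.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 5

# The target depends only on feature 0; feature 1 is an irrelevant distractor.
X = rng.normal(size=(n, 2))
y = np.sin(2 * X[:, 0])

def loo_knn_error(X, y, w, k):
    """Leave-one-out squared error of k-NN regression with feature weights w."""
    Xw = X * w                       # scale features before computing distances
    d = np.linalg.norm(Xw[:, None, :] - Xw[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)      # exclude each point from its own neighbors
    idx = np.argsort(d, axis=1)[:, :k]
    pred = y[idx].mean(axis=1)
    return np.mean((y - pred) ** 2)

err_uniform = loo_knn_error(X, y, np.array([1.0, 1.0]), k)
err_weighted = loo_knn_error(X, y, np.array([1.0, 0.0]), k)  # drop distractor
print(err_weighted < err_uniform)  # downweighting the irrelevant feature helps
```

The actual algorithm optimizes the weight vector against this leave-one-out objective instead of comparing fixed weightings.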



Data-Driven Online to Batch Conversions

Neural Information Processing Systems

Online learning algorithms are typically fast, memory efficient, and simple to implement. However, many common learning problems fit more naturally in the batch learning setting. The power of online learning algorithms can be exploited in batch settings by using online-to-batch conversion techniques, which build a new batch algorithm from an existing online algorithm. We first give a unified overview of three existing online-to-batch conversion techniques which do not use training data in the conversion process. We then build upon these data-independent conversions to derive and analyze data-driven conversions.
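As a concrete instance of a data-independent conversion, consider hypothesis averaging, one of the standard techniques (shown here as a generic illustration, not necessarily one of the exact variants the paper analyzes): run an online learner over the data once and output the average of the hypotheses it produces.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linearly separable toy data.
n, d = 500, 2
X = rng.normal(size=(n, d))
y = np.sign(X @ np.array([2.0, -1.0]))

# Online learner: the perceptron, producing one hypothesis w_t per round.
w = np.zeros(d)
w_sum = np.zeros(d)
for x_t, y_t in zip(X, y):
    if y_t * (w @ x_t) <= 0:        # mistake: update the online hypothesis
        w += y_t * x_t
    w_sum += w                      # accumulate for the averaged hypothesis

# Online-to-batch conversion by averaging: output the mean of all w_t.
w_batch = w_sum / n
acc = np.mean(np.sign(X @ w_batch) == y)
print(round(acc, 2))                # training accuracy of the averaged hypothesis
```

A data-driven conversion would instead use the training data itself to choose among (or reweight) the online hypotheses rather than combining them blindly.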


Separation of Music Signals by Harmonic Structure Modeling

Neural Information Processing Systems

Separation of music signals is an interesting but difficult problem. It is helpful for many other areas of music research, such as audio content analysis. In this paper, a new music signal separation method is proposed, which is based on harmonic structure modeling. The main idea of harmonic structure modeling is that the harmonic structure of a music signal is stable, so a music signal can be represented by a harmonic structure model. Accordingly, a corresponding separation algorithm is proposed. The main idea is to learn a harmonic structure model for each music signal in the mixture, and then separate the signals by using these models to distinguish the harmonic structures of different signals. Experimental results show that the algorithm can separate signals and obtain not only a very high signal-to-noise ratio (SNR) but also rather good subjective audio quality.
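A toy sketch of what a "harmonic structure" looks like in practice (illustrative only; not the paper's learning or separation algorithm): for a single synthetic note, the relative amplitudes at integer multiples of the fundamental can be read off the magnitude spectrum.

```python
import numpy as np

fs = 8000                      # sample rate (Hz)
f0 = 200.0                     # fundamental frequency (Hz)
t = np.arange(fs) / fs         # one second of audio

# Synthetic note: three harmonics with a fixed amplitude pattern.
# The "harmonic structure" is this vector of relative harmonic amplitudes.
true_structure = np.array([1.0, 0.5, 0.25])
x = sum(a * np.sin(2 * np.pi * (i + 1) * f0 * t)
        for i, a in enumerate(true_structure))

# Read the amplitude at each harmonic off the magnitude spectrum.
spec = np.abs(np.fft.rfft(x)) / (len(x) / 2)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
est = np.array([spec[np.argmin(np.abs(freqs - (i + 1) * f0))]
                for i in range(3)])
print(np.round(est / est[0], 2))  # relative structure, ≈ [1.0, 0.5, 0.25]
```

Because this amplitude pattern is stable over time for a given instrument, a model learned for each source can be used to assign harmonics in a mixture to the right signal.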


Modeling Memory Transfer and Saving in Cerebellar Motor Learning

Neural Information Processing Systems

There is a longstanding controversy over the site of cerebellar motor learning. Different theories and experimental results suggest that either the cerebellar flocculus or the brainstem learns the task and stores the memory. With a dynamical-system approach, we clarify the mechanism by which memory generated in the flocculus is transferred to the brainstem, as well as the mechanism of the so-called savings phenomenon. The brainstem learning must comply with a sort of Hebbian rule depending on Purkinje-cell activities. In contrast to earlier numerical models, our model is simple, but it accommodates explanations and predictions of experimental situations as qualitative features of trajectories in the phase space of synaptic weights, without fine parameter tuning.
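The transfer mechanism can be caricatured by a hypothetical two-timescale toy model (our own illustrative construction, not the authors' equations): a fast "flocculus" weight u tracks the motor error while slowly leaking its content into a slow "brainstem" weight v via a Hebbian-like rule, so that after prolonged training the memory survives a flocculus reset.

```python
# Hypothetical two-timescale model (illustrative only): u is the fast
# flocculus weight, v the slow brainstem weight; the motor output is u + v.
target = 1.0
alpha, beta = 0.5, 0.02      # fast learning rate, slow transfer rate
dt, steps = 0.1, 20000       # Euler integration of the weight dynamics

u = v = 0.0
for _ in range(steps):
    e = target - (u + v)                # motor error drives flocculus learning
    u += dt * (alpha * e - beta * u)    # learn fast, leak into the brainstem
    v += dt * beta * u                  # Hebbian-like transfer from u

# After training, the memory has migrated to the brainstem, so erasing the
# flocculus ("reset") barely disturbs the output: a savings-like effect.
u = 0.0                                  # lesion/reset the flocculus
print(abs(target - (u + v)) < 0.05)      # → True: output largely preserved
```

The point of the caricature is only the qualitative phase-space picture: trajectories flow from the fast axis toward the slow one, which is the kind of feature the paper's model analyzes.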


An Analog Visual Pre-Processing Processor Employing Cyclic Line Access in Only-Nearest-Neighbor-Interconnects Architecture

Neural Information Processing Systems

An analog focal-plane processor having a 128 × 128 photodiode array has been developed for directional edge filtering. It can perform 4 × 4-pixel kernel convolution for all pixels with only 256 steps of simple analog processing. A newly developed cyclic line access and row-parallel processing scheme, in conjunction with the "only-nearest-neighbor interconnects" architecture, has enabled a very simple implementation. A proof-of-concept chip was fabricated in a 0.35-µm 2-poly 3-metal CMOS technology, and edge filtering was demonstrated at a rate of 200 frames/sec.
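In software terms, the chip's per-frame computation corresponds to a dense 4 × 4 kernel convolution over the full pixel array. The sketch below uses a hypothetical horizontal-gradient kernel (the actual on-chip coefficients are not reproduced here) to show what "directional edge filtering" computes.

```python
import numpy as np

# A 128x128 test frame with a vertical step edge at column 64.
img = np.zeros((128, 128))
img[:, 64:] = 1.0

# Hypothetical 4x4 horizontal-gradient kernel: responds to vertical edges.
kernel = np.array([[-1, -1, 1, 1]] * 4, dtype=float)

# Plain dense convolution (valid mode) over the whole frame; the chip
# achieves the same result with 256 analog row-parallel steps.
out = np.zeros((125, 125))
for i in range(125):
    for j in range(125):
        out[i, j] = np.sum(img[i:i+4, j:j+4] * kernel)

print(out.max())  # → 8.0, the peak response at the step edge
```

The response peaks where the 4 × 4 window straddles the edge symmetrically (window starting at column 62) and is zero in flat regions.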


Noise and the Two-Thirds Power Law

Neural Information Processing Systems

The two-thirds power law, an empirical law stating an inverse nonlinear relationship between the tangential hand speed and the curvature of its trajectory during curved motion, is widely acknowledged to be an invariant of upper-limb movement. It has also been shown to exist in eye motion and locomotion, and was even demonstrated in motion perception and prediction. This ubiquity has fostered various attempts to uncover the origins of this empirical relationship. In these it was generally attributed either to smoothness in hand- or joint-space or to the result of mechanisms that damp noise inherent in the motor system to produce the smooth trajectories evident in healthy human motion. We show here that white Gaussian noise also obeys this power law. Analysis of signal and noise combinations shows that trajectories that were synthetically created not to comply with the power law are transformed to power-law-compliant ones after combination with low levels of noise. Furthermore, there exist colored noise types that drive non-power-law trajectories to power-law compliance and are not affected by smoothing. These results suggest caution when running experiments aimed at verifying the power law or assuming its underlying existence without proper analysis of the noise. Our results could also suggest that the power law might be derived not from smoothness or smoothness-inducing mechanisms operating on the noise inherent in our motor system, but rather from the correlated noise which is inherent in this motor system.
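Concretely, the law states that tangential speed scales as curvature to the power −1/3 (equivalently, angular velocity scales as curvature to the 2/3). An ellipse traced at a constant parameter rate satisfies it exactly, which makes a convenient numerical check of the exponent-fitting procedure such experiments rely on:

```python
import numpy as np

# An ellipse x = a cos t, y = b sin t traced at constant parameter rate
# satisfies the two-thirds power law exactly: v = (ab)^(1/3) * kappa^(-1/3).
t = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
a, b = 2.0, 1.0

# Analytic speed and curvature of the parametric ellipse.
v = np.sqrt(a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2)
kappa = a * b / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5

# Fit the power-law exponent beta in v = c * kappa^beta by log-log regression.
beta = np.polyfit(np.log(kappa), np.log(v), 1)[0]
print(round(beta, 3))  # → -0.333
```

The paper's observation is that the same log-log regression applied to trajectories built from noise can return an exponent near −1/3 even when no smoothness mechanism produced them.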


Query by Committee Made Real

Neural Information Processing Systems

Training a learning algorithm is a costly task. A major goal of active learning is to reduce this cost. In this paper we introduce a new algorithm, KQBC, which is capable of actively learning large-scale problems by using selective sampling. The algorithm overcomes the costly sampling step of the well-known Query By Committee (QBC) algorithm by projecting onto a low-dimensional space. KQBC also enables the use of kernels, providing a simple way of extending QBC to the nonlinear scenario. Sampling the low-dimensional space is done using the hit-and-run random walk. We demonstrate the success of this novel algorithm by applying it to both artificial and real-world problems.
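The hit-and-run primitive itself is easy to sketch (this illustrates only the sampling step, not KQBC): from a point inside a convex body, pick a random direction, compute the chord of the body along that direction, and jump to a uniform point on the chord. Here the body is the unit ball cut by halfspaces, a stand-in for the version space of a linear classifier (the constraint matrix and seed are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(3)

# Convex body: unit ball intersected with halfspaces A w >= 0.
w0 = np.array([0.5, 0.0, 0.0])       # a known strictly feasible start point
A = rng.normal(size=(5, 3))
A *= np.sign(A @ w0)[:, None]        # orient each halfspace so w0 is feasible

def inside(w, tol=1e-9):
    return np.dot(w, w) <= 1.0 + tol and np.all(A @ w >= -tol)

def hit_and_run(w, steps=200):
    for _ in range(steps):
        d = rng.normal(size=w.shape)
        d /= np.linalg.norm(d)                    # uniform random direction
        # Chord within the unit ball: solve |w + s d| = 1 for s.
        c = np.dot(w, d)
        disc = np.sqrt(c**2 - (np.dot(w, w) - 1.0))
        lo, hi = -c - disc, -c + disc
        # Clip the chord to each halfspace a . (w + s d) >= 0.
        for a in A:
            ad, aw = np.dot(a, d), np.dot(a, w)
            if ad > 0:
                lo = max(lo, -aw / ad)
            elif ad < 0:
                hi = min(hi, -aw / ad)
        w = w + rng.uniform(lo, hi) * d           # uniform point on the chord
    return w

w = hit_and_run(w0)
print(inside(w))  # → True: the walk never leaves the body
```

Each step touches the constraints only through one-dimensional chord intersections, which is what makes the walk cheap compared with sampling the version space directly.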