Exploratory Feature Extraction in Speech Signals

Intrator, Nathan

Neural Information Processing Systems 

A novel unsupervised neural network for dimensionality reduction which seeks directions emphasizing multimodality is presented, and its connection to exploratory projection pursuit methods is discussed. This leads to a new statistical insight into the synaptic modification equations governing learning in Bienenstock, Cooper, and Munro (BCM) neurons (1982). The importance of a dimensionality reduction principle based solely on distinguishing features is demonstrated using a linguistically motivated phoneme recognition experiment, and compared with feature extraction using a back-propagation network.

1 Introduction

Due to the curse of dimensionality (Bellman, 1961), it is desirable to extract features from a high dimensional data space before attempting classification. How to perform this feature extraction/dimensionality reduction is less clear. A first simplification is to consider only features defined by linear (or semi-linear) projections of the high dimensional data. This class of features is used in projection pursuit methods (see the review in Huber, 1985). Even after this simplification, it remains difficult to characterize what interesting projections are, although it is easy to point at projections that are uninteresting. A statement made precise by Diaconis and Freedman (1984) says that for most high-dimensional clouds, most low-dimensional projections are approximately normal. This finding suggests that the important information in the data is conveyed in those directions whose one-dimensional projected distribution is far from Gaussian, especially at the center of the distribution.
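The Diaconis-Freedman observation can be illustrated numerically. The following sketch (not from the paper; all names and parameter choices are illustrative assumptions) samples a high-dimensional cloud whose individual coordinates are decidedly non-Gaussian (uniform), then compares the excess kurtosis of a single raw coordinate with that of a random one-dimensional projection:

```python
import numpy as np

# Illustrative sketch of the Diaconis-Freedman (1984) observation:
# for a high-dimensional non-Gaussian cloud, a typical one-dimensional
# projection looks approximately Gaussian. Dimensions, sample size,
# and the kurtosis-based measure of non-Gaussianity are choices made
# here for illustration, not quantities taken from the paper.

rng = np.random.default_rng(0)

n, d = 20_000, 500
X = rng.uniform(-1.0, 1.0, size=(n, d))  # each coordinate is uniform, far from Gaussian

def excess_kurtosis(x):
    """Sample excess kurtosis: ~0 for a Gaussian, -1.2 for a uniform."""
    z = x - x.mean()
    return (z**4).mean() / (z**2).mean() ** 2 - 3.0

w = rng.normal(size=d)
w /= np.linalg.norm(w)  # a random unit direction

k_coord = excess_kurtosis(X[:, 0])  # one raw coordinate: strongly non-Gaussian
k_proj = excess_kurtosis(X @ w)     # random projection: approximately Gaussian

print(f"coordinate excess kurtosis: {k_coord:+.3f}")  # near -1.2
print(f"projection excess kurtosis: {k_proj:+.3f}")   # near 0
```

By the central limit theorem, the random projection mixes many independent coordinates and its distribution is nearly normal, which is why projection pursuit (and the BCM-motivated network discussed here) must actively search for the rare directions whose projections deviate from Gaussianity.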
