Representing Face Images for Emotion Classification

Curtis Padgett and Garrison W. Cottrell

Neural Information Processing Systems 

Curtis Padgett
Department of Computer Science
University of California, San Diego
La Jolla, CA 92034

Garrison W. Cottrell
Department of Computer Science
University of California, San Diego
La Jolla, CA 92034

Abstract

We compare the generalization performance of three distinct representation schemes for facial emotions using a single classification strategy (neural network). The face images presented to the classifiers are represented as: full-face projections of the dataset onto their eigenvectors (eigenfaces); a similar projection constrained to the eye and mouth areas (eigenfeatures); and finally a projection of the eye and mouth areas onto the eigenvectors obtained from 32x32 random image patches drawn from the dataset. The latter system achieves 86% generalization on novel face images (individuals the networks were not trained on) drawn from a database in which human subjects consistently identify a single emotion for each face.

1 Introduction

Some of the most successful research in machine perception of complex natural image objects (like faces) has relied heavily on reduction strategies that encode an object as a set of values spanning the principal component subspace of the object's images [Cottrell and Metcalfe, 1991, Pentland et al., 1994]. This approach has gained wide acceptance for its success in classification, for the efficiency with which the eigenvectors can be calculated, and because the technique permits a biologically plausible implementation. The procedure for generating these face representations requires normalizing a large set of face views ("mugshots") and, from these, identifying a statistically relevant subspace.
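As a rough illustration of this principal-subspace encoding, the NumPy sketch below computes the top eigenvectors of a set of flattened, normalized image patches and projects one patch into the resulting low-dimensional space. The patch size matches the 32x32 patches mentioned above, but the component count, random stand-in data, and function names are illustrative assumptions rather than details taken from the paper.

import numpy as np

def principal_components(patches, k):
    """Return the mean and the top-k eigenvectors ("eigen-patches") of a
    set of flattened, normalized image patches (one patch per row).
    Hypothetical helper, not from the paper."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # SVD of the centered data yields the principal directions directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]                      # k eigenvectors, each patch-length

def project(patch, mean, components):
    """Encode a patch as its coordinates in the principal subspace."""
    return components @ (patch - mean)

# Illustrative usage with random data standing in for 32x32 face patches.
patches = np.random.rand(500, 32 * 32)       # 500 flattened 32x32 patches
mean, comps = principal_components(patches, k=15)
code = project(patches[0], mean, comps)       # 15-dimensional representation
print(code.shape)                             # (15,)

These low-dimensional codes, rather than raw pixel values, would then serve as input to a classifier such as the neural network used in this work.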
