Dynamic Features for Visual Speechreading: A Systematic Comparison

Gray, Michael S., Movellan, Javier R., Sejnowski, Terrence J.

Neural Information Processing Systems 

Humans use visual as well as auditory speech signals to recognize spoken words. A variety of systems have been investigated for performing this task. The main purpose of this research was to systematically compare the performance of a range of dynamic visual features on a speechreading task. We found that normalizing images to eliminate variation due to translation, scale, and planar rotation yielded substantial improvements in generalization performance regardless of the visual representation used. In addition, the dynamic information in the difference between successive frames yielded better performance than optical flow-based approaches, and compression by local low-pass filtering worked surprisingly better than global principal components analysis (PCA). These results are examined and possible explanations are explored.
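To make the two best-performing ingredients concrete, the sketch below illustrates, under assumptions not taken from the paper, how frame differencing and local low-pass compression might be computed on a grayscale image sequence. The block size, the use of a simple box filter with downsampling, and the function names `delta_frames` and `lowpass_compress` are illustrative choices, not the authors' implementation.

```python
import numpy as np

def delta_frames(frames):
    """Dynamic feature: difference between successive grayscale frames.
    frames: array of shape (T, H, W)."""
    return frames[1:] - frames[:-1]

def lowpass_compress(frame, block=4):
    """Local low-pass compression (illustrative): average non-overlapping
    block x block neighborhoods, i.e. a box filter followed by downsampling."""
    H, W = frame.shape
    H2, W2 = H - H % block, W - W % block          # crop to a multiple of block
    f = frame[:H2, :W2].reshape(H2 // block, block, W2 // block, block)
    return f.mean(axis=(1, 3))

# Example on a short synthetic sequence: 8 frames of 64x64 pixels.
rng = np.random.default_rng(0)
frames = rng.random((8, 64, 64)).astype(np.float32)
deltas = delta_frames(frames)                                # (7, 64, 64)
features = np.stack([lowpass_compress(d) for d in deltas])   # (7, 16, 16)
print(deltas.shape, features.shape)
```

In this sketch, each delta frame is reduced from 64x64 to 16x16 values; a PCA baseline would instead project each frame onto a small set of global eigenvectors learned from the training data.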
