Humans, Cover Your Mouths: Lip Reading Bots in the Wild
Researchers at the University of Oxford in the U.K. and Google have developed an algorithm that outperforms professional human lip readers, a breakthrough they say could lead to surveillance video systems that reveal the content of speech in addition to an individual's actions. The researchers developed the algorithm by training a neural network, called WLAS ("Watch, Listen, Attend and Spell"), in collaboration with Google DeepMind on thousands of hours of subtitled BBC TV videos showing a wide range of people speaking in a variety of poses, activities, and lighting conditions. By translating mouth movements into individual characters, WLAS was able to spell out words. The Oxford researchers found that a professional lip reader could correctly decipher fewer than 25% of the spoken words, while the neural network deciphered 50% of them.
Sep-8-2017, 18:25:09 GMT
- AI-Alerts:
- 2017 > 2017-09 > AAAI AI-Alert for Sep 12, 2017