Text Mining Support in Semantic Annotation and Indexing of Multimedia Data

AAAI Conferences

This short paper describes a demonstrator that complements the paper "Towards Cross-Media Feature Extraction" in these proceedings. The demo exemplifies the use of textual resources, from which semantic information can be extracted, to support the semantic annotation and indexing of associated video material in the soccer domain. Entities and events extracted from the textual data are marked up with semantic classes derived from an ontology modeling the soccer domain. We further show how audio-video features extracted by video analysis can be taken into account for additional annotation of specific soccer event types, and how those different types of annotation can be combined.
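As a rough illustration of that last point, the sketch below pairs text-derived event annotations with video-derived cues that overlap in time and carry the same ontology class. The Annotation fields, the ontology labels such as "soc:Goal", and the overlap-based merge rule are assumptions made for the example, not the method described in the paper.

```python
# Hypothetical sketch: pairing text-derived soccer event annotations with
# video-derived cues that overlap in time. Field names, ontology labels and the
# merge rule are illustrative assumptions, not the paper's actual method.
from dataclasses import dataclass
from typing import List

@dataclass
class Annotation:
    start: float   # seconds into the match video
    end: float
    label: str     # semantic class from a soccer ontology, e.g. "soc:Goal"
    source: str    # "text" or "video"

def overlaps(a: Annotation, b: Annotation) -> bool:
    """True if the two annotated time spans intersect."""
    return a.start < b.end and b.start < a.end

def combine(text_anns: List[Annotation], video_anns: List[Annotation]):
    """Attach overlapping, same-class video cues to each text-derived event."""
    return [
        {"event": t,
         "video_support": [v for v in video_anns if overlaps(t, v) and v.label == t.label]}
        for t in text_anns
    ]

text_anns = [Annotation(1830.0, 1845.0, "soc:Goal", "text")]
video_anns = [Annotation(1832.5, 1840.0, "soc:Goal", "video"),
              Annotation(300.0, 310.0, "soc:FreeKick", "video")]

for entry in combine(text_anns, video_anns):
    print(entry["event"].label, "supported by", len(entry["video_support"]), "video cue(s)")
```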


A talking head architecture for entertainment and experimentation

AAAI Conferences

Byrne is a talking head system, developed with two goals in mind: to allow artists to create entertaining characters with strong personalities, expressed through speech and facial animation; and to allow cognitive scientists to implement and test theories of emotion and expression. Here we emphasize the latter aim. We describe Byrne's design, and discuss some ways in which it could be used in affect-related experiments. Byrne's first domain is football commentary; that is, Byrne provides an emotionally expressive running commentary on a RoboCup simulation league football game. We will give examples from this domain throughout this paper.
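Purely as an illustration of the kind of pipeline such a commentator needs, the sketch below maps game events to a crude valence/arousal state and then to a facial-animation label. The event names, rule table, and thresholds are invented for the example and do not reproduce Byrne's actual design.

```python
# Illustrative sketch only: a minimal event -> emotion -> expression pipeline for a
# commentary character. Event names, emotion values, and the expression table are
# assumptions; they do not reproduce Byrne's actual rules.
EMOTION_RULES = {
    # (event, commentator_supports_scoring_team) -> (valence, arousal)
    ("goal", True): (0.9, 0.9),
    ("goal", False): (-0.8, 0.8),
    ("near_miss", True): (-0.2, 0.7),
    ("near_miss", False): (0.3, 0.6),
}

def expression(valence: float, arousal: float) -> str:
    """Map a crude valence/arousal pair onto a facial-animation label."""
    if valence > 0.5 and arousal > 0.5:
        return "broad_smile"
    if valence < -0.5:
        return "frown"
    return "raised_eyebrows" if arousal > 0.5 else "neutral"

def commentate(event: str, supports_team: bool) -> str:
    valence, arousal = EMOTION_RULES.get((event, supports_team), (0.0, 0.3))
    return f"{event}: show '{expression(valence, arousal)}' while speaking"

print(commentate("goal", True))
print(commentate("near_miss", False))
```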


How well do facial recognition algorithms cope with a million strangers?

#artificialintelligence

The MegaFace dataset contains 1 million images representing more than 690,000 unique people. It is the first benchmark that tests facial recognition algorithms at the million-image scale. In the last few years, several groups have announced that their facial recognition systems have achieved near-perfect accuracy rates, performing better than humans at picking the same face out of a crowd. But those tests were performed on a dataset with only 13,000 images -- fewer people than attend an average professional U.S. soccer game. What happens to their performance as those crowds grow to the size of a major U.S. city? University of Washington researchers answered that question with the MegaFace Challenge, the world's first competition aimed at evaluating and improving the performance of face recognition algorithms at the million-person scale.
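A minimal sketch of that kind of test follows, assuming a MegaFace-style protocol in which each probe must be matched against its true gallery image plus a growing pool of distractors; the random embeddings and gallery sizes below are synthetic placeholders, not the actual benchmark data or metric.

```python
# Rough sketch of a MegaFace-style test: rank-1 identification accuracy measured
# against a growing pool of distractors. The random embeddings below are synthetic
# placeholders, not the actual benchmark data, features, or protocol.
import numpy as np

rng = np.random.default_rng(0)
dim, n_probes, noise = 64, 200, 0.6

# Synthetic identities: a probe and its matching gallery image share a latent vector plus noise.
identities = rng.normal(size=(n_probes, dim))
probes = identities + noise * rng.normal(size=(n_probes, dim))
matches = identities + noise * rng.normal(size=(n_probes, dim))

def rank1_accuracy(probes, matches, distractors):
    """Fraction of probes that lie closer to their true match than to any distractor."""
    match_dist = np.linalg.norm(probes - matches, axis=1)
    hits = 0
    for p, md in zip(probes, match_dist):
        nearest_distractor = np.linalg.norm(distractors - p, axis=1).min()
        hits += int(md < nearest_distractor)
    return hits / len(probes)

for n_distractors in (1_000, 10_000, 100_000):
    distractors = rng.normal(size=(n_distractors, dim))
    print(f"{n_distractors:>7} distractors -> rank-1 accuracy "
          f"{rank1_accuracy(probes, matches, distractors):.3f}")
```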


Topic Segmentation Algorithms for Text Summarization and Passage Retrieval: An Exhaustive Evaluation

AAAI Conferences

To address the reliability problems of systems based on lexical repetition and the adaptability problems of language-dependent systems, we present a context-based topic segmentation system that relies on a new informative similarity measure based on word co-occurrence. In particular, our evaluation against the state of the art in the domain, i.e. the C99 and TextTiling algorithms, shows improved results both with and without the identification of multiword units.
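For context, the sketch below shows the general TextTiling-style scheme of scoring similarity between adjacent sentence blocks and placing topic boundaries at the deepest valleys; the plain cosine over word counts is a stand-in chosen for brevity, not the paper's co-occurrence-based informative measure.

```python
# Minimal TextTiling-style sketch: score similarity between adjacent sentence blocks
# and place topic boundaries at the deepest valleys. The cosine-over-word-counts
# similarity is a simple stand-in, not the informative co-occurrence measure
# proposed in the paper.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def segment(sentences, block=2, n_boundaries=1):
    """Return sentence indices after which a topic boundary is placed."""
    bags = [Counter(s.lower().split()) for s in sentences]
    gaps = []
    for i in range(block, len(bags) - block + 1):
        left = sum(bags[i - block:i], Counter())
        right = sum(bags[i:i + block], Counter())
        gaps.append((cosine(left, right), i - 1))   # low similarity => likely boundary
    gaps.sort()                                      # deepest valleys first
    return sorted(i for _, i in gaps[:n_boundaries])

sentences = [
    "the striker scored a late goal",
    "fans celebrated the goal in the stadium",
    "the central bank raised interest rates",
    "analysts expect rates to rise again",
]
print(segment(sentences))   # expect a boundary after sentence index 1
```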


Facial Recognition Used by Wales Police Has 90 Percent False Positive Rate

#artificialintelligence

Thousands of attendees of the 2017 Champions League final in Cardiff, Wales, were mistakenly identified as potential criminals by facial recognition technology used by local law enforcement. According to the Guardian, the South Wales police scanned the crowd of more than 170,000 people who traveled to the nation's capital for the soccer match between Real Madrid and Juventus. The cameras identified 2,470 people as criminals. Having that many potential lawbreakers in attendance might make sense if the event were, say, a convict convention, but it seems pretty high for a soccer match. As it turned out, the cameras were a little overly aggressive in trying to spot some bad guys.
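Taken at face value, a 90 percent false positive rate applied to those 2,470 flags would mean roughly 0.9 × 2,470 ≈ 2,220 wrong matches and only a couple of hundred genuine identifications; the excerpt does not give the exact split, so this is only a back-of-the-envelope reading of the headline figures.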