Facial recognition helps mom and dad see kids' camp photos, raises privacy concerns for some

USATODAY - Tech Top Stories

Photos from summer camps are posted to camps' websites so parents can view them. Venture capital-backed Waldo Photos has been selling a service that identifies specific children in the flood of photos many sleep-away camps provide to parents daily. Camps working with the Austin, Texas-based company give parents a private code to sign up, and parents supply headshots of their children. When the camp uploads photos taken during activities to its website, Waldo's facial recognition software scans them for matches against the parent-provided headshots. Once it finds a match, the Waldo system (as in "Where's Waldo?") automatically texts the photo to the child's parents.
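The matching step described here is a standard face-embedding comparison. As a rough illustration only, and not Waldo's actual implementation, the sketch below shows how such a pipeline could be assembled with the open-source face_recognition library; the function and parameter names are hypothetical.

```python
# Hypothetical sketch of a headshot-matching pipeline (not Waldo's actual code),
# built on the open-source face_recognition library.
import face_recognition

def match_camp_photo(photo_path, parent_headshots):
    """Return the names of enrolled children found in a camp photo.

    parent_headshots: dict mapping a child's name to the path of a
    parent-provided headshot (names and structure are illustrative).
    """
    # Encode each parent-provided headshot once; a real service would cache these.
    known_names, known_encodings = [], []
    for name, headshot_path in parent_headshots.items():
        image = face_recognition.load_image_file(headshot_path)
        encodings = face_recognition.face_encodings(image)
        if encodings:
            known_names.append(name)
            known_encodings.append(encodings[0])

    # Detect and encode every face in the newly uploaded camp photo,
    # then compare each one against the enrolled headshots.
    photo = face_recognition.load_image_file(photo_path)
    matched_children = set()
    for face_encoding in face_recognition.face_encodings(photo):
        results = face_recognition.compare_faces(
            known_encodings, face_encoding, tolerance=0.6)
        for name, is_match in zip(known_names, results):
            if is_match:
                matched_children.add(name)  # a real service would now text the parents
    return matched_children
```

In practice a service like this would also handle consent, retention, and the privacy concerns the article raises, none of which appear in this sketch.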


The A.I. "Gaydar" Study and the Real Dangers of Big Data

#artificialintelligence

Every face does not tell a story; it tells thousands of them. Over evolutionary time, the human brain has become an exceptional reader of the human face: computerlike, we like to think. A viewer instinctively knows the difference between a real smile and a fake one. In July, a Canadian study reported that college students can reliably tell if people are richer or poorer than average simply by looking at their expressionless faces. Scotland Yard employs a team of "super-recognizers" who can, from a pixelated photo, identify a suspect they may have seen briefly years earlier or come across in a mug shot.


Seize the data with Hewlett Packard Enterprise

#artificialintelligence

Empowering the data-driven organization is a core element of our strategy at Hewlett Packard Enterprise. That can sound like just another fancy marketing campaign, unless you were in a seat at the Seize the Data Analytics World Tour event in Palo Alto. As I sat smiling through the Kung Fu Panda 3 trailer and watched Jeff Wike, Head of Technology for Film and TV Production at DreamWorks Animation, take the stage, I expected to hear how analytics helped DreamWorks Animation analyze how many people watched the film and how they chose to do so: in the theater, on demand, and on what device. That information is foundational to any organization in the media industry these days. I didn't expect to hear that the HPE Vertica Advanced Analytics database improved artists' ability to iterate designs, and even render panda fur, by minimizing compute resources, or that the studio was able to quickly redesign the characters' facial movements when the movie was translated into Mandarin, also thanks to analytics.


Plan for massive facial recognition database sparks privacy concerns

The Guardian

If you've had a driver's licence photo or passport photo taken in Australia in the past few years, it's likely your face will end up in a massive new national network the federal government is trying to create. Victoria and Tasmania have already begun to upload driver's licence details to state databases that will eventually be linked to a future national one. Legislation before federal parliament will allow government agencies and private businesses to access facial IDs held by state and territory traffic authorities, and passport photos held by the foreign affairs department. The justification for what would be the most significant compulsory collection of personal data since My Health Record is cracking down on identity fraud. The home affairs department estimates that the annual cost of ID fraud is $2.2bn, and says introducing a facial component to the government's document verification service would help prevent it.


Text Mining Support in Semantic Annotation and Indexing of Multimedia Data

AAAI Conferences

This short paper describes a demonstrator that complements the paper "Towards Cross-Media Feature Extraction" in these proceedings. The demo exemplifies the use of textual resources, from which semantic information can be extracted, to support the semantic annotation and indexing of associated video material in the soccer domain. Entities and events extracted from textual data are marked up with semantic classes derived from an ontology modeling the soccer domain. We further show how audio-video features extracted by video analysis can be taken into account for additional annotation of specific soccer event types, and how those different types of annotation can be combined.
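A minimal sketch of the general idea, not the authors' system: ontology-typed events extracted from text are aligned with event-type detections from video analysis on approximate match time, so the two annotation streams can be combined. All class, function, and field names below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of combining text-derived and video-derived soccer annotations.
from dataclasses import dataclass

@dataclass
class Annotation:
    minute: int        # approximate time in the match
    event_class: str   # semantic class from a soccer ontology, e.g. "Goal"
    source: str        # "text" or "video"
    detail: str = ""   # e.g. player name from text, or shot description from video

def combine_annotations(text_events, video_events, window=1):
    """Pair text and video annotations that share an event class and occur
    within `window` minutes of each other."""
    combined = []
    for t in text_events:
        for v in video_events:
            if t.event_class == v.event_class and abs(t.minute - v.minute) <= window:
                combined.append((t, v))
    return combined

# Toy example with made-up data:
text_events = [Annotation(23, "Goal", "text", "scored by Player X")]
video_events = [Annotation(22, "Goal", "video", "close-up of celebration")]
print(combine_annotations(text_events, video_events))
```

The point of the pairing is the one the abstract makes: textual sources supply semantic detail (who, what, which ontology class), while video analysis supplies the corresponding audio-visual evidence, and indexing benefits from having both attached to the same event.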