Kim Binsted
Sony Computer Science Lab
3-14-13 Higashigotanda, Shinagawa-ku, Tokyo 141

Abstract

Byrne is a talking head system, developed with two goals in mind: to allow artists to create entertaining characters with strong personalities, expressed through speech and facial animation; and to allow cognitive scientists to implement and test theories of emotion and expression. Here we emphasize the latter aim. We describe Byrne's design, and discuss some ways in which it could be used in affect-related experiments. Byrne's first domain is football commentary; that is, Byrne provides an emotionally expressive running commentary on a RoboCup simulation league football game. We will give examples from this domain throughout this paper.
Next time you hear a voice generated by Baidu's Deep Voice 2, you might not be able to tell whether it's human. Baidu, the Beijing-based juggernaut that commands 80 percent of the Chinese internet search market, is investing heavily in artificial intelligence. In 2013, it opened the Institute of Deep Learning, an R&D center focused on machine learning. And in May, it took the wraps off the newest version of Deep Voice, its AI-powered text-to-speech engine. Deep Voice 2, which follows on the heels of Deep Voice's public debut earlier this year, can produce real-time speech that's nearly indistinguishable from a human voice.
The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime," as one co-author, himself a former NYPD police officer, outlined in the original press release. At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature: research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world.

To reiterate our demands: the review committee must publicly rescind the offer of publication for this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging its role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.
Automated facial recognition systems from Japanese biz NEC will be used on staffers and athletes at the Tokyo 2020 Olympics. The technology – which is not without its detractors in the UK – was demonstrated at a media event in the city today. It will require athletes, staff, volunteers and the press to submit their photographs before the games start. These will then be linked up to IC chips in their passes and combined with scanners on entry to allow them access to more than 40 facilities. Tsuyoshi Iwashita, head of security for the games, said the aim was to reduce pressure on entry points and shorten queueing time for this group of people.
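The access-control flow described above — an enrolled photo linked to the IC chip's pass ID, then matched against a live scan at the gate — can be sketched in miniature. This is a hypothetical illustration only (the pass IDs, embeddings, threshold, and `gate_check` helper are all invented for this sketch; NEC's actual system is proprietary), using cosine similarity between face embeddings, a common verification approach:

```python
import numpy as np

# Hypothetical enrollment database: pass ID -> face embedding derived
# from the photo submitted before the games (embeddings invented here;
# a real system would produce them with a trained face-recognition model).
enrolled = {
    "PASS-001": np.array([0.1, 0.9, 0.2, 0.4]),
    "PASS-002": np.array([0.8, 0.1, 0.5, 0.3]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_check(pass_id, live_embedding, threshold=0.9):
    """Verify that the face scanned at the entry point matches the
    embedding enrolled against the pass ID read from the IC chip."""
    stored = enrolled.get(pass_id)
    if stored is None:
        return False  # unknown pass: deny entry
    return cosine_similarity(stored, live_embedding) >= threshold

# A live scan close to PASS-001's enrolled embedding is admitted.
print(gate_check("PASS-001", np.array([0.12, 0.88, 0.21, 0.41])))
```

Because the chip supplies the claimed identity, the gate only has to do one-to-one verification against a single stored embedding, rather than a one-to-many search — which is part of how such a setup can shorten queueing time.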
Can software identify complex personality traits simply by analysing your face? Faception, a start-up based in Tel Aviv, Israel, courted controversy this week when it claimed its tech does just that. And not just broad categories such as introvert or extrovert: Faception claims it can spot terrorists, paedophiles – and brand promoters. "Using automated feature extraction is standard for face recognition and emotion recognition," says Raia Hadsell, a machine vision engineer at Google DeepMind. The controversial part is what happens next.