A talking head architecture for entertainment and experimentation

AAAI Conferences

Kim Binsted
Sony Computer Science Lab, 3-14-13 Higashigotanda, Shinagawa-ku, Tokyo 141

Abstract: Byrne is a talking head system, developed with two goals in mind: to allow artists to create entertaining characters with strong personalities, expressed through speech and facial animation; and to allow cognitive scientists to implement and test theories of emotion and expression. Here we emphasize the latter aim. We describe Byrne's design, and discuss some ways in which it could be used in affect-related experiments. Byrne's first domain is football commentary; that is, Byrne provides an emotionally expressive running commentary on a RoboCup simulation league football game. We will give examples from this domain throughout this paper.

Baidu's Deep Voice 2 text-to-speech engine can imitate hundreds of human accents


Next time you hear a voice generated by Baidu's Deep Voice 2, you might not be able to tell whether it's human. Baidu, the Beijing-based juggernaut that commands 80 percent of the Chinese internet search market, is investing heavily in artificial intelligence. In 2013, it opened the Institute of Deep Learning, an R&D center focused on machine learning. And in May, it took the wraps off the newest version of Deep Voice, its AI-powered text-to-speech engine. Deep Voice 2, which follows on the heels of Deep Voice's public debut earlier this year, can produce real-time speech that's nearly indistinguishable from a human voice.

Neural Style Transfer: Creating Art with Deep Learning using tf.keras and eager execution


In this tutorial, we will learn how to use deep learning to compose images in the style of another image (ever wish you could paint like Picasso or Van Gogh?). This is known as neural style transfer! The technique is outlined in Leon A. Gatys' paper, A Neural Algorithm of Artistic Style, which is a great read, and you should definitely check it out. Neural style transfer is an optimization technique that takes three images -- a content image, a style reference image (such as an artwork by a famous painter), and the input image you want to style -- and blends them together such that the input image is transformed to look like the content image, but "painted" in the style of the style image. For example, let's take an image of this turtle and Katsushika Hokusai's The Great Wave off Kanagawa: What would it look like if Hokusai decided to add the texture or style of his waves to the image of the turtle?
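The optimization described above minimizes a weighted sum of a content loss (difference between feature maps) and a style loss (difference between Gram matrices of feature maps), as in Gatys et al. Below is a minimal NumPy sketch of just those loss terms; the feature arrays, function names, and weight values are illustrative assumptions, not the tutorial's actual tf.keras code:

```python
import numpy as np

def gram_matrix(features):
    # features: (positions, channels) activations from one network layer.
    # The Gram matrix captures which channels co-activate -- the "style".
    return features.T @ features / features.shape[0]

def style_transfer_loss(input_feats, content_feats, style_feats,
                        content_weight=1e3, style_weight=1e-2):
    # Content loss: keep the input's features close to the content image's.
    content_loss = np.mean((input_feats - content_feats) ** 2)
    # Style loss: match channel correlations (Gram matrices) of the style image.
    style_loss = np.mean(
        (gram_matrix(input_feats) - gram_matrix(style_feats)) ** 2
    )
    # Hypothetical weights trade off content fidelity against style texture.
    return content_weight * content_loss + style_weight * style_loss
```

In the full method this scalar is minimized by gradient descent on the input image's pixels, with features taken from several layers of a pretrained network such as VGG.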

Controversial software claims to tell personality from your face

New Scientist

Can software identify complex personality traits simply by analysing your face? Faception, a start-up based in Tel Aviv, Israel, courted controversy this week when it claimed its tech does just that. And not just broad categories such as introvert or extrovert: Faception claims it can spot terrorists, paedophiles – and brand promoters. "Using automated feature extraction is standard for face recognition and emotion recognition," says Raia Hadsell, a machine vision engineer at Google DeepMind. The controversial part is what happens next.

Apple Reportedly Acquires AI-Based Facial Recognition Startup RealFace


In a bid to boost its prospects in the world of artificial intelligence (AI), Apple has acquired Israel-based startup RealFace, which develops deep learning-based face authentication technology, media reported on Monday. As reported by Calcalist, the acquisition is said to be worth roughly $2 million (roughly Rs. 13.39 crores). A Times of Israel report cites Startup Nation Central to note that RealFace had raised $1 million in funding thus far, employed about 10 people, and had sales operations in China, Europe, Israel, and the US. Set up in 2014 by Adi Eckhouse Barzilai and Aviv Mader, RealFace has developed facial recognition software that offers users a smart biometric login, aiming to make passwords redundant when accessing mobile devices or PCs. The firm's first app - Pickeez - selects the best photos from the user's album.