Apple has purchased the company behind motion-capture technology used in the latest Star Wars film. Faceshift, a Zurich-based start-up, specialises in software that allows 3D animated characters to mimic the facial expressions of an actor. Apple has now bought the company, though it is not known how much the deal cost the tech giant. It is also unclear what Apple's plans are for the company following its acquisition. A spokesman said: "Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans."
People feel emotions when listening to music. However, emotions are difficult to capture and quantify algorithmically, so they cannot easily be used as inputs to the music composition process. We present a novel musical interface, Mugeetion, designed to detect users' emotional states from their facial gestures and relay that data to associated musical features. Mugeetion translates qualitative data about emotional states into quantitative data that can be used in the sound generation process. We also presented and tested this work in the exhibition of the sound installation Hearing Seascape, which used audience members' facial expressions: audiences heard the background sound change based on their emotional states. This work contributes to multiple research areas, including gesture tracking systems, emotion-sound modeling, and the connection between sound and facial gesture.
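The abstract's core idea of turning a quantified emotional state into musical features can be sketched as follows. This is a minimal illustration under assumed conventions, not Mugeetion's actual mapping: the valence/arousal inputs and the specific parameter formulas are hypothetical.

```python
# Hypothetical sketch of emotion-sound modeling: map a quantified
# emotional state (valence/arousal, each in [-1, 1]) to sound parameters.
# The mapping rules here are illustrative assumptions, not the paper's.

def emotion_to_sound(valence, arousal):
    """Map a valence/arousal pair to simple musical parameters."""
    # Higher arousal -> faster tempo (60-180 BPM range).
    tempo_bpm = 120 + 60 * arousal
    # Positive valence -> major-leaning mode and a brighter pitch center.
    mode = "major" if valence >= 0 else "minor"
    pitch_center = 60 + round(12 * valence)  # MIDI note around middle C
    return {"tempo_bpm": tempo_bpm, "mode": mode, "pitch_center": pitch_center}

params = emotion_to_sound(valence=0.5, arousal=0.25)
print(params)  # {'tempo_bpm': 135.0, 'mode': 'major', 'pitch_center': 66}
```

A real pipeline would place a facial-expression classifier in front of this function and feed the resulting parameters to a synthesizer; the point here is only the qualitative-to-quantitative translation step.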
Turchyn, Sergiy (Case Western Reserve University) | Moreno, Inés Olza (Institute for Culture and Society, University of Navarra) | Cánovas, Cristóbal Pagán (Institute for Culture and Society, University of Navarra) | Steen, Francis F. (University of California-Los Angeles) | Turner, Mark (Case Western Reserve University) | Valenzuela, Javier (University of Murcia) | Ray, Soumya (Case Western Reserve University)
Human communication is multimodal and includes elements such as gesture and facial expression along with spoken language. Modern technology makes it feasible to capture all such aspects of communication in natural settings. As a result, similar to fields such as genetics, astronomy, and neuroscience, scholars in areas such as linguistics and communication studies are on the verge of a data-driven revolution in their fields. These new approaches require analytical support from machine learning and artificial intelligence to develop tools that help process the vast data repositories. The Distributed Little Red Hen Lab project is an international team of interdisciplinary researchers building a large-scale infrastructure for data-driven multimodal communications research. In this paper, we describe a machine learning system developed to automatically annotate a large database of television program videos as part of this project. The annotations mark regions where people or speakers are on screen, along with body-part motions including head, hand, and shoulder motion. We also annotate a specific class of gestures known as timeline gestures. An existing gesture annotation tool, ELAN, can be used with these annotations to quickly locate gestures of interest. Finally, we provide an update mechanism for the system based on human feedback. We empirically evaluate the accuracy of the system, and present data from pilot human studies to show its effectiveness at aiding gesture scholars in their work.
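The annotations described above are time-aligned regions over video. A minimal sketch of how such regions might be represented and queried before export to an annotation tool like ELAN is given below; the data structure and labels are illustrative assumptions, not the Red Hen system's actual code or ELAN's file format.

```python
# Illustrative sketch (assumed, not the actual Red Hen implementation):
# time-interval annotations such as "speaker on screen" or "timeline
# gesture", with a query to find annotations overlapping a time window.

from dataclasses import dataclass

@dataclass
class Annotation:
    start_ms: int  # region start, milliseconds into the video
    end_ms: int    # region end
    label: str     # e.g. "head_motion", "timeline_gesture"

def overlapping(annotations, query_start, query_end):
    """Return annotations whose interval overlaps [query_start, query_end)."""
    return [a for a in annotations
            if a.start_ms < query_end and a.end_ms > query_start]

tier = [
    Annotation(0, 1500, "speaker_on_screen"),
    Annotation(1200, 2400, "timeline_gesture"),
    Annotation(3000, 3800, "head_motion"),
]
hits = overlapping(tier, 1000, 2000)
print([a.label for a in hits])  # ['speaker_on_screen', 'timeline_gesture']
```

In practice each label class would live on its own tier, mirroring how ELAN organizes time-aligned annotations, so a gesture scholar can jump directly to candidate regions rather than scrubbing through hours of footage.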
The iPhone 8 could include face recognition and a "wraparound" screen design, according to Cowen and Company analyst Timothy Arcuri, Business Insider reports. In a research note circulated to Cowen and Company clients, Arcuri predicts three iPhone models later this year; the next flagship is referred to in the note as the "iPhone X." According to Apple Insider, which also obtained the note, the iPhone X will be a 5.8-inch OLED iPhone with a "wraparound," "fixed flex" screen design with embedded sensors, and is rumored to come with features such as face recognition.
Artificial intelligence has long been thought of in terms similar to fusion power -- it's always 20 years away. Outside, it was a normal morning. Inside, looking out the window and barely noticing the chickadee, the businesswoman, a manager at a large call center downtown, waited for her morning coffee. It was a short wait. Her coffee maker knew that she woke at 5:30 a.m. It knew that because the alarm clock in the woman's bedroom sensed her movement and saved that information to the woman's Amazon Web Services (AWS) account. The coffee maker, also tied to that AWS account, took the hint and turned itself on. Ten minutes later, the shower came on in the bathroom. This also was noted by the house's systems, and that data point was duly recorded by the woman's AWS account. That was the next cue for the coffee maker. It was plumbed directly into the house's water lines, and it opened the valve and filled itself with just the right amount of water to brew the coffee. It was brewing as the woman dressed, and five minutes after the woman appeared, the coffee was delivered to her by her Boston Dynamics personal assistant. She took a sip, just as the chickadee flew away. It was now 6:30 a.m., and time to leave for the office. Just like every other weekday, it would be a peaceful commute. As she left her apartment building and the door closed behind her, her ride was just pulling up to the curb.