About Us
Faceware Tech develops innovative facial motion capture products for the professional animation and videogame industries. We make the tools used by your favorite videogame developers (EA, Bungie, Rockstar, 2K, WB, Sony, Capcom, and more) and VFX studios (Dneg, Atomic Fiction, Oats, CamD) to create high-end facial animation. At SIGGRAPH 2016, we announced FTI Interactive, a new division within the company focused on real-time content creation. The world's demand for content is higher than ever, with interactive, real-time content creation on the cusp of a revolutionary breakthrough. This position will be part of the team leading the cutting edge of entertainment by innovating new techniques for the exciting up-and-coming industries of VR, MR, and AR.
Tracking People with Integrated Stereo, Color, and Face Detection

Abstract
We present an approach to robust, real-time person tracking in crowded and/or unknown environments using multimodal integration. We combine stereo, color, and face detection modules into a single robust system, and show an initial application for an interactive display where the user sees his face distorted into various comic poses in real time. Stereo processing is used to isolate the figure of a user from other objects and people in the background. Skin-hue classification identifies and tracks likely body parts within the foreground region, and face pattern detection discriminates and localizes the face within the tracked body parts. We discuss the failure modes of these individual components, and report results with the complete system in trials with thousands of users.

Introduction
The creation of displays or environments which passively observe and react to people is an exciting challenge for computer vision.
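The abstract describes a cascade of three cues: stereo segmentation isolates the foreground, skin-hue classification narrows it to likely body parts, and a face detector localizes the face. A minimal sketch of that cascade, with made-up threshold values and a simple bounding-box step standing in as a placeholder for the paper's face pattern detector:

```python
import numpy as np

def skin_hue_mask(hue, sat, hue_range=(0, 30), min_sat=40):
    """Mark pixels whose hue/saturation fall in an (illustrative) skin range."""
    return (hue >= hue_range[0]) & (hue <= hue_range[1]) & (sat >= min_sat)

def track_face(disparity, hue, sat, near_thresh=32):
    """Cascade the three cues: stereo foreground -> skin hue -> face region.

    Returns a bounding box (r0, r1, c0, c1) for the candidate face,
    or None if no candidate pixels survive the cascade.
    """
    foreground = disparity >= near_thresh              # stereo: keep near pixels
    candidates = foreground & skin_hue_mask(hue, sat)  # color: keep skin pixels
    if not candidates.any():
        return None
    rows, cols = np.nonzero(candidates)
    # Placeholder for the paper's face pattern detector: report the
    # bounding box of the surviving skin pixels as the face region.
    return rows.min(), rows.max(), cols.min(), cols.max()
```

Each stage only has to veto false positives from the previous one, which is why the combined system can be more robust than any single module.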
The Gromov-Hausdorff distance provides a metric on the set of isometry classes of compact metric spaces. Unfortunately, computing this metric directly is believed to be computationally intractable. Motivated by applications in shape matching and point-cloud comparison, we study a semidefinite programming relaxation of the Gromov-Hausdorff metric. This relaxation can be computed in polynomial time, and, somewhat surprisingly, is itself a pseudometric. We describe the induced topology on the set of compact metric spaces. Finally, we demonstrate the numerical performance of various algorithms for computing the relaxed distance and apply these algorithms to several relevant data sets. In particular, we propose a greedy algorithm for finding the best correspondence between finite metric spaces that can handle hundreds of points.
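For finite metric spaces given by distance matrices, the quantity being relaxed is the infimum over correspondences of half the distortion, max |d_X(x,x') - d_Y(y,y')|. A toy sketch of a greedy matcher in this spirit (the function names and the exact greedy rule are illustrative assumptions, not the paper's algorithm; it assumes the second space has at least as many points as the first):

```python
import numpy as np

def distortion(DX, DY, pairs):
    """Distortion of a partial correspondence: max |d_X(x,x') - d_Y(y,y')|
    over all matched pairs (x,y), (x',y')."""
    ix = [i for i, _ in pairs]
    iy = [j for _, j in pairs]
    return np.abs(DX[np.ix_(ix, ix)] - DY[np.ix_(iy, iy)]).max()

def greedy_correspondence(DX, DY):
    """Match each point of X to the unused point of Y that least increases
    the distortion of the partial matching; assumes len(DY) >= len(DX)."""
    pairs, used = [], set()
    for i in range(len(DX)):
        best_j, best_cost = None, np.inf
        for j in range(len(DY)):
            if j in used:
                continue
            cost = distortion(DX, DY, pairs + [(i, j)])
            if cost < best_cost:
                best_j, best_cost = j, cost
        pairs.append((i, best_j))
        used.add(best_j)
    # The Gromov-Hausdorff-style value is half the final distortion.
    return pairs, distortion(DX, DY, pairs) / 2.0
```

This naive search illustrates the combinatorial side of the problem; the semidefinite relaxation discussed above avoids enumerating correspondences directly.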
Building on earlier work which teaches computers to recognise emotions and expressions in human faces, the system is able to detect the distinct parts of a sheep's face and compare them with a standardised measurement tool developed by veterinarians for diagnosing pain. Their results will be presented today (1 June) at the 12th IEEE International Conference on Automatic Face and Gesture Recognition in Washington, DC. Severe pain in sheep is associated with conditions such as foot rot, an extremely painful and contagious condition which causes the foot to rot away, or mastitis, an inflammation of the udder in ewes caused by injury or bacterial infection. Both of these conditions are common in large flocks, and early detection would lead to faster treatment and pain relief. Reliable and efficient pain assessment would also help with early diagnosis.
One of today's more popular artificially intelligent (AI) androids comes from the TV series "MARVEL's Agents of S.H.I.E.L.D." Those of you who followed the latest season's story will know the one -- no spoilers here! One of the most interesting things about this fictional AI character is that it can read people's emotions. Thanks to researchers from the University of Cambridge, this ability might soon make the jump from sci-fi to reality. The first step in creating such a system is training an algorithm on simpler facial expressions and just one specific emotion or feeling. To that end, the Cambridge team focused on using a machine learning algorithm to figure out whether a sheep is in pain, and this week, they presented their research at the IEEE International Conference on Automatic Face and Gesture Recognition in Washington, D.C.