"Computers have been getting better and better at seeing movement on video. How is it that they read lips, follow a dancing girl or copy an actor making faces?"
– from Andrew Blake. Introduction to Active Contours and Visual Dynamics. Visual Dynamics Group, Department of Engineering Science, University of Oxford
This week's furor over FaceApp has largely centered on concerns that its Russian developers might be compelled to share the app's data with the Russian government, much as the Snowden disclosures illustrated the myriad ways in which American companies were compelled to disclose their private user data to the US government. Yet this reflects a mistaken understanding of how the modern data trade actually works: American universities and companies routinely make their data available to companies all across the world, including in Russia and China. In today's globalized world, data is just as globalized, with national borders no longer restricting the flow of our personal information - a trend made worse by the data-hungry world of deep learning. Data brokers have long bought and sold our personal data in a shadowy world of international trade involving our most intimate and private information. The digital era has upended this explicit trade through the interlocking world of passive exchange via analytics services.
The use of facial recognition in the United States public sector has received a great deal of press lately, and most of it isn't positive. There's a lot of concern over how state and federal government agencies are using this technology and how the resulting biometric data will be used. Many fear that the use of this technology will lead to a Big Brother state. Unfortunately, these concerns are not without merit. We're already seeing damaging results where this technology is prevalent in countries like China, Singapore, and even the United Kingdom where London authorities recently fined a man for disorderly conduct for covering his face to avoid surveillance on the streets.
The Orlando police department has terminated its trial of Amazon's AI-powered facial recognition software, called Rekognition, for the second time, citing costs and complexity. According to a report from Orlando Weekly, the department ended the pilot after 15 months of glitches and concerns over whether the technology was actually working, having been unable to get the system configured properly. "At this time, the city was not able to dedicate the resources to the pilot to enable us to make any noticeable progress toward completing the needed configuration and testing," Orlando's Chief Administrative Office said in a memo to City Council, as reported by Orlando Weekly.
With images aggregated from social media platforms, dating sites, or even CCTV footage of a trip to the local coffee shop, companies could be using your face to train sophisticated facial recognition software. As reported by the New York Times, among the sometimes massive data sets that researchers use to teach artificially intelligent software to recognize faces is a database collected by Stanford researchers called Brainwash. More than 10,000 images of customers at a cafe in San Francisco were collected in 2014 without their knowledge. OkCupid and photo-sharing platforms like Flickr are among the sources for researchers looking to load their databases up with images that help train facial recognition software. The Brainwash database was then made available to other academics, including some in China at the National University of Defense Technology.
Megvii, an Alibaba-backed AI startup better known as a supplier of facial recognition software used by the Chinese government, has developed software that can identify dogs by their noses. No, it isn't April 1st; the software really can tell one dog from another by using nasal biometrics. KrAsia news reports that the company built the software on the basis that dogs have unique nose prints. Dr. David Dorman, a professor of toxicology, has previously said: "Like human fingerprints, each dog has a unique nose print. Some kennel clubs have used dog nose prints for identification."
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
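The subgroup audit described in the abstract amounts to comparing classification error rates across intersectional (skin type, gender) groups. Here is a minimal sketch of that computation; the record layout and field names are hypothetical, not from the paper:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute gender-classification error rate per (skin_type, gender) group.

    Each record is a dict with hypothetical keys 'skin_type', 'gender'
    (the ground-truth label), and 'predicted_gender' (the classifier output).
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        key = (r["skin_type"], r["gender"])
        totals[key] += 1
        if r["predicted_gender"] != r["gender"]:
            errors[key] += 1
    # Error rate = misclassified / total, per subgroup
    return {k: errors[k] / totals[k] for k in totals}

# Toy example: 2 darker-skinned female subjects (1 misclassified),
# 1 lighter-skinned male subject (correct)
sample = [
    {"skin_type": "darker", "gender": "female", "predicted_gender": "male"},
    {"skin_type": "darker", "gender": "female", "predicted_gender": "female"},
    {"skin_type": "lighter", "gender": "male", "predicted_gender": "male"},
]
rates = subgroup_error_rates(sample)
# rates[("darker", "female")] -> 0.5; rates[("lighter", "male")] -> 0.0
```

Disaggregating error rates this way, rather than reporting a single aggregate accuracy, is what exposes the disparity the paper reports: a classifier can look accurate overall while failing badly on an underrepresented subgroup.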
Roundup Hello, here are a few announcements from the world of machine learning beyond what we've already covered this week. AlphaStar is coming out to play: AlphaStar, the StarCraft II-playing bot built by DeepMind researchers, will be facing human players in a series of 1v1 games online. StarCraft II players can enter the open competition league set up by Blizzard Entertainment, the creators of the popular strategy game, and opt in to play against AlphaStar. Nobody will know if they're facing the bot, however, because it'll be entering the matches anonymously. Characters in StarCraft II come from three races: Terran, Zerg, and Protoss.
The sophisticated technology that powers face recognition in many modern smartphones someday could receive a high-tech upgrade that sounds (and looks) surprisingly low-tech. This window to the future is none other than a piece of glass. University of Wisconsin-Madison engineers have devised a method to create pieces of "smart" glass that can recognize images without requiring any sensors, circuits, or power sources. "We're using optics to condense the normal setup of cameras, sensors and deep neural networks into a single piece of thin glass," says UW-Madison electrical and computer engineering professor Zongfu Yu. Yu and colleagues published details of their proof-of-concept research today in the journal Photonics Research.
Dozens of databases of people's faces are being compiled without their knowledge by companies and researchers, with many of the images then being shared around the world, in what has become a vast ecosystem fueling the spread of facial recognition technology. The databases are pulled together with images from social networks, photo websites, dating services like OkCupid and cameras placed in restaurants and on college quads. While there is no precise count of the data sets, privacy activists have pinpointed repositories that were built by Microsoft, Stanford University and others, with one holding over 10 million images while another had more than two million. The face compilations are being driven by the race to create leading-edge facial recognition systems. This technology learns how to identify people by analyzing as many digital pictures as possible using "neural networks," which are complex mathematical systems that require vast amounts of data to build pattern recognition.
Megvii, a Chinese AI startup that supplies facial recognition software for the Chinese government's surveillance program, is expanding its technology beyond humans to recognize different faces of pets. As reported by Abacus News, Megvii's new program is trained to recognize dogs by their nose prints -- much like how humans have unique fingerprints. Using the Megvii app, the company says it can register your dog simply by scanning the snout through your phone's camera. Just like how a phone registers your fingerprint for biometric unlocks, the app asks you to take photos of your dog's nose from multiple angles. Megvii says it has an accuracy rate of 95 percent and has reunited 15,000 pets with their owners through the app.