"Computers have been getting better and better at seeing movement on video. How is it that they read lips, follow a dancing girl or copy an actor making faces?"
– from Andrew Blake. Introduction to Active Contours and Visual Dynamics. Visual Dynamics Group, Department of Engineering Science, University of Oxford
Clearview AI is just one of many facial recognition firms scraping billions of online images to create a massive database for purchase – but a new program could block their efforts. Researchers designed an image-cloaking tool that makes subtle pixel-level changes, distorting pictures just enough that they cannot be used by online scrapers – and its creators claim it is 100 percent effective. Named in honor of the 'V for Vendetta' mask, Fawkes is an algorithm and software combination that 'cloaks' an image to trick recognition systems, like adding an invisible mask to your face. Models trained on these altered pictures learn a distorted version of the subject's face, so when presented with an 'uncloaked' photo, the scraping app fails to recognize the individual. 'It might surprise some to learn that we started the Fawkes project a while before the New York Times article that profiled Clearview.ai in February 2020,' researchers from the SAND Lab at the University of Chicago shared in a statement.
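The cloaking idea, small pixel-level changes that shift a model's feature representation while staying invisible to humans, can be illustrated with a deliberately simplified sketch. Fawkes actually computes a targeted perturbation by optimizing against a real feature extractor; the random noise and the toy "embedding" below are invented placeholders, not the real system:

```python
import numpy as np

def toy_embedding(img: np.ndarray) -> np.ndarray:
    """Stand-in for a face-recognition feature extractor (NOT the real model)."""
    # Coarse 4x4 average-pooled grid used as a toy "feature vector".
    h, w = img.shape
    return img.reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3)).ravel()

def cloak(img: np.ndarray, budget: float = 3.0, seed: int = 0) -> np.ndarray:
    """Add a small bounded perturbation (at most +/- `budget` intensity levels)."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-budget, budget, size=img.shape)
    return np.clip(img + noise, 0, 255)

img = np.full((16, 16), 128.0)  # flat grey stand-in "photo"
cloaked = cloak(img)

# Visually the two images are nearly identical (every pixel within 3 levels)...
assert np.max(np.abs(cloaked - img)) <= 3.0
# ...yet the toy feature vector has measurably shifted.
shift = np.linalg.norm(toy_embedding(cloaked) - toy_embedding(img))
assert shift > 0
```

The real tool replaces the random noise with an optimization that pushes the image's features toward a different identity, which is why models trained on cloaked photos learn the wrong face.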
Researchers from the US National Institute of Standards and Technology (NIST) found in a report that face masks are thwarting even the most advanced facial recognition algorithms, causing them to fail as much as 50% of the time. Error rates varied from 5% to 50%, depending on an algorithm's capabilities. The results are troubling for the facial recognition industry, which has been scrambling to develop algorithms that can identify people through their eyes and nose alone as face masks become the norm amid the coronavirus pandemic.
"If we go in front of a live camera that is using facial recognition to identify and interpret who they're looking at and compare that to a passport photo, we can realistically and repeatedly cause that kind of targeted misclassification," said the study's lead author, Steve Povolny. To misdirect the algorithm, the researchers used an image translation algorithm known as CycleGAN, which excels at morphing photographs from one style into another. For example, it can make a photo of a harbor look as if it were painted by Monet, or make a photo of mountains taken in the summer look like it was taken in the winter. The McAfee team used 1,500 photos of each of the project's two leads and fed the images into a CycleGAN to morph them into one another. At the same time, they used the facial recognition algorithm to check the CycleGAN's generated images to see who it recognized. After generating hundreds of images, the CycleGAN eventually created a faked image that looked like person A to the naked eye but fooled the face recognition into thinking it was person B. While the study raises clear concerns about the security of face recognition systems, there are some caveats.
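The search loop the McAfee team describes (generate candidate images, test each against the recognizer, stop when one is classified as person B while still resembling person A) can be sketched with toy stand-ins. A real attack pairs a CycleGAN generator with a production face-recognition network; here a brightness-based classifier and a pixel-distance check, both invented for illustration, take their places:

```python
import numpy as np

def recognizer(img, refs):
    """Toy classifier: label an image with the nearest reference by mean intensity."""
    return min(refs, key=lambda name: abs(img.mean() - refs[name].mean()))

def looks_like(img, ref, tol=60.0):
    """Crude proxy for 'passes the naked-eye check': small mean pixel distance."""
    return np.abs(img - ref).mean() < tol

rng = np.random.default_rng(0)
person_a = rng.uniform(60, 80, (8, 8))    # stand-in photo of person A (darker)
person_b = rng.uniform(170, 190, (8, 8))  # stand-in photo of person B (lighter)
refs = {"A": person_a, "B": person_b}

# Generate-and-test loop: each blend stands in for one CycleGAN sample.
found = None
for step in range(1, 100):
    alpha = step / 100
    morph = (1 - alpha) * person_a + alpha * person_b
    if recognizer(morph, refs) == "B" and looks_like(morph, person_a):
        found = (step, morph)
        break

assert found is not None  # an image classified as B yet still close to A exists
```

The real attack replaces the linear blend with CycleGAN samples and the brightness heuristic with a face-recognition model, but the generate-then-verify structure is the same.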
Advances in Artificial Intelligence (AI) and computer processors have enabled online face recognition services that were not possible before. Startups all over the world are developing apps and products that make use of face recognition, bringing to market capabilities such as user authentication, attendance tracking and photo grouping (for event photographers), to name a few. Online face recognition software components are challenging to develop in-house. For this reason, it makes sense for startups and software companies to buy this capability from specialized vendors.
Ubiquitous facial recognition is a serious threat to privacy. The idea that the photos we share are being collected by companies to train algorithms that are sold commercially is worrying. Anyone can buy these tools, snap a photo of a stranger, and find out who they are in seconds. But researchers have come up with a clever way to help combat this problem: a tool named Fawkes, created by scientists at the University of Chicago's SAND Lab.
With AI often thrown around as a buzzword in business circles, people often forget that machine learning is a means to an end, rather than an end in itself. For most companies, building an AI is not the true goal. Instead, AI implementation can provide the tools to meet your goals, be it better customer service through an intuitive chatbot or streamlined video production through synthetic voiceovers. To help shed light on some real-world applications of machine learning, this article introduces five innovative AI software products that you should keep an eye on throughout 2020. Scanta is an AI startup with a very interesting history.
Apple is refreshing its 27-inch iMac, though you'll need a keen eye to spot the differences. The new model doesn't look any different from its predecessors, sporting the same classic look Apple has used for several years now with thick bezels surrounding the 5K display. You won't find any radically new features here either. There's still no biometric authentication, meaning there's no Face ID or Touch ID, and the screen uses the exact same panel and pixel resolution as before. Most of the changes are on the inside, and impact performance.
"I personally think that no matter which approach you use, you lose," said Emily Wenger, a Ph.D. student who helped create Fawkes. "You can have these technological solutions, but it's a cat-and-mouse game. And you can have a law, but there will always be illegal actors." Ms. Wenger thinks "a two-prong approach" is needed, where individuals have technological tools and a privacy law to protect themselves. Elizabeth Joh, a law professor at the University of California, Davis, has written about tools like Fawkes as "privacy protests," where individuals want to thwart surveillance but not for criminal reasons.
Several sports teams are exploring the use of facial-recognition technology in their stadiums, an effort that would help reduce risks from the coronavirus when fans return, the Wall Street Journal reported. The initial outbreak of coronavirus appeared to accelerate because high-occupancy sports venues in Europe acted as super-spreaders – most notably soccer stadiums in Italy, and Champions League matches involving Spanish teams. With some areas seeing the pandemic under control, sports teams are looking to bring back fans in a safe and controlled way. Facial-recognition technology may allow venues to readmit small numbers of fans – most likely season-ticket holders or VIP guests – suggested Shaun Moore, chief executive of Trueface, a facial-recognition supplier. Moore indicated that the primary concern is that even scanning ticket bar codes could help spread the virus.
Face masks are one of the best defenses against the spread of COVID-19, but their growing adoption is having a second, unintended effect: breaking facial recognition algorithms. Wearing face masks that adequately cover the mouth and nose causes the error rate of some of the most widely used facial recognition algorithms to spike to between 5 percent and 50 percent, a study by the US National Institute of Standards and Technology (NIST) has found. Black masks were more likely to cause errors than blue masks, and the more of the nose covered by the mask, the harder the algorithms found it to identify the face. "With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces," said Mei Ngan, an author of the report and NIST computer scientist. "We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind."