"Computers have been getting better and better at seeing movement on video. How is it that they read lips, follow a dancing girl or copy an actor making faces?"
– from Andrew Blake. Introduction to Active Contours and Visual Dynamics. Visual Dynamics Group, Department of Engineering Science, University of Oxford
"A world perfectly fair in some dimensions would be horribly unfair in others." "Fairness" in Artificial Intelligence (AI) applications -- both as a concept and a practice -- is the focus of many organisations as they deploy new technologies for greater effectiveness and efficiencies. That machines are faster at processing large amounts of information and the notion that they are'more objective' than humans, appear to make them an obvious choice for progressivity and seemingly impartial actors in'fairer' decision-making. Yet, algorithmic based decisions have not come without their share of controversies -- Australia's recent'robo-debt' government intervention which wrongly pursued thousands of welfare recipients; the UK's'A-Levels fiasco' of downgrading graduating grades based on historical data, its controversial visa application streaming tool; and concerns about Clearview AI's facial recognition software for policing are raising new questions on the role of these technologies in society. Risk assessments are part of the fabric of modern society, but what we are dealing with here is not just'scaling up' human capacity for decision-making without the unwanted human biases and errors -- we are also extolling the'virtues of objectivity' under the guise of'fairness' (which is inherently subjective!) and failing to recognise the many inter-relationships that are being unraveled through the use of these algorithms in our daily lives.
The government aims to put a facial recognition system into practical use to prevent new coronavirus infections at large-scale events including the Tokyo Olympics and Paralympics, it was learned Friday. The government also hopes to improve the national capacity to conduct saliva-based polymerase chain reaction tests to simultaneously detect cases of influenza and novel coronavirus infection, informed sources said. The proposals are included in a draft program for developing new technologies for preventing coronavirus infection. The government will unveil the program shortly and carry out demonstration tests at relevant ministries and agencies. According to the draft, the government is looking at using security cameras equipped with a facial recognition system to record the movements of visitors to the Tokyo Games, which were postponed to 2021, and other large-scale events, the sources said.
YI Technology (YI), the global provider of advanced, intelligent imaging technologies and products, announced the launch of the Kami Mini Indoor Camera, a tiny indoor security camera with a big brain, and the YI Dome Camera U, a complete coverage home security camera with extra privacy features. Both home security solutions are powered by YI Technology's proprietary, EDGE-based artificial intelligence (AI) technology which offers the most advanced feature set at the lowest possible price point to make smart home technology more accessible to the masses. YI has pioneered the development of a series of AI-enabled camera processing integrated circuits (ICs) which power Kami Mini and YI Dome Camera U. The sophisticated System-On-a-Chip (SoC) processing leverages Edge-based AI allowing for more advanced computing, fewer false alerts, and guaranteed access to AI features whether or not the user has a cloud subscription. Embedding the advanced AI chip in the camera also reduces detection latency because detection happens directly on the device rather than sending it to the cloud and waiting for a return signal. AI-Based Alerts & Face Detection: Equipped with the latest Edge Computing enabled chip, users can review all the faces that appear in their clips directly in the YI Home or Kami Home Apps.
We are all moving towards an era of Artificial Intelligence. Face recognition, once something to marvel at, is now easily implemented using existing libraries and frameworks. Machine learning is now embedded in our lives, and its grasp tightens with time. It was once just a buzzword, but it is now a reality that is making our lives easier and better. So let's talk about some of the problems with Machine Learning.
OpenCV (Open source computer vision) is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage then Itseez (which was later acquired by Intel). The library is cross-platform and free for use under the open-source BSD license. The library has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc.
Editor's Note: The use of face recognition technology in policing has been a long-standing subject of concern, even more so now after the murder of George Floyd and the demonstrations that have followed. In this article, Mike Loukides, VP of Content Strategy at O'Reilly Media, reviews how companies and cities have addressed these concerns, as well as ways in which individuals can mitigate face recognition technology or even use it to increase accountability. We'd love to hear what you think about this piece. Largely on the impetus of the Black Lives Matter movement, the public's response to the murder of George Floyd, and the subsequent demonstrations, we've seen increased concern about the use of facial identification in policing. First, in a highly publicized wave of announcements, IBM, Microsoft, and Amazon declared that they will not sell face recognition technology to police forces.
The Los Angeles Police Commission on Tuesday said it would review the city Police Department's use of facial recognition software and how it compared with programs in other major cities. The commission did so after citing reporting by The Times this week that publicly revealed the scope of the LAPD's use of facial recognition for the first time -- including that hundreds of LAPD officers have used it nearly 30,000 times since 2009. Critics say police denials of its use are part of a long pattern of deception and that transparency is essential, given potential privacy and civil rights infringements. Commission President Eileen Decker said a subcommittee of the commission would "do a deeper dive" into the technology's use and "work with the department in terms of analyzing the oversight mechanisms" for the system. "It's a good time to take a global look at this issue," Decker said.
Facial-recognition tech can see around hoodies or big shades, so pair them with a face covering. Plus, you'll get protection against coronavirus particles and tear gas. There are makeup tutorials online for edgy face paint intended to trick face-recognizing algorithms, but these designs are unproven. Also, it's probably easier for humans to track you if you look like a member of Insane Clown Posse. Make yourself less memorable to both humans and machines by wearing clothing as dark and pattern-free as your commitment to privacy.