"Computers have been getting better and better at seeing movement on video. How is it that they read lips, follow a dancing girl or copy an actor making faces?"
– from Andrew Blake. Introduction to Active Contours and Visual Dynamics. Visual Dynamics Group, Department of Engineering Science, University of Oxford
Pigs could be issued with biometric passports based on facial recognition technology, giving farmers a more practical and welfare-friendly way of identifying individuals than ear notches or tags, the current industry standards. Identifying pigs based on their unique facial features could enable them to receive individualised food and veterinary care, and be traced as they go through meat processing.
Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, looking into identifying people involved in civil unrest in northern India in the past few years, said that they would consider 80 percent accuracy and above as a "positive" match, according to documents obtained by the Internet Freedom Foundation through a public records request. Facial recognition's arrival in India's capital region marks the expansion of Indian law enforcement officials using facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India's lack of a comprehensive data protection law makes matters even more concerning.
Customs and Border Protection is making progress testing and deploying its facial recognition technology at air, sea and land ports across the country. CBP is using the technology to scan travelers at 26 seaports and 159 airports and land ports, the Government Accountability Office told a House Committee on Homeland Security hearing on July 27. CBP has installed facial recognition technology to biometrically confirm the identities of arriving and departing travelers.
Researchers from Michigan State University have devised a way for synthetic faces to take a break from the deepfakes scene and do some good in the world – by helping image recognition systems to become more accurate. The new controllable face synthesis module (CFSM) they've devised is capable of regenerating faces in the style of real-world video surveillance footage, rather than relying on the uniformly higher-quality images used in popular open source datasets of celebrities, which do not reflect all the faults and shortcomings of genuine CCTV systems, such as facial blur, low resolution, and sensor noise – factors that can affect recognition accuracy. CFSM is not intended to authentically simulate head poses, expressions, or the other traits that are the usual objective of deepfake systems, but rather to generate a range of alternative views in the style of the target recognition system, using style transfer. The system is designed to mimic the style domain of the target system, and to adapt its output according to the resolution and range of 'eccentricities' therein. The intended use case includes legacy systems that are unlikely to be upgraded due to cost, and which currently contribute little to the new generation of facial recognition technologies because output that may once have been leading-edge is now poor quality.
Australia's second-biggest appliances chain says it is pausing a trial of facial recognition technology in stores after a consumer group referred it to the privacy regulator for possible enforcement action. In an email on Tuesday, a spokesperson for JB Hi-Fi Ltd said The Good Guys, the chain it owns, would stop trialling a security system with optional facial recognition in two Melbourne outlets. Use of the technology by The Good Guys was "unreasonably intrusive" and potentially in breach of privacy laws, the consumer group, CHOICE, told the Office of the Australian Information Commissioner (OAIC). While the company takes the confidentiality of personal information seriously and is confident it complied with relevant laws, it decided "to pause the trial … pending any clarification from the OAIC regarding the use of this technology", JB Hi-Fi's spokesperson added. The Good Guys was named in a complaint alongside Bunnings, Australia's biggest home improvement chain, and big-box retailer Kmart, both owned by Wesfarmers Ltd, with total annual sales of about 25 billion Australian dollars ($17bn) across 800 stores.
Microsoft is overhauling its artificial intelligence ethics policies and will no longer let companies use its technology to do things such as infer emotion, gender or age using facial recognition technology, the company has said. As part of its new "responsible AI standard", Microsoft says it intends to keep "people and their goals at the centre of system design decisions". The high-level principles will lead to real changes in practice, the company says, with some features being tweaked and others withdrawn from sale. Microsoft's Azure Face service, for instance, is a facial recognition tool that is used by companies such as Uber as part of their identity verification processes. Now, any company that wants to use the service's facial recognition features will need to actively apply for use, including those that have already built it into their products, to prove they are matching Microsoft's AI ethics standards and that the features benefit the end user and society.
New legislation is expected to open the door to the use of facial recognition within a range of surveillance technologies in Ireland, including CCTV cameras, police body cams and automatic number plate recognition (ANPR, or LPR in the U.S.), according to reports by the Irish Times. The facial recognition element has drawn significant backlash from civil society and academics, who consider the move premature given the current stage of the EU AI Act and call for a moratorium on facial recognition technology. An amendment to the Garda Síochána (Digital Recording) Bill (the Garda Síochána being the national police), expected in the autumn after further scrutiny by government, will clarify how these technologies may be used with face biometrics in light of national and European Union legislation such as the GDPR. It could be enacted by the end of the year.
More and more privacy watchdogs around the world are standing up to Clearview AI, a U.S. company that has collected billions of photos from the internet without people's permission. The company, which uses those photos for its facial recognition software, was fined £7.5 million ($9.4 million) by a U.K. regulator on May 26. The U.K. Information Commissioner's Office (ICO) said the firm, Clearview AI, had broken data protection law. The company denies breaking the law. But the case reveals how nations have struggled to regulate artificial intelligence across borders. Facial recognition tools require huge quantities of data.
The UK's data watchdog has fined a facial recognition company £7.5m for collecting images of people from social media platforms and the web to add to a global database. The Information Commissioner's Office (ICO) also ordered US-based Clearview AI to delete the data of UK residents from its systems. Clearview AI has collected more than 20bn images of people's faces from Facebook, other social media companies and from scouring the web. John Edwards, the UK information commissioner, said Clearview's business model was unacceptable. "Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20bn images," he said. "The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service."