"Computers have been getting better and better at seeing movement on video. How is it that they read lips, follow a dancing girl or copy an actor making faces?"
– from Andrew Blake. Introduction to Active Contours and Visual Dynamics. Visual Dynamics Group, Department of Engineering Science, University of Oxford
The U.S. Department of Homeland Security opened industry applications for its 2022 Biometric Technology Rally. The department's Science and Technology directorate is emphasizing discerning people in groups and their level of consent to face biometric scanning. Competitors are to address the challenge of reliably screening small groups of people opting in to facial recognition from among bystanders who have not consented. The competition simulates an unattended, high-throughput scenario in which group-processing systems must rapidly capture biometrics from multiple subjects. Companies will have to match photographs, identify faces, acquire only the biometric images they need, and meet performance benchmarks across demographic groups.
Facial recognition technology has enormous potential across many fields. However, common sources of error and several ethical considerations need to be addressed before its most ambitious applications can be realized. A facial recognition system uses biometrics to map facial features from a photograph or video, then compares that information against a database of known faces to find a match. Facial recognition can help verify a person's identity, but it also raises privacy issues. A few decades ago, few would have predicted that facial recognition would become a near-indispensable part of everyday life.
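The match step described above typically reduces to comparing feature vectors ("embeddings"). The following is a minimal sketch of that comparison, with toy four-dimensional vectors standing in for the high-dimensional embeddings a real face recognition model would produce; the names, numbers, and threshold are all illustrative, not from any real system.

```python
import math

# Toy 4-D "embeddings" standing in for the high-dimensional feature
# vectors a real model would extract from enrollment photos.
# All names and numbers here are made up for illustration.
KNOWN_FACES = {
    "alice": [0.9, 0.1, 0.3, 0.2],
    "bob":   [0.1, 0.8, 0.5, 0.4],
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.95):
    """Return the best-matching identity, or None if no database
    entry clears the similarity threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

The threshold is the key design knob: set it too low and the system produces false matches; set it too high and legitimate users are rejected. Production systems tune it against measured error rates rather than picking a fixed value.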
Will a few fines put a stop to Clearview AI's facial recognition search platform? Also, the hosts discuss how Clearview's troubles relate to countries becoming more restrictive about data in general. Finally, they pour one out for Seth Green's lost Bored Ape – RIP NFT! Listen above, or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcasts, the Morning After and Engadget News!
A Florida teenager taking a biology class at a community college got an upsetting note this year. A start-up called Honorlock had flagged her as acting suspiciously during an exam in February. She was, she said in an email to The New York Times, a Black woman who had been "wrongfully accused of academic dishonesty by an algorithm." What happened, however, was more complicated than a simple algorithmic mistake. It involved several humans, academic bureaucracy and an automated facial detection tool from Amazon called Rekognition.
Clearview AI, the surveillance firm notoriously known for harvesting some 20 billion face scans from public social media sites, said it may bring its technology to schools and other private businesses. In an interview with Reuters on Tuesday, the company revealed it's working with a U.S. company selling visitor management systems to schools. That reveal came around the same time as a horrific shooting at Robb Elementary School in Uvalde, Texas that left 19 children and two teachers dead. Though Clearview wouldn't provide more details about the education-linked companies to Gizmodo, other facial recognition competitors have spent years trying to bring the tech to schools with varying levels of success and pushback. New York state even moved to ban facial recognition in schools two years ago.
Exadel, a global software engineering, business consultancy and solutions company, announced the release of CompreFace 1.0, the newest version of its free, open-source facial recognition service. CompreFace is easily integrated into an existing system and requires no prior machine learning experience. The full list of updates in CompreFace Version 1.0 includes: "With the support of the developer community, Exadel has continued to improve the CompreFace user experience," said Serhii Pospielov, Head of the AI Practice at Exadel. "As AI practices continue to rise, we are seeing more use cases emerge for facial recognition. This open-source service allows anyone to enter the facial recognition market and access highly reliable services without needing deep coding or machine learning experience."
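Integration with a service like CompreFace happens over a REST API. The sketch below assembles a recognition request and picks the best match out of a response; the endpoint path, `x-api-key` header, and response shape follow CompreFace's published API as I understand it, but verify them against the current documentation, and the host and key here are placeholders.

```python
# A minimal sketch of talking to a CompreFace-style recognition service.
# Endpoint path and header name are assumptions based on CompreFace's
# docs; host and API key below are placeholders, not real values.

def build_recognize_request(host, api_key, limit=1):
    """Assemble the URL, headers, and query params for a recognition
    call. The image itself would be sent as a multipart file upload
    (e.g. via the requests library)."""
    url = f"{host}/api/v1/recognition/recognize"
    headers = {"x-api-key": api_key}
    params = {"limit": limit}
    return url, headers, params

def best_subject(response_json):
    """Pick the highest-similarity subject from a recognition response
    shaped like CompreFace's (illustrative, not authoritative)."""
    subjects = []
    for face in response_json.get("result", []):
        subjects.extend(face.get("subjects", []))
    if not subjects:
        return None
    return max(subjects, key=lambda s: s["similarity"])["subject"]
```

Keeping request assembly separate from response parsing makes the integration easy to test without a live server, which fits the "no prior machine learning experience" pitch: the client code is plain HTTP plumbing.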
Synamedia, the world's largest independent video software provider, announced the acquisition of Utelly, a UK-based privately-owned content discovery platform provider with products targeted at the entertainment industry. Its offerings include metadata aggregation, search and recommendations, as well as content management and a content promotion engine. Its SaaS-based technology is already pre-integrated with the Synamedia Go video platform and will now be embedded in the Go.Aggregate add-on pack to solve one of the major challenges viewers face: finding content across TV and apps on any screen. Utelly's technology achieves this through metadata aggregation, intelligent asset linking, AI and machine learning. By unifying data and using AI to enrich sparse data sets, Utelly provides customers with search and recommendations that enhance viewers' content discovery experiences.
We already know that algorithms can and do significantly affect humans. They are used not only to monitor workers and citizens in physical workplaces, but also to control workers on digital platforms and influence the behavior of the individuals who use them. Even studies of algorithms have demonstrated the worrying ease with which these systems can be used to dabble in phrenology and physiognomy. A federal review of facial recognition algorithms in 2019 found that they were rife with racial biases. One 2020 Nature paper used machine learning to track historical changes in how "trustworthiness" has been depicted in portraits, but created diagrams indistinguishable from well-known phrenology booklets and offered universal conclusions from a dataset limited to European portraits of wealthy subjects.
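The kind of bias the federal review measured is concrete: false match rates that differ by demographic group. Here is a minimal sketch of such an audit computation, run over entirely made-up trial records (the group labels, outcomes, and counts are invented for illustration, not drawn from the review).

```python
from collections import defaultdict

# Hypothetical audit records: each trial records whether the system
# declared a match, whether the pair was truly the same person, and the
# subject's demographic group. All data here is made up.
trials = [
    {"group": "A", "predicted_match": True,  "same_person": True},   # genuine pair, not counted
    {"group": "A", "predicted_match": True,  "same_person": False},
    {"group": "A", "predicted_match": False, "same_person": False},
    {"group": "B", "predicted_match": True,  "same_person": False},
    {"group": "B", "predicted_match": True,  "same_person": False},
    {"group": "B", "predicted_match": False, "same_person": False},
]

def false_match_rate_by_group(trials):
    """False matches divided by impostor comparisons, per group."""
    false_matches = defaultdict(int)
    impostor_trials = defaultdict(int)
    for t in trials:
        if not t["same_person"]:          # impostor comparison
            impostor_trials[t["group"]] += 1
            if t["predicted_match"]:      # wrongly declared a match
                false_matches[t["group"]] += 1
    return {g: false_matches[g] / impostor_trials[g]
            for g in impostor_trials}
```

A system with equal accuracy across groups would produce roughly equal rates here; the 2019 review's finding was, in essence, that such per-group rates diverged sharply for many of the algorithms tested.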
The Information Commissioner's Office (ICO) in the UK has fined facial recognition database company Clearview AI Inc more than £7.5m for using images of people that were scraped from websites and social media. Clearview AI collected the data to create a global online database, with facial recognition search as one of the resulting applications. Clearview AI has also been ordered to delete the personal data it holds on UK residents, and to stop obtaining and using UK residents' personal data that is publicly available on the internet. The ICO is the UK's independent authority set up to uphold information rights in the public interest. This action follows an investigation carried out in conjunction with the Office of the Australian Information Commissioner (OAIC).
The Information Commissioner's Office (ICO) of the UK fined United States-based facial recognition firm Clearview AI £7.5 million for illegally storing images. The controversial company has been facing such issues for some time, and this new development is yet another hit for Clearview AI. The fine was imposed for the company's practice of collecting and storing images of citizens from social media platforms without their consent, a practice that several countries consider a severe threat to privacy. Moreover, the ICO has also ordered the US firm to remove UK citizens' data from its systems. According to the ICO, Clearview AI has stored more than 20 billion pictures of people in its database.