Instagram is testing artificial intelligence to verify the age of users


The social network Instagram is testing new ways to verify the age of its users, including an artificial intelligence facial recognition tool, to confirm that people are 18 or older. The tools, not yet widely available, are part of an effort to keep kids off the Meta-owned platform. The use of artificial intelligence for facial recognition, especially on teens, has raised some alarms, given Meta's turbulent history when it comes to protecting users' privacy. Meta emphasised that the technology, used in partnership with the startup Yoti, cannot identify a person, only estimate their age, and that the face video will be deleted once verification is complete.

Australian firm halts facial recognition trial over privacy fears

Al Jazeera

Australia's second-biggest appliances chain says it is pausing a trial of facial recognition technology in stores after a consumer group referred it to the privacy regulator for possible enforcement action. In an email on Tuesday, a spokesperson for JB Hi-Fi Ltd said The Good Guys, which JB Hi-Fi owns, would stop trialling a security system with optional facial recognition in two Melbourne outlets. Use of the technology by The Good Guys was "unreasonably intrusive" and potentially in breach of privacy laws, the consumer group, CHOICE, told the Office of the Australian Information Commissioner (OAIC). While the company takes the confidentiality of personal information seriously and is confident it complied with relevant laws, it decided "to pause the trial … pending any clarification from the OAIC regarding the use of this technology", JB Hi-Fi's spokesperson added. The Good Guys was named in a complaint alongside Bunnings, Australia's biggest home improvement chain, and big-box retailer Kmart, both owned by Wesfarmers Ltd, with total annual sales of about 25 billion Australian dollars ($19.47bn) across 800 stores.

Microsoft Restricts Its Facial Recognition Tools, Citing the Need for 'Responsible AI'


Microsoft is restricting access to its facial recognition tools, citing risks to society that the artificial intelligence systems could pose. The tech company released a 27-page "Responsible AI Standard" on Tuesday that details the company's goals toward equitable and trustworthy AI. To align with the standard, Microsoft is limiting access to facial recognition tools in Azure Face API, Computer Vision and Video Indexer. "We recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve," wrote Natasha Crampton, chief responsible AI officer at Microsoft, in a blog post. She added the company would retire its Azure services that infer "emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup."

Microsoft facial recognition tool is no longer able to read emotions

Daily Mail - Science & tech

Microsoft is retiring a controversial facial recognition feature that claims to identify emotion in people's faces from videos and photos. As part of an overhaul of its AI policies, the US tech giant is removing facial analysis capabilities that infer emotional states, like surprise and anger, from Azure Face. It's also retiring the technology platform's ability to identify attributes such as gender, age, smile, hair and makeup. Microsoft's Azure Face is a service for developers that uses AI algorithms to detect, recognise, and analyse human faces in digital images. It is used in scenarios such as identity verification, touchless access control and face blurring for privacy.
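One of the scenarios above, face blurring for privacy, can be illustrated without the Azure service at all. The sketch below is a minimal, self-contained pixelation routine over a grayscale image held as a list of rows; the function name and the fixed `block` size are illustrative choices, not part of any Microsoft API.

```python
def pixelate(image, top, left, h, w, block=4):
    """Pixelate the region (top, left, h, w) of a grayscale image
    (a list of rows of integer pixel values) by averaging each
    block x block tile. Returns a new image; the input is untouched."""
    out = [row[:] for row in image]
    for y in range(top, top + h, block):
        for x in range(left, left + w, block):
            ys = range(y, min(y + block, top + h))
            xs = range(x, min(x + block, left + w))
            avg = sum(out[r][c] for r in ys for c in xs) // (len(ys) * len(xs))
            for r in ys:
                for c in xs:
                    out[r][c] = avg
    return out
```

Real deployments blur detected face boxes in colour images, but the averaging idea is the same; a detector supplies the box, and a routine like this removes the identifying detail.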

The Facial Recognition Problem: Why Companies Should be Striving for 'Imperfect' AI


What would you say if I told you that 'imperfect' artificial intelligence (AI) can do more for your business than facial recognition ever will? Your response might be to suggest that facial recognition is needed for 100% accuracy in data insights, and you probably think that this is important for keeping up with the competition. You may also tell me that it's necessary to meet growing customer expectations, or even go into the things that you 'just can't do' without it. But one thing you won't be able to argue is that facial recognition technology is good for compliance. The use of facial recognition is on the rise and causing a stir across Europe, with 11 EU nations reportedly already using it, and the European Data Watchdog warning that nations are not ready for AI-powered surveillance.

Apple Just Killed the Password--for Real This Time


Year after year, the most popular passwords leaked in data breaches are 123456, 123456789, and 12345, with 'qwerty' and 'password' close behind, and using these weak passwords leaves you vulnerable to all sorts of hacking. Weak and repeated passwords are one of the most significant risks to your online life. For years, we've been promised a more secure, password-free future, but it seems like 2022 will actually be the year that millions of people start to move away from passwords. At Apple's Worldwide Developer Conference yesterday, the company announced it will launch passwordless logins across Macs, iPhones, iPads, and Apple TVs around September of this year. Instead of using passwords, you will be able to log in to websites and apps using "Passkeys" with iOS 16 and macOS Ventura.

How Does AI Analyze Facial Expressions?


Until now, most AI-related news reports have focused on image recognition and voice recognition, but as AI evolves, sentiment analysis AI is likely to attract more reports and discussion in the future. In the United States, sentiment analysis AIs that work on online conferencing systems have recently appeared one after another and become a subject of controversy. For example, Silicon Valley startup Uniphore announced "Q for Sales", a sentiment analysis AI aimed at supporting business negotiations, on March 1, 2022. It uses computer vision, tonal analysis, conversation analysis, natural language processing and other techniques, and is said to read emotions from the facial expressions of the business partner and increase the success rate of business negotiations.
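Multimodal systems of this kind typically score each candidate emotion per modality (vision, tone, language) and then combine the scores. The snippet below is a generic late-fusion sketch under that assumption; the emotion labels, the equal weighting, and the function name are illustrative and not taken from Uniphore's product.

```python
def fuse_emotions(modality_scores):
    """Average per-emotion scores across modalities (late fusion).

    modality_scores: list of dicts mapping emotion label -> score in [0, 1],
    one dict per modality. Returns (top_emotion, fused_score_table)."""
    labels = set().union(*(scores.keys() for scores in modality_scores))
    n = len(modality_scores)
    fused = {
        label: sum(scores.get(label, 0.0) for scores in modality_scores) / n
        for label in labels
    }
    top = max(fused, key=fused.get)
    return top, fused
```

Production systems usually learn the fusion weights rather than averaging, but equal weighting shows the structure: each modality votes, and the combined table drives the final emotion label.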

Ethical Principles of Facial Recognition Technology


The sheer potential of facial recognition technology in various fields is almost unimaginable. However, certain errors that commonly creep into its functionality and a few ethical considerations need to be addressed before its most elaborate applications can be realized. An accurate facial recognition system uses biometrics to map facial features from a photograph or video. It compares the information with a database of known faces to find a match. Facial recognition can help verify a person's identity, but it also raises privacy issues. A few decades back, we could not have predicted that facial recognition would go on to become a near-indispensable part of our lives in the future.
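The match-against-a-database step described above is usually done on numeric face embeddings rather than raw pixels. Below is a minimal sketch assuming embeddings have already been extracted; the cosine-similarity metric, the 0.8 threshold, and the helper names are illustrative assumptions, not a reference to any specific system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.8):
    """Return the enrolled identity whose embedding is most similar to
    the probe embedding, or None if no score clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

A positive match returns an identity, which is exactly why the privacy issue arises: the database side must hold enrolled faces, and who holds that database, and under what consent, is the ethical question.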

Synamedia Acquires Utelly To Boost Synamedia Go's Content Discovery Capabilities


Synamedia, the world's largest independent video software provider, announced the acquisition of Utelly, a UK-based privately-owned content discovery platform provider with products targeted at the entertainment industry. Its offerings include metadata aggregation, search and recommendations, as well as content management and a content promotion engine. Its SaaS-based technology is already pre-integrated with the Synamedia Go video platform and will now be embedded in the Go.Aggregate add-on pack to solve one of the major challenges viewers face: finding content across TV and apps on any screen. Utelly's technology achieves this through metadata aggregation, intelligent asset linking, AI and machine learning. By unifying data and using AI to enrich sparse data sets, Utelly provides customers with search and recommendations that enhance viewers' content discovery experiences.

UK fines Clearview AI £7.5M for scraping citizens' data


Clearview AI has been fined £7.5 million by the UK's privacy watchdog for scraping the online data of citizens without their explicit consent. The controversial facial recognition provider has scraped billions of images of people across the web for its system. Understandably, it caught the attention of regulators and rights groups from around the world. In November 2021, the UK's Information Commissioner's Office (ICO) announced a provisional fine of just over £17 million against Clearview AI. Today's announcement suggests Clearview AI got off relatively lightly.