"Computers have been getting better and better at seeing movement on video. How is it that they read lips, follow a dancing girl or copy an actor making faces?"
– from Andrew Blake, Introduction to Active Contours and Visual Dynamics, Visual Dynamics Group, Department of Engineering Science, University of Oxford
Amazon's controversial facial recognition technology has incorrectly matched more than 100 photos of politicians in the UK and US to police mugshots, new tests have revealed. Amazon Rekognition uses artificial intelligence software to identify individuals from their facial structure. Customers include law enforcement and US government agencies such as Immigration and Customs Enforcement (ICE). It is not the first time the software's accuracy has been called into question. In July 2018, the American Civil Liberties Union (ACLU) found 28 false matches between US Congress members and pictures of people arrested for a crime.
Russian researchers from HSE University and Open University for the Humanities and Economics have demonstrated that artificial intelligence can infer people's personality from 'selfie' photographs better than human raters do. Conscientiousness emerged as more easily recognizable than the other four traits. Personality predictions based on female faces appeared to be more reliable than those for male faces. The technology could be used to find the 'best matches' in customer service, dating or online tutoring. The article, "Assessing the Big Five personality traits using real-life static facial images," will be published on May 22 in Scientific Reports.
The American Civil Liberties Union (ACLU) is taking Clearview AI to court, claiming the company's facial surveillance activities violate the Illinois Biometric Information Privacy Act (BIPA) and "represent an unprecedented threat to our security and safety". The legal action, brought by lawyers at the ACLU of Illinois and the law firm Edelson PC, is on behalf of organisations that represent survivors of sexual assault and domestic violence, undocumented immigrants, and other vulnerable communities. Clearview AI, founded by Australian entrepreneur Hoan Ton-That, provides facial recognition software marketed primarily at law enforcement. The ACLU said that failing to stop Clearview AI would "end privacy as we know it". "Face recognition technology offers a surveillance capability unlike any other technology in the past. It makes it dangerously easy to identify and track us at protests, AA meetings, counselling sessions, political rallies, religious gatherings, and more," the ACLU wrote in a blog post.
If you own an iPhone X or later and have gone out into the world recently, you have probably noticed an unfortunate side effect of the new mask-wearing culture: Face ID doesn't work. It is more of a feature than a bug, but if Apple's TrueDepth camera system can't scan your whole face, it won't unlock your phone. If you're wearing a mask, as most stores and restaurants require, you're left typing in your passcode whenever you want to check your shopping list or pay your bill. Apple offered a workaround with the recent iOS 13.5 update, but it's hardly a fix. Now, instead of waiting for Face ID to fail a couple of times before the passcode screen pops up, you can swipe up from the bottom of the screen to quickly enter your code.
"This is a bill being sold as a privacy bill, but it's a wolf in sheep's clothing," Matt Cagle, an attorney for the American Civil Liberties Union of Northern California, said in an interview. The ACLU, Electronic Frontier Foundation and other civil liberties groups held a virtual rally Thursday night to rail against the bill, calling it vaguely worded and potentially dangerous for low-income communities hit hard by the coronavirus. Their remarks were the latest shots fired from a campaign to halt the legislation. The bill's fate in California--which has pushed for more aggressive privacy protections in recent years--could foreshadow how a potentially huge market for facial recognition technology is regulated by other states. The bill calls for companies and agencies that use facial recognition tools in areas accessible to the public to "provide a conspicuous and contextually appropriate notice" that faces may get scanned.
Today Flying Cloud Technology announces it has entered into an OEM relationship with Wireless Guardian. Wireless Guardian is the world's first forward-facing human threat detection system and the most effective investigative security solution for today's high-tech environment. Providing protection to patrons and facilities, Wireless Guardian tracks both security and pandemic threats up to a mile outside the facility's perimeter. "Flying Cloud is extremely happy to enter into this strategic partnership with Wireless Guardian. We feel that this partnership will showcase the incredible strengths of both companies. Wireless Guardian will be an invaluable data source that is fed into and analyzed by Flying Cloud. This data will allow our joint customers to not only detect someone entering their facility with a temperature, but with our patented AI models, we can clearly show where they went in a facility and show who they were in contact with. Flying cloud is now the only company that can track both the user and the data that they interact with," said Brian Christian, CEO of Flying Cloud Technology.
JA: In terms of the VMS market itself – it seems the leading players are more clearly defined, and some players are fading away. Would you agree with that?
PR: Up to a certain point the basic video recording functionality is commoditised; what's not commoditised is the reliability with which that functionality can be carried out. Regardless, there will always be at least three competitors in any market. So, yes, the market is fragmented but is becoming less fragmented.
JA: What in your opinion are the major VMS trends of the moment?
From public CCTV cameras to biometric identification systems in airports, facial recognition technology is now common in a growing number of places around the world. In its most benign form, facial recognition technology is a convenient way to unlock your smartphone. At the state level, though, facial recognition is a key component of mass surveillance, and it already touches half the global population on a regular basis. Today's visualizations from Surfshark classify 194 countries and regions based on the extent of surveillance. Click here to explore the full research methodology.
In its annual report, the AI Now Institute, an interdisciplinary research center studying the societal implications of artificial intelligence, called for a ban on technology designed to recognize people's emotions in certain cases. Specifically, the researchers said affect recognition technology, also called emotion recognition technology, should not be used in decisions that "impact people's lives and access to opportunities," such as hiring decisions or pain assessments, because it is not sufficiently accurate and can lead to biased decisions. What is this technology, which is already being used and marketed, and why is it raising concerns? Researchers have been actively working on computer vision algorithms that can determine the emotions and intent of humans, along with making other inferences, for at least a decade. Facial expression analysis has been around since at least 2003.