"What exactly is computer vision then? Computer vision is a research field working to equip computers with the ability to process and understand visual data, as sighted humans can. Human brains process the gigabytes of data passing through our eyes every second and translate that data into sight - that is, into discrete objects and entities we can recognise or understand. Similarly, computer vision aims to give computers the ability to understand what they are seeing, and act intelligently on that knowledge."
– Computer vision: Cheat Sheet. ZDNet.com (December 6, 2011), by Natasha Lomas.
ORO VALLEY, AZ / ACCESSWIRE / February 18, 2020 / Tautachrome, Inc. (OTC PINK:TTCM) announces the integration of Google's TensorFlow artificial intelligence (AI) engine for image classification in ARknet release 1.3.4. ARknet will be able to classify incoming images using TensorFlow. The deployment of TensorFlow image recognition enables ARknet to crowdsource its userbase for machine learning purposes. Since ARknet can maintain segmented datasets, the ability to develop specialized machine learning models will also allow services to be provided to a wide range of business applications, connected devices, and enterprise services. TensorFlow is planned as the first of many AI frameworks to be introduced into the ARknet platform.
The EU's digital and competition chief has said that automated facial recognition breaches GDPR, as the technology fails to meet the regulation's requirement for consent. Margrethe Vestager, the European Commission's executive vice president for digital affairs, told reporters that "as it stands right now, GDPR would say 'don't use it', because you cannot get consent," EURACTIV revealed today. GDPR classes information on a person's facial features as biometric data, which is labeled as "sensitive personal data." The use of such data is highly restricted and typically requires consent from the subject -- unless the processing meets a range of exceptional circumstances. These exemptions include it being necessary for public security.
NEW DELHI/MUMBAI, INDIA – When artist Rachita Taneja heads out to protest in New Delhi, she covers her face with a pollution mask, a hoodie or a scarf to reduce the risk of being identified by police facial recognition software. Police in the Indian capital and the northern state of Uttar Pradesh -- both hotbeds of dissent -- have used the technology during protests that have raged since mid-December against a new citizenship law that critics say marginalizes Muslims. Activists are worried about insufficient regulation around the new technology, amid what they say is a crackdown on dissent under Prime Minister Narendra Modi, whose Hindu nationalist agenda has gathered pace since his re-election in May. "I do not know what they are going to do with my data," said Taneja, 28, who created a popular online cartoon about cheap ways for protesters to hide their faces. "We need to protect ourselves, given how this government cracks down."
The Irish Times reports the European Commission will publish a new position paper on artificial intelligence across the bloc next week. While the paper does not include a pitch for a previously proposed facial-recognition moratorium, the commission is set to allow member states, via an independent assessor, to decide how and when they will permit the use of facial recognition. Meanwhile, Euractiv reports that Clearview AI aims to expand services across the European market.
Artificial Intelligence (AI) systems that companies claim can "read" facial expressions are based on outdated science and risk being unreliable and discriminatory, one of the world's leading experts on the psychology of emotion has warned. Lisa Feldman Barrett, professor of psychology at Northeastern University, said that such technologies appear to disregard a growing body of evidence undermining the notion that basic facial expressions are universal across cultures. Such technologies -- some of which are already being deployed in real-world settings -- therefore risk producing unreliable or discriminatory results, she said. "I don't know how companies can continue to justify what they're doing when it's really clear what the evidence is," she said. "There are some companies that just continue to claim things that can't possibly be true."
The European Union is backing away from its plan to introduce a temporary ban on facial recognition technology -- instead delegating decisions on the software to its member states. In a previous draft of a paper on AI, the European Commission had proposed introducing a five-year moratorium on the technology. But in a new draft seen by the Financial Times, that suggestion has been dropped. "The early draft floated the idea of a full ban, which is very popular among civil rights campaigners worried about abuse," a person with direct knowledge of the discussions told the FT. "But the security community is against the ban because they think it's a good tool."
Video filmed and edited for TV is typically created and viewed in landscape, but problematically, aspect ratios like 16:9 and 4:3 don't always fit the display being used for viewing. Fortunately, Google is on the case. Given a video and a target dimension, Google's system analyzes the video content and develops optimal tracking and cropping strategies, then produces an output video of the same duration in the desired aspect ratio. As Google Research senior software engineer Nathan Frey and senior software engineer Zheng Sun note in a blog post, traditional approaches to reframing video usually involve static cropping, which often leads to unsatisfactory results. More bespoke approaches are superior, but they typically require video curators to manually identify the salient content in each frame, track its transitions from frame to frame, and adjust crop regions accordingly throughout the video.
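The contrast between static cropping and saliency tracking can be sketched in a few lines. The function below is purely illustrative (it is not Google's implementation): given per-frame x-coordinates of the salient content, it produces a smoothed, clamped crop window per frame for a target aspect ratio.

```python
# Hypothetical sketch of saliency-tracked reframing (names and smoothing
# scheme are illustrative assumptions, not Google's actual code).
def reframe(frame_w, frame_h, target_ratio, salient_xs, smooth=0.8):
    """Return one (x0, x1) crop window per frame.

    target_ratio is the desired width/height. The crop keeps the full
    frame height, so crop width = frame_h * target_ratio. An exponential
    moving average smooths the window center so the virtual camera does
    not jitter with every frame's saliency estimate.
    """
    crop_w = min(frame_w, int(frame_h * target_ratio))
    half = crop_w / 2
    windows = []
    center = salient_xs[0]
    for x in salient_xs:
        center = smooth * center + (1 - smooth) * x  # smooth tracking
        # clamp so the window stays entirely inside the frame
        c = max(half, min(frame_w - half, center))
        x0 = int(c - half)
        windows.append((x0, x0 + crop_w))
    return windows
```

A static crop is the degenerate case where the window never moves; the moving average here is one simple way to trade responsiveness against camera stability.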
We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact that local image measurements can be used both to detect parts (or joints) and to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs.
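The scoring structure the abstract describes — unary part-detection scores combined with a mixture-model pairwise spatial term — can be sketched for a two-part case. Everything below is an illustrative toy (the offsets/weights stand in for learned mixture components; the unary scores stand in for DCNN outputs), not the paper's code.

```python
# Toy sketch of a graphical model with pairwise spatial relations.
import math

def pairwise(loc_a, loc_b, offsets, weights):
    """Mixture-of-offsets spatial score: highest when loc_b sits near one
    of the preferred displacements from loc_a. offsets/weights are
    hypothetical stand-ins for learned mixture components."""
    best = -math.inf
    for (dx, dy), w in zip(offsets, weights):
        d2 = (loc_b[0] - loc_a[0] - dx) ** 2 + (loc_b[1] - loc_a[1] - dy) ** 2
        best = max(best, w - 0.1 * d2)  # quadratic penalty away from offset
    return best

def best_config(unary_a, unary_b, offsets, weights):
    """unary_a/unary_b: dicts mapping candidate (x, y) locations to
    detection scores. Returns the jointly best pair of locations."""
    return max(
        ((la, lb) for la in unary_a for lb in unary_b),
        key=lambda p: unary_a[p[0]] + unary_b[p[1]]
        + pairwise(p[0], p[1], offsets, weights),
    )
```

For a full tree-structured pose model, this exhaustive search would be replaced by dynamic programming over the tree, but the scoring decomposition is the same.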
The US military is developing a portable face-recognition device capable of identifying individuals from a kilometre away. The Advanced Tactical Facial Recognition at a Distance Technology project is being carried out for US Special Operations Command (SOCOM). It commenced in 2016, and a working prototype was demonstrated in December 2019, paving the way for a production version. SOCOM says the research is ongoing, but declined to comment further. Initially designed for hand-held use, the technology could also be used from drones.
This paper describes a new model for human visual classification that enables the recovery of image features that explain human subjects' performance on different visual classification tasks. Unlike previous methods, this algorithm does not model subjects' performance with a single linear classifier operating on raw image pixels. This approach extracts more information about human visual classification than has previously been possible with other methods and provides a foundation for further exploration. Published at the Neural Information Processing Systems Conference.
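For contrast, the baseline the abstract says previous methods relied on — modeling a subject's decisions with a single linear classifier on raw pixels — can be sketched minimally. This is an illustrative perceptron, not the paper's method.

```python
# Minimal sketch of the linear-on-raw-pixels baseline (illustrative only):
# a perceptron trained on flattened pixel vectors with labels +1 / -1.
def train_perceptron(images, labels, epochs=10, lr=0.1):
    """images: equal-length pixel lists; labels: +1 or -1 per image."""
    w = [0.0] * len(images[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(images, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # update weights only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

The paper's point is that human classification behavior carries structure such a single linear map on pixels cannot capture.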