"What exactly is computer vision then? Computer vision is a research field working to equip computers with the ability to process and understand visual data, as sighted humans can. Human brains process the gigabytes of data passing through our eyes every second and translate that data into sight - that is, into discrete objects and entities we can recognise or understand. Similarly, computer vision aims to give computers the ability to understand what they are seeing, and act intelligently on that knowledge."
– Computer vision: Cheat Sheet. ZDNet.com (December 6, 2011), by Natasha Lomas.
By Kashmir Hill. Until recently, Hoan Ton-That's greatest hit was an app that let people put Donald Trump's distinctive yellow hair on their own photos. Then Ton-That did something momentous: He invented a tool that could end your ability to walk down the street anonymously and provided it to hundreds of law enforcement agencies. His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person along with links to where those photos appeared.
If you've heard it once, you've heard it dozens of times: "Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans." When it comes to its corporate acquisitions, Cupertino likes to play its cards very close to its chest. Of course, that doesn't stop industry watchers from peering at the tea leaves to see if they can divine exactly what the company might be working on. And, hey, I'm no different than those folks, because Apple does so little to telegraph its plans that even a boilerplate statement confirming an acquisition is a rare peek behind the curtain. Apple CEO Tim Cook said not long ago that the company makes an acquisition every two to three weeks, and not even all of those make it into the public eye.
A new data set to train and benchmark AI systems to better understand actions in videos -- in particular, actions that can't be determined by viewing just a single frame. Current video data sets often focus on actions where a single image is enough for recognition, such as washing dishes, eating pizza, or playing guitar. To improve computer vision systems' understanding of elements that can be recognized only in a video sequence -- such as whether someone is sneezing or opening a door -- we discovered a set of actions where temporal information is essential for recognition. We're now sharing this work, along with our methodology for determining those classes and results from training networks on it, in order to help researchers benchmark their systems' ability to recognize temporal actions. To discover which actions in video should be designated as temporal classes, we presented annotators with video clips from existing video recognition data sets, with their frames shuffled out of order.
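The shuffled-frames test described above can be sketched in a few lines. This is an illustrative reconstruction of the idea, not the released methodology: the function name and the toy "clip" are assumptions, and a real pipeline would shuffle actual decoded video frames before showing them to annotators.

```python
import random

def shuffle_frames(frames, seed=0):
    """Return a copy of a clip's frames in random order.

    The intuition from the article: if annotators can still name the
    action after the frames are shuffled, temporal order is not
    essential for that action class; if they cannot, the class is a
    candidate "temporal" class.
    """
    rng = random.Random(seed)  # fixed seed so annotation runs are reproducible
    shuffled = list(frames)
    rng.shuffle(shuffled)
    return shuffled

# A toy clip represented as numbered frame identifiers.
clip = [f"frame_{i}" for i in range(8)]
shuffled_clip = shuffle_frames(clip, seed=42)

# The shuffled clip is a permutation: same frames, different order.
assert sorted(shuffled_clip) == sorted(clip)
```

In practice one would compare per-class annotator accuracy on ordered versus shuffled clips; a large accuracy drop marks the class as temporal.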
Yesterday, The New York Times ran an alarming piece by Kashmir Hill about Clearview AI, a startup that allows third parties to quickly learn many details about you based on only seeing your face; The New York Times further reported that Clearview's technology is already in use by government agencies across the United States. Today, therefore, I am sharing some tips on how to prevent yourself from being recognized by facial recognition systems. I have personally utilized some of these techniques in test environments – and they worked. Others I have seen demonstrated. Keep in mind that not all of the tips that I provide below apply in all environments – normally, people seeking not to be recognized also do not want to stand out.
In late 2019, researchers at Seoul-based Hyperconnect developed a tool (MarioNETte) that could manipulate the facial features of a historical figure, a politician, or a CEO using nothing but a webcam and still images. More recently, a team hailing from Hong Kong-based tech giant SenseTime, Nanyang Technological University, and the Chinese Academy of Sciences' Institute of Automation proposed a method of editing target portrait footage by taking sequences of audio to synthesize photo-realistic videos. As opposed to MarioNETte, SenseTime's technique is dynamic, meaning it's better able to handle media it hasn't encountered before. And the results are impressive, albeit worrisome in light of recent developments involving deepfakes. The coauthors of the study describing the work note that the task of "many-to-many" audio-to-video translation -- that is, translation that doesn't assume a single identity for both the source and target video -- is challenging.
Until recently, Hoan Ton-That's greatest hits included an obscure iPhone game and an app that let people put Donald Trump's distinctive yellow hair on their own photos. Then Mr. Ton-That -- an Australian techie and onetime model -- did something momentous: He invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security. His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system -- whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites -- goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
The European Commission is considering measures to impose a temporary ban on facial recognition technologies used by both public and private actors, according to a draft white paper on Artificial Intelligence obtained by EURACTIV. If implemented, the plans could throw current AI projects off course in some EU countries, including Germany's wish to roll out automatic facial recognition at 134 railway stations and 14 airports. France also has plans to establish a legal framework permitting video surveillance systems to be embedded with facial recognition technologies. The Commission paper, which gives an insight into proposals for a European approach to Artificial Intelligence, stipulates that a future regulatory framework could "include a time-limited ban on the use of facial recognition technology in public spaces." The document adds that the "use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period (e.g. …)". More generally, the draft White Paper, the completed version of which the Commission should publish towards the end of February, features five regulatory options for Artificial Intelligence across the bloc. A voluntary labelling framework could consist of a legal instrument whereby developers could "choose to comply, on a voluntary basis, with requirements for ethical and trustworthy artificial intelligence." Should compliance be guaranteed, a 'label' of ethical or trustworthy artificial intelligence would be granted, with binding conditions. Option two focuses on a specific area of public concern – the use of artificial intelligence by public authorities – as well as the employment of facial recognition technologies generally. In the former area, the paper states that the EU could adopt an approach akin to the stance taken by Canada in its Directive on Automated Decision-Making, which sets out minimum standards for government departments that wish to use an Automated Decision System.
As for facial recognition, the Commission document highlights provisions from the EU's General Data Protection Regulation, which give citizens "the right not to be subject to a decision based solely on automated processing, including profiling." In the third area which the Commission is currently priming for regulation, legally binding instruments would apply only "to high-risk applications of artificial intelligence."
The European Commission has revealed it is considering a ban on the use of facial recognition in public areas for up to five years. Regulators want time to work out how to prevent the technology from being abused. The technology allows faces captured on CCTV to be checked in real time against watch lists, often compiled by police. Exceptions to the ban could be made for security projects as well as research and development. The Commission set out its plans in an 18-page document, suggesting that new rules will be introduced to bolster existing regulation surrounding privacy and data rights.
Reports are circulating that the Seattle-based edge-AI company Xnor has been quietly acquired by Apple. An investigation by GeekWire suggests the deal was worth in the region of $200 million. This development could mean Xnor's low-power algorithms for object detection in photos end up on the iPhone. Xnor, a spin-out from the Allen Institute for Artificial Intelligence (AI2), had raised $14.6 million in funding since it was founded three years ago. Xnor's founders are Ali Farhadi and Mohammed Rastegari; Farhadi is a co-creator of YOLO, a well-known neural network widely used for object detection.
Millions of potential employees are subjected to artificial intelligence screenings during the hiring process every month. While some systems make it easier to weed out candidates who lack necessary educational or work qualifications, many AI hiring solutions are nothing more than snake oil. Thousands of companies around the world rely on outside businesses to provide so-called intelligent hiring solutions. These AI-powered packages are advertised as a way to narrow job applicants down to a 'cream of the crop' for humans to consider. On the surface, this seems like a good idea.