Our brains are wired to differentiate between objects, living and non-living, simply by looking at them. In fact, recognizing objects and situations visually is the fastest way to gather and relate information. This is a far bigger challenge for computers, which must be fed vast amounts of data before they can perform such an operation on their own. Yet with each passing day it is becoming more essential for machines to identify objects and recognize faces, so that humans can take the next big step toward a more scientifically advanced society. So, what progress have we really made in that respect?
As humans, we can easily distinguish between different objects - dogs wearing hats, say, or oranges and bananas in a bag - but for computers this has typically been much more difficult. A team of California-based Google researchers has developed an advanced image classification and detection algorithm called GoogLeNet, which is twice as effective as previous programs. It is accurate enough to locate and distinguish between objects of a range of sizes within a single image, and it can even identify an object within, or on top of, another object in the photo. The software recently placed first in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).
From 2012 to 2016, the New York City Police Department supplied IBM with thousands of surveillance images of unaware New Yorkers for the development of software that could help track down people 'of interest,' a shocking report claims. IBM's technology was designed to match stills of individuals with specific physical characteristics, including clothing color, age, gender, hair color, and even skin tone, according to The Intercept. Internal documents and sources involved with the program cited by the report reveal that IBM released an early iteration of its video analytics software by 2013, before improving its capabilities over the following years. The report adds to growing concerns about the potential for racial profiling with advanced surveillance technology. According to the investigation by The Intercept and the Investigative Fund, the NYPD did not end up using IBM's analytics program as part of its larger surveillance system, and discontinued it by 2016.
Most people use Google's search-by-image feature to either look for copyright infringement, or for shopping. See some shoes you like on a frenemy's Instagram? Search will pull up all the matching images on the web, including from sites that will sell you the same pair. In order to do that, Google's computer vision algorithms had to be trained to extract identifying features like colors, textures, and shapes from a vast catalogue of images. Luis Ceze, a computer scientist at the University of Washington, wants to encode that same process directly in DNA, making the molecules themselves carry out that computer vision work. And he wants to do it using your photos.
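The feature-extraction step described above can be sketched in miniature: reduce each image to a coarse color histogram, then match images by the similarity of their histograms. The synthetic pixel lists and function names below are illustrative assumptions - production systems like Google's use learned features far richer than raw color - but the pipeline shape (extract features, compare them) is the same.

```python
# A toy version of search-by-image: color-histogram features + cosine similarity.
import math

def color_histogram(pixels, bins=4):
    """Map a list of (r, g, b) tuples (0-255) to a normalized bins^3 histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        # Quantize each channel, then flatten the 3-D bin index to 1-D.
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]  # normalize so image size doesn't matter

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Synthetic "images": two mostly-red shoes and a mostly-blue bag.
red_shoe   = [(220, 30, 40)] * 90 + [(200, 60, 50)] * 10
red_shoe_2 = [(210, 40, 35)] * 95 + [(190, 70, 60)] * 5
blue_bag   = [(30, 40, 220)] * 100

h1, h2, h3 = (color_histogram(p) for p in (red_shoe, red_shoe_2, blue_bag))
print(cosine_similarity(h1, h2))  # near 1.0: likely the same product
print(cosine_similarity(h1, h3))  # 0.0: no shared color bins
```

Searching a catalogue then reduces to computing each stored image's feature vector once and ranking candidates by similarity to the query - which is exactly the workload Ceze's group proposes pushing into DNA molecules.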
A demo of the Orcam MyEye 2.0 was one of the highlights at the AbilityNet/RNIB TechShare Pro event in November. This small device, an update to the MyEye released in 2013, clips onto any pair of glasses and provides discreet audio feedback about the world around the wearer. It uses state-of-the-art image recognition to read signs and documents as well as recognise people, and it does not require an internet connection. It's just one of many apps and devices that are using the power of artificial intelligence (AI) to transform the lives of people who are blind or have sight loss.