"Image understanding (IU) is the research area concerned with the design and experimentation of computer systems that integrate explicit models of a visual problem domain with one or more methods for extracting features from images and one or more methods for matching features with models using a control structure. Given a goal, or a reason for looking at a particular scene, these systems produce descriptions of both the images and the world scenes that the images represent."
– Image Understanding, by J.K. Tsotsos. In Encyclopedia of Artificial Intelligence, Stuart C. Shapiro, editor. New York: John Wiley & Sons, 1987.
The nation's top-level intelligence office, the Office of the Director of National Intelligence, wants to find "the most accurate unconstrained face recognition algorithm," according to a posting on challenge.gov. The goal of the Face Recognition Prize Challenge is to improve core face recognition accuracy and to expand the breadth of capture conditions and environments suitable for successful face recognition. The government noted that there has been "enormous research" in the field, and it wants "to know whether this rich vein of research has produced advancements in face recognition accuracy." According to the contest rules, the most accurate algorithms submitted to the government are eligible to split a pot of $50,000.
A research team led by Professor Hoi-Jun Yoo of the Department of Electrical Engineering has developed a semiconductor chip, CNNP (CNN Processor), that runs AI algorithms with ultra-low power, and K-Eye, a face recognition system built on CNNP. To accomplish this, the team proposed two key technologies: an image sensor with "always-on" face detection and the CNNP face recognition chip. The face detection sensor combines analog and digital processing to reduce power consumption. The second key technology, CNNP, achieves extremely low power consumption by optimizing a convolutional neural network (CNN) at the circuit, architecture, and algorithm levels.
Currently, the process of examination involves taking tissue samples from the margin of the operated area, freezing them with liquid nitrogen, sectioning them into thin slices, and sending them to a lab for examination. But the diagnostic method devised by the researchers at Lehigh University uses advanced imaging techniques and artificial intelligence algorithms to speed up the process and enable real-time scanning and evaluation of the operated margin without the need to extract tissue. "The process takes a large number of images, and labels the types of tissue in the sample," said Sharon Huang, associate professor of computer science and engineering at Lehigh. The researchers trained the algorithms on a large number of OCM (optical coherence microscopy) images obtained from patients, and tested the results against those obtained from histopathological analysis of the same patients.
South Wales Police didn't provide details about the nature of the arrest, presumably because it's an ongoing case. Back in April, it emerged that South Wales Police planned to scan the faces "of people at strategic locations in and around the city centre" ahead of the UEFA Champions League final, which was played at the Millennium Stadium in Cardiff on June 3. "It was a local man and unconnected to the Champions League," a South Wales Police spokesperson told Ars. We know from the request for tender published by the South Wales Police, however, that the man's face was probably included in the force's "Niche Record Management system," which contains "500,000 custody images."
The process begins after customers check in for their flights, go through security and arrive at their gates. Travelers will step up to a camera at a Self-Boarding station, where they will have their photos taken. CBP will match the photo with travelers' passport, visa or immigration documentation. A message indicating the photo has been verified will flash on a screen, and customers will then be allowed onto the jet bridge.
Right now, the US is trotting out an airport security plan revolving around facial recognition. However, Customs and Border Protection now wants to expand the effort to include virtually every situation where you normally need an ID -- and that could include scanning US citizens. The existing plan has facial recognition systems tossing out photos of US citizens as soon as they're recognized. Current facial recognition technology requires a clear, emotionless and well-lit view of your face, and you don't get all of those very often at the airport.
The Safariland Group ("Safariland"), the parent company of Vievu, and Veritone, a leading provider of artificial intelligence solutions, have announced their intent to enter into an agreement to integrate their product offerings, applying artificial intelligence to extract and process crucial data from police body-worn camera footage. The integration is designed to let Vievu's customers upload large volumes of video and audio recordings into the Veritone Platform and process them in near real time, enabling law enforcement personnel to rapidly extract actionable information for use in investigations, monitoring, and training, and to respond more quickly and efficiently to public records requests. "The Veritone Platform will enable law enforcement agencies to save thousands of hours of manual searching by using intelligent audio and video analysis, allowing them to focus time and resources on more mission-critical tasks," said Chad Steelberg, chief executive officer of Veritone. "Joining Veritone's best-in-class artificial intelligence capabilities with Vievu's leading body-worn camera and video solutions has the potential to bring critical, world-class technology into the hands of thousands of law enforcement personnel," said Terry O'Shea, chief technology officer of Safariland.
The algorithm breaks the task of identifying the face into thousands of smaller, bite-sized tasks, each of which is easy to solve. Like a series of waterfalls, the OpenCV cascade breaks the problem of detecting faces into multiple stages: each stage applies a quick, rough test, and only windows that pass every stage are reported as faces. The detection algorithm uses a moving window to scan the image for objects, and the function that detects the actual face is the key part of our code, so let's go over its options.
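The cascade-of-stages and moving-window ideas can be sketched in plain Python. This is a toy illustration only, not OpenCV's trained Haar cascade: the two stage tests below are invented brightness checks, whereas real cascade stages apply thousands of learned features.

```python
def sliding_windows(image, size, step):
    """Yield (x, y, window) for each position of a moving window."""
    h, w = len(image), len(image[0])
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            window = [row[x:x + size] for row in image[y:y + size]]
            yield x, y, window

def mean(window):
    values = [v for row in window for v in row]
    return sum(values) / len(values)

# Each "stage" is a cheap yes/no test applied in order; a window is kept
# only if it passes every stage.  These tests are made up for illustration.
stages = [
    lambda w: mean(w) > 50,                                   # not too dark overall
    lambda w: mean(w[:len(w) // 2]) > mean(w[len(w) // 2:]),  # top brighter than bottom
]

def detect(image, size=2, step=1):
    hits = []
    for x, y, window in sliding_windows(image, size, step):
        # all() stops at the first failing stage -- the cascade's early rejection
        if all(stage(window) for stage in stages):
            hits.append((x, y))
    return hits

image = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [ 40,  40, 10, 10],
    [ 40,  40, 10, 10],
]
print(detect(image))  # -> [(0, 1), (1, 1)]
```

In real OpenCV code this role is played by a trained `CascadeClassifier`, whose detection function additionally rescans the image at multiple scales.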
Inception v3 is a deep convolutional neural network trained for single-label image classification on the ImageNet data set. We need to prepare files with the correct labels for each image; this method creates the ground_truth vectors containing the correct labels for each returned image. For our multi-label case, however, we would like the resulting class probabilities to be able to express that an image of a car belongs to the class car with 90% probability and to the class accident with 30% probability, and so on. A single-label softmax output cannot express this, because it forces the class probabilities to sum to one; independent per-class (sigmoid) outputs are needed instead.
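A minimal sketch of the multi-label setup (the label set and logit values below are invented for illustration): the ground-truth vector is multi-hot, marking every class the image belongs to, and a sigmoid, unlike a softmax, scores each class independently, so the probabilities need not sum to one.

```python
import math

labels = ["car", "accident", "person", "bicycle"]  # hypothetical label set

def ground_truth(image_labels):
    """Multi-hot ground-truth vector: 1.0 for every label the image carries."""
    return [1.0 if name in image_labels else 0.0 for name in labels]

def softmax(logits):
    """Single-label output: scores compete and must sum to 1."""
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(logits):
    """Multi-label output: each class scored independently in (0, 1)."""
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]

print(ground_truth({"car", "accident"}))  # [1.0, 1.0, 0.0, 0.0]

logits = [2.2, -0.85, -3.0, -4.0]  # made-up network outputs
print(sigmoid(logits))  # car ~0.90, accident ~0.30: the answer we want
```

With a softmax over the same logits, raising the score for car would necessarily lower the score for accident; the sigmoid removes that coupling, which is why multi-label retraining replaces the softmax layer.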
Taking a look back at seven days of news and headlines across the world of Android, this week's Android Circuit looks back at the launch of the Galaxy S8 and S8 Plus, the differences between the South Korean handsets, what the S8 says about the future, secrets of the Google Pixel's camera, the return of the Note 7, a lengthy LG G6 review, HTC's new user controls, and do we really want smaller bezels? The South Korean company has put the initial focus on the 'infinity screen' that runs from edge to edge (with reduced bezels), Samsung's portfolio and services, and traditional Galaxy elements such as waterproofing, microSD support, and wireless charging: "The Samsung Galaxy S8 ushers in a new era of smartphone design and fantastic new services, opening up new ways to experience the world," said DJ Koh, President of Mobile Communications Business, Samsung Electronics. Koh unveiled the Galaxy S8 and S8 Plus during Samsung Unpacked at David Geffen Hall on March 29, 2017 in New York City. As for the Pixel's camera, the Google team's mission was to improve photography on mobile devices by applying computational photography techniques.