A polynomial-time relaxation of the Gromov-Hausdorff distance

arXiv.org Machine Learning

The Gromov-Hausdorff distance provides a metric on the set of isometry classes of compact metric spaces. Unfortunately, computing this metric directly is believed to be computationally intractable. Motivated by applications in shape matching and point-cloud comparison, we study a semidefinite programming relaxation of the Gromov-Hausdorff metric. This relaxation can be computed in polynomial time and, somewhat surprisingly, is itself a pseudometric. We describe the topology it induces on the set of compact metric spaces. Finally, we demonstrate the numerical performance of various algorithms for computing the relaxed distance and apply them to several relevant data sets. In particular, we propose a greedy algorithm for finding the best correspondence between finite metric spaces that can handle hundreds of points.
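
As a rough illustration of the correspondence-based definition behind the abstract, the Python sketch below builds a correspondence between two finite metric spaces given as NumPy distance matrices and returns half of its distortion, which upper-bounds the Gromov-Hausdorff distance. It is only a naive sketch: the pairing rule is an assumption, not the authors' greedy algorithm, and it is unrelated to the semidefinite relaxation itself.

import numpy as np

def distortion(pairs, dX, dY):
    # Largest discrepancy |dX(x, x') - dY(y, y')| over all pairs of matched points.
    return max(abs(dX[x1, x2] - dY[y1, y2]) for x1, y1 in pairs for x2, y2 in pairs)

def greedy_correspondence(dX, dY):
    # dX, dY: symmetric distance matrices of two finite metric spaces.
    nX, nY = dX.shape[0], dY.shape[0]
    pairs = [(0, 0)]  # arbitrary seed pair
    for x in range(1, nX):
        # Match x to the point of Y that least increases the current distortion.
        best_y = min(range(nY), key=lambda y: distortion(pairs + [(x, y)], dX, dY))
        pairs.append((x, best_y))
    for y in range(nY):
        # Cover any unmatched points of Y so the relation is a true correspondence.
        if all(y != yy for _, yy in pairs):
            best_x = min(range(nX), key=lambda x: distortion(pairs + [(x, y)], dX, dY))
            pairs.append((best_x, y))
    # Half the distortion of any correspondence upper-bounds the Gromov-Hausdorff distance.
    return pairs, 0.5 * distortion(pairs, dX, dY)

Searching over all correspondences exactly is intractable, which is the motivation the abstract gives for a polynomial-time convex relaxation.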


Euler Sparse Representation for Image Classification

AAAI Conferences

Sparse representation based classification (SRC) has achieved great success in image recognition. Motivated by the fact that the kernel trick can capture nonlinear similarity between features, which may improve the separability and margin between nearby data points, we propose Euler SRC for image classification, which is essentially SRC with an Euler sparse representation. Specifically, the method first maps images into the complex space via the Euler representation, which suppresses the influence of outliers and illumination changes, and then performs complex SRC on that representation. The major advantage of our method is that the Euler representation is explicit and does not increase the dimensionality of the image space, so the technique can be easily deployed in real applications. To solve Euler SRC, we present an efficient algorithm that is fast and converges well. Extensive experimental results show that Euler SRC outperforms traditional SRC and achieves better image classification performance.
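
As a hedged sketch of the idea (the exact complex mapping and the parameter ALPHA below are assumptions borrowed from related Euler-representation work, not taken from this abstract), one can map normalized pixel intensities onto the complex unit circle and run a standard l1-based SRC on the stacked real and imaginary parts, keeping the representation explicit and the dimensionality essentially unchanged:

import numpy as np
from sklearn.linear_model import Lasso

ALPHA = 0.9  # frequency of the Euler mapping; this value is an assumption

def euler_map(X):
    # Map intensities in [0, 1] onto the complex unit circle, z = exp(i*ALPHA*pi*x)/sqrt(2),
    # then stack real and imaginary parts so ordinary real-valued solvers can be used.
    Z = np.exp(1j * ALPHA * np.pi * X) / np.sqrt(2)
    return np.concatenate([Z.real, Z.imag], axis=-1)

def euler_src_classify(train_X, train_y, test_x, lam=0.01):
    # Code the mapped query over the mapped training dictionary with an l1 penalty,
    # then assign the class whose atoms give the smallest reconstruction residual.
    A = euler_map(train_X).T            # columns are training samples
    b = euler_map(test_x)
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    lasso.fit(A, b)
    coef = lasso.coef_
    residuals = {
        c: np.linalg.norm(b - A @ np.where(train_y == c, coef, 0.0))
        for c in np.unique(train_y)
    }
    return min(residuals, key=residuals.get)

The decision rule is the usual SRC one: the query is assigned to the class whose training samples reconstruct it with the smallest residual.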


Researchers Have Created an AI That Could Read and React to Emotions

#artificialintelligence

One of today's more popular artificially intelligent (AI) androids comes from the TV series "Marvel's Agents of S.H.I.E.L.D." Those of you who followed the latest season's story will know the character -- no spoilers here! One of the most interesting things about this fictional AI character is that it can read people's emotions. Thanks to researchers from the University of Cambridge, this ability might soon make the jump from sci-fi to reality. The first step in creating such a system is training an algorithm on simpler facial expressions and just one specific emotion or feeling. To that end, the Cambridge team focused on using a machine learning algorithm to figure out whether a sheep is in pain, and this week they presented their research at the IEEE International Conference on Automatic Face and Gesture Recognition in Washington, D.C.


AI system to diagnose pain levels in sheep

#artificialintelligence

Building on earlier work that teaches computers to recognise emotions and expressions in human faces, the system is able to detect the distinct parts of a sheep's face and compare them with a standardised measurement tool developed by veterinarians for diagnosing pain. Their results will be presented today (1 June) at the 12th IEEE International Conference on Automatic Face and Gesture Recognition in Washington, DC. Severe pain in sheep is associated with conditions such as foot rot, an extremely painful and contagious condition which causes the foot to rot away, or mastitis, an inflammation of the udder in ewes caused by injury or bacterial infection. Both of these conditions are common in large flocks, and early detection will lead to faster treatment and pain relief. Reliable and efficient pain assessment would also help with early diagnosis.
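
Purely as an illustration of the comparison step (the region names, score range, and threshold below are hypothetical, not the veterinarians' actual scale), the scoring can be thought of as summing per-region scores emitted by a facial analysis model and flagging the animal when the total is high:

from dataclasses import dataclass

@dataclass
class SheepFaceScores:
    # Hypothetical per-region scores (e.g. 0 = normal, 1 = partly present, 2 = clearly present)
    # produced by a facial analysis model for regions such as ears, eyes, and nostrils.
    ear_posture: int
    eye_narrowing: int
    nostril_and_cheek: int
    lip_and_jaw: int

def assess_pain(scores: SheepFaceScores, threshold: int = 4) -> str:
    # Aggregate the per-region scores and flag the animal when the total crosses a threshold,
    # so flagged sheep can be examined and treated earlier.
    total = (scores.ear_posture + scores.eye_narrowing
             + scores.nostril_and_cheek + scores.lip_and_jaw)
    return "flag for examination" if total >= threshold else "no strong indication of pain"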


Face recognition and OCR processing of 300 million records from US yearbooks

#artificialintelligence

A yearbook is a type of book published annually to record, highlight, and commemorate the past year of a school. Our team at MyHeritage took on a complex project: extracting individual pictures, names, and ages from hundreds of thousands of yearbooks, structuring the data, and creating a searchable index that covers the majority of US schools between 1890 and 1979 -- more than 290 million individuals. In this article I'll describe the problems we encountered during this project and how we solved them. First of all, let me explain why we needed to tackle this challenge. MyHeritage is a genealogy platform that provides access to almost 10 billion historical records.
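
As a minimal sketch of what page-level extraction can look like (this is not MyHeritage's actual pipeline, and the layout heuristic that pairs a portrait with the name printed under it is an assumption), one could combine a stock face detector with word-level OCR:

import cv2
import pytesseract

def extract_page_records(page_path):
    # Detect roughly frontal portrait crops with a stock Haar cascade, OCR the page
    # at word level, then pair each portrait with the words printed just below it.
    page = cv2.imread(page_path)
    gray = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    ocr = pytesseract.image_to_data(gray, output_type=pytesseract.Output.DICT)
    records = []
    for (x, y, w, h) in faces:
        # Words whose boxes fall in a band directly under the portrait are taken
        # as the candidate printed name for that face.
        words = [
            ocr["text"][i] for i in range(len(ocr["text"]))
            if ocr["text"][i].strip()
            and y + h <= ocr["top"][i] <= y + h + h // 2
            and x - w // 2 <= ocr["left"][i] <= x + w + w // 2
        ]
        records.append({"face": page[y:y + h, x:x + w],
                        "candidate_name": " ".join(words)})
    return records

Real yearbook layouts vary widely, so a production system would also need layout analysis, more robust OCR, and de-duplication before the records could feed a searchable index.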