Shutterstock shows machine learning smarts with reverse image search for stock photos

#artificialintelligence

Shutterstock is flexing its AI muscles with the news that the stock photo giant is introducing new computer-vision search smarts to its platform. The company, which is headquartered in New York's Empire State Building, went public back in 2012 and now offers more than 70 million images for bloggers and media outlets -- which can make searching for specific assets challenging. Of course, the trusty old keyword search tool is effective to an extent, but what if you want to find images that are similar to one you have in your possession? Or what if you want alternative images based on color schemes, mood, or shapes? This is where Shutterstock's new reverse image search comes into play.
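Shutterstock has not published how its reverse image search works under the hood, but systems like this typically embed each image into a feature vector (for example, from a convolutional network) and rank the catalog by cosine similarity to the query image. A minimal sketch of that retrieval step, with hypothetical feature vectors standing in for real image embeddings:

```python
import numpy as np

def build_index(feature_vectors):
    """Stack per-image feature vectors and L2-normalize each row,
    so that a dot product equals cosine similarity."""
    index = np.asarray(feature_vectors, dtype=float)
    return index / np.linalg.norm(index, axis=1, keepdims=True)

def reverse_search(index, query_vector, top_k=3):
    """Return the indices of the top_k most similar catalog images."""
    q = np.asarray(query_vector, dtype=float)
    q = q / np.linalg.norm(q)
    scores = index @ q                      # cosine similarity per image
    return np.argsort(scores)[::-1][:top_k]  # highest similarity first
```

At Shutterstock's scale (70+ million images) an exact scan like this would be replaced by an approximate nearest-neighbor index, but the similarity logic is the same.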




Interpretable Image Recognition with Hierarchical Prototypes

arXiv.org Machine Learning

Vision models are interpretable when they classify objects on the basis of features that a person can directly understand. Recently, methods relying on visual feature prototypes have been developed for this purpose. However, in contrast to how humans categorize objects, these approaches have not yet made use of any taxonomical organization of class labels. With such an approach, for instance, we may see why a chimpanzee is classified as a chimpanzee, but not why it was considered to be a primate or even an animal. In this work we introduce a model that uses hierarchically organized prototypes to classify objects at every level in a predefined taxonomy. Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy. The hierarchical prototypes enable the model to perform another important task: interpretably classifying images from previously unseen classes at the level of the taxonomy to which they correctly relate, e.g. classifying a handgun as a weapon, when the only weapons in the training data are rifles. With a subset of ImageNet, we test our model against its counterpart black-box model on two tasks: 1) classification of data from familiar classes, and 2) classification of data from previously unseen classes at the appropriate level in the taxonomy. We find that our model performs approximately as well as its counterpart black-box model while allowing for each classification to be interpreted.
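The abstract's core idea, prototypes organized by taxonomy level, with novel classes placed at the deepest level they still match, can be illustrated with a toy sketch. This is not the authors' model (which learns prototypes in a CNN's latent space); the taxonomy, prototype vectors, and threshold below are all hypothetical:

```python
import numpy as np

# Toy taxonomy: one dict per level, mapping label -> prototype vector.
# A real model would learn these prototypes from image features.
TAXONOMY = [
    {"animal": np.array([1.0, 0.0]), "vehicle": np.array([0.0, 1.0])},       # coarse
    {"primate": np.array([0.9, 0.1]), "car": np.array([0.1, 0.9])},          # mid
    {"chimpanzee": np.array([0.85, 0.15]), "sedan": np.array([0.15, 0.85])}, # fine
]

def classify(x, threshold=0.9):
    """Predict a label at each taxonomy level, from coarse to fine.
    Stop descending when the best cosine similarity falls below the
    threshold -- that is how an unseen class (e.g. a new weapon type)
    gets placed at a coarser level instead of a wrong fine label."""
    x = x / np.linalg.norm(x)
    path = []
    for level in TAXONOMY:
        labels = list(level)
        sims = [float(x @ (p / np.linalg.norm(p))) for p in level.values()]
        best = int(np.argmax(sims))
        if sims[best] < threshold:
            break  # no prototype at this level matches well enough
        path.append(labels[best])
    return path
```

Each entry in the returned path comes with its own prototype match, which is what makes the per-level predictions separately explainable.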


Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP

#artificialintelligence

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services. At the company's AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services -- Amazon Lex, Amazon Polly, Amazon Rekognition -- to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, Zendesk, and others. The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps. AWS CEO Andy Jassy noted that Amazon has been building AI and machine learning technology for 20 years and said that there are now thousands of people "dedicated to AI in our business."
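The "simple API call" claim is easy to see with Rekognition, whose `detect_labels` operation is part of the AWS SDK. A minimal sketch using boto3 (the wrapper function name is ours; the client is passed in so the function can be exercised without AWS credentials):

```python
def label_image(rekognition_client, image_bytes, max_labels=5, min_confidence=80.0):
    """Send raw image bytes to Amazon Rekognition and return
    (label, confidence) pairs for the objects it detects."""
    response = rekognition_client.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )
    return [(label["Name"], label["Confidence"]) for label in response["Labels"]]

# Usage (requires AWS credentials and the boto3 package):
# import boto3
# client = boto3.client("rekognition", region_name="us-east-1")
# with open("photo.jpg", "rb") as f:
#     print(label_image(client, f.read()))
```

Lex (conversational bots) and Polly (text-to-speech) follow the same pattern: a single SDK call against a managed service, with no model training on the developer's side.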

