Goto

Filtering Abstract Senses From Image Search Results

Neural Information Processing Systems

We propose an unsupervised method that, given a word, automatically selects non-abstract senses of that word from an online ontology and generates images depicting the corresponding entities. When faced with the task of learning a visual model based only on the name of an object, a common approach is to find images on the web that are associated with the object name and then train a visual classifier on the search results. Because words are generally polysemous, this approach can lead to relatively noisy models if many examples arising from outlier senses are added to the training set. We argue that images associated with an abstract word sense should be excluded when training a visual classifier to learn a model of a physical object. While image clustering can group together visually coherent sets of returned images, it can be difficult to distinguish whether an image cluster relates to a desired object or to an abstract sense of the word. We propose a method that uses both image features and the text associated with the images to relate latent topics to particular senses. Our model does not require any human supervision and takes as input only the name of an object category. We show results of retrieving concrete-sense images in two available multimodal, multi-sense databases, and we experiment with object classifiers trained on concrete-sense images returned by our method for a set of ten common office objects.
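The sense-selection step lends itself to a compact illustration. The sketch below is not the authors' implementation; it uses WordNet as the ontology (an assumption, since the abstract only says "an online ontology") and keeps the noun senses of a query word whose hypernym chain reaches physical_entity.n.01, i.e. the non-abstract senses one would want when training a visual classifier.

    # Minimal sketch of non-abstract sense selection via WordNet.
    # Requires: pip install nltk, then nltk.download('wordnet').
    from nltk.corpus import wordnet as wn

    PHYSICAL = wn.synset('physical_entity.n.01')

    def concrete_senses(word):
        """Return the noun senses of `word` rooted under physical_entity.n.01."""
        keep = []
        for syn in wn.synsets(word, pos=wn.NOUN):
            # closure() walks the hypernym chain all the way up to the root
            if PHYSICAL in set(syn.closure(lambda s: s.hypernyms())):
                keep.append(syn)
        return keep

    # Example: 'mouse' keeps the rodent and input-device senses but drops
    # abstract ones; image queries would then be issued per surviving sense.
    for syn in concrete_senses('mouse'):
        print(syn.name(), '-', syn.definition())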


Darpa Wants to Build an Image Search Engine out of DNA

WIRED

Most people use Google's search-by-image feature either to look for copyright infringement or to shop. See some shoes you like on a frenemy's Instagram? Search will pull up all the matching images on the web, including from sites that will sell you the same pair. In order to do that, Google's computer vision algorithms had to be trained to extract identifying features like colors, textures, and shapes from a vast catalogue of images. Luis Ceze, a computer scientist at the University of Washington, wants to encode that same process directly in DNA, making the molecules themselves carry out that computer vision work. And he wants to do it using your photos.


SlashPixels: an ambitious image search engine for designers

#artificialintelligence

Google is so dominant in the search engine market at large that it is hard to launch anything that remotely looks like a search tool. A team of Russian developers decided to give it a go anyway and focus on a niche market: image search. The team's objective is very ambitious: create an artificial-intelligence-based image search engine to help designers find inspiration or resources more easily. They promise that SlashPixels will understand each image that it indexes, giving it a big advantage when it comes to sorting the pictures. Unfortunately, none of this exists yet, but you can support the team's IndieGogo campaign to help them build this new tool.


Microsoft Launches Artificial Intelligence-Powered Image Search for Google Rival Bing

#artificialintelligence

Have you ever used Bing to search the Internet? Most of us haven't, but it is the rival to Google's search engine, owned and operated by none other than Microsoft. While obviously not the most robust search engine around, Bing nonetheless has a few quirks that make it worth checking out from time to time. And you can't fault Microsoft for trying: from throwing in voice-powered Cortana search to integrating Bing into the Xbox and Windows, Microsoft has pulled out all the stops to make sure you at least have the ability to use Bing, even if you never do. Well, it seems some of us in the photography world might want to give Bing another look, as Microsoft has announced plans to bring powerful, artificial-intelligence-powered image search to Bing.


Semantic Image Search for Robotic Applications

arXiv.org Machine Learning

Generalization is one of the most important problems in robotics. New generalization approaches use internet databases in order to solve new tasks. Modern search engines can return a large amount of information in response to a query within milliseconds. However, not all of the returned information is task relevant, partly due to the problem of polysemy. Here we specifically address the problem of object generalization by using image search. We suggest a bi-modal solution, combining visual and textual information, based on the observation that humans use additional linguistic cues to demarcate the intended word meaning. We evaluate the quality of our approach by comparing it to human-labelled data and find that, on average, our approach improves on Google searches and that it can mitigate the problem of polysemy.
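As a rough illustration of the bi-modal idea (not the paper's method), the sketch below re-ranks hypothetical search results by combining a textual agreement score, the TF-IDF similarity between each result's surrounding text and a set of sense cue words, with a visual agreement score, membership in the dominant visual cluster. The captions, the random "image descriptors", and the equal weighting are all stand-ins chosen for the example.

    # Bi-modal re-ranking sketch: text relevance + visual cluster agreement.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    rng = np.random.default_rng(0)

    # Hypothetical results for the query "mouse" with the intended sense
    # "computer input device"; text comes from the pages hosting the images.
    texts = [
        "wireless mouse usb receiver ergonomic input device",
        "gaming mouse dpi sensor input device",
        "field mouse rodent in the grass",
        "optical mouse scroll wheel computer peripheral",
        "pet mouse cage bedding rodent",
        "bluetooth mouse for laptop computer",
    ]
    visual = rng.normal(size=(len(texts), 128))  # stand-in image descriptors

    # Textual agreement: cosine similarity to the sense's cue words.
    cue = "computer input device peripheral"
    vec = TfidfVectorizer().fit(texts + [cue])
    text_score = cosine_similarity(vec.transform(texts),
                                   vec.transform([cue])).ravel()

    # Visual agreement: 1 if the result falls in the dominant visual cluster.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(visual)
    vis_score = (labels == np.bincount(labels).argmax()).astype(float)

    combined = 0.5 * text_score + 0.5 * vis_score  # arbitrary equal weighting
    for t, s in sorted(zip(texts, combined), key=lambda p: -p[1]):
        print(f"{s:.2f}  {t}")

In the paper's setting the linguistic cues come from context around the image rather than a hand-written cue string, but the fusion pattern, requiring both modalities to agree with the intended sense before keeping a result, is the same.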