Most people use Google's search-by-image feature to either look for copyright infringement, or for shopping. See some shoes you like on a frenemy's Instagram? Search will pull up all the matching images on the web, including from sites that will sell you the same pair. In order to do that, Google's computer vision algorithms had to be trained to extract identifying features like colors, textures, and shapes from a vast catalogue of images. Luis Ceze, a computer scientist at the University of Washington, wants to encode that same process directly in DNA, making the molecules themselves carry out that computer vision work. And he wants to do it using your photos.
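The identifying features mentioned above can be as simple as color statistics. As an illustrative sketch only (not Google's or Ceze's actual pipeline), the toy Python below quantizes pixel colors into a coarse histogram and compares two "images" by histogram intersection, the kind of low-level signal a search-by-image system can use to match the same pair of shoes across sites. The synthetic pixel data and all names are invented for illustration:

```python
from collections import Counter

def color_histogram(pixels, bins=2):
    """Quantize RGB pixels into bins^3 color buckets and normalize counts."""
    step = 256 // bins
    hist = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(hist.values())
    return {bucket: count / total for bucket, count in hist.items()}

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical color distributions, 0.0 for disjoint."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

# Tiny stand-ins for decoded photos: flat lists of RGB pixels.
red_shoes = [(200, 30, 40)] * 90 + [(255, 255, 255)] * 10   # mostly red, some white
red_boots = [(190, 35, 50)] * 80 + [(250, 250, 250)] * 20   # similar palette
blue_bag  = [(20, 40, 200)] * 100                           # different palette

h_shoes = color_histogram(red_shoes)
h_boots = color_histogram(red_boots)
h_bag   = color_histogram(blue_bag)

print(round(similarity(h_shoes, h_boots), 2))  # prints 0.9 (similar colors)
print(round(similarity(h_shoes, h_bag), 2))    # prints 0.0 (disjoint colors)
```

Real systems combine many such descriptors (texture, shape, learned embeddings), but the matching idea is the same: reduce each image to a compact feature vector and rank candidates by similarity.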
Google is so dominant in the search engine market that it is hard to launch anything that remotely resembles a search tool. A team of Russian developers decided to give it a go anyway, focusing on a niche market: image search. The team's objective is ambitious: build an artificial-intelligence-based image search engine that helps designers find inspiration or resources more easily. They promise that SlashPixels will understand each image it indexes, giving it a big advantage when it comes to sorting pictures. Unfortunately, none of this exists yet, but you can support the team's IndieGogo campaign to help them build this new tool.
Have you ever used Bing to search the Internet? Most of us haven't, but it is the rival to Google's search engine, owned and operated by none other than Microsoft. While obviously not the most robust search engine around, Bing nonetheless has a few quirks that make it worth checking out from time to time. And you can't fault Microsoft for trying: from voice-powered Cortana search to integrating Bing into the Xbox and Windows, Microsoft has pulled out all the stops to make sure you at least have the option of using Bing, even if you never do. Now it seems some of us in the photography world might want to give Bing another look, as Microsoft has announced plans to bring powerful, artificial-intelligence-powered image search to Bing.
We propose an unsupervised method that, given a word, automatically selects non-abstract senses of that word from an online ontology and generates images depicting the corresponding entities. When faced with the task of learning a visual model based only on the name of an object, a common approach is to find images on the web that are associated with the object name and then train a visual classifier from the search results. Because words are generally polysemous, this approach can yield relatively noisy models when many examples arising from outlier senses are added to the training set. We argue that images associated with an abstract sense of a word should be excluded when training a visual classifier to learn a model of a physical object. While image clustering can group together visually coherent sets of returned images, it can be difficult to distinguish whether an image cluster relates to the desired object or to an abstract sense of the word.
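To see why clustering alone cannot settle the sense question, a toy sketch helps. The greedy clustering below is invented for illustration and is not the paper's method: it groups synthetic 2-D feature vectors for images returned by an ambiguous query (say, "mouse") into visually coherent clusters, but nothing in the cluster structure itself says which cluster depicts the intended physical object and which corresponds to another sense:

```python
import math

def cluster(vectors, threshold=1.0):
    """Greedy single-pass clustering: assign each vector to the first
    cluster whose centroid lies within `threshold`, else start a new one."""
    clusters = []  # each cluster: {"members": [...], "centroid": (...)}
    for v in vectors:
        for c in clusters:
            if math.dist(v, c["centroid"]) <= threshold:
                c["members"].append(v)
                n = len(c["members"])
                c["centroid"] = tuple(sum(dim) / n for dim in zip(*c["members"]))
                break
        else:
            clusters.append({"members": [v], "centroid": v})
    return clusters

# Toy feature vectors for web images returned by one ambiguous query:
rodent_photos = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9)]  # one coherent group
device_photos = [(5.0, 5.1), (5.2, 4.9)]              # another coherent group
outlier       = [(9.0, 0.5)]                          # unrelated result

clusters = cluster(rodent_photos + device_photos + outlier, threshold=1.0)
largest = max(clusters, key=lambda c: len(c["members"]))
print(len(clusters), len(largest["members"]))  # prints: 3 3
```

The clusters are visually coherent, yet labeling one of them as "the physical object" requires outside knowledge, which is exactly the gap the proposed ontology-based sense selection is meant to fill.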
Google's Hartmut Neven demonstrates his visual-search app by snapping a picture of a Salvador Dali clock in his office building. Google and other tech companies are racing to improve image-recognition software; computers can recognize some objects in images, but not all, and Google's engineering director predicts the technology will fully mature in 10 years. Santa Monica, California (CNN) -- Computers used to be blind, and now they can see. Thanks to increasingly sophisticated algorithms, computers today can recognize and identify the Eiffel Tower, the Mona Lisa or a can of Budweiser. Still, despite huge technological strides in the last decade or so, visual search has plenty more hurdles to clear. At this point, it would be quicker to describe the types of things an image-search engine can interpret than the types it can't.