The most exciting thing about visual search is that it is becoming a highly accessible way for users to interpret the real world, in real time, as they see it. Rather than remaining passive observers, users now treat their camera phones as a primary resource for knowledge and understanding in daily life, searching with their own, unique photos to discover content. Though SEOs have little control over which photos people take, we can optimize our brand presentation to ensure we are easily discoverable by visual search tools. By prioritizing high-impact visual search elements and coordinating online SEO with offline branding, businesses of all sizes can see results.
Daniel Zhang, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, James Manyika, Juan Carlos Niebles, Michael Sellitto, Yoav Shoham, Jack Clark, and Raymond Perrault
Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.
Google has said that it will begin fact-checking images that appear in its search results. Starting today, a 'Fact Check' label will start appearing under thumbnails. Clicking on the thumbnail will show a quick summary of the fact check, including the claim and a rating from a fact-checker such as PolitiFact. The feature is organised using ClaimReview, a markup method publishers use to indicate fact-checked content to search engines; Google Search and Google News already rely on it. Fact-checkers have to meet Google's criteria before they can be used as a source.
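For context, ClaimReview markup is typically embedded in a publisher's page as schema.org JSON-LD. The following is a minimal sketch of what such markup can look like; the URL, claim text, fact-checker name, and rating are invented for illustration and are not from any real fact check:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example-factchecker.org/checks/example-claim",
  "datePublished": "2020-06-22",
  "author": { "@type": "Organization", "name": "Example Fact-Checker" },
  "claimReviewed": "Example claim being checked.",
  "itemReviewed": {
    "@type": "Claim",
    "author": { "@type": "Person", "name": "Anonymous social-media post" }
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": 1,
    "bestRating": 5,
    "worstRating": 1,
    "alternateName": "False"
  }
}
</script>
```

The `alternateName` field carries the human-readable verdict (e.g., "False"), which is the rating a search engine can surface under the thumbnail.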
Google's Hartmut Neven demonstrates his visual-search app by snapping a picture of a Salvador Dali clock in his office building. Google and other tech companies are racing to improve image-recognition software: computers can recognize some objects in images, but not all, and Google's engineering director predicts the technology will fully mature in 10 years. Santa Monica, California (CNN) -- Computers used to be blind, and now they can see. Thanks to increasingly sophisticated algorithms, computers today can recognize and identify the Eiffel Tower, the Mona Lisa or a can of Budweiser. Still, despite huge technological strides in the last decade or so, visual search has plenty more hurdles to clear. At this point, it would be quicker to describe the types of things an image-search engine can interpret than the things it can't.
The social media site Flickr allows users to upload their photos, annotate them with tags, submit them to groups, and form social networks by adding other users as contacts. Flickr offers multiple ways of browsing or searching its content. One option is tag search, which returns all images tagged with a specific keyword. If the keyword is ambiguous, e.g., ``beetle'' could mean an insect or a car, tag search results will include many images that are not relevant to the sense the user had in mind when executing the query. We claim that users express their photography interests through the metadata they add in the form of contacts and image annotations, and we show how to exploit this metadata to personalize search results for the user, thereby improving search performance. First, we show that we can significantly improve search precision by filtering tag search results by the user's contacts, or by a larger social network that includes those contacts' contacts. Second, we describe a probabilistic model that takes advantage of tag information to discover latent topics contained in the search results; a user's interests can similarly be described by the tags they used for annotating their images. The latent topics found by the model are then used to personalize search results by finding images on topics that are of interest to the user.
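The first personalization step described above, filtering tag-search results by the user's social network, can be sketched as follows. This is a minimal illustration with invented data structures (`photos` as dicts with `owner` and `tags` fields, `contacts` as an adjacency map), not Flickr's actual API:

```python
from collections import deque


def tag_search(photos, tag):
    """Unpersonalized baseline: every photo annotated with the tag."""
    return [p for p in photos if tag in p["tags"]]


def social_network(contacts, user, depth=2):
    """Collect users reachable within `depth` hops of `user`.

    depth=1 yields the user's contacts; depth=2 also adds
    those contacts' contacts, the larger network from the abstract.
    """
    seen, frontier = {user}, deque([(user, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == depth:
            continue
        for c in contacts.get(u, []):
            if c not in seen:
                seen.add(c)
                frontier.append((c, d + 1))
    seen.discard(user)  # the searcher is not part of their own network
    return seen


def personalized_tag_search(photos, tag, user, contacts, depth=2):
    """Keep only tag-search hits owned by someone in the user's network."""
    network = social_network(contacts, user, depth)
    return [p for p in tag_search(photos, tag) if p["owner"] in network]
```

The filter trades recall for precision: photos from strangers are dropped on the assumption that contacts (and their contacts) tend to share the searcher's sense of an ambiguous tag.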