SEO in Real Life: Harnessing Visual Search for Optimization Opportunities

#artificialintelligence

The most exciting thing about visual search is that it's becoming a highly accessible way for users to interpret the real world, in real time, as they see it. Rather than leaving users as passive observers, camera phones have become a primary resource for knowledge and understanding in daily life. Users are searching with their own, unique photos to discover content. Though SEOs have little control over which photos people take, we can optimize our brand presentation to ensure we are easily discoverable by visual search tools. By prioritizing the presence of high-impact visual search elements and coordinating online SEO with offline branding, businesses of all sizes can see results.


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Google image search results will now get fact-check labels

The Independent - Tech

Google has said that it will begin fact-checking images that appear in its search results. Starting today, a 'Fact Check' label will start appearing under thumbnails. Clicking on the thumbnail will show a quick summary of the fact check, including the claim and a rating from a fact-checker such as Politifact. The tool is organised using ClaimReview, a markup method publishers use to signal fact-checked content to search engines, and one already used by Google Search and Google News. Fact-checkers have to meet Google's criteria before they can be used as a source.
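ClaimReview is part of the schema.org vocabulary and is typically embedded in a page as JSON-LD. As a rough illustration of the fields involved (all values below are invented placeholders, not a real fact check), a minimal record might look like this:

```python
import json

# A minimal ClaimReview record using common schema.org fields.
# Every value here is an illustrative placeholder.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.com/fact-checks/example-claim",  # hypothetical URL
    "claimReviewed": "An example claim being checked",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Organization", "name": "Example Claim Source"},
    },
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # the human-readable verdict
    },
}

# Publishers usually serialize this into a <script type="application/ld+json">
# block in the page's HTML.
print(json.dumps(claim_review, indent=2))
```

It is this structured record, rather than the page text itself, that lets search engines surface the claim and verdict under a thumbnail.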


Ad-versarial: Defeating Perceptual Ad-Blocking

arXiv.org Machine Learning

Perceptual ad-blocking is a novel approach that uses visual cues to detect online advertisements. Compared to classical filter lists, perceptual ad-blocking is believed to be less prone to an arms race with web publishers and ad networks. In this work we use techniques from adversarial machine learning to demonstrate that this may not be the case. We show that perceptual ad-blocking engenders a new arms race that likely disfavors ad-blockers. Unexpectedly, perceptual ad-blocking can also introduce new vulnerabilities that let an attacker bypass web security boundaries and mount DDoS attacks. We first analyze the design space of perceptual ad-blockers and present a unified architecture that incorporates prior academic and commercial work. We then explore a variety of attacks on the ad-blocker's full visual-detection pipeline that enable publishers or ad networks to evade or detect ad-blocking, and at times even abuse its high privilege level to bypass web security boundaries. Our attacks exploit the unreasonably strong threat model that perceptual ad-blockers must survive. Finally, we evaluate a concrete set of attacks on an ad-blocker's internal ad classifier by instantiating adversarial examples for visual systems in a real web-security context. For six ad-detection techniques, we create perturbed ads, ad disclosures, and native web content that mislead perceptual ad-blocking with 100% success rates. For example, we demonstrate how a malicious user can upload adversarial content (e.g., a perturbed image in a Facebook post) that fools the ad-blocker into removing other users' non-ad content.
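The paper's attacks target real visual pipelines, but the underlying adversarial-example idea can be sketched with a fast-gradient-sign-style perturbation against a toy linear "ad classifier". Everything below (the random weights, the step size, the NumPy model) is invented for illustration and is not the authors' setup:

```python
import numpy as np

# Toy linear "ad classifier": score = w . x, predicts "ad" when score > 0.
rng = np.random.default_rng(0)
w = rng.normal(size=64)

def is_ad(x):
    return float(w @ x) > 0

# An input the classifier confidently flags as an ad (it points along w).
x = w / np.linalg.norm(w)

# FGSM-style evasion: for a linear model the gradient of the score w.r.t.
# the input is just w, so a small step along -sign(w) lowers the score.
eps = 0.25  # perturbation budget per pixel; chosen to flip this toy model
x_adv = x - eps * np.sign(w)

print(is_ad(x), is_ad(x_adv))
```

The same gradient-following principle, applied to a neural ad detector instead of a linear score, is what lets a visually near-identical ad slip past the blocker (evasion) or lets benign content be made to look like an ad (the abuse case the abstract mentions).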



How Google is teaching computers to see

AITopics Original Links

Google's Hartmut Neven demonstrates his visual-search app by snapping a picture of a Salvador Dali clock in his office building. Google and other tech companies are racing to improve image-recognition software; computers can recognize some objects in images, but not all, and Google's engineering director predicts the technology will fully mature in 10 years. Santa Monica, California (CNN) -- Computers used to be blind, and now they can see. Thanks to increasingly sophisticated algorithms, computers today can recognize and identify the Eiffel Tower, the Mona Lisa or a can of Budweiser. Still, despite huge technological strides in the last decade or so, visual search has plenty more hurdles to clear. At this point, it would be quicker to describe the types of things an image-search engine can interpret than the many things it can't.


Personalizing Image Search Results on Flickr

arXiv.org Artificial Intelligence

The social media site Flickr allows users to upload their photos, annotate them with tags, submit them to groups, and also to form social networks by adding other users as contacts. Flickr offers multiple ways of browsing or searching it. One option is tag search, which returns all images tagged with a specific keyword. If the keyword is ambiguous, e.g., "beetle" could mean an insect or a car, tag search results will include many images that are not relevant to the sense the user had in mind when executing the query. We claim that users express their photography interests through the metadata they add in the form of contacts and image annotations. We show how to exploit this metadata to personalize search results for the user, thereby improving search performance. First, we show that we can significantly improve search precision by filtering tag search results by the user's contacts or a larger social network that includes those contacts' contacts. Second, we describe a probabilistic model that takes advantage of tag information to discover latent topics contained in the search results. The users' interests can similarly be described by the tags they used for annotating their images. The latent topics found by the model are then used to personalize search results by finding images on topics that are of interest to the user.
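The first idea, filtering tag-search results through the searcher's social network, can be sketched in a few lines. The contact graph, photo owners, and tags below are all made-up toy data, and the two-hop expansion stands in for the paper's "contacts' contacts" network:

```python
# Toy contact graph: user -> set of contacts.
contacts = {"alice": {"bob", "carol"}, "carol": {"erin"}}

# (photo_id, owner, tags) tuples standing in for Flickr tag-search results
# for the ambiguous query "beetle".
results = [
    ("p1", "bob",   {"beetle", "insect", "macro"}),
    ("p2", "dave",  {"beetle", "car", "vw"}),
    ("p3", "carol", {"beetle", "bug", "garden"}),
]

def personalize(user, results, contacts, hops=2):
    """Keep only results owned by the user's contacts (and, with hops=2,
    by those contacts' contacts), on the assumption that photos from the
    user's social network match the intended sense of the query."""
    network = set(contacts.get(user, set()))
    if hops >= 2:
        for c in list(network):
            network |= contacts.get(c, set())
    network.discard(user)
    return [r for r in results if r[1] in network]

print([pid for pid, _, _ in personalize("alice", results, contacts)])
```

Here the filter keeps the insect photos from alice's network and drops the car photo from a stranger, which is exactly the precision gain the abstract reports from contact-based filtering.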