The component parts of a successful search engine optimization (SEO) strategy have remained relatively constant, but their definition and purpose have changed entirely. Driven by trends like visual search and voice search, the industry's scope has expanded into something more dynamic, and this shift answers a genuine consumer need. According to a report from Slyce.it, 74 percent of shoppers say that text-only search is insufficient for finding the products they want. It is unsurprising, then, that Gartner research predicts that by 2021, early adopter brands that redesign their websites to support visual and voice search will increase digital commerce revenue by as much as 30 percent.
By 2020, 30% of all website sessions will be conducted without a screen. Now, you may be asking yourself: how is that possible? It turns out that voice-only search allows users to browse the Internet and consume information without having to scroll through sites on desktops and mobile devices, and this technology may be key to brand success in the future. Voice search essentially allows users to speak into a device, rather than typing keywords into a search query, to generate results.
There has been mixed success in applying semantic component analysis (LSA, PLSA, discrete PCA, etc.) to information retrieval. Previous experiments have shown that high-fidelity language models do not imply good retrieval quality. Here we combine link analysis with discrete PCA (a semantic component method) to develop an auxiliary score for information retrieval, used to post-filter documents retrieved via regular Tf.Idf methods. For this, we use a topic-specific version of link analysis based on topics developed automatically via discrete PCA. To evaluate the resultant topic- and link-based scoring, a demonstration has been built using Wikipedia, the free web encyclopedia.
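The two-stage retrieval scheme described above — rank by Tf.Idf, then post-filter by an auxiliary score — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy corpus, the threshold, and the `aux_score` values (standing in for the topic-specific link-analysis output) are all assumptions.

```python
import math
from collections import Counter

# Hypothetical toy corpus: document IDs mapped to token lists.
docs = {
    "d1": "voice search ranking link analysis".split(),
    "d2": "tf idf retrieval scoring baseline".split(),
    "d3": "topic model link graph authority".split(),
}

def tf_idf_scores(query_terms, docs):
    """Score each document by summed Tf.Idf over the query terms."""
    n = len(docs)
    # Document frequency: number of docs containing each term.
    df = Counter(t for toks in docs.values() for t in set(toks))
    scores = {}
    for doc_id, toks in docs.items():
        tf = Counter(toks)
        scores[doc_id] = sum(
            tf[t] * math.log(n / df[t])
            for t in query_terms if df.get(t)
        )
    return scores

# Assumed auxiliary topic/link score per document (placeholder values
# standing in for the discrete-PCA-based link analysis described above).
aux_score = {"d1": 0.9, "d2": 0.4, "d3": 0.7}

def post_filter(query_terms, docs, aux_score, threshold=0.5):
    """Rank by Tf.Idf, then drop documents whose auxiliary
    topic/link score falls below the threshold."""
    base = tf_idf_scores(query_terms, docs)
    ranked = sorted(base, key=base.get, reverse=True)
    return [d for d in ranked
            if base[d] > 0 and aux_score.get(d, 0) >= threshold]
```

The key design point is that the auxiliary score never re-ranks: it only vetoes documents that the baseline Tf.Idf retrieval surfaced but that look off-topic under the link-based view.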
At the Sixth International Conference on Learning Representations, Jannis Bulian and Neil Houlsby, researchers at Google AI, presented a paper that shed light on new methods they are testing to improve search results. Publishing a paper certainly doesn't mean the methods are being used, or even will be, but highly successful results increase the odds, and when those methods align with other actions Google is taking, adoption becomes almost certain. I believe this is happening, and the changes are significant for search engine optimization specialists (SEOs) and content creators. Let's start with the basics and look at the topics being discussed.
Most people use Google's search-by-image feature to either look for copyright infringement, or for shopping. See some shoes you like on a frenemy's Instagram? Search will pull up all the matching images on the web, including from sites that will sell you the same pair. In order to do that, Google's computer vision algorithms had to be trained to extract identifying features like colors, textures, and shapes from a vast catalogue of images. Luis Ceze, a computer scientist at the University of Washington, wants to encode that same process directly in DNA, making the molecules themselves carry out that computer vision work. And he wants to do it using your photos.
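The feature-extraction-and-matching process described above can be sketched with a deliberately simple color histogram, one of the "identifying features like colors" mentioned. This is an illustrative assumption, not Google's pipeline: production systems use learned deep features, and images here are just lists of `(r, g, b)` pixel tuples.

```python
def color_histogram(pixels, bins=4):
    """Quantize each RGB channel into `bins` buckets and return a
    normalized histogram over the bins^3 color cells."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels)
    return [count / total for count in hist]

def l1_distance(h1, h2):
    """Sum of absolute differences between two histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def best_match(query_pixels, catalogue):
    """Return the name of the catalogue image whose color histogram
    is closest to the query image's histogram."""
    q = color_histogram(query_pixels)
    return min(
        catalogue,
        key=lambda name: l1_distance(q, color_histogram(catalogue[name])),
    )

# Hypothetical catalogue: two tiny "images" of solid color.
catalogue = {
    "red_shoe": [(250, 10, 10)] * 4,
    "blue_shoe": [(10, 10, 250)] * 4,
}
```

Matching a mostly-red query image against this catalogue returns `"red_shoe"`, because its color distribution lands in the same histogram cells. Real image search replaces the histogram with embeddings from a trained network, but the nearest-neighbor matching step is conceptually the same.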