Google's use of machine learning and artificial intelligence to better understand people's search intentions and surface the most relevant results is paying off, according to new research. Today's search results are more relevant than ever, while many simplistic techniques that marketers use to artificially help their pages rank higher – such as collecting more backlinks and increasing the number of times keywords are mentioned in text – are becoming less effective.

The findings come from Searchmetrics' annual study of Google Ranking Factors, which analysed the top 20 search results for 10,000 keywords on Google.com. The aim of the analysis (carried out every year since 2012) is to identify the key factors that high-ranking web pages have in common, providing generalised insights and benchmarks to help marketers, SEO professionals and webmasters.

"Google revealed last year that it is turning to sophisticated AI and machine-learning techniques, such as its RankBrain system, to help it better understand the real intention behind the words that searchers enter in the search box and make its results more relevant," explains Marcus Tober, Searchmetrics' Founder and CTO.
Backlinks in particular are becoming less important because of the rise of mobile search queries: pages viewed on mobile devices are often liked or shared but seldom linked to, Searchmetrics notes.
For a long time, search engines relied on static ranking factors, and those webmasters and SEOs who knew what to pay attention to were able to reach the best positions on Google's SERPs. This has changed recently and will keep changing: the increasing use of machine learning techniques leads to both dynamic ranking criteria and – as confusing as it may sound – a greater influence of human signals. Machine learning itself is nothing new; its roots go back to the 1950s.
We investigate the possibility of using structured data to improve search over unstructured documents. In particular, we use relevance feedback to create a "virtuous cycle" between structured data gathered from the Semantic Web and web pages gathered from the hypertext Web. Previous approaches have generally considered searching over the Semantic Web and the hypertext Web to be entirely disparate problems, indexing and searching over different domains. Our novel approach is to use relevance feedback from hypertext Web results to improve Semantic Web search, and results from the Semantic Web to improve the retrieval of hypertext Web data. In both cases, our evaluation is based on certain kinds of informational queries (abstract concepts, people, and places) selected from a real-life query log and checked by human judges. We show that our relevance model-based system outperforms real-world search engines for both hypertext and Semantic Web search, and we also investigate Semantic Web inference and pseudo-relevance feedback.