It is hard to keep up on what matters: the limiting factor is not the amount of information available but our available attention (Simon 1971). From 2008 to 2010 we built an experimental personalized news system in which readers subscribe to organized channels of topical information curated by experts. AI technology was employed to present the right information efficiently to each reader and to radically reduce the workload of curators. The system went through three implementation cycles and processed more than 20 million news stories from about 12,000 Really Simple Syndication (RSS) feeds on more than 8,000 topics, organized by 160 curators for more than 600 registered readers. This article describes the approach, engineering, and AI technology of the system.
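The pipeline the abstract describes — stories pulled from RSS feeds and routed into curator-defined topic channels — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the `route` function, the keyword-overlap matching rule, and the channel names are hypothetical, not the system's actual method.

```python
import xml.etree.ElementTree as ET

# Tiny stand-in for one fetched RSS feed (the real system polled ~12,000 feeds).
RSS_SAMPLE = """<rss><channel>
<item><title>New AI curation tools</title><description>Curators use AI.</description></item>
<item><title>Local weather report</title><description>Sunny skies.</description></item>
</channel></rss>"""

# Hypothetical curator-defined channel: topic name -> keyword set.
CHANNELS = {"ai-news": {"ai", "curation"}}

def route(rss_xml, channels):
    """Return {channel: [story titles]} for stories whose words overlap a channel's keywords."""
    routed = {name: [] for name in channels}
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title", "")
        text = (title + " " + item.findtext("description", "")).lower().replace(".", "")
        words = set(text.split())
        for name, keywords in channels.items():
            if words & keywords:  # naive matching rule, assumed for this sketch
                routed[name].append(title)
    return routed

print(route(RSS_SAMPLE, CHANNELS))
# The AI story lands in "ai-news"; the weather story matches no channel.
```

A production curation system would replace the keyword intersection with learned relevance models, which is where the AI workload reduction described above would come in.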
While Wikipedia is a subject of great interest in the computing literature, very little work has considered Wikipedia’s important relationships with other information technologies like search engines. In this paper, we report the results of two deception studies whose goal was to better understand the critical relationship between Wikipedia and Google. These studies silently removed Wikipedia content from Google search results and examined the effect of doing so on participants’ interactions with both websites. Our findings demonstrate and characterize an extensive interdependence between Wikipedia and Google. Google becomes a worse search engine for many queries when it cannot surface Wikipedia content (for example, click-through rates on results pages drop significantly) and the importance of Wikipedia content is likely greater than many improvements to search algorithms. Our results also highlight Google’s critical role in providing readership to Wikipedia. However, we also found evidence that this mutually beneficial relationship is in jeopardy: changes Google has made to its search results that involve directly surfacing Wikipedia content are significantly reducing traffic to Wikipedia. Overall, our findings argue that researchers and practitioners should give deeper consideration to the interdependence between peer production communities and the information technologies that use and surface their content.
The first months of Donald Trump's presidency were a fraught and chaotic time in American politics. But in an age of shrinking newsrooms, early 2017 was a bright spot for online news publishers, especially those with some Facebook savvy. People were hungry for political news, commentary, and analysis, and Facebook fed them a steady diet of it. It was where conservatives gathered to crow and liberals went to commiserate and organize. Slate--yes, the publication you're reading right now--got more than 85 million clicks that originated from external sites and apps in January 2017 alone. Almost a third of them--28 million--came from Facebook.