We have seen a variety of recommender systems, but we left an important issue aside: how do we evaluate RecSys? Before answering that question, I want to place emphasis on something: using just one error metric can give us a limited view of how these systems work. We should always evaluate our models with several different methods, almost as picky as your ex, while still prioritizing quick iteration at the lowest possible cost.
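As a minimal sketch of what "several different methods" might look like, here are three common metrics computed by hand: RMSE and MAE on predicted ratings, and precision@k on a ranked list. The toy data and function names are illustrative, not from any particular library.

```python
import math

def rmse(actual, predicted):
    """Root mean squared error: penalizes large errors more heavily."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error: treats all errors linearly."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are actually relevant."""
    return len(set(recommended[:k]) & set(relevant)) / k

# Toy data: true vs. predicted ratings, plus a ranked recommendation list.
actual = [4.0, 3.0, 5.0, 2.0]
predicted = [3.5, 3.0, 4.0, 3.0]
print(rmse(actual, predicted))       # 0.75
print(mae(actual, predicted))        # 0.625
print(precision_at_k(["a", "b", "c", "d"], {"a", "c", "e"}, k=3))  # 2/3
```

Note how the two rating metrics already disagree on how "bad" the same predictions are, and how the ranking metric measures something entirely different; that gap is exactly why a single number can be misleading.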
A hybrid recommender system fuses multiple data sources, usually with static, non-adjustable weightings, to deliver recommendations. One limitation of this approach is its difficulty in matching user preferences across all situations. In this paper, we present two user-controllable hybrid recommender interfaces, which offer a set of sliders to dynamically tune the impact of different sources of relevance on the final ranking. Two user studies were performed to design and evaluate the proposed interfaces.
What recommender systems have in common is an emphasis on leveraging social processes to improve information access. Most of the current breed of recommender systems are Internet services with a twofold purpose: providing tailored recommendations and building communities. The issue we focus on here is how to make recommender systems work in organizations and for organizations. Moving from the Internet to intranets requires shifting the primary focus from sharing recommendations to sharing knowledge, and from community-building to community support. It also means turning "leisure-ware" into groupware, creating both new challenges and new opportunities.