Deploying CommunityCommands: A Software Command Recommender System Case Study

AI Magazine

In 2009 we presented the idea of using collaborative filtering within a complex software application to help users learn new and relevant commands (Matejka et al. 2009). This project continued to evolve and we explored the design space of a contextual software command recommender system and completed a six-week user study (Li et al. 2011). We then expanded the scope of our project by implementing CommunityCommands, a fully functional and deployable recommender system. During a one-year period, the recommender system was used by more than 1100 users. In this article, we discuss how our practical system architecture was designed to leverage Autodesk's existing Customer Involvement Program (CIP) data to deliver in-product contextual recommendations to end-users.
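The core idea above, collaborative filtering over per-user command usage to surface unused but relevant commands, can be sketched in a few lines. This is a rough illustration only: the command names, counts, and user-based cosine scoring below are invented for the example, not the actual CommunityCommands algorithm or CIP telemetry.

```python
import math

# Hypothetical per-user command usage counts (illustrative only,
# not Autodesk CIP data).
usage = {
    "alice": {"LINE": 40, "TRIM": 12, "FILLET": 5},
    "bob":   {"LINE": 35, "TRIM": 9,  "OFFSET": 7},
    "carol": {"LINE": 20, "FILLET": 6, "OFFSET": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    num = sum(u[c] * v[c] for c in set(u) & set(v))
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(target, usage, k=2):
    """Rank commands the target user has never issued, weighted by
    how similar users employ them (user-based collaborative filtering)."""
    me = usage[target]
    scores = {}
    for other, theirs in usage.items():
        if other == target:
            continue
        sim = cosine(me, theirs)
        for cmd, count in theirs.items():
            if cmd not in me:
                scores[cmd] = scores.get(cmd, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice", usage))  # → ['OFFSET']
```

Here "alice" has never used OFFSET, while both similar users have, so it tops her recommendation list.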


Deploying CommunityCommands: A Software Command Recommender System Case Study

AAAI Conferences

In 2009 we presented the idea of using collaborative filtering within a complex software application to help users learn new and relevant commands (Matejka et al. 2009). This project continued to evolve and we explored the design space of a contextual software command recommender system and completed a four-week user study (Li et al. 2011). We then expanded the scope of our project by implementing CommunityCommands, a fully functional and deployable recommender system. CommunityCommands was made available as a publicly available plug-in download for Autodesk’s flagship software application AutoCAD. During a one-year period, the recommender system was used by more than 1100 AutoCAD users. In this paper, we present our system usage data and payoff. We also provide an in-depth discussion of the challenges and design issues associated with developing and deploying the front-end AutoCAD plug-in and its back-end system. This includes a detailed description of the issues surrounding cold start and privacy. We also discuss how our practical system architecture was designed to leverage Autodesk’s existing Customer Involvement Program (CIP) data to deliver in-product contextual recommendations to end-users. Our work sets important groundwork for the future development of recommender systems within the domain of end-user software learning assistance.


The LKPY Package for Recommender Systems Experiments: Next-Generation Tools and Lessons Learned from the LensKit Project

arXiv.org Artificial Intelligence

Since 2010, we have built and maintained LensKit, an open-source toolkit for building, researching, and learning about recommender systems. We have successfully used the software in a wide range of recommender systems experiments, to support education in traditional classroom and online settings, and as the algorithmic backend for user-facing recommendation services in movies and books. This experience, along with community feedback, has surfaced a number of challenges with LensKit's design and environmental choices. In response to these challenges, we are developing a new set of tools that leverage the PyData stack to enable the kinds of research experiments and educational experiences that we have been able to deliver with LensKit, along with new experimental structures that the existing code makes difficult. The result is a set of research tools that should significantly increase research velocity and provide much smoother integration with other software such as Keras while maintaining the same level of reproducibility as a LensKit experiment. In this paper, we reflect on the LensKit project, particularly on our experience using it for offline evaluation experiments, and describe the next-generation LKPY tools for enabling new offline evaluations and experiments with flexible, open-ended designs and well-tested evaluation primitives.
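The offline evaluation experiments described above generally follow a hold-out pattern: withhold part of each user's history, train on the remainder, and score recommendations against the held-out items. The sketch below shows that general shape in plain Python; it is not LKPY's actual API, and the interaction log and popularity baseline are invented for illustration.

```python
import random

# Illustrative interaction log of (user, item) pairs (invented data).
interactions = [
    ("u1", "matrix"), ("u1", "inception"), ("u1", "memento"),
    ("u2", "matrix"), ("u2", "inception"), ("u2", "heat"),
    ("u3", "matrix"), ("u3", "heat"), ("u3", "memento"),
]

def holdout_split(interactions, seed=0):
    """Hold out one interaction per user for testing; train on the rest."""
    random.seed(seed)
    by_user = {}
    for user, item in interactions:
        by_user.setdefault(user, []).append(item)
    train, test = [], {}
    for user, items in by_user.items():
        held = random.choice(items)
        test[user] = held
        train += [(user, i) for i in items if i != held]
    return train, test

def most_popular(train, n=2):
    """Trivial popularity baseline: recommend the n most-seen items."""
    counts = {}
    for _, item in train:
        counts[item] = counts.get(item, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:n]

def hit_rate(test, recs):
    """Fraction of users whose held-out item appears in the list."""
    return sum(1 for item in test.values() if item in recs) / len(test)

train, test = holdout_split(interactions)
print(hit_rate(test, most_popular(train)))
```

Real toolkits such as LKPY replace each of these pieces (splitting strategy, algorithm, metric) with well-tested, configurable components, which is precisely the reproducibility benefit the paper argues for.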


How to evaluate Recommender Systems – Carlos Pinela – Medium

#artificialintelligence

We have seen a variety of Recommender Systems. But we left an important issue aside: how do we evaluate RecSys? Before answering that question, I want to emphasize something: using just one error metric can give us a limited view of how these systems work. We should always evaluate our models with several different methods, almost as picky as your ex, while prioritizing quick iteration at the lowest possible cost.
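To make the multiple-metrics point concrete, here is a minimal sketch of three common measures side by side: two error metrics (MAE, RMSE) and one ranking metric (precision@k). All values are toy data for illustration.

```python
import math

# Toy predicted vs. actual ratings for one user (illustrative values).
actual    = [4.0, 3.0, 5.0, 2.0]
predicted = [3.5, 3.0, 4.0, 2.5]

def mae(actual, predicted):
    """Mean absolute error: the average magnitude of the mistakes."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error: penalizes large mistakes more than MAE."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations the user actually liked."""
    return len([i for i in recommended[:k] if i in relevant]) / k

print(mae(actual, predicted))                              # → 0.5
print(round(rmse(actual, predicted), 3))                   # → 0.612
print(round(precision_at_k(["a", "b", "c"],
                           {"a", "c", "d"}, k=3), 3))      # → 0.667
```

Note how RMSE (0.612) exceeds MAE (0.5) here because one prediction misses by a full point; a system could score well on one metric and poorly on another, which is exactly why a single number gives a limited view.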