Semantic ratings and heuristic similarity for collaborative filtering

AAAI Conferences

Collaborative filtering (CF) is a technique for bringing items to a user's attention based on similarities between the past behavior of that user and the behavior of other users. A canonical example is the GroupLens system, which recommends news articles based on similarities between users' reading behavior (Resnick et al. 1994). The technique has been applied to many areas, from consumer products to web pages (Resnick & Varian 1997; Kautz 1998), and has become a standard marketing technique in electronic commerce. The input to a CF system is a triple consisting of a user, an object that the user has an opinion about, and a rating that captures that opinion: ⟨u, o, r(u, o)⟩. As ratings for a given user accumulate, it becomes possible to correlate users on the basis of similar ratings and to predict ratings for unrated items on the basis of historical similarity.
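The triple-based scheme described above can be sketched as a minimal user-based CF predictor. This is not GroupLens itself; it is an illustrative sketch assuming a small in-memory ratings table, with Pearson correlation as the user-similarity measure and a similarity-weighted deviation from each neighbor's mean as the prediction rule.

```python
# Hedged sketch of user-based collaborative filtering over (user, object,
# rating) triples. The ratings data and names are illustrative assumptions.
from math import sqrt

ratings = {
    "alice": {"a1": 5.0, "a2": 3.0, "a3": 4.0},
    "bob":   {"a1": 4.0, "a2": 3.0, "a3": 5.0, "a4": 2.0},
    "carol": {"a1": 1.0, "a2": 5.0, "a4": 4.0},
}

def pearson(u, v):
    """Pearson correlation between two users over their co-rated objects."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    ru = [ratings[u][o] for o in common]
    rv = [ratings[v][o] for o in common]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((a - mu) * (b - mv) for a, b in zip(ru, rv))
    den = sqrt(sum((a - mu) ** 2 for a in ru)) * sqrt(sum((b - mv) ** 2 for b in rv))
    return num / den if den else 0.0

def predict(u, o):
    """Predict user u's rating of object o from similarity-weighted neighbors."""
    mu = sum(ratings[u].values()) / len(ratings[u])
    num = den = 0.0
    for v in ratings:
        if v == u or o not in ratings[v]:
            continue
        w = pearson(u, v)
        mv = sum(ratings[v].values()) / len(ratings[v])
        num += w * (ratings[v][o] - mv)
        den += abs(w)
    return mu + num / den if den else mu
```

For instance, `predict("alice", "a4")` blends Bob's and Carol's opinions of `a4`, weighted by how well each has historically agreed with Alice.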


The Wasabi Personal Shopper: A Case-Based Recommender System

AAAI Conferences

The Wasabi Personal Shopper (WPS) is a domain-independent database browsing tool designed for online information access, particularly electronic product catalogs. Web-based catalogs typically rely on either text search or query formulation; WPS introduces an alternative form of access via preference-based navigation. WPS is based on a line of academic research called FindMe systems, which were built in a variety of languages over custom-built, ad hoc databases. WPS is written in C and designed to be a commercial-grade software product compatible with any SQL-accessible catalog. This paper describes WPS and discusses some of the development issues involved in re-engineering our AI research system as a general-purpose commercial application.
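The preference-based navigation mentioned above can be illustrated with a FindMe-style "tweak" step. This is not the WPS implementation (which is in C over SQL catalogs); it is a minimal sketch assuming an in-memory catalog, where the user critiques the current item (for example, "cheaper") and the most similar item satisfying the critique becomes the next focal product.

```python
# Hedged sketch of FindMe-style preference-based navigation, not WPS itself.
# The catalog, attributes, and similarity measure are illustrative assumptions.
catalog = [
    {"id": "cam1", "price": 499, "zoom": 3},
    {"id": "cam2", "price": 349, "zoom": 3},
    {"id": "cam3", "price": 279, "zoom": 2},
    {"id": "cam4", "price": 599, "zoom": 5},
]

def similarity(a, b):
    # Simple inverse-distance similarity over numeric attributes.
    return -sum(abs(a[k] - b[k]) for k in ("price", "zoom"))

def tweak(current, critique):
    """Return the catalog item most similar to `current` among those
    satisfying the user's critique, or None if the critique filters out all."""
    candidates = [p for p in catalog
                  if p["id"] != current["id"] and critique(p)]
    return max(candidates, key=lambda p: similarity(current, p)) if candidates else None
```

For example, starting from `cam1`, the critique `lambda p: p["price"] < 499` ("cheaper") would surface the closest cheaper camera rather than an arbitrary one.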


Hybrid Recommender Systems for Electronic Commerce

AAAI Conferences

System: verifies that it is OK to use the CFRSS. However, after calculating the predicted ratings on items, it finds that none of the items are "good items" (ref.


User-Involved Preference Elicitation for Product Search and Recommender Systems

AI Magazine

As such systems must crucially rely on an accurate and complete model of user preferences, the acquisition of this model becomes the central subject of this article. Many tools in use today do not satisfactorily assist users in establishing this model because they do not adequately focus on fundamental decision objectives, help users reveal hidden preferences, revise conflicting preferences, or explicitly reason about tradeoffs. As a result, users fail to find the outcomes that best satisfy their needs and preferences. In this article, we analyze common areas of design pitfalls and derive a set of design guidelines that help the user avoid these problems in three important areas: user preference elicitation, preference revision, and explanation interfaces. For each area, we describe the state of the art of the techniques developed and discuss concrete scenarios where they have been applied and tested.

However, automated decision systems cannot effectively search the space of possible solutions without an accurate model of a user's preferences, so preference acquisition is a fundamental problem of growing importance. Without an adequate interaction model and system guidance, it is difficult for users to establish a complete and accurate model of their preferences. More specifically, we face the following difficulties. First, inadequate elicitation tools can easily mislead users into focusing on means objectives rather than fundamental decision objectives and force them to state preferences in the wrong order. For example, a user who commits to the choice of minivans (a means objective) for the sake of spacious baggage space (a fundamental objective) is not focusing on the underlying values and risks missing alternatives offered by station wagons. In value-focused thinking, Keeney (1992) suggests that the specification and clarification of values should not be overtaken by the set of alternatives too rapidly. This theory has direct implications for the order in which a system should initially elicit user preferences. Second, users are not aware of all their preferences until they see them violated. For example, a user does not think of stating a preference about the intermediate airport until a solution proposes an airplane change in a place the user dislikes. This observation sheds light on how interaction design can help users discover their hidden preferences. Finally, preferences can be inconsistent.
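The hidden-preference and inconsistency difficulties described above can be made concrete with a small sketch. This is not a technique from the article itself; it is an illustrative sketch assuming preferences are represented as prioritized predicates over a tiny flight catalog, where conflicting preferences are relaxed lowest-priority-first until at least one option survives (mirroring how a user discovers the "avoid ORD" preference only after seeing it violated).

```python
# Hedged sketch: detecting conflicting preferences and relaxing them by
# priority. The flights, attributes, and priority scheme are illustrative.
flights = [
    {"id": "F1", "price": 900, "stops": 0, "via": None},
    {"id": "F2", "price": 450, "stops": 1, "via": "ORD"},
    {"id": "F3", "price": 500, "stops": 1, "via": "DFW"},
]

# Each preference: (name, priority, predicate); higher priority = more important.
preferences = [
    ("cheap",     3, lambda f: f["price"] <= 600),
    ("nonstop",   2, lambda f: f["stops"] == 0),
    ("avoid ORD", 1, lambda f: f["via"] != "ORD"),  # a "hidden" preference
]

def satisfy(prefs):
    """Flights satisfying every active preference."""
    return [f for f in flights if all(p(f) for _, _, p in prefs)]

def relax(prefs):
    """Drop lowest-priority preferences until some option remains.
    Returns the surviving options and the names of relaxed preferences."""
    active = sorted(prefs, key=lambda t: -t[1])  # most important first
    dropped = []
    while active and not satisfy(active):
        dropped.append(active.pop()[0])          # relax lowest priority first
    return satisfy(active), dropped
```

Here no flight satisfies all three preferences at once, so the system would report which preferences it had to relax, giving the user a starting point for revising them explicitly.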

