Miller, Robert C.
A Crowd of Your Own: Crowdsourcing for On-Demand Personalization
Organisciak, Peter (University of Illinois at Urbana-Champaign) | Teevan, Jaime (Microsoft Research) | Dumais, Susan (Microsoft Research) | Miller, Robert C. (MIT CSAIL) | Kalai, Adam Tauman (Microsoft Research)
Personalization is a way for computers to support people’s diverse interests and needs by providing content tailored to the individual. While strides have been made in algorithmic approaches to personalization, most require access to a significant amount of data. However, even when data is limited, online crowds can be used to infer an individual’s personal preferences. Aided by the diversity of tastes among online crowds and their ability to understand others, we show that crowdsourcing is an effective on-demand tool for personalization. Unlike typical crowdsourcing approaches that seek a ground truth, we present and evaluate two crowdsourcing approaches designed to capture personal preferences. The first, taste-matching, identifies workers with similar taste to the requester and uses their taste to infer the requester’s taste. The second, taste-grokking, asks workers to explicitly predict the requester’s taste based on training examples. These techniques are evaluated on two subjective tasks, personalized image recommendation and tailored textual summaries. Taste-matching and taste-grokking both show improvement over the use of generic workers, and have different benefits and drawbacks depending on the complexity of the task and the variability of the taste space.
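The taste-matching idea described above can be sketched as a small collaborative-filtering-style routine: workers and the requester rate a shared set of training items, workers whose ratings best agree with the requester's are identified, and those workers' ratings on unseen items stand in for the requester's. This is an illustrative sketch only, not the paper's implementation; all function names, ratings, and item IDs below are invented for the example.

```python
# Hypothetical sketch of taste-matching (not the authors' code).
# Ratings are on a 0-1 scale; dicts map item IDs to ratings.

def similarity(requester, worker):
    """Agreement on shared training items (1 - mean absolute error)."""
    shared = [i for i in requester if i in worker]
    if not shared:
        return 0.0
    mae = sum(abs(requester[i] - worker[i]) for i in shared) / len(shared)
    return 1.0 - mae

def taste_match(requester_train, workers, item, top_k=2):
    """Predict the requester's rating on `item` by averaging the
    ratings of the top_k workers most similar on the training items."""
    ranked = sorted(workers, key=lambda w: similarity(requester_train, w),
                    reverse=True)
    votes = [w[item] for w in ranked[:top_k] if item in w]
    return sum(votes) / len(votes) if votes else None

# Items "a"-"c" are the rated training examples; "d" is unseen.
requester = {"a": 1.0, "b": 0.0, "c": 1.0}
workers = [
    {"a": 1.0, "b": 0.0, "c": 1.0, "d": 1.0},  # similar taste
    {"a": 0.0, "b": 1.0, "c": 0.0, "d": 0.0},  # opposite taste
    {"a": 1.0, "b": 0.0, "c": 0.5, "d": 1.0},  # mostly similar
]
print(taste_match(requester, workers, "d"))  # both similar workers rated d as 1.0
```

Taste-grokking differs in that the crowd sees the requester's training ratings directly and predicts the unseen ratings themselves, rather than having their own ratings reused by similarity.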
Community Clustering: Leveraging an Academic Crowd to Form Coherent Conference Sessions
André, Paul (Carnegie Mellon University) | Zhang, Haoqi (Northwestern University) | Kim, Juho (Massachusetts Institute of Technology) | Chilton, Lydia (University of Washington) | Dow, Steven P. (Carnegie Mellon University) | Miller, Robert C. (Massachusetts Institute of Technology)
Creating sessions of related papers for a large conference is a complex and time-consuming task. Traditionally, a few conference organizers group papers into sessions manually. Organizers often fail to capture the affinities between papers beyond the sessions they create, making incoherent sessions difficult to fix and alternative groupings hard to discover. This paper proposes committeesourcing and authorsourcing approaches to session creation (a specific instance of clustering and constraint satisfaction) that tap into the expertise and interest of committee members and authors for identifying paper affinities. During the planning of ACM CHI'13, a large conference on human-computer interaction, we recruited committee members to group papers using two online distributed clustering methods. To refine these paper affinities — and to evaluate the committeesourcing methods against existing manual and automated approaches — we recruited authors to identify papers that fit well in a session with their own. Results show that authors found papers grouped by the distributed clustering methods to be as relevant as, or more relevant than, papers suggested through the existing in-person meeting. Results also demonstrate that communitysourced results capture affinities beyond sessions and provide flexibility during scheduling.
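One way to picture the authorsourcing step is as pairwise affinity votes that get aggregated and then greedily grouped into sessions. The sketch below is an illustration under that assumption, not the CHI'13 pipeline: the paper IDs, vote data, and the simple greedy pairing are all invented for the example.

```python
# Illustrative sketch (not the CHI'13 system): authors mark papers that
# fit well with their own, votes accumulate into pairwise affinities,
# and a greedy pass seeds sessions from the strongest remaining pair.

from collections import Counter

def affinities(votes):
    """Count how often each unordered pair of papers was voted together."""
    counts = Counter()
    for a, b in votes:
        counts[frozenset((a, b))] += 1
    return counts

def greedy_sessions(papers, votes):
    """Form two-paper sessions from the strongest pairwise affinities;
    any leftover papers go into a final catch-all session."""
    counts = affinities(votes)
    unassigned = set(papers)
    sessions = []
    for pair, _ in counts.most_common():
        a, b = sorted(pair)
        if a in unassigned and b in unassigned:
            sessions.append([a, b])
            unassigned -= {a, b}
    if unassigned:
        sessions.append(sorted(unassigned))
    return sessions

papers = ["p1", "p2", "p3", "p4", "p5"]
votes = [("p1", "p2"), ("p1", "p2"), ("p3", "p4"), ("p1", "p3")]
print(greedy_sessions(papers, votes))
```

A real scheduler would also need session-size constraints and room/time assignment, which is where the constraint-satisfaction framing in the abstract comes in.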
Personalized Human Computation
Organisciak, Peter (University of Illinois at Urbana-Champaign) | Teevan, Jaime (Microsoft Research) | Dumais, Susan (Microsoft Research) | Miller, Robert C. (MIT CSAIL) | Kalai, Adam Tauman (Microsoft Research)
Significant effort in machine learning and information retrieval has been devoted to identifying personalized content such as recommendations and search results. Personalized human computation has the potential to go beyond existing techniques like collaborative filtering to provide personalized results on demand, over personal data, and for complex tasks. This work-in-progress compares two approaches to personalized human computation. In both, users annotate a small set of training examples which are then used by the crowd to annotate unseen items. In the first approach, which we call taste-matching, crowd members are asked to annotate the same set of training examples, and the ratings of similar users on other items are then used to infer personalized ratings. In the second approach, taste-grokking, the crowd is presented with the training examples and asked to use them to predict the ratings of the target user on other items.
Cobi: Community-Informed Conference Scheduling
Kim, Juho (MIT CSAIL) | Zhang, Haoqi (Northwestern University) | André, Paul (HCI Institute, CMU) | Chilton, Lydia B. (University of Washington) | Bhardwaj, Anant (MIT CSAIL) | Karger, David (MIT CSAIL) | Dow, Steven P. (HCI Institute, CMU) | Miller, Robert C. (MIT CSAIL)
Creating a schedule for a large multi-track conference requires considering the preferences and constraints of organizers, authors, and attendees. Traditionally, a few dedicated organizers manage the size and complexity of the schedule with limited information and coverage. Cobi presents an alternative approach to conference scheduling by engaging the entire community to take active roles in the planning process. It consists of a collection of crowdsourcing applications that elicit preferences and constraints from the community, and software that enables organizers and other community members to take informed actions based on the collected information.