DFKI
Learning to Rank Effective Paraphrases from Query Logs for Community Question Answering
Figueroa, Alejandro (Yahoo! Research Latin America) | Neumann, Guenter (DFKI)
We present a novel method for ranking query paraphrases for effective search in community question answering (cQA). The method uses query logs from Yahoo! Search and Yahoo! Answers to automatically extract a corpus of paraphrases of queries and questions from the query-question click history. Elements of this corpus are automatically ranked according to recall and mean reciprocal rank (MRR), and then used to train two independent learning-to-rank models (SVMRank), so that new query paraphrases can be scored according to recall and MRR. We perform several automatic evaluation procedures using cross-validation to analyze the behavior of various aspects of our learned ranking functions, which show that our method is useful and effective for search in cQA.
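The core of the method is a pairwise learning-to-rank model in the style of SVMRank. The sketch below is a minimal illustration of that idea, not the paper's implementation: scikit-learn's LinearSVC stands in for SVMRank, and the two paraphrase features, the toy MRR labels, and the query groups are invented for the example.

```python
# Minimal sketch of the pairwise "RankSVM" idea for ranking query paraphrases.
# Features, labels, and data below are illustrative assumptions, not the
# paper's actual feature set or corpus.
from itertools import combinations

import numpy as np
from sklearn.svm import LinearSVC

def pairwise_transform(X, y, groups):
    """Turn (features, MRR label, query-group) triples into pairwise
    difference vectors for paraphrases of the same original query."""
    X_diff, y_diff = [], []
    for i, j in combinations(range(len(y)), 2):
        if groups[i] != groups[j] or y[i] == y[j]:
            continue  # only compare paraphrases of the same query
        # add both orderings so the binary classifier sees both classes
        X_diff.append(X[i] - X[j]); y_diff.append(1 if y[i] > y[j] else -1)
        X_diff.append(X[j] - X[i]); y_diff.append(-1 if y[i] > y[j] else 1)
    return np.array(X_diff), np.array(y_diff)

# Toy data: two hypothetical features per paraphrase (e.g. length ratio,
# term overlap), the MRR observed in the click history, and the id of the
# original query each paraphrase belongs to.
X = np.array([[0.9, 0.8], [0.4, 0.3], [0.7, 0.6], [0.2, 0.9]])
mrr = np.array([1.0, 0.2, 0.5, 0.33])
query_id = np.array([0, 0, 1, 1])

X_pairs, y_pairs = pairwise_transform(X, mrr, query_id)
ranker = LinearSVC(C=1.0).fit(X_pairs, y_pairs)

# Candidate paraphrases of a new query are ranked by their linear score w.x;
# a second model trained on recall labels would be built the same way.
candidates = np.array([[0.8, 0.7], [0.3, 0.4]])
print(candidates @ ranker.coef_.ravel())  # higher score = better paraphrase
```

The pairwise transform is the standard reduction of ranking to binary classification: any linear classifier trained on feature differences yields a scoring function whose sign on a difference vector predicts which paraphrase should rank higher.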
Introduction to the Special Issue on "Usable AI"
Jameson, Anthony David (DFKI) | Spaulding, Aaron (SRI International) | Yorke-Smith, Neil (American University of Beirut)
When creating algorithms or systems that are supposed to be used by people, we should be able to adopt a "binocular" view of users' interaction with intelligent systems: a view that regards the design of interaction and the design of intelligent algorithms as interrelated parts of a single design problem. This special issue offers a coherent set of articles on two levels of generality that illustrate the binocular view and help readers to adopt it.
Understanding and Dealing With Usability Side Effects of Intelligent Processing
Jameson, Anthony David (DFKI)
The unintended negative consequences of introducing intelligence into an interactive system often have no direct relationship with the intended benefits, just as the adverse effects of a medication may bear no obvious relationship to the intended benefits of taking that medicine. Therefore, these negative consequences can be seen as side effects. The purpose of this article is to give designers, developers, and users of interactive intelligent systems a detailed awareness of the potential side effects of AI. As with medications, awareness of the side effects can have different implications: We may be relieved to see that a given side effect is unlikely to occur in our particular case. We may become convinced that it will inevitably occur and therefore decide not to "take the medicine" (that is, decide to stick with mainstream systems). Or, most likely and most constructively, by looking carefully at the causes of the side effects and the conditions under which they can occur, we can figure out how to exploit the benefits of AI in interactive systems while avoiding the side effects.