INFACT: An Online Human Evaluation Framework for Conversational Recommendation

Manzoor, Ahtsham; Jannach, Dietmar

arXiv.org Artificial Intelligence 

Conversational recommender systems (CRS) are interactive agents that support their users in recommendation-related goals through multi-turn conversations. Generally, a CRS can be evaluated along various dimensions. Today's CRS mainly rely on offline (computational) measures to assess the performance of their algorithms against different baselines. However, offline measures have limitations, for example, when the metrics that compare a newly generated response with a ground truth do not correlate with human perceptions, because various alternative responses might be equally suitable in a given dialog situation. Current research on machine learning-based CRS models therefore acknowledges the importance of humans in the evaluation process, since purely offline measures may not be sufficient for evaluating a highly interactive system like a CRS. In this work, we provide a user-centric evaluation approach to conversational recommendation along with INFACT, an onlIne humaN evaluation Framework for conversAtional reCommender sysTems, which can be used to assess the suitability of system responses in a given dialog situation. The INFACT framework is designed to support crowdsourcing of the evaluation task and allows various CRS to be integrated for comparison. We have successfully applied the INFACT framework to conduct a number of user studies in our previous research. We believe that our study design, together with the INFACT framework, can facilitate user-centric studies in domains such as dialog systems, machine translation, or Q&A.
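To illustrate the offline-metric limitation described in the abstract, the following minimal sketch shows how an overlap-based metric such as BLEU can assign a near-zero score to a generated response that a human would likely consider perfectly suitable in the dialog. The dialog responses are hypothetical, and the sketch assumes NLTK's BLEU implementation; it is not the evaluation procedure used in the paper itself.

```python
# Hypothetical example: a suitable alternative recommendation response
# scores poorly against a single ground-truth reference under BLEU,
# because the two responses share almost no n-grams.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Single ground-truth response from the dialog corpus (tokenized).
ground_truth = "You should watch Inception , it has great reviews .".split()

# A different but plausibly suitable recommendation in the same situation.
generated = "Have you tried Interstellar ? Many users loved it .".split()

# Smoothing avoids a hard zero when higher-order n-gram overlap is absent.
smooth = SmoothingFunction().method1
score = sentence_bleu([ground_truth], generated, smoothing_function=smooth)

print(f"BLEU vs. single reference: {score:.3f}")  # near zero despite suitability
```

A human judge, as targeted by the INFACT framework, could instead rate the generated response directly for its suitability in the given dialog situation, sidestepping the reliance on a single ground-truth reference.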
