
 Cao, Yan


Adaptive Dialog Policy Learning with Hindsight and User Modeling

arXiv.org Artificial Intelligence

Reinforcement learning methods have been used to compute dialog policies from language-based interaction experiences. Efficiency is of particular importance in dialog policy learning because of the considerable cost of interacting with people and the very poor user experience produced by low-quality conversations. Aiming to improve the efficiency of dialog policy learning, we develop an algorithm, LHUA (Learning with Hindsight, User modeling, and Adaptation), that, for the first time, enables dialog agents to adaptively learn with hindsight from both simulated and real users. Simulation and hindsight provide the dialog agent with more experience and more (positive) reinforcements, respectively. Experimental results suggest that, in success rate and policy quality, LHUA outperforms competitive baselines from the literature, including its no-simulation, no-adaptation, and no-hindsight counterparts.
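The hindsight component described above can be illustrated with a minimal sketch. The idea (as in hindsight experience replay) is that a failed dialog, relabeled with the goal the dialog actually achieved, becomes a positive training example; all function and intent names below are hypothetical, not taken from the LHUA paper.

```python
def hindsight_relabel(episode, achieved_goal):
    """Relabel a failed dialog episode as if its achieved outcome had been
    the goal, turning the failure into a positive example (the hindsight
    idea; the tuple layout (state, action, reward, goal) is an assumption)."""
    relabeled = []
    for i, (state, action, _reward, _goal) in enumerate(episode):
        # With the achieved outcome as the goal, the final turn now succeeds.
        new_reward = 1.0 if i == len(episode) - 1 else 0.0
        relabeled.append((state, action, new_reward, achieved_goal))
    return relabeled

# A failed episode: the user wanted "book_flight", but the dialog ended
# with "book_hotel" confirmed, so every original reward is 0.
failed = [("greet", "ask_intent", 0.0, "book_flight"),
          ("confirm", "book_hotel", 0.0, "book_flight")]

replay_buffer = list(failed)                               # original (negative) experience
replay_buffer += hindsight_relabel(failed, "book_hotel")   # added positive experience
```

In LHUA this extra positive experience is combined with experience from a learned user model, so the agent gets both more interactions (simulation) and more rewarding ones (hindsight).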


TRM: Computing Reputation Score by Mining Reviews

AAAI Conferences

With the rapid development of e-commerce, reputation models have been proposed to help customers make effective purchase decisions. However, most reputation models focus only on the overall ratings of products, without considering the reviews provided by customers. We believe that the textual reviews written by buyers express their real opinions more honestly. Accordingly, in this paper we propose a Textual Reputation Model (TRM), based on the word2vec model, to extract useful information from reviews and evaluate the trustworthiness of the target product. Experimental results on real data demonstrate the effectiveness of our approach in capturing reputation information from reviews.
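The review-scoring idea can be sketched as follows: represent review words as vectors and score each review by its similarity to a positive opinion direction, then average over reviews. This is a minimal stand-in, not TRM itself; the toy vectors and the positive seed direction are assumptions (in TRM the embeddings would come from word2vec trained on the review corpus).

```python
import math

# Toy 2-D word vectors standing in for learned word2vec embeddings.
VECS = {
    "great":     (0.9, 0.1),
    "excellent": (0.8, 0.2),
    "terrible":  (-0.9, 0.1),
    "broken":    (-0.7, 0.3),
    "product":   (0.0, 0.5),
}
POSITIVE_SEED = (1.0, 0.0)  # hypothetical direction taken to mean positive opinion

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv) if nu and nv else 0.0

def review_score(review):
    """Average similarity of a review's known words to the positive direction."""
    sims = [cosine(VECS[w], POSITIVE_SEED)
            for w in review.lower().split() if w in VECS]
    return sum(sims) / len(sims) if sims else 0.0

def reputation(reviews):
    """Aggregate per-review scores into one reputation value in [-1, 1]."""
    return sum(review_score(r) for r in reviews) / len(reviews)

good = reputation(["great excellent product", "great product"])
bad = reputation(["terrible broken product"])
```

With these toy vectors, reviews using positive words yield a higher reputation score than reviews using negative ones, which is the qualitative behavior the model aims for.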