Elastic Responding Machine for Dialog Generation with Dynamically Mechanism Selecting
Zhou, Ganbin (Institute of Computing Technology, Chinese Academy of Sciences) | Luo, Ping (Institute of Computing Technology, Chinese Academy of Sciences) | Xiao, Yijun (University of California Santa Barbara) | Lin, Fen (WeChat, Tencent) | Chen, Bo (WeChat, Tencent) | He, Qing (Institute of Computing Technology, Chinese Academy of Sciences)
Neural models aiming to generate meaningful and diverse responses have attracted increasing attention in recent years. For a given post, conventional encoder-decoder models tend to learn high-frequency but trivial responses, or have difficulty determining which speaking styles are suitable for generating responses. To address this issue, we propose the elastic responding machine (ERM), which is based on a proposed encoder-diverter-filter-decoder framework. ERM models multiple responding mechanisms to not only generate acceptable responses for a given post but also improve the diversity of responses. Here, the mechanisms can be regarded as latent variables, and for a given post different responses may be generated by different mechanisms. The experiments demonstrate the quality and diversity of the generated responses, intuitively show how the learned model controls the responding mechanism when generating a response, and reveal some underlying relationships between mechanisms and language style.
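The abstract's diverter/filter stages can be illustrated with a toy sketch: a diverter scores each latent mechanism against the encoded post, and a filter keeps a subset of mechanisms to condition separate decoder runs. All names, sizes, and the scoring form below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

HID, N_MECH, TOP_K = 8, 4, 2  # toy sizes; the paper's values differ

# Hypothetical components: one embedding per latent mechanism, plus a
# bilinear scoring matrix used by the diverter.
mechanisms = rng.normal(size=(N_MECH, HID))
W = rng.normal(size=(HID, HID))

def divert(post_enc):
    """Diverter: score each mechanism for the post, return a softmax distribution."""
    logits = mechanisms @ W @ post_enc
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def filter_mechanisms(probs, k=TOP_K):
    """Filter: keep the indices of the k most promising mechanisms."""
    return np.argsort(probs)[::-1][:k]

post_enc = rng.normal(size=HID)   # stand-in for the encoder's output
probs = divert(post_enc)
chosen = filter_mechanisms(probs)
# Each chosen mechanism embedding would then condition a separate decoder
# run, yielding different responses for the same post.
```

The point of the filter stage is that decoding only under a few well-matched mechanisms, rather than all of them, keeps responses both acceptable and diverse.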
Does William Shakespeare REALLY Write Hamlet? Knowledge Representation Learning With Confidence
Xie, Ruobing (WeChat, Tencent) | Liu, Zhiyuan (Tsinghua University) | Lin, Fen (WeChat, Tencent) | Lin, Leyu (WeChat, Tencent)
Knowledge graphs (KGs), which provide essential relational information between entities, have been widely utilized in various knowledge-driven applications. Since human knowledge is vast and still grows and changes rapidly, knowledge construction and updating inevitably involve automatic mechanisms with little human supervision, which usually introduce plenty of noise and conflicts into KGs. However, most conventional knowledge representation learning methods assume that all triple facts in existing KGs share the same significance and contain no noise. To address this problem, we propose a novel confidence-aware knowledge representation learning framework (CKRL), which detects possible noise in KGs while simultaneously learning knowledge representations with confidence. Specifically, we introduce triple confidence into conventional translation-based methods for knowledge representation learning. To make triple confidence more flexible and universal, we utilize only the internal structural information in KGs, and propose three kinds of triple confidence considering both local and global structural information. In experiments, we evaluate our models on knowledge graph noise detection, knowledge graph completion, and triple classification. Experimental results demonstrate that our confidence-aware models achieve significant and consistent improvements on all tasks, confirming the capability of CKRL to model confidence with structural information in both KG noise detection and knowledge representation learning.
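The idea of weighting a translation-based objective by triple confidence can be sketched as follows. This assumes a TransE-style energy and a margin ranking loss scaled by a confidence value in [0, 1]; the actual CKRL formulation, including its three structural confidence scores, is defined in the paper, so treat this only as an illustration of the weighting idea.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE energy ||h + r - t||: lower means the triple fits better."""
    return np.linalg.norm(h + r - t)

def ckrl_loss(pos, neg, confidence, margin=1.0):
    """Margin-based ranking loss weighted by triple confidence.

    `confidence` stands in for C(h, r, t) in [0, 1]: a suspected-noisy
    triple receives low confidence and thus contributes little to the
    gradient, while trusted triples are trained on at full strength.
    """
    h, r, t = pos          # embeddings of the observed triple
    h2, r2, t2 = neg       # embeddings of a corrupted (negative) triple
    hinge = max(0.0, margin + transe_score(h, r, t) - transe_score(h2, r2, t2))
    return confidence * hinge

# Usage: a clean triple whose energy is 0, versus a corrupted tail.
pos = (np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 0.0]))
neg = (np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
loss = ckrl_loss(pos, neg, confidence=1.0, margin=2.0)
```

Because the loss is linear in the confidence term, down-weighting a triple is equivalent to shrinking its gradient, which is how noisy facts are prevented from distorting the learned embeddings.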