Ding, Ning
Prototypical Representation Learning for Relation Extraction
Ding, Ning, Wang, Xiaobin, Fu, Yao, Xu, Guangwei, Wang, Rui, Xie, Pengjun, Shen, Ying, Huang, Fei, Zheng, Hai-Tao, Zhang, Rui
Recognizing relations between entities is a pivotal task of relational learning. Learning relation representations from distantly-labeled datasets is difficult because of the abundant label noise and complicated expressions in human language. This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data that are effective in different settings, including supervised, distantly supervised, and few-shot learning. Instead of solely relying on the supervision from noisy labels, we propose to learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations. Prototypes are representations in the feature space abstracting the essential semantics of relations between entities in sentences. We learn prototypes based on objectives with clear geometric interpretation, where the prototypes are unit vectors uniformly dispersed in a unit ball, and statement embeddings are centered at the end of their corresponding prototype vectors on the surface of the ball. This approach allows us to learn meaningful, interpretable prototypes for the final classification. Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art models. We further demonstrate the robustness of the encoder and the interpretability of prototypes with extensive experiments.
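To make the geometric objective concrete, here is a minimal PyTorch sketch of two losses of the shape the abstract describes: one dispersing unit-norm prototypes over the sphere, and one centering statement embeddings at the endpoint of their relation's prototype vector. The function name and the specific loss forms (a max-similarity dispersion penalty and a cosine alignment term) are illustrative assumptions, not the paper's exact objectives.

```python
import torch
import torch.nn.functional as F

def prototype_losses(embeddings, labels, prototypes):
    """Sketch of the geometric objectives described in the abstract.

    embeddings: (B, d) statement embeddings from an encoder.
    labels:     (B,)   relation index for each statement.
    prototypes: (K, d) learnable relation prototypes.
    """
    protos = F.normalize(prototypes, dim=-1)  # prototypes as unit vectors on the sphere
    emb = F.normalize(embeddings, dim=-1)

    # Dispersion: push prototypes apart so they spread uniformly over the unit ball's surface.
    sim = protos @ protos.t()
    off_diag = sim - torch.eye(len(protos), device=sim.device)
    dispersion = off_diag.max(dim=1).values.mean()  # penalize each prototype's nearest neighbor

    # Alignment: center each statement embedding at its prototype's endpoint (cosine distance).
    alignment = (1 - (emb * protos[labels]).sum(-1)).mean()

    return dispersion + alignment
```

At inference time, classification under such a scheme reduces to assigning a statement embedding to its nearest prototype, which is what makes the learned prototypes directly interpretable.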
A Hybrid Task-Oriented Dialog System with Domain and Task Adaptive Pretraining
Zhang, Boliang, Lyu, Ying, Ding, Ning, Shen, Tianhao, Jia, Zhaoyang, Han, Kun, Knight, Kevin
This paper describes our submission to the End-to-end Multi-domain Task Completion Dialog shared task at the 9th Dialog System Technology Challenge (DSTC-9). Participants in the shared task build an end-to-end task-completion dialog system that is evaluated both by human evaluation and by a user-simulator-based automatic evaluation. Unlike traditional pipelined approaches, where modules are optimized individually and suffer from cascading failures, we propose an end-to-end dialog system that 1) uses GPT-2 (Generative Pre-trained Transformer 2) as the backbone to jointly solve the Natural Language Understanding, Dialog State Tracking, and Natural Language Generation tasks, 2) adopts Domain and Task Adaptive Pretraining to tailor GPT-2 to the dialog domain before fine-tuning, 3) utilizes heuristic pre-/post-processing rules that greatly simplify the prediction tasks and improve generalizability, and 4) is equipped with a fault-tolerance module that corrects errors and inappropriate responses. Our proposed method significantly outperforms baselines and ties for first place in the official evaluation. We make our source code publicly available.
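As an illustration of step 2), here is a minimal sketch of domain-adaptive pretraining: continuing GPT-2's language-modeling objective on in-domain dialog text before task fine-tuning, using the Hugging Face transformers and datasets libraries. The corpus file name and hyperparameters are hypothetical; this is not the authors' released code.

```python
from datasets import load_dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical in-domain corpus: one dialog transcript per line.
data = load_dataset("text", data_files={"train": "dialog_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = data["train"].map(tokenize, batched=True, remove_columns=["text"])

# Continue the causal LM objective (mlm=False) on dialog text.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-dialog",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The adapted checkpoint would then be fine-tuned on the supervised dialog tasks rather than used directly.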
On Computing Optimal Strategies in Open List Proportional Representation: The Two Parties Case
Ding, Ning (Hong Kong University of Science and Technology), Lin, Fangzhen (Hong Kong University of Science and Technology)
Open list proportional representation is an election mechanism used in many elections, including the 2012 Hong Kong Legislative Council Geographical Constituencies election. In this paper, we assume that there are just two parties in the election, and that the number of votes a list receives is the sum of the votes its candidates would each receive if they ran alone. Under these assumptions, we formulate the election as a mostly zero-sum game and show that while the game always has a pure Nash equilibrium, computing one is NP-hard.
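For context on the mechanism itself, list PR elections such as Hong Kong's Geographical Constituencies allocate seats by the largest-remainder method with a Hare quota. The sketch below is illustrative only; the paper's contribution is the game-theoretic analysis of how parties should split candidates across lists, not this allocation procedure.

```python
def allocate_seats(votes, seats):
    """Largest-remainder seat allocation with a Hare quota.

    votes: vote totals per list; returns seats won per list.
    """
    total = sum(votes)
    quota = total / seats                       # Hare quota
    alloc = [int(v // quota) for v in votes]    # full quotas first
    remainders = [v - a * quota for v, a in zip(votes, alloc)]
    # Hand the leftover seats to the lists with the largest remainders.
    leftover = seats - sum(alloc)
    for i in sorted(range(len(votes)), key=lambda i: -remainders[i])[:leftover]:
        alloc[i] += 1
    return alloc

# With 5 seats and vote totals 100000 / 80000 / 45000, the quota is 45000,
# so full quotas give [2, 1, 1] and the largest remainder (35000) takes the
# last seat: [2, 2, 1].
print(allocate_seats([100000, 80000, 45000], 5))
```

Because leftover seats hinge on how remainders fall, splitting a party's supporters across several lists can change its seat count, which is the source of the strategic problem the paper studies.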