Co-training an Unsupervised Constituency Parser with Weak Supervision Artificial Intelligence

We introduce a method for unsupervised parsing that relies on bootstrapping classifiers to identify whether a node dominates a specific span in a sentence. There are two types of classifiers: an inside classifier that acts on a span, and an outside classifier that acts on everything outside of a given span. Through self-training and co-training with the two classifiers, we show that the interplay between them improves the accuracy of both and, as a result, yields effective parsing. A seed bootstrapping technique prepares the data to train these classifiers. Our analyses further validate that this approach, in conjunction with weak supervision from prior branching knowledge of a known language (left/right-branching) and minimal heuristics, injects strong inductive bias into the parser, achieving 63.1 F1 on the English (PTB) test set. In addition, we show the effectiveness of our architecture by evaluating on treebanks for Chinese (CTB) and Japanese (KTB), achieving new state-of-the-art results. (For code or data, please contact the authors.)
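The co-training loop the abstract describes can be sketched in miniature: two classifiers trained on different "views" of the data (here standing in for the inside and outside views of a span) alternately label examples the other view is confident about. This is a toy illustration with a tiny centroid-based classifier; the actual classifier architectures, features, and confidence thresholds in the paper are not reflected here.

```python
class CentroidClassifier:
    """Tiny nearest-centroid classifier over one feature view (labels 0/1)."""
    def fit(self, X, y):
        self.cent = {}
        for label in set(y):
            pts = [x for x, l in zip(X, y) if l == label]
            self.cent[label] = [sum(c) / len(pts) for c in zip(*pts)]
        return self
    def predict_proba(self, x):
        # Confidence of label 1 = relative closeness to the label-1 centroid.
        d = {l: sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5
             for l, c in self.cent.items()}
        total = sum(d.values()) or 1e-9
        return 1 - d[1] / total

def co_train(view_a, view_b, labels, unlabeled_a, unlabeled_b,
             rounds=3, threshold=0.8):
    """Each round, retrain both views on the labeled pool, then let each
    view's confident predictions supply new labels for the other view."""
    clf_a, clf_b = CentroidClassifier(), CentroidClassifier()
    Xa, Xb, y = list(view_a), list(view_b), list(labels)
    for _ in range(rounds):
        clf_a.fit(Xa, y)
        clf_b.fit(Xb, y)
        keep = []
        for xa, xb in zip(unlabeled_a, unlabeled_b):
            pa, pb = clf_a.predict_proba(xa), clf_b.predict_proba(xb)
            if pb >= threshold or pb <= 1 - threshold:
                Xa.append(xa); Xb.append(xb); y.append(int(pb >= threshold))
            elif pa >= threshold or pa <= 1 - threshold:
                Xa.append(xa); Xb.append(xb); y.append(int(pa >= threshold))
            else:
                keep.append((xa, xb))  # still too uncertain; try next round
        unlabeled_a, unlabeled_b = zip(*keep) if keep else ([], [])
    return clf_a, clf_b
```

The key design point mirrored from the abstract is that neither classifier trains on its own confident mistakes alone: new labels flow between the two views, which is what lets their interplay improve both.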

Learning Syntax from Naturally-Occurring Bracketings Artificial Intelligence

Naturally-occurring bracketings, such as answer fragments to natural language questions and hyperlinks on webpages, can reflect human syntactic intuition regarding phrasal boundaries. Their availability and approximate correspondence to syntax make them appealing as distant information sources to incorporate into unsupervised constituency parsing. But they are noisy and incomplete; to address this challenge, we develop a partial-brackets-aware structured ramp loss in learning. Experiments demonstrate that our distantly-supervised models trained on naturally-occurring bracketing data are more accurate in inducing syntactic structures than competing unsupervised systems. On the English WSJ corpus, our models achieve an unlabeled F1 score of 68.9 for constituency parsing.
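A ramp loss of the kind described compares a cost-augmented best structure against a cost-diminished one, where the cost penalizes trees that violate the observed partial brackets. The sketch below enumerates all binary bracketings of a short sentence by brute force; the cost definition and the enumeration are illustrative stand-ins for the paper's actual loss and decoder.

```python
def trees(i, j):
    """All binary bracketings of span [i, j) as frozensets of spans."""
    if j - i == 1:
        return [frozenset()]
    out = []
    for k in range(i + 1, j):
        for left in trees(i, k):
            for right in trees(k, j):
                out.append(left | right | {(i, j)})
    return out

def ramp_loss(score, gold_brackets, n):
    """Partial-brackets-aware ramp loss (sketch).

    score: dict mapping span -> model score; gold_brackets: the observed
    (possibly incomplete) bracket set; n: sentence length in tokens.
    """
    def s(t):
        return sum(score.get(span, 0.0) for span in t)
    def cost(t):
        # Count only missed *observed* brackets, so incompleteness of the
        # naturally-occurring annotation does not penalize extra structure.
        return sum(b not in t for b in gold_brackets)
    all_trees = trees(0, n)
    augmented = max(s(t) + cost(t) for t in all_trees)   # worst violator
    diminished = max(s(t) - cost(t) for t in all_trees)  # best compatible
    return augmented - diminished
```

The loss is zero exactly when the model's top-scoring structure already respects every observed bracket, which is the behavior a distant, partial supervision signal calls for.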

Jointly Parse and Fragment Ungrammatical Sentences

AAAI Conferences

However, the sentences under analysis may not always be grammatically correct. When a dependency parser nonetheless produces fully connected, syntactically well-formed trees for these sentences, the trees may be inappropriate and lead to errors. In fact, researchers have raised valid questions about the merit of annotating dependency trees for ungrammatical sentences (Ragheb and Dickinson 2012; Cahill 2015). In experiments, we find that both joint methods produce tree fragment sets that are more similar to those produced by the oracle method than the previous pipeline method; moreover, the seq2seq method's pruning decision has significantly higher accuracy. In terms of downstream applications, we show that dependency arc pruning is helpful for two applications: sentential grammaticality judgment and semantic role labeling.
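The core operation here, dependency arc pruning, can be sketched as: drop low-confidence arcs from a parsed tree, leaving a forest of tree fragments. The confidence scores, threshold, and head-array representation below are illustrative assumptions, not the paper's actual model.

```python
def prune_arcs(heads, conf, threshold=0.5):
    """Prune unreliable dependency arcs, returning tree fragments.

    heads: dict token -> head token (0 is the artificial root);
    conf: dict token -> confidence of the arc from that token to its head.
    Tokens whose arcs fall below threshold are detached, and the remaining
    arcs partition the sentence into fragments (connected components).
    """
    kept = {i: h for i, h in heads.items() if conf[i] >= threshold}
    # Union-find over tokens, joined by the surviving arcs.
    parent = {i: i for i in heads}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i, h in kept.items():
        if h in parent:  # skip arcs into the artificial root
            ra, rb = find(i), find(h)
            if ra != rb:
                parent[ra] = rb
    fragments = {}
    for i in heads:
        fragments.setdefault(find(i), []).append(i)
    return [sorted(v) for v in fragments.values()]
```

For an ungrammatical sentence, the pruned output is a set of well-formed fragments rather than one forced, fully connected tree, which is what the downstream grammaticality-judgment and SRL applications consume.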

$R^3$: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge Artificial Intelligence

We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence. Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm: reversal of valence, and semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the listener. While prior work on sarcasm generation predominantly focuses on context incongruity, we show that combining valence reversal and semantic incongruity based on commonsense knowledge generates sarcasm of higher quality. Human evaluation shows that our system generates sarcasm better than human annotators 34% of the time, and better than a reinforced hybrid baseline 90% of the time.
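The two characteristics named in the abstract can be illustrated with a deliberately tiny sketch: valence reversal via an antonym swap, and a crude incongruity-based ranking of retrieved contexts. The lexicon and the overlap-based scoring below are toy stand-ins; the paper's actual valence reversal and commonsense retrieval are far richer.

```python
# Toy antonym lexicon standing in for a real sentiment/antonym resource.
ANTONYMS = {"love": "hate", "hate": "love", "great": "terrible",
            "terrible": "great", "best": "worst", "worst": "best"}

def reverse_valence(sentence):
    """Flip the sentence's polarity by swapping sentiment-bearing words."""
    return " ".join(ANTONYMS.get(w, w) for w in sentence.lower().split())

def rank_contexts(reversed_sent, contexts):
    """Rank retrieved commonsense contexts for the reversed sentence,
    preferring low lexical overlap as a crude proxy for incongruity."""
    words = set(reversed_sent.split())
    def overlap(context):
        return len(words & set(context.lower().split()))
    return sorted(contexts, key=overlap)
```

The point of the sketch is the division of labor: the edit step flips valence, while retrieval and ranking supply a context whose mismatch with the edited sentence carries the sarcastic effect.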