Grammars & Parsing


Trump's staffers reportedly write his tweets with deliberate grammatical errors

Mashable

It has long been suspected that Trump doesn't write all of his tweets. Some are penned by White House staffers, and according to a Boston Globe report, these tweets come complete with grammatical errors and irregularities that are intentionally included to make them sound as if Trump wrote them. The report comes from two White House sources who told the newspaper that staffers would copy Trump's style of expression, including the overuse of exclamation points, the capitalization of words for emphasis, sentence fragments, and loosely connected ideas. While grammatical errors were introduced, staffers reportedly did not intentionally misspell words or names.


Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback

arXiv.org Machine Learning

Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historical system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data.
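
As a rough illustration of the reweighting idea this abstract refers to, the sketch below computes a self-normalized inverse-propensity estimate of expected reward from logged feedback. It is a minimal sketch, not the authors' implementation: the function name `snips_objective` and the toy data are assumptions, and the paper's actual estimator and its integration with stochastic gradient training differ in detail.

```python
# Minimal sketch of counterfactual learning with a reweighted
# (self-normalized) inverse-propensity estimator. Names are illustrative.
import numpy as np

def snips_objective(log_p_target, log_p_logger, rewards):
    """Self-normalized IPS estimate of expected reward.

    log_p_target: log pi_w(y_i | x_i) under the system being trained
    log_p_logger: log pi_0(y_i | x_i) under the logged (historical) system
    rewards:      logged human feedback for each output y_i
    """
    ratios = np.exp(log_p_target - log_p_logger)  # importance weights
    # Self-normalization guards against the known degeneracy where the
    # learner can inflate the estimate simply by raising all probabilities.
    return np.sum(rewards * ratios) / np.sum(ratios)

# Toy usage with three logged parses:
log_p_t = np.log(np.array([0.5, 0.1, 0.4]))
log_p_0 = np.log(np.array([0.3, 0.3, 0.4]))
feedback = np.array([1.0, 0.0, 0.5])
print(snips_objective(log_p_t, log_p_0, feedback))
```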


c-rater: Automatic Content Scoring for Short Constructed Responses

AAAI Conferences

The education community is moving towards constructed or free-text responses and computer-based assessment. At the same time, progress in natural language processing and knowledge representation has made it possible to consider free-text or constructed responses without having to fully understand the text.


Computational Considerations in Correcting User-Language

AAAI Conferences

This study evaluates the robustness of established computational indices used to assess text relatedness in user-language. The original User-Language Paraphrase Corpus (ULPC) was compared to a corrected version in which each paraphrase was corrected for typographical and grammatical errors. Error correction significantly affected the values of each of five computational indices, indicating greater similarity of the target sentence to the corrected paraphrase than to the original paraphrase. Moreover, misspelled target words accounted for a large proportion of the differences. This study also evaluated potential effects on correlations between computational indices and human ratings of paraphrases. The corrections did not yield assessments that were any more or less comparable to those of trained human raters than were the original paraphrases containing typographical or grammatical errors. The results suggest that although correcting for errors may optimize certain computational indices, the corrections are not necessary for comparing the indices to expert ratings.
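
To make concrete how a typo can depress a relatedness index, the sketch below implements one simple member of that family: a bag-of-words cosine similarity between a target sentence and a paraphrase. The ULPC indices themselves (e.g., LSA-based measures) are more sophisticated; this stand-in is an assumption for illustration only.

```python
# Toy "computational index": cosine relatedness over word counts.
from collections import Counter
import math

def cosine_relatedness(target: str, paraphrase: str) -> float:
    a = Counter(target.lower().split())
    b = Counter(paraphrase.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A misspelled word ("beleive" vs. "believe") removes all lexical overlap
# for that token, which is one way spelling errors lower such indices.
print(cosine_relatedness("I believe the answer", "I beleive the answer"))
print(cosine_relatedness("I believe the answer", "I believe the answer"))
```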


Progressive refinement: a method of coarse-to-fine image parsing using stacked network

arXiv.org Artificial Intelligence

Parsing images into fine-grained semantic parts is difficult for off-the-shelf semantic segmentation networks because of the complexity of the fine-grained elements. In this paper, we propose to parse images from coarse to fine with progressively refined semantic classes. This is achieved by stacking the segmentation layers of a segmentation network several times. The earlier segmentation module parses images at a coarser-grained level, and its result is fed to the following module to provide effective contextual clues for finer-grained parsing. To recover the details of small structures, we add skip connections from shallow layers of the network to the fine-grained parsing modules. To train the network, we merge classes in the ground truth to obtain coarse-to-fine label maps, and train the stacked network end-to-end with this hierarchical supervision. Our coarse-to-fine stacked framework can be injected into many advanced neural networks to improve parsing results. Extensive evaluations on several public datasets, including face parsing and human parsing, demonstrate the superiority of our method.
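
The sketch below shows the stacking pattern the abstract describes, written in PyTorch: a coarse segmentation head whose output is concatenated with shallow features (the skip connection) and fed to a finer-grained head. The layer sizes, two-stage depth, and class counts are assumptions of this sketch, not the paper's actual architecture.

```python
# Minimal sketch of a stacked coarse-to-fine segmentation network.
import torch
import torch.nn as nn

class CoarseToFineParser(nn.Module):
    def __init__(self, in_ch=3, coarse_classes=5, fine_classes=20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        # First segmentation module: coarse-grained classes.
        self.coarse_head = nn.Conv2d(64, coarse_classes, 1)
        # Second module consumes features plus coarse predictions as context.
        self.fine_head = nn.Conv2d(64 + coarse_classes, fine_classes, 1)

    def forward(self, x):
        feats = self.backbone(x)                     # shallow features act as the skip
        coarse = self.coarse_head(feats)             # coarser-grained parsing
        fine_in = torch.cat([feats, coarse], dim=1)  # coarse result feeds the next stage
        fine = self.fine_head(fine_in)               # finer-grained parsing
        return coarse, fine  # supervised with merged and full label maps, respectively
```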


Weakly-supervised Semantic Parsing with Abstract Examples

arXiv.org Artificial Intelligence

Semantic parsers translate language utterances to programs, but are often trained from utterance-denotation pairs only. Consequently, parsers must overcome the problem of spuriousness at training time, where an incorrect program found during search accidentally leads to a correct denotation. We propose that in small, well-typed domains, we can semi-automatically generate an abstract representation of examples that facilitates information sharing across examples. This alleviates spuriousness, as the probability of an incorrect program randomly yielding a correct answer decreases across multiple examples. We test our approach on CNLVR, a challenging visual reasoning dataset, where spuriousness is central because denotations are either TRUE or FALSE, and thus random programs have a high probability of leading to a correct denotation. We develop the first semantic parser for this task and reach 83.5% accuracy, a 15.7% absolute accuracy improvement over the best previously reported accuracy.
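
The sketch below illustrates the abstraction step in the simplest possible form: concrete domain constants in an utterance are mapped to typed placeholders, so that structurally identical examples collapse to the same abstract example and can share evidence. The cluster vocabulary is an invented toy; the paper's abstraction is semi-automatic and also applies to programs.

```python
# Toy abstraction of utterances: replace constants with typed placeholders.
ABSTRACT_CLUSTERS = {
    "yellow": "COLOR", "black": "COLOR", "blue": "COLOR",
    "circle": "SHAPE", "square": "SHAPE", "triangle": "SHAPE",
    "2": "NUMBER", "3": "NUMBER",
}

def abstract_utterance(tokens):
    """Map concrete tokens to abstract types, keeping everything else."""
    return [ABSTRACT_CLUSTERS.get(t, t) for t in tokens]

u1 = "there is a yellow circle".split()
u2 = "there is a black square".split()
# Both utterances map to the same abstract example, so a correct program
# found for one serves as evidence for the other, reducing spuriousness.
assert abstract_utterance(u1) == abstract_utterance(u2)
```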


Decoupling Structure and Lexicon for Zero-Shot Semantic Parsing

arXiv.org Artificial Intelligence

Building a semantic parser quickly in a new domain is a fundamental challenge for conversational interfaces, as current semantic parsers require expensive supervision and lack the ability to generalize to new domains. In this paper, we introduce a zero-shot approach to semantic parsing that can parse utterances in unseen domains while being trained only on examples from other source domains. First, we map an utterance to an abstract, domain-independent logical form that represents the structure of the logical form but contains slots instead of KB constants. Then, we replace the slots with KB constants via lexical alignment scores and global inference. Our model reaches an average accuracy of 53.1% on 7 domains in the Overnight dataset, substantially better than other zero-shot baselines, and performs as well as a parser trained on over 30% of the target domain's examples.
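
As a rough sketch of the two-stage decoupling described above: first a domain-independent structure with slots, then slot filling by lexical alignment against KB constants. The scoring function and tiny lexicon below are invented for illustration, and the paper's global inference step is omitted here.

```python
# Toy slot filling for a structure-first, lexicon-second semantic parser.
def align_score(word: str, constant: str) -> float:
    """Toy lexical alignment: character-set overlap between word and constant."""
    a, b = set(word.lower()), set(constant.lower())
    return len(a & b) / len(a | b)

def fill_slots(abstract_lf: str, slot_words: dict, kb_constants: list) -> str:
    lf = abstract_lf
    for slot, word in slot_words.items():
        best = max(kb_constants, key=lambda c: align_score(word, c))
        lf = lf.replace(slot, best)
    return lf

abstract_lf = "answer(filter(REL_0, ENT_0))"          # structure, no KB constants
slot_words = {"REL_0": "published", "ENT_0": "2015"}  # words aligned to slots
kb = ["publicationDate", "author", "venue", "2015"]
print(fill_slots(abstract_lf, slot_words, kb))
# -> answer(filter(publicationDate, 2015))
```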


Stylistic Variation in Social Media Part-of-Speech Tagging

arXiv.org Artificial Intelligence

Social media features substantial stylistic variation, raising new challenges for the syntactic analysis of online writing. However, this variation is often aligned with author attributes such as age, gender, and geography, as well as with more readily available social network metadata. In this paper, we report new evidence on the link between language and social networks in the task of part-of-speech tagging. We find that tagger error rates are correlated with network structure, with high accuracy in some parts of the network and lower accuracy elsewhere. As a result, tagger accuracy depends on training from a balanced sample of the network, rather than on texts from a narrow subcommunity. We also describe our attempts to add robustness to stylistic variation by building a mixture-of-experts model in which each expert is associated with a region of the social network. While prior work found that similar approaches yield performance improvements in sentiment analysis and entity linking, we were unable to obtain performance improvements in part-of-speech tagging, despite strong evidence for the link between part-of-speech error rates and social network structure.
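
The sketch below shows the general mixture-of-experts shape the abstract describes: one expert tagger per network region, combined by weights derived from an author's network position. The expert and gating internals are stand-ins, and note the authors report that this did not improve tagging accuracy in their experiments.

```python
# Minimal sketch of region-gated mixture-of-experts tag scoring.
import numpy as np

def moe_tag_scores(token_feats, author_region_probs, experts):
    """Combine per-expert tag scores, weighted by the author's soft
    assignment to social network regions.

    token_feats:         feature vector for one token
    author_region_probs: shape (n_experts,), sums to 1 (the gate)
    experts:             callables, each mapping feats -> per-tag scores
    """
    scores = np.stack([e(token_feats) for e in experts])  # (n_experts, n_tags)
    return author_region_probs @ scores                   # weighted mixture

# Example with two trivial "experts" over a 3-tag space:
experts = [lambda f: np.array([0.7, 0.2, 0.1]),
           lambda f: np.array([0.1, 0.3, 0.6])]
print(moe_tag_scores(None, np.array([0.8, 0.2]), experts))
```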


A Language for Function Signature Representations

arXiv.org Artificial Intelligence

Recent work (Richardson and Kuhn, 2017a,b; Richardson et al., 2018) looks at semantic parser induction and question answering in the domain of source code libraries and APIs. In this brief note, we formalize the representations being learned in these studies and introduce a simple domain-specific language and a systematic translation from this language to first-order logic. By recasting the target representations in terms of classical logic, we aim to broaden the applicability of existing code datasets for investigating more complex natural language understanding and reasoning problems in the software domain.
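
To make the kind of translation described above concrete, the sketch below turns a typed function signature into a first-order-logic-style formula in which argument types become antecedent predicates. The DSL syntax, predicate names, and output notation are assumptions of this sketch, not the note's actual formalization.

```python
# Toy translation of a function signature into a first-order formula.
def signature_to_fol(name: str, arg_types: list, ret_type: str) -> str:
    """E.g. max : (int, int) -> int becomes
    forall x0, x1. (int(x0) & int(x1)) -> int(max(x0, x1))"""
    vars_ = [f"x{i}" for i in range(len(arg_types))]
    antecedent = " & ".join(f"{t}({v})" for t, v in zip(arg_types, vars_))
    call = f"{name}({', '.join(vars_)})"
    return f"forall {', '.join(vars_)}. ({antecedent}) -> {ret_type}({call})"

print(signature_to_fol("max", ["int", "int"], "int"))
```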


Reference-less Measure of Faithfulness for Grammatical Error Correction

arXiv.org Artificial Intelligence

We propose USim, a semantic measure for Grammatical Error Correction (GEC) that measures the semantic faithfulness of the output to the source, thereby complementing existing reference-less measures (RLMs) for measuring the output's grammaticality. USim operates by comparing the semantic symbolic structure of the source and the correction, without relying on manually curated references. Our experiments establish the validity of USim by showing that (1) semantic annotation can be consistently applied to ungrammatical text; (2) valid corrections obtain a high USim similarity score to the source; and (3) invalid corrections obtain a lower score. Our code is available at https://github.com/borgr/USim.
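
The sketch below illustrates the comparison step in a USim-style score: parse the source and the correction into semantic graphs and compare their edge sets with an F1 measure. The real measure compares UCCA structures produced by a parser; the edge representation and toy graphs here are illustrative assumptions only.

```python
# Toy USim-style faithfulness score: F1 over semantic-graph edges.
def edge_f1(src_edges: set, cor_edges: set) -> float:
    if not src_edges or not cor_edges:
        return 0.0
    overlap = len(src_edges & cor_edges)
    p = overlap / len(cor_edges)   # precision w.r.t. the correction
    r = overlap / len(src_edges)   # recall w.r.t. the source
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Identical semantic structures (a meaning-preserving correction) score 1.0.
src = {("eat", "A", "dog"), ("eat", "P", "eat"), ("eat", "A", "bone")}
cor = {("eat", "A", "dog"), ("eat", "P", "eat"), ("eat", "A", "bone")}
print(edge_f1(src, cor))
```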