Learning to combine Grammatical Error Corrections

arXiv.org Artificial Intelligence

The field of Grammatical Error Correction (GEC) has produced various systems that address either focused error phenomena or general text editing. We propose an automatic way to combine black-box systems. Our method automatically detects the strength of a single system, or of a combination of several systems, per error type, improving precision and recall while optimizing the $F$ score directly. We show consistent improvement over the best standalone system in all tested configurations. The approach also outperforms average ensembling of different RNN models with random initializations. In addition, we analyze the use of BERT for GEC, reporting promising results. We also present a spellchecker created for this task, which outperforms standard spellcheckers. This paper describes a system submission to the Building Educational Applications 2019 Shared Task: Grammatical Error Correction. Combining the outputs of the top BEA 2019 shared task systems with our approach currently holds the highest reported score in the open phase of the task, improving $F_{0.5}$ by 3.7 points over the best previously reported result.
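A minimal sketch of the per-error-type combination idea described above (this is not the authors' implementation; the edit counts, system names, and greedy selection rule are illustrative assumptions): each candidate system's corrections of a given error type are kept only if doing so raises the corpus-level $F_{0.5}$.

def f_beta(tp, fp, fn, beta=0.5):
    """F_beta from edit-level counts (F_0.5 weighs precision twice as much as recall)."""
    if tp == 0:
        return 0.0
    p, r = tp / (tp + fp), tp / (tp + fn)
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# Toy edit-level counts per (system, error type): (true positives, false positives).
counts = {
    ("sysA", "SPELL"): (40, 5), ("sysA", "VERB"): (10, 30),
    ("sysB", "SPELL"): (25, 20), ("sysB", "VERB"): (35, 8),
}
total_gold = 120  # total gold-standard edits (toy number)

def corpus_f05(selected):
    tp = sum(counts[k][0] for k in selected)
    fp = sum(counts[k][1] for k in selected)
    return f_beta(tp, fp, total_gold - tp)

# Greedy selection: keep a (system, error type) pair only if it raises corpus F_0.5.
selected = set()
for key in sorted(counts, key=lambda k: counts[k][0], reverse=True):
    if corpus_f05(selected | {key}) > corpus_f05(selected):
        selected.add(key)

print(sorted(selected), round(corpus_f05(selected), 3))

On this toy data the selection keeps sysA for spelling edits and sysB for verb edits, illustrating how a per-type combination can beat any single system.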


Software for Data Mining, Analytics, Data Science, and Knowledge Discovery

AITopics Original Links

Software categories include:

- Classification software: building models to separate 2 or more discrete classes using multiple methods (Decision Tree, Rules, Neural, Bayesian, SVM, Genetic, Rough Sets, Fuzzy Logic, and other approaches), with analysis of results and ROC
- Social Network Analysis, Link Analysis, and Visualization software
- Text Analysis, Text Mining, and Information Retrieval (IR)
- Web Analytics and Social Media Analytics software
- BI (Business Intelligence), Database and OLAP software
- Data Transformation, Data Cleaning, Data Cleansing
- Libraries, Components and Developer Kits for creating embedded data mining applications
- Web Content Mining, web scraping, screen scraping


Robust Maximum Likelihood Estimation of Sparse Vector Error Correction Model

arXiv.org Machine Learning

In econometrics and finance, the vector error correction model (VECM) is an important time series model for cointegration analysis, used to estimate long-run equilibrium relationships among variables. The traditional analysis and estimation methodologies assume an underlying Gaussian distribution but, in practice, heavy-tailed data and outliers can render these methods inapplicable. In this paper, we propose a robust model estimation method based on the Cauchy distribution to tackle this issue. In addition, sparse cointegration relations are considered to realize feature selection and dimension reduction. An efficient algorithm based on the majorization-minimization (MM) method is applied to solve the proposed nonconvex problem. The performance of this algorithm is demonstrated through numerical simulations.
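For orientation, the VECM the abstract refers to is the standard cointegration specification (generic textbook notation, not this paper's exact formulation):

$$
\Delta \mathbf{y}_t = \boldsymbol{\Pi}\,\mathbf{y}_{t-1} + \sum_{i=1}^{p-1} \boldsymbol{\Gamma}_i\, \Delta \mathbf{y}_{t-i} + \boldsymbol{\varepsilon}_t,
\qquad \boldsymbol{\Pi} = \boldsymbol{\alpha}\boldsymbol{\beta}^{\top},
$$

where the columns of $\boldsymbol{\beta}$ span the cointegration space (the natural target for the sparsity considered here) and $\boldsymbol{\varepsilon}_t$ is Gaussian in the classical setting, replaced in this work by a heavy-tailed Cauchy model.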


Reaching Human-level Performance in Automatic Grammatical Error Correction: An Empirical Study

arXiv.org Artificial Intelligence

Neural sequence-to-sequence (seq2seq) approaches have proven to be successful in grammatical error correction (GEC). Based on the seq2seq framework, we propose a novel fluency boost learning and inference mechanism. Fluency boost learning generates diverse error-corrected sentence pairs during training, enabling the error correction model to learn how to improve a sentence's fluency from more instances, while fluency boost inference allows the model to correct a sentence incrementally over multiple inference steps. Combining fluency boost learning and inference with convolutional seq2seq models, our approach achieves state-of-the-art performance: 75.72 $F_{0.5}$ on the CoNLL-2014 10-annotation dataset and 62.42 GLEU on the JFLEG test set, becoming the first GEC system to reach human-level performance (72.58 for CoNLL and 62.37 for JFLEG) on both benchmarks.
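A hedged sketch of the multi-round inference loop the abstract describes: keep re-running a corrector while a fluency score keeps improving. The functions `correct` and `fluency` are hypothetical stand-ins, not the paper's actual models.

def fluency_boost_inference(sentence, correct, fluency, max_rounds=5):
    best, best_score = sentence, fluency(sentence)
    for _ in range(max_rounds):
        candidate = correct(best)         # one seq2seq correction pass
        score = fluency(candidate)        # e.g., a language-model-based score
        if score <= best_score:           # stop once fluency no longer improves
            break
        best, best_score = candidate, score
    return best

# Toy usage with dummy stand-ins:
if __name__ == "__main__":
    fixes = {"she go home .": "she goes home ."}
    out = fluency_boost_inference(
        "she go home .",
        correct=lambda s: fixes.get(s, s),
        fluency=lambda s: -len(s) + (10 if "goes" in s else 0),  # dummy fluency score
    )
    print(out)  # -> "she goes home ."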


Designing communication systems via iterative improvement: error correction coding with Bayes decoder and codebook optimized for source symbol error

arXiv.org Artificial Intelligence

In error correction coding (ECC), the typical error metric is the bit error rate (BER), which counts the number of bit errors. Under this metric, the positions of the bits are not relevant to the decoding and, in many noise models, not relevant to the BER either. In many applications this is unsatisfactory, since not all bits are equal: they differ in significance. We look at ECC from a Bayesian perspective and introduce Bayes estimators with general loss functions to take bit significance into account. We propose ECC schemes that optimize this error metric. As the problem is highly nonlinear, traditional ECC construction techniques are not applicable, and we use iterative-improvement search techniques to find good codebooks. We provide numerical experiments showing that they can be superior to classical linear block codes such as Hamming codes and to decoding methods such as minimum distance decoding.
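A minimal sketch of the Bayes decoder idea above: choose the codeword that minimizes the posterior expected loss, where the loss weights bit positions by significance instead of just counting bit errors. The codebook, position weights, and channel parameter below are toy values for illustration, not taken from the paper.

codebook = ["000", "011", "101", "110"]   # toy even-parity codebook
weights = [4.0, 2.0, 1.0]                 # per-position significance (MSB heaviest)
p_flip = 0.1                              # crossover probability of a binary symmetric channel

def likelihood(received, codeword):
    """P(received | codeword) under a binary symmetric channel."""
    prob = 1.0
    for r, c in zip(received, codeword):
        prob *= (1 - p_flip) if r == c else p_flip
    return prob

def loss(truth, decision):
    """Significance-weighted bit error, rather than plain Hamming distance."""
    return sum(w for w, t, d in zip(weights, truth, decision) if t != d)

def bayes_decode(received):
    # Posterior over codewords (uniform prior), then minimum expected loss decision.
    post = {c: likelihood(received, c) for c in codebook}
    z = sum(post.values())
    post = {c: p / z for c, p in post.items()}
    return min(codebook, key=lambda d: sum(p * loss(c, d) for c, p in post.items()))

for received in ["001", "111"]:
    print(received, "->", bayes_decode(received))

With a uniform weight vector this reduces to ordinary minimum-distance-style decoding; the weighted loss is what makes the decoder prefer decisions that protect the more significant positions.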