error correction


Bucketed PCA Neural Networks with Neurons Mirroring Signals

arXiv.org Artificial Intelligence

The bucketed PCA neural network (PCA-NN) with transforms is developed here in an effort to benchmark deep neural networks (DNNs) on supervised classification problems. Most classical PCA models apply PCA to the entire training data set to establish a reductive representation and then employ non-network tools such as high-order polynomial classifiers. In contrast, the bucketed PCA-NN applies PCA to individual buckets, which are constructed in two consecutive phases, while retaining a genuine neural network architecture. This facilitates a fair apples-to-apples comparison with DNNs, especially to reveal that a major chunk of the accuracy achieved by many impressive DNNs could possibly be explained by the bucketed PCA-NN (e.g., 96% out of 98% for the MNIST data set). Compared with most DNNs, the three building blocks of the bucketed PCA-NN are easier to comprehend conceptually: PCA, transforms, and bucketing for error correction. Furthermore, unlike the somewhat quasi-random neurons ubiquitously observed in DNNs, the PCA neurons resemble or mirror the input signals and are therefore more straightforward to decipher.
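A minimal sketch of the per-bucket PCA idea, under the simplifying assumption of one bucket per class label (the paper constructs buckets in two phases and adds transforms). The function names and the reconstruction-error classifier are illustrative, not the authors' implementation:

```python
# Per-bucket PCA classification sketch: fit one PCA per bucket, then
# assign a sample to the bucket whose subspace reconstructs it best.
import numpy as np
from sklearn.decomposition import PCA

def fit_buckets(X, y, n_components=20):
    """Fit one PCA per bucket; its components act as 'PCA neurons'."""
    return {c: PCA(n_components=n_components).fit(X[y == c])
            for c in np.unique(y)}

def classify(x, buckets):
    """Pick the bucket with the smallest PCA reconstruction error."""
    errs = {}
    for c, pca in buckets.items():
        z = pca.transform(x[None, :])           # project onto PCA neurons
        x_hat = pca.inverse_transform(z)        # reconstruct within bucket
        errs[c] = np.linalg.norm(x - x_hat[0])  # reconstruction error
    return min(errs, key=errs.get)
```

Each bucket's principal components are linear combinations of training samples, which is why such "PCA neurons" visibly resemble the input signals.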


A comparison of combined data assimilation and machine learning methods for offline and online model error correction

arXiv.org Machine Learning

Recent studies have shown that it is possible to combine machine learning methods with data assimilation to reconstruct a dynamical system using only sparse and noisy observations of that system. The same approach can be used to correct the error of a knowledge-based model. The resulting surrogate model is hybrid, with a statistical part supplementing a physical part. In practice, the correction can be added as an integrated term (i.e., in the model resolvent) or directly inside the tendencies of the physical model. The resolvent correction is easy to implement. The tendency correction is more technical, in particular because it requires the adjoint of the physical model, but it is also more flexible. We use the two-scale Lorenz model to compare the two methods. The accuracy in long-range forecast experiments is somewhat similar for the surrogate models using the resolvent correction and the tendency correction. By contrast, the surrogate models using the tendency correction significantly outperform the surrogate models using the resolvent correction in data assimilation experiments. Finally, we show that the tendency correction opens the possibility of online model error correction, i.e., improving the model progressively as new observations become available. The resulting algorithm can be seen as a new formulation of weak-constraint 4D-Var. We compare online and offline learning using the same framework with the two-scale Lorenz system, and show that with online learning, it is possible to extract all the information from sparse and noisy observations.
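The difference between the two corrections can be made concrete with a toy integrator. The sketch below is illustrative only; `physical_tendency` and `ml_correction` are stand-ins, not the paper's two-scale Lorenz setup:

```python
# Contrast of resolvent vs. tendency correction on a toy ODE model.
import numpy as np

def physical_tendency(x):
    return -x                    # stand-in for the knowledge-based model

def ml_correction(x):
    return 0.1 * np.tanh(x)     # stand-in for the learned error term

def step_resolvent(x, dt):
    """Resolvent correction: integrate the physical model, then add the
    learned term to the flow map (easy: no model internals needed)."""
    x_phys = x + dt * physical_tendency(x)   # one explicit-Euler step
    return x_phys + ml_correction(x)

def step_tendency(x, dt):
    """Tendency correction: add the learned term inside the ODE
    right-hand side (more flexible, but training needs the adjoint)."""
    return x + dt * (physical_tendency(x) + ml_correction(x))
```

In the resolvent variant the learned term is bolted onto the flow map after integration, so no access to the model internals is needed; in the tendency variant it enters the right-hand side of the ODE, which is what makes the adjoint necessary during training.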


Advantages and Bottlenecks of Quantum Machine Learning for Remote Sensing

arXiv.org Artificial Intelligence

Building on recent theoretical proposals, initial practical studies suggest that these concepts have the possibility to be implemented in the laboratory, under strictly controlled conditions [4], and open the way to the evolution of their employment and validation. We explore quantum classification techniques, focusing on remote sensing applications, and discuss the bottlenecks of performing these algorithms on currently available open-source platforms. Initial results demonstrate feasibility. Next steps include expanding…


Factual Error Correction of Claims

arXiv.org Artificial Intelligence

This paper introduces the task of factual error correction: performing edits to a claim so that the generated rewrite is supported by evidence. This serves two purposes: first, it provides a mechanism to correct written texts that contain misinformation; second, it acts as an inherent explanation for claims already partially supported by evidence. We demonstrate that factual error correction is possible without any additional training data by using distant supervision and retrieved evidence. We release a dataset of 65,000 instances, based on a recent fact verification dataset, to compare our distantly supervised method to a fully supervised ceiling system. Our manual evaluation indicates which automated evaluation metrics best correlate with human judgements of factuality and whether errors were actually corrected.
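One way to picture the distant-supervision idea is a mask-then-infill loop: claim tokens that the evidence does not support are masked and re-predicted by a masked language model that can read the evidence. This is a deliberately crude sketch, not the paper's system; the word-overlap heuristic and model choice are assumptions:

```python
# Toy mask-then-correct loop for factual error correction.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def correct_claim(claim_tokens, evidence):
    evidence_vocab = set(evidence.lower().split())
    # Crude distant-supervision heuristic: treat tokens absent from the
    # evidence as candidate errors and mask them one at a time.
    out = list(claim_tokens)
    for i, tok in enumerate(claim_tokens):
        if tok.lower() not in evidence_vocab:
            masked = " ".join(out[:i] + [fill.tokenizer.mask_token] + out[i + 1:])
            pred = fill(f"{evidence} [SEP] {masked}")[0]  # top LM candidate
            out[i] = pred["token_str"]
    return " ".join(out)
```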


Vartani Spellcheck -- Automatic Context-Sensitive Spelling Correction of OCR-generated Hindi Text Using BERT and Levenshtein Distance

arXiv.org Artificial Intelligence

Traditional Optical Character Recognition (OCR) systems that generate text in highly inflectional Indic languages like Hindi tend to suffer from poor accuracy due to a wide alphabet set, compound characters, and difficulty in segmenting characters in a word. Automatic spelling error detection and context-sensitive error correction can be used to improve accuracy by post-processing the text generated by these OCR systems. A majority of previously developed language models for error correction of Hindi spelling have been context-free. In this paper, we present Vartani Spellcheck, a context-sensitive approach for spelling correction of Hindi text using a state-of-the-art transformer, BERT, in conjunction with the Levenshtein distance algorithm, popularly known as edit distance. We use a lookup dictionary and context-based named entity recognition (NER) to detect possible spelling errors in the text. Our proposed technique has been tested on a large corpus of text generated by the widely used Tesseract OCR on the Hindi epic Ramayana. With an accuracy of 81%, the results show a significant improvement over some of the previously established context-sensitive error correction mechanisms for Hindi. We also explain how Vartani Spellcheck may be used for on-the-fly autocorrect suggestions during continuous typing in a text editor environment.
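The core mechanism lends itself to a short sketch: a masked language model proposes context-appropriate candidates, and Levenshtein distance selects the candidate closest to the OCR output. The model choice and function below are illustrative assumptions; the paper additionally uses a lookup dictionary and NER for error detection:

```python
# BERT proposes candidates in context; edit distance picks the one
# closest to what the OCR actually produced.
from transformers import pipeline
import Levenshtein  # pip install python-Levenshtein

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

def correct_word(sentence_with_mask, ocr_word, top_k=50):
    """Replace the [MASK] in `sentence_with_mask` by the LM candidate
    with the smallest edit distance to the OCR-generated word."""
    candidates = fill(sentence_with_mask, top_k=top_k)
    best = min(candidates,
               key=lambda c: Levenshtein.distance(c["token_str"], ocr_word))
    return best["token_str"]
```

Ranking by edit distance rather than by LM probability alone keeps the correction anchored to what was actually printed on the page, which is what makes the approach suitable for OCR post-processing.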


Real-time error correction and performance aid for MIDI instruments

arXiv.org Artificial Intelligence

Making a slight mistake during live music performance can easily be spotted by an astute listener, even if the performance is an improvisation or an unfamiliar piece. An example might be a highly dissonant chord played by mistake in a classical-era sonata, or a sudden off-key note in a recurring motif. The problem of identifying and correcting such errors can be approached with artificial intelligence: if a trained human can easily do it, maybe a computer can be trained to spot the errors just as quickly and accurately. The ability to identify and auto-correct errors in real time would be not only extremely useful to performing musicians, but also a valuable asset for producers, allowing far fewer overdubs and less re-recording of takes due to small imperfections. This paper examines state-of-the-art solutions to related problems and explores novel solutions for music error detection and correction, focusing on their real-time applicability. The explored approaches consider error detection through musical context and theory, as well as supervised learning models with no predefined musical information or rules, trained on appropriate datasets. Focusing purely on correcting musical errors, the presented solutions operate on a high-level representation of the audio (MIDI) instead of the raw audio domain, taking input from an electronic instrument (MIDI keyboard/piano) and altering it when needed before it is sent to the sampler. This work proposes multiple general recurrent neural network designs for real-time error correction and performance aid for MIDI instruments, and discusses the results, limitations, and possible future improvements. It also emphasizes making the research results easily accessible to the end user -- music enthusiasts, producers, and performers -- by using the latest artificial intelligence platforms and tools.
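One plausible shape for such a recurrent design, sketched under stated assumptions: an LSTM models the distribution over the next MIDI pitch, and a played note is replaced only when the model finds it sufficiently improbable. The architecture, threshold, and pitch-only representation are illustrative, not the paper's exact networks:

```python
# LSTM next-pitch model with a simple "replace if implausible" rule.
import torch
import torch.nn as nn

class NoteCorrector(nn.Module):
    def __init__(self, n_pitches=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(n_pitches, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_pitches)

    def forward(self, pitches, state=None):
        h, state = self.lstm(self.embed(pitches), state)
        return self.head(h), state

def correct_note(model, history, played_pitch, threshold=1e-3):
    """Pass a pitch through unchanged unless the model judges it
    implausible given the recent context; `history` is a LongTensor
    of the preceding MIDI pitches."""
    with torch.no_grad():
        logits, _ = model(history.unsqueeze(0))
        probs = torch.softmax(logits[0, -1], dim=-1)
    if probs[played_pitch] < threshold:
        return int(probs.argmax())   # substitute the most likely pitch
    return played_pitch
```

In a live setting the corrected pitch would be forwarded to the sampler in place of the played one, so the substitution must complete within the latency budget of the MIDI stream.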


Will quantum computing disrupt any industries that matter, and how soon?

ZDNet

If quantum computers ever work well enough to be trusted for general purposes by users outside of academia, they will need to become reliable. Making a device that depends on quantum mechanics reliable is not unlike taming a herd of wildebeest. Quantum computers offer great promise for cryptography and optimization problems. ZDNet explores what quantum computers will and won't be able to do, and the challenges we still face. You'd think, though, that high probabilities of reliability, of accuracy, and of resilience would be necessary to bring about any technology that purports to offer 'digital transformation' -- a concept that implies not only a shift from state A to state B, but a considerable distance between the two.


A Representational Model of Grid Cells' Path Integration Based on Matrix Lie Algebras

arXiv.org Machine Learning

The grid cells in the mammalian medial entorhinal cortex exhibit striking hexagonal firing patterns when the agent navigates in the open field. It is hypothesized that the grid cells are involved in path integration, so that the agent is aware of its self-position by accumulating its self-motion. Assuming the grid cells form a vector representation of self-position, we elucidate a minimally simple recurrent model for grid cells' path integration based on two coupled matrix Lie algebras that underlie two coupled rotation systems mirroring the agent's self-motion: (1) when the agent moves along a certain direction, the vector is rotated by a generator matrix; (2) when the agent changes direction, the generator matrix is rotated by another generator matrix. Our experiments show that our model learns hexagonal grid response patterns that resemble the firing patterns observed from the grid cells in the brain. Furthermore, the learned model is capable of near-exact path integration, and it is also capable of error correction. Our model is novel and simple, with explicit geometric and algebraic structures.
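The two coupled rotations can be illustrated numerically. The sketch below uses toy 3-by-3 skew-symmetric generators and interprets "rotating the generator" as conjugation by a rotation; the paper learns higher-dimensional representations, so treat this purely as an analogy:

```python
# Coupled rotations: moving rotates the position vector v by exp(s*B);
# turning re-orients the generator B itself via a second generator C.
import numpy as np
from scipy.linalg import expm

def skew(axis):
    """so(3) generator for rotation about `axis`."""
    x, y, z = axis
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]], dtype=float)

B = skew([0.0, 0.0, 1.0])   # generator applied to v when the agent moves
C = skew([1.0, 0.0, 0.0])   # generator that re-orients B when it turns

v = np.array([1.0, 0.0, 0.0])   # vector representation of self-position

v = expm(0.1 * B) @ v           # (1) a step of size 0.1 rotates v

R = expm(0.5 * C)               # (2) a heading change of 0.5 rad
B = R @ B @ R.T                 #     rotates the generator matrix itself
```

Because the vector only ever undergoes rotations, its norm is preserved, which is one reason such a representation supports error correction: noisy states can be projected back onto the sphere of valid codes.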


Graph Based Multi-layer K-means++ (G-MLKM) for Sensory Pattern Analysis in Constrained Spaces

arXiv.org Machine Learning

In this paper, we focus on developing a novel unsupervised machine learning algorithm, named graph-based multi-layer k-means++ (G-MLKM), to solve the data-target association problem when targets move in a constrained space and sensors can obtain only minimal information about the targets. Instead of employing traditional data-target association methods that are based on statistical probabilities, G-MLKM solves the problem via data clustering. We first develop the multi-layer k-means++ (MLKM) method for data-target association in a local space, given a simplified constrained-space situation. A p-dual graph is then proposed to represent the general constrained space when local spaces are interconnected. Based on the dual graph and graph theory, we generalize MLKM to G-MLKM by first resolving local data-target associations and then extracting cross-local associations, mathematically analyzing the data association at intersections of the space. To exclude potential data-target association errors that disobey physical rules, we also develop error correction mechanisms to further improve the accuracy. Numerous simulation examples are conducted to demonstrate the performance of G-MLKM.
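The clustering core of MLKM can be sketched with off-the-shelf k-means++; the multi-layer refinement, the p-dual graph, and the error-correction rules are omitted, and all names below are illustrative:

```python
# k-means++ as the local data-target association step: cluster sensor
# measurements, then read each cluster as one target's track.
import numpy as np
from sklearn.cluster import KMeans

def local_association(measurements, n_targets):
    """Cluster (x, y) measurements in one local space; each cluster is
    interpreted as the set of detections belonging to one target."""
    km = KMeans(n_clusters=n_targets, init="k-means++", n_init=10)
    labels = km.fit_predict(measurements)
    return {t: measurements[labels == t] for t in range(n_targets)}

# Example: two well-separated targets observed as noisy point clouds.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal([0, 0], 0.1, (50, 2)),
                 rng.normal([5, 5], 0.1, (50, 2))])
tracks = local_association(pts, n_targets=2)
```

The graph layer of G-MLKM then stitches these per-space tracks together at the intersections of the constrained space, which plain k-means cannot do on its own.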


Will Quantum Computing Define The Future Of AI?

#artificialintelligence

Google this week launched a new version of its TensorFlow framework: TensorFlow Quantum (TFQ), an open-source library for prototyping quantum machine learning models. Quantum computers aren't mainstream yet; however, when they do arrive, they will need algorithms. TFQ will bridge that gap, making it possible for developers to create hybrid AI algorithms that combine both traditional and quantum computing techniques. TFQ, a smart amalgamation of TensorFlow and Cirq, will allow users to build deep learning models that run on a future quantum computer with only a few lines of Python. According to the Google AI blog post, TFQ has been designed to provide the necessary tools to bring the techniques of the quantum computing and machine learning research communities together in order to build and control natural and artificial quantum systems.
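A minimal hybrid model in TFQ, assuming the public `tfq.layers.PQC` API as used in Google's introductory examples, looks roughly like this; the one-qubit circuit is a toy choice:

```python
# Hybrid quantum-classical model: a parametrized one-qubit circuit
# wrapped as a Keras layer whose output is a measured expectation value.
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")

model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))  # trainable rotation
readout = cirq.Z(qubit)                              # measured observable

inputs = tf.keras.Input(shape=(), dtype=tf.string)   # circuits come in
outputs = tfq.layers.PQC(model_circuit, readout)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Input data are themselves circuits (here: an empty state-prep circuit).
x = tfq.convert_to_tensor([cirq.Circuit()])
print(model(x))   # expectation of Z, differentiable end to end
```

The key design point is that quantum data enter the Keras graph as serialized circuits (`tf.string` tensors), so the quantum layer composes with ordinary TensorFlow layers and optimizers.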