error correction
The race to solve the biggest problem in quantum computing
The errors that quantum computers make are holding the technology back. Quantum computers won't be truly useful until they can correct their mistakes.

Quantum computers are already here, but they make far too many errors. This is arguably the biggest obstacle to the technology becoming truly useful, but recent breakthroughs suggest a solution may be on the horizon. Errors creep into traditional computers too, but there are well-established techniques for correcting them. They rely on redundancy, where extra bits are used to detect when 0s incorrectly flip to 1s or vice versa.
- North America > United States (0.05)
- Asia > China (0.05)
- Marketing (0.43)
- Health & Medicine > Therapeutic Area (0.31)
- Information Technology > Hardware (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (0.92)
- North America > United States > California > San Diego County > San Diego (0.04)
- Europe > Germany > Berlin (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (2 more...)
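The redundancy idea described above can be illustrated with the simplest classical scheme, a three-bit repetition code decoded by majority vote. This is a minimal sketch of classical error correction only; quantum codes work very differently:

```python
def encode(bits):
    """Encode each data bit as three identical copies (3-bit repetition code)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(codeword):
    """Recover each data bit by majority vote over its three copies."""
    return [1 if sum(codeword[i:i + 3]) >= 2 else 0
            for i in range(0, len(codeword), 3)]

data = [1, 0, 1]
sent = encode(data)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1                  # a single bit flips in transit
assert decode(sent) == data  # majority vote corrects the error
```

Any single bit flip per triple is corrected; two flips in the same triple defeat the vote, which is why practical codes use more sophisticated redundancy.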
Inside the sub-zero lair of the world's most powerful computer
It looks like a golden chandelier and contains the coldest place in the universe. What I am looking at is not just the most powerful computer in the world, but technology pivotal to financial security, Bitcoin, government secrets, the world economy and more. Quantum computing holds the key to which companies and countries win - and lose - the rest of the 21st Century. In front of me, suspended a metre in the air in a Google facility in Santa Barbara, California, is Willow. Frankly, it was not what I expected.
- North America > United States > California > Santa Barbara County > Santa Barbara (0.24)
- North America > Central America (0.14)
- Oceania > Australia (0.05)
- (14 more...)
- Leisure & Entertainment (1.00)
- Banking & Finance (1.00)
- Information Technology (0.95)
- Government > Regional Government > Europe Government > United Kingdom Government (0.95)
- Information Technology > Hardware (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > e-Commerce > Financial Technology (0.50)
FastCorrect: Fast Error Correction with Edit Alignment for Automatic Speech Recognition
Error correction techniques have been used to refine the output sentences from automatic speech recognition (ASR) models and achieve a lower word error rate (WER) than original ASR outputs. Previous works usually use a sequence-to-sequence model to correct an ASR output sentence autoregressively, which causes large latency and cannot be deployed in online ASR services. A straightforward solution to reduce latency, inspired by non-autoregressive (NAR) neural machine translation, is to use an NAR sequence generation model for ASR error correction, which, however, comes at the cost of a significantly increased ASR error rate. In this paper, observing distinctive error patterns and correction operations (i.e., insertion, deletion, and substitution) in ASR, we propose FastCorrect, a novel NAR error correction model based on edit alignment. In training, FastCorrect aligns each source token from an ASR output sentence to the target tokens from the corresponding ground-truth sentence based on the edit distance between the source and target sentences, and extracts the number of target tokens corresponding to each source token during editing/correction, which is then used to train a length predictor and to adjust the source tokens to match the length of the target sentence for parallel generation. In inference, the token number predicted by the length predictor is used to adjust the source tokens for target sequence generation. Experiments on the public AISHELL-1 dataset and an internal industrial-scale ASR dataset show the effectiveness of FastCorrect for ASR error correction: 1) it speeds up the inference by 6-9 times and maintains the accuracy (8-14% WER reduction) compared with the autoregressive correction model; and 2) it outperforms the popular NAR models adopted in neural machine translation and text editing by a large margin.
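The edit-alignment step the abstract describes can be sketched in a few lines: align source and target with Levenshtein distance, then count how many target tokens each source token maps to (a deletion maps to zero, an insertion is credited to a neighbouring source token). This is a simplified toy; the paper's full procedure also resolves ties between multiple optimal edit paths, which is omitted here:

```python
def edit_align(src, tgt):
    """Toy edit alignment: for each source token, count the target tokens
    it maps to along one shortest edit path. FastCorrect trains its length
    predictor on counts of this kind."""
    n, m = len(src), len(tgt)
    # Standard Levenshtein DP table.
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete source token
                          d[i][j - 1] + 1,        # insert target token
                          d[i - 1][j - 1] + cost) # match / substitute
    # Backtrace one optimal path, attributing each target token
    # to exactly one source position.
    counts = [0] * n
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                d[i][j] == d[i - 1][j - 1] + (0 if src[i - 1] == tgt[j - 1] else 1)):
            counts[i - 1] += 1          # match or substitution: one target token
            i, j = i - 1, j - 1
        elif j > 0 and d[i][j] == d[i][j - 1] + 1:
            counts[max(i - 1, 0)] += 1  # insertion: credit neighbouring source token
            j -= 1
        else:
            i -= 1                      # deletion: source token maps to 0 targets
    return counts

# "b" is deleted, "x" kept, "e" inserted after "d":
print(edit_align(list("abxd"), list("axde")))  # -> [1, 0, 1, 2]
```

The counts always sum to the target length, so a length predictor trained on them tells the parallel decoder how many slots to allocate per source token.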
Synthetic Error Injection Fails to Elicit Self-Correction In Language Models
Wu, David X., Kapur, Shreyas, Sahai, Anant, Russell, Stuart
Reinforcement learning has become the dominant paradigm for eliciting reasoning and self-correction capabilities in large language models, but its computational expense motivates exploration of alternatives. Inspired by techniques from autonomous driving and robotics, we investigate whether supervised learning with synthetic error injection can induce self-correction abilities in language models. Our approach inserts artificial errors into reasoning chains, masks them, and supervises the model to recognize and correct these mistakes. Despite the intuitive appeal of this method, we find that it fails to significantly improve performance even on simple synthetic tasks across multiple models. Moreover, even when the model catches its own error, it often parrots the original mistake. We find that the distribution shift from synthetic errors to on-policy errors significantly degrades the error-correction capabilities of the fine-tuned model, even with good synthetic coverage of on-policy errors. Our results help explain why on-policy reinforcement learning methods have proven uniquely effective for eliciting self-correction.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.75)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
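The injection recipe can be sketched schematically. The corruption function and the marker phrase below are invented for illustration and are not taken from the paper:

```python
import random

def inject_error(steps, corrupt, seed=0):
    """Build one synthetic self-correction example: corrupt one step of a
    correct reasoning chain, then make the training target flag and fix it.
    (Schematic; the real recipe also masks the injected error from the loss.)"""
    rng = random.Random(seed)
    k = rng.randrange(len(steps))
    prefix = steps[:k] + [corrupt(steps[k])]
    # Supervised target: continue from the injected error by recognising
    # it and emitting the corrected step, then the rest of the chain.
    target = prefix + ["Wait, that step is wrong.", steps[k]] + steps[k + 1:]
    return prefix, target

# Toy arithmetic chain; the "error" appends a stray digit.
chain = ["2 + 3 = 5", "5 * 4 = 20", "20 - 1 = 19"]
prefix, target = inject_error(chain, lambda s: s + "7", seed=1)
```

The paper's finding is that models fine-tuned on such off-policy corruptions fail to correct the errors they actually make at inference time, which is the distribution-shift failure described above.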
Minimal-Edit Instruction Tuning for Low-Resource Indic GEC
Grammatical error correction for Indic languages faces limited supervision, diverse scripts, and rich morphology. We propose an augmentation-free setup that uses instruction-tuned large language models and conservative decoding. A 12B GEMMA 3 model is instruction-tuned in bnb 4-bit precision with parameter-efficient fine-tuning (PEFT) and Alpaca-style formatting. Decoding follows a deterministic, constraint-aware procedure with a lightweight normaliser that encourages minimal, meaning-preserving edits. We operationalise inference, subsequent to instruction fine-tuning (IFT), via a fixed, language-specific prompt directly synthesised from a deterministic error classifier's taxonomy, label distributions, and precedence ordering computed on the training corpus. Under the official untuned GLEU evaluation, the system scores 92.41 on Malayalam, sixth overall, and 81.44 on Hindi, third overall. These results indicate that classifier-informed prompt design, adapter-based instruction tuning, and deterministic decoding provide a reproducible and computationally efficient alternative to augmentation-centred pipelines for Indic GEC. The approach also motivates future work on stronger morphosyntactic constraints and human-centred evaluation of conservative edits.
Continual Error Correction on Low-Resource Devices
Paramonov, Kirill, Ozay, Mete, Mystakidis, Aristeidis, Tsalikidis, Nikolaos, Sotos, Dimitrios, Drosou, Anastasios, Tzovaras, Dimitrios, Kim, Hyunjun, Chang, Kiseok, Mo, Sangdok, Kim, Namwoong, Yoo, Woojong, Moon, Jijoong, Michieli, Umberto
The proliferation of AI models in everyday devices has highlighted a critical challenge: prediction errors that degrade user experience. While existing solutions focus on error detection, they rarely provide efficient correction mechanisms, especially for resource-constrained devices. We present a novel system enabling users to correct AI misclassifications through few-shot learning, requiring minimal computational resources and storage. Our approach combines server-side foundation model training with on-device prototype-based classification, enabling efficient error correction through prototype updates rather than model retraining. The system consists of two key components: (1) a server-side pipeline that leverages knowledge distillation to transfer robust feature representations from foundation models to device-compatible architectures, and (2) a device-side mechanism that enables ultra-efficient error correction through prototype adaptation. We demonstrate our system's effectiveness on both image classification and object detection tasks, achieving over 50% error correction in one-shot scenarios on Food-101 and Flowers-102 datasets while maintaining minimal forgetting (less than 0.02%) and negligible computational overhead. Our implementation, validated through an Android demonstration app, proves the system's practicality in real-world scenarios.
- Information Technology > Data Science > Data Quality > Data Cleaning (1.00)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- (2 more...)
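The prototype mechanism the abstract describes can be sketched as follows. The 2-D feature vectors and class names are toy stand-ins for real embeddings from the distilled backbone:

```python
import math

def classify(x, prototypes):
    """Nearest-prototype prediction: each class keeps one mean feature vector."""
    return min(prototypes, key=lambda c: math.dist(x, prototypes[c]))

def correct(prototypes, counts, x, true_label):
    """On-device correction: fold the user-corrected sample into the running
    mean of its true class instead of retraining the model."""
    n = counts.get(true_label, 0)
    proto = prototypes.get(true_label, [0.0] * len(x))
    prototypes[true_label] = [(p * n + xi) / (n + 1) for p, xi in zip(proto, x)]
    counts[true_label] = n + 1

# Toy 2-D "features", one prototype per class (values are illustrative).
prototypes = {"cat": [1.0, 0.0], "dog": [0.0, 1.0]}
counts = {"cat": 1, "dog": 1}
x = [0.6, 0.9]
print(classify(x, prototypes))        # -> dog (a misclassification, say)
correct(prototypes, counts, x, "cat")  # user corrects the label
print(classify(x, prototypes))        # -> cat after one prototype update
```

Because a correction only shifts one class mean, the update is a handful of arithmetic operations and a small storage write, which is what makes the scheme viable on resource-constrained devices.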
ASR Error Correction in Low-Resource Burmese with Alignment-Enhanced Transformers using Phonetic Features
Lin, Ye Bhone, Aung, Thura, Thu, Ye Kyaw, Oo, Thazin Myint
This paper investigates sequence-to-sequence Transformer models for automatic speech recognition (ASR) error correction in low-resource Burmese, focusing on different feature integration strategies including IPA and alignment information. To our knowledge, this is the first study addressing ASR error correction specifically for Burmese. We evaluate five ASR backbones and show that our ASR Error Correction (AEC) approaches consistently improve word- and character-level accuracy over baseline outputs. The proposed AEC model, combining IPA and alignment features, reduced the average WER of ASR models from 51.56 to 39.82 before augmentation (and from 51.56 to 43.59 after augmentation) and improved chrF++ scores from 0.5864 to 0.627, demonstrating consistent gains over the baseline ASR outputs without AEC. Our results highlight the robustness of AEC and the importance of feature design for improving ASR outputs in low-resource settings.
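The WER figures quoted above are computed in the standard way: word-level edit distance (substitutions, insertions, and deletions) divided by the number of reference words. A minimal sketch:

```python
def wer(ref, hyp):
    """Word error rate: Levenshtein distance over words, divided by the
    reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat down", "the cat sat"))  # -> 0.25: one deletion, four reference words
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why correction systems are usually also evaluated with a character-level metric such as chrF++.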