Reverse-Complement Consistency for DNA Language Models

Ma, Mingqian

arXiv.org Artificial Intelligence 

A fundamental property of DNA is that the reverse complement (RC) of a sequence often carries identical biological meaning. However, state-of-the-art DNA language models frequently fail to capture this symmetry, producing inconsistent predictions for a sequence and its RC counterpart, which undermines their reliability. In this work, we introduce Reverse-Complement Consistency Regularization (RCCR), a simple and model-agnostic fine-tuning objective that directly penalizes the divergence between a model's prediction on a sequence and the aligned prediction on its reverse complement. We evaluate RCCR across three diverse backbones (Nucleotide Transformer, HyenaDNA, DNABERT-2) on a wide range of genomic tasks, including sequence classification, scalar regression, and profile prediction. Our experiments show that RCCR substantially improves RC robustness by dramatically reducing prediction flips and errors, all while maintaining or improving task accuracy compared to baselines such as RC data augmentation and test-time averaging. By integrating a key biological prior directly into the learning process, RCCR produces a single, intrinsically robust, and computationally efficient model fine-tuning recipe for diverse biology tasks.

DNA language models (DNA LMs) (Zhou et al., 2024; Dalla-Torre et al., 2025; Nguyen et al., 2023; Ma et al., 2025) have become general-purpose backbones for genomic prediction and sequence design: after pretraining on raw genomes, a single backbone can be fine-tuned for diverse downstream tasks. Many of these tasks possess an explicit symmetry: labels are reverse-complement (RC) invariant at the sequence level (e.g., promoter classification), or RC equivariant at the profile level, where outputs must be aligned by a task-specific operator Π (e.g., bin-wise outputs should be flipped along the sequence length axis, and strand channels swapped when present).
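To make the alignment operator Π concrete, here is a minimal NumPy sketch for the profile case described above; the function name and the assumption of a `(length, strands)` array layout are illustrative choices, not the paper's actual implementation (Π is task-specific).

```python
import numpy as np

def align_rc_profile(profile: np.ndarray) -> np.ndarray:
    """One possible Pi for profile outputs: flip the prediction along
    the sequence-length axis and, when a +/- strand dimension is
    present, swap the strand channels.

    Assumes `profile` has shape (length,) or (length, 2); the real
    operator depends on the task's output layout.
    """
    flipped = profile[::-1]           # flip along the sequence length axis
    if flipped.ndim == 2 and flipped.shape[1] == 2:
        flipped = flipped[:, ::-1]    # swap the two strand channels
    return flipped.copy()
```

Note that this operator is an involution: applying it twice recovers the original profile, which is what makes "prediction on the RC, mapped back through Π" directly comparable to the forward-strand prediction.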
Yet standard fine-tuning pipelines neither encode RC symmetry nor evaluate it systematically, leaving models sensitive to input orientation.
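The consistency penalty at the heart of RCCR can be sketched as follows. This is a hedged illustration, not the paper's exact recipe: the symmetric KL divergence, the `rcc_penalty` name, and the classification setting are assumptions; the paper applies an aligned divergence whose precise form depends on the task.

```python
import numpy as np

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Reverse complement of a DNA string (A<->T, C<->G, reversed)."""
    return seq.translate(COMPLEMENT)[::-1]

def rcc_penalty(p_fwd: np.ndarray, p_rc: np.ndarray) -> float:
    """Illustrative consistency term: symmetric KL divergence between
    the class probabilities predicted for a sequence and for its
    reverse complement. Zero when the two predictions agree."""
    eps = 1e-12  # numerical floor to avoid log(0)
    p, q = p_fwd + eps, p_rc + eps
    kl_pq = float(np.sum(p * np.log(p / q)))
    kl_qp = float(np.sum(q * np.log(q / p)))
    return 0.5 * (kl_pq + kl_qp)

# During fine-tuning, the total objective would combine the task loss
# with this penalty, e.g. (hypothetical weighting):
#   loss = task_loss(model(x), y) + lam * rcc_penalty(model(x), model(rc(x)))
```

For RC-equivariant profile tasks, the second argument would first be mapped back through the alignment operator Π before the divergence is computed.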