

Delayed Attention Training Improves Length Generalization in Transformer--RNN Hybrids

Phan, Buu, Ebrahimi, Reza, Haresh, Sanjay, Memisevic, Roland

arXiv.org Artificial Intelligence

We study length generalization in sequence models on a composite problem involving both state tracking and associative recall. Prior work finds that recurrent networks handle state tracking well but struggle with recall, whereas Transformers excel at recall yet fail to extend state-tracking capabilities to longer sequences. Motivated by the complementary strengths of these architectures, we construct hybrid models integrating recurrent and attention-based components, and train them on the combined task to evaluate whether both capabilities can be preserved. Our results reveal that, in such hybrids, the Transformer component tends to exploit shortcut solutions, leading to poor length generalization. We identify this shortcut reliance as a key obstacle and propose a simple yet effective training strategy -- delaying the training of the attention layers -- that mitigates this effect and significantly improves length generalization performance. Our experiments show that this approach enables hybrid models to achieve near-perfect accuracy ($>90\%$) on hybrid sequences three times longer than those used during training.
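To make the delayed-attention strategy concrete, here is a minimal PyTorch sketch that keeps the attention layers of a hypothetical hybrid model frozen for the first `delay_steps` optimizer updates and then unfreezes them. The module name `model.attention_layers`, the delay schedule, and the optimizer settings are illustrative assumptions, not the authors' implementation.

```python
import torch

def set_requires_grad(module, flag):
    """Enable or disable gradient updates for all parameters of a module."""
    for p in module.parameters():
        p.requires_grad_(flag)

def train_with_delayed_attention(model, data_loader, loss_fn,
                                 num_steps, delay_steps=1000, lr=3e-4):
    """Train a hybrid recurrent + attention model, keeping the attention
    layers frozen for the first `delay_steps` updates (illustrative sketch).
    Assumes the hybrid exposes its attention stack as `model.attention_layers`."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    set_requires_grad(model.attention_layers, False)  # attention starts frozen

    step = 0
    while step < num_steps:
        for inputs, targets in data_loader:
            if step == delay_steps:
                set_requires_grad(model.attention_layers, True)  # unfreeze attention
            logits = model(inputs)
            loss = loss_fn(logits, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            step += 1
            if step >= num_steps:
                break
    return model
```

Freezing via `requires_grad` means the attention parameters receive no gradients (and are skipped by the optimizer) during the delay phase, so only the recurrent component is shaped by the early training signal.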


Exploring Learnability in Memory-Augmented Recurrent Neural Networks: Precision, Stability, and Empirical Insights

Das, Shrabon, Mali, Ankur

arXiv.org Artificial Intelligence

Recurrent Neural Networks (RNNs) have been foundational in sequence modeling due to their ability to capture temporal dependencies. Architectures such as Elman RNNs, Gated Recurrent Units (GRUs), and Long Short-Term Memory networks (LSTMs) [1] are widely used in applications like speech recognition, machine translation, and time-series analysis. However, these models are constrained by their fixed memory capacity, limiting them to recognizing regular languages when implemented with finite precision [2, 3]. To enhance the computational capabilities of RNNs, researchers have explored augmenting them with external memory structures like stacks [4, 5, 6, 7, 8, 9, 10]. This approach extends the expressivity of RNNs to context-free languages (CFLs) [11], which are crucial in applications like natural language processing (NLP) where hierarchical structures are prevalent. Memory-augmented models have demonstrated significant improvements in recognizing complex formal languages by simulating operations similar to Pushdown Automata (PDA).
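As a rough illustration of the stack augmentation discussed above, the PyTorch sketch below couples a GRU controller with a differentiable stack whose push, pop, and no-op actions are soft-weighted by the controller, in the spirit of the stack-augmented RNNs cited in the abstract. All module names, dimensions, and the fixed stack depth are illustrative assumptions rather than a specific model from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StackAugmentedRNN(nn.Module):
    """GRU controller with a soft (differentiable) stack: at each step the
    controller emits a distribution over {push, pop, no-op} and a value to
    push; the new stack is the convex combination of the three candidate
    stacks (illustrative sketch)."""

    def __init__(self, input_dim, hidden_dim, stack_dim, stack_depth=32):
        super().__init__()
        self.cell = nn.GRUCell(input_dim + stack_dim, hidden_dim)
        self.action = nn.Linear(hidden_dim, 3)          # push / pop / no-op weights
        self.push_value = nn.Linear(hidden_dim, stack_dim)
        self.stack_depth = stack_depth
        self.stack_dim = stack_dim

    def forward(self, x):                                # x: (batch, time, input_dim)
        batch, time, _ = x.shape
        h = x.new_zeros(batch, self.cell.hidden_size)
        stack = x.new_zeros(batch, self.stack_depth, self.stack_dim)
        outputs = []
        for t in range(time):
            top = stack[:, 0]                            # read the stack top
            h = self.cell(torch.cat([x[:, t], top], dim=-1), h)
            a = F.softmax(self.action(h), dim=-1)        # (batch, 3)
            v = torch.tanh(self.push_value(h))           # value to push

            pushed = torch.cat([v.unsqueeze(1), stack[:, :-1]], dim=1)
            popped = torch.cat([stack[:, 1:],
                                stack.new_zeros(batch, 1, self.stack_dim)], dim=1)
            stack = (a[:, 0, None, None] * pushed
                     + a[:, 1, None, None] * popped
                     + a[:, 2, None, None] * stack)
            outputs.append(h)
        return torch.stack(outputs, dim=1)               # (batch, time, hidden_dim)
```

Because the stack grows with the input rather than being fixed in the hidden state, a controller of this kind can, in principle, track the nested dependencies of context-free languages that a plain finite-precision RNN cannot.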


MisRoBÆRTa: Transformers versus Misinformation

Truică, Ciprian-Octavian, Apostol, Elena-Simona

arXiv.org Artificial Intelligence

Misinformation is considered a threat to our democratic values and principles. The spread of such content on social media polarizes society and undermines public discourse by distorting public perceptions and generating social unrest, while lacking the rigor of traditional journalism. Transformers and transfer learning have proved to be state-of-the-art methods for multiple well-known natural language processing tasks. In this paper, we propose MisRoBÆRTa, a novel transformer-based deep neural ensemble architecture for misinformation detection. MisRoBÆRTa takes advantage of two transformers (BART & RoBERTa) to improve classification performance. We also benchmarked and evaluated the performance of multiple transformers on the task of misinformation detection. For training and testing, we used a large real-world dataset of news articles labeled with 10 classes, addressing two shortcomings in the current research: increasing the size of the dataset from small to large, and moving the focus of fake news detection from binary to multi-class classification. For this dataset, we manually verified the content of the news articles to ensure that they were correctly labeled. The experimental results show that the accuracy of transformers on the misinformation detection problem was significantly influenced by the method employed to learn the context, the dataset size, and the vocabulary dimension. We observe empirically that, among the classification models that use only one transformer, BART obtains the best accuracy, while DistilRoBERTa obtains the best accuracy for the least amount of fine-tuning and training time. The proposed MisRoBÆRTa outperforms the other transformer models on the task of misinformation detection. To arrive at this conclusion, we performed ample ablation and sensitivity testing with MisRoBÆRTa on two datasets.
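As a minimal sketch of the two-transformer ensemble idea (not the actual MisRoBÆRTa architecture), the code below concatenates mean-pooled embeddings from pretrained BART and RoBERTa encoders, loaded via the Hugging Face transformers library, and feeds them to a small classification head with 10 output classes. The checkpoint names, pooling choice, and head size are assumptions for illustration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TwoTransformerEnsemble(nn.Module):
    """Concatenate sentence embeddings from BART and RoBERTa and classify
    them with a small feed-forward head (illustrative sketch)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.bart = AutoModel.from_pretrained("facebook/bart-base")
        self.roberta = AutoModel.from_pretrained("roberta-base")
        hidden = self.bart.config.hidden_size + self.roberta.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(),
                                  nn.Linear(256, num_classes))

    def forward(self, bart_inputs, roberta_inputs):
        # Mean-pool each encoder's last hidden states (pooling choice is an assumption).
        bart_emb = self.bart(**bart_inputs).last_hidden_state.mean(dim=1)
        roberta_emb = self.roberta(**roberta_inputs).last_hidden_state.mean(dim=1)
        return self.head(torch.cat([bart_emb, roberta_emb], dim=-1))

# Usage: tokenize the same article with each model's own tokenizer.
bart_tok = AutoTokenizer.from_pretrained("facebook/bart-base")
roberta_tok = AutoTokenizer.from_pretrained("roberta-base")
article = "Example news article text."
logits = TwoTransformerEnsemble()(
    bart_tok(article, return_tensors="pt", truncation=True),
    roberta_tok(article, return_tensors="pt", truncation=True),
)
```

Each article is tokenized separately for each backbone because the two models use different vocabularies; the classification head is then trained on the concatenated representations.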