Polynomial-based Self-Attention for Table Representation Learning

Jayoung Kim, Yehjin Shin, Jeongwhan Choi, Hyowon Wi, Noseong Park

arXiv.org Artificial Intelligence 

Structured data, which constitutes a significant portion of existing data, has been a long-standing research topic in machine learning. Various representation learning methods for tabular data have been proposed, ranging from encoder-decoder structures to Transformers. Among these, Transformer-based methods have achieved state-of-the-art performance not only on tabular data but also in various other fields, including computer vision and natural language processing. However, recent studies have revealed that self-attention, a key component of Transformers, can lead to an oversmoothing issue. We show that Transformers for tabular data also face this problem and, to address it, we propose a novel matrix polynomial-based self-attention layer as a substitute for the original self-attention layer, which enhances model scalability. In experiments with three representative table learning models equipped with our proposed layer, we show that the layer effectively mitigates the oversmoothing problem and improves the representation performance of the existing methods, outperforming state-of-the-art table representation methods.

However, recent studies have raised concerns about the potential limitations of self-attention, a fundamental component of Transformers, specifically the issue of oversmoothing (Dong et al., 2021; Wang et al., 2022; Guo et al., 2023; Xue et al., 2023). Gong et al. (2021) and Zhou et al. (2021) have highlighted that at deeper layers of the Transformer architecture, all token representations tend to become nearly identical (Brunner et al., 2019). This problem poses challenges for scaling up Transformers, especially in depth, since Transformers rely on a simple weighted-average aggregation of value vectors. In our preliminary experiments, we observe that Transformers designed for tabular data also exhibit the oversmoothing issue, as illustrated in Fig. 1.
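The oversmoothing effect described above can be reproduced numerically. The sketch below (not the authors' code) stacks a fixed row-stochastic attention matrix, as in plain weighted-average self-attention, and watches the pairwise spread between token rows collapse toward zero. For contrast, it also mixes through a simple matrix polynomial of the attention matrix, p(A) = a0*I + a1*A; the specific coefficients and this first-degree form are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

# Hedged sketch, not the paper's implementation: (1) stacking plain
# softmax attention drives all token rows toward the same vector
# (oversmoothing); (2) a simple matrix polynomial of the attention
# matrix, p(A) = a0*I + a1*A, slows this collapse. The coefficients
# 0.9 and 0.1 below are illustrative choices only.
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def row_spread(H):
    # Largest pairwise distance between token rows; near 0 means the
    # token representations have become nearly identical.
    n = H.shape[0]
    return max(np.linalg.norm(H[i] - H[j]) for i in range(n) for j in range(n))

n_tokens, dim, depth = 6, 4, 30
X = rng.normal(size=(n_tokens, dim))                # token representations
A = softmax(rng.normal(size=(n_tokens, n_tokens)))  # row-stochastic attention
P = 0.9 * np.eye(n_tokens) + 0.1 * A                # polynomial mixing matrix

H_plain, H_poly = X.copy(), X.copy()
for _ in range(depth):
    H_plain = A @ H_plain   # plain weighted-average aggregation
    H_poly = P @ H_poly     # polynomial-based aggregation

print("initial spread:", row_spread(X))
print("plain attention after 30 layers:", row_spread(H_plain))
print("polynomial attention after 30 layers:", row_spread(H_poly))
```

Intuitively, a row-stochastic matrix has leading eigenvalue 1 with all other eigenvalues strictly smaller in magnitude, so repeated application shrinks every direction except the all-rows-equal one; adding an identity term re-weights the spectrum so that token-distinguishing directions decay far more slowly.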