

PathFormer: A Transformer with 3D Grid Constraints for Digital Twin Robot-Arm Trajectory Generation

Alanazi, Ahmed, Ho, Duy, Lee, Yugyung

arXiv.org Artificial Intelligence

Robotic arms require precise, task-aware trajectory planning, yet sequence models that ignore motion structure often yield invalid or inefficient executions. We present a Path-based Transformer that encodes robot motion with a 3-grid (where/what/when) representation and constraint-masked decoding, enforcing lattice-adjacent moves and workspace bounds while reasoning over task graphs and action order. Trained on 53,755 trajectories (80% train / 20% validation), the model aligns closely with ground truth -- 89.44% stepwise accuracy, 93.32% precision, 89.44% recall, and 90.40% F1 -- with 99.99% of paths legal by construction. Compiled to motor primitives on an xArm Lite 6 with a depth-camera digital twin, it attains up to 97.5% reach and 92.5% pick success in controlled tests, and 86.7% end-to-end success across 60 language-specified tasks in cluttered scenes, absorbing slips and occlusions via local re-grounding without global re-planning. These results show that path-structured representations enable Transformers to generate accurate, reliable, and interpretable robot trajectories, bridging graph-based planning and sequence-based learning and providing a practical foundation for general-purpose manipulation and sim-to-real transfer.
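The constraint-masked decoding described above can be sketched as logit masking over a move set: before sampling, any action that would leave the workspace (or break lattice adjacency) has its logit set to negative infinity, so every emitted step is legal by construction. The grid size, the 6-connected move set, and the function names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

GRID = (10, 10, 10)  # assumed workspace bounds
# assumed 6-connected lattice moves; the paper's exact action set may differ
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def legal_mask(pos):
    """Boolean mask over the move set: True where the move stays in bounds."""
    mask = np.zeros(len(MOVES), dtype=bool)
    for i, d in enumerate(MOVES):
        nxt = tuple(p + q for p, q in zip(pos, d))
        mask[i] = all(0 <= c < g for c, g in zip(nxt, GRID))
    return mask

def masked_step(logits, pos):
    """Pick the highest-scoring legal move; illegal moves are masked to -inf."""
    masked = np.where(legal_mask(pos), logits, -np.inf)
    i = int(np.argmax(masked))
    return tuple(p + q for p, q in zip(pos, MOVES[i]))

pos = masked_step(np.array([5., 9., 1., 8., 2., 7.]), (0, 0, 0))
print(pos)  # prints (1, 0, 0): the higher-scoring out-of-bounds moves were masked
```

In a real decoder the same mask would be applied to the Transformer's output distribution at every step, which is how "99.99% of paths legal by construction" can be enforced rather than learned.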


Transformer representation learning is necessary for dynamic multi-modal physiological data on small-cohort patients

Wang, Bingxu, Ge, Min, Cai, Kunzhi, Zhang, Yuqi, Zhou, Zeyi, Li, Wenjiao, Guo, Yachong, Wang, Wei, Zhou, Qing

arXiv.org Artificial Intelligence

Department of Thoracic and Cardiovascular Surgery, The Affiliated Drum Tower Hospital of Nanjing University Medical School; Kuang Yaming Honors School, Nanjing University, Nanjing 210023, China; National Laboratory of Solid State Microstructure, Department of Physics, Nanjing University, Nanjing 210093, China. E-mail: yguo@nju.edu.cn

Postoperative delirium (POD), a severe neuropsychiatric complication affecting nearly 50% of high-risk surgical patients, is defined as an acute disorder of attention and cognition. It remains significantly underdiagnosed in intensive care units (ICUs) due to subjective monitoring methods. Early and accurate diagnosis of POD is critical and achievable. Here, we propose a POD prediction framework comprising a Transformer representation model followed by traditional machine learning algorithms. We curated the first multi-modal POD dataset encompassing two patient types and evaluated various Transformer architectures for representation learning. Empirical results indicate consistent improvements in sensitivity and Youden index for TYPE I patients when using Transformer representations, particularly our fusion adaptation of Pathformer. By enabling effective delirium diagnosis from postoperative day 1 to 3, our extensive experimental findings emphasize the potential of multi-modal physiological data and highlight the necessity of representation learning via multi-modal Transformer architectures in clinical diagnosis.

Introduction: Postoperative delirium (POD), a prevalent acute neuropsychiatric syndrome [1,2], affects more than 50% of surgical patients and significantly elevates morbidity and mortality risks [3]. Early identification is crucial yet challenging [4], primarily due to subjective assessment criteria and incomplete understanding of the underlying pathophysiological mechanisms [5].
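The two-stage design described here (a Transformer producing fixed-length representations, consumed by a traditional classifier) can be sketched as below. The encoder and the nearest-centroid classifier are stand-ins chosen for brevity, not the paper's Pathformer adaptation or its actual downstream algorithms; the cohort, channel counts, and function names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(series):
    """Stand-in for a Transformer encoder: pool a (time, channels) series
    into a fixed-length representation (here, a simple mean over time)."""
    return series.mean(axis=0)

# toy cohort: 8 patients, 24 time steps, 5 physiological channels,
# alternating labels 0/1 with channel means shifted by the label
X = [rng.normal(loc=label, size=(24, 5)) for label in (0, 1) * 4]
y = np.array([0, 1] * 4)

reps = np.stack([encode(s) for s in X])

# "traditional ML" stage: a nearest-centroid classifier on the representations
centroids = np.stack([reps[y == c].mean(axis=0) for c in (0, 1)])

def predict(series):
    r = encode(series)
    return int(np.argmin(np.linalg.norm(centroids - r, axis=1)))

print(predict(rng.normal(loc=1, size=(24, 5))))  # a new TYPE-1-like series
```

The point of the split is that the representation model can be trained (or adapted) once on the small cohort, while the lightweight second stage keeps the per-day diagnosis (day 1 to 3) cheap to refit and easy to inspect.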


Pathformer: Recursive Path Query Encoding for Complex Logical Query Answering

Zhang, Chongzhi, Peng, Zhiping, Zheng, Junhao, Wang, Linghao, Shi, Ruifeng, Ma, Qianli

arXiv.org Artificial Intelligence

Complex Logical Query Answering (CLQA) over incomplete knowledge graphs is a challenging task. Recently, Query Embedding (QE) methods have been proposed to solve CLQA by performing multi-hop logical reasoning. However, most of them consider only historical query context while ignoring future information, which prevents them from capturing the complex dependencies between the elements of a query. In recent years, the transformer architecture has shown a strong ability to model long-range dependencies between words, and its bidirectional attention mechanism can address this limitation of QE methods regarding query context. Still, as a sequence model, the transformer struggles to directly model complex logical queries whose computation graphs have branch structures. To this end, we propose a neural one-point embedding method called Pathformer based on the tree-like computation graph, i.e., the query computation tree. Specifically, Pathformer decomposes the query computation tree into path query sequences by branches and then uses the transformer encoder to recursively encode these path query sequences to obtain the final query embedding. This allows Pathformer to fully utilize future context information to explicitly model the complex interactions between the various parts of a path query. Experimental results show that Pathformer outperforms existing competitive neural QE methods, and we find that Pathformer has the potential to be applied to non-one-point embedding spaces.
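The recursion described above, encode each branch of the computation tree as its own path sequence, merge the branch embeddings at intersection nodes, then continue up the tree, can be sketched as follows. The mean-based `encode_path` stands in for the transformer encoder, and merging branches by averaging is an illustrative intersection operator, not the paper's; the vocabulary and tree encoding are invented for the example.

```python
import numpy as np

DIM = 4
rng = np.random.default_rng(0)
# toy entity/relation embedding table (assumed, for illustration only)
EMB = {tok: rng.normal(size=DIM) for tok in ["alice", "bob", "knows", "likes"]}

def encode_path(vectors):
    """Stand-in for the transformer encoder over one path query sequence."""
    return np.mean(vectors, axis=0)

def embed(node):
    """Recursively embed a query computation tree.
    node is ('path', [token-or-subtree, ...]) or ('and', [subtrees])."""
    kind, parts = node
    if kind == "and":
        # branch node: encode each child path separately, then merge
        return np.mean([embed(child) for child in parts], axis=0)
    vecs = [EMB[p] if isinstance(p, str) else embed(p) for p in parts]
    return encode_path(vecs)

# (alice -knows-> ?x) AND (bob -likes-> ?x)
query = ("and", [("path", ["alice", "knows"]), ("path", ["bob", "likes"])])
print(embed(query).shape)  # one point in embedding space: (4,)
```

Because each branch is a plain sequence, the bidirectional encoder sees both past and future tokens of that path, which is exactly the future-context advantage the abstract claims over left-to-right QE methods.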


Pathformer: Multi-scale transformers with Adaptive Pathways for Time Series Forecasting

Chen, Peng, Zhang, Yingying, Cheng, Yunyao, Shu, Yang, Wang, Yihang, Wen, Qingsong, Yang, Bin, Guo, Chenjuan

arXiv.org Artificial Intelligence

Transformer-based models have achieved some success in time series forecasting. Existing methods mainly model time series at limited or fixed scales, making it challenging to capture characteristics that span different scales. In this paper, we propose multi-scale transformers with adaptive pathways (Pathformer). The proposed Transformer integrates both temporal resolution and temporal distance for multi-scale modeling. Multi-scale division splits the time series into different temporal resolutions using patches of various sizes. Based on the division at each scale, dual attention is performed over these patches to capture global correlations and local details as temporal dependencies. We further enrich the multi-scale transformer with adaptive pathways, which adaptively adjust the multi-scale modeling process based on the varying temporal dynamics of the input time series, improving the prediction accuracy and generalization of Pathformer. Extensive experiments on eleven real-world datasets demonstrate that Pathformer not only achieves state-of-the-art performance, surpassing all current models, but also exhibits stronger generalization under various transfer scenarios.

Time series forecasting is an essential task for industries such as energy, finance, traffic, and cloud computing (Chen et al., 2012; Cirstea et al., 2022b; Qin et al., 2023; Pan et al., 2023). Motivated by its widespread application in sequence modeling and its impressive success in fields such as CV and NLP (Dosovitskiy et al., 2021; Brown et al., 2020), the Transformer (Vaswani et al., 2017) has received growing attention in time series research (Wu et al., 2021; Liu et al., 2022c).
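The multi-scale division step, the same series split into non-overlapping patches at several patch sizes, one token sequence per temporal resolution, can be sketched as below. Only the division is shown; the per-scale dual attention (inter-patch for global correlations, intra-patch for local details) and the adaptive pathway routing would operate on these patch sequences. The patch sizes and function name are assumptions for illustration.

```python
import numpy as np

def divide(series, patch_size):
    """Split a 1-D series of length T into floor(T / patch_size)
    non-overlapping patches, dropping any remainder at the tail."""
    n = len(series) // patch_size
    return series[: n * patch_size].reshape(n, patch_size)

x = np.arange(24, dtype=float)   # toy series, T = 24
scales = [2, 4, 8]               # assumed patch sizes, one per scale
patched = {p: divide(x, p) for p in scales}
for p, m in patched.items():
    print(p, m.shape)  # prints 2 (12, 2) / 4 (6, 4) / 8 (3, 8)
```

Each row of a patched array is one token at that scale: small patches give fine temporal resolution (many tokens, local detail), large patches give coarse resolution (few tokens, long temporal distance per attention hop), which is the resolution/distance trade-off the abstract integrates.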