Quantum Complex-Valued Self-Attention Model
Chen, Fu, Zhao, Qinglin, Feng, Li, Tang, Longfei, Lin, Yangbin, Huang, Haitao
Self-attention has revolutionized classical machine learning, yet existing quantum self-attention models underutilize quantum states' potential due to oversimplified or incomplete mechanisms. To address this limitation, we introduce the Quantum Complex-Valued Self-Attention Model (QCSAM), the first framework to leverage complex-valued similarities, which capture amplitude and phase relationships between quantum states more comprehensively. To achieve this, QCSAM extends the Linear Combination of Unitaries (LCU) into the Complex LCU (CLCU) framework, enabling precise complex-valued weighting of quantum states and supporting quantum multi-head attention. Experiments on MNIST and Fashion-MNIST show that QCSAM outperforms recent quantum self-attention models, including QKSAN, QSAN, and GQHAN. With only 4 qubits, QCSAM achieves 100% and 99.2% test accuracies on MNIST and Fashion-MNIST, respectively. Furthermore, we evaluate scalability across 3-8 qubits and 2-4 class tasks, while ablation studies validate the advantages of complex-valued attention weights over real-valued alternatives.
INTRODUCTION
The self-attention mechanism, as a key component of deep learning architectures, has significantly impacted the ways in which data is processed and features are learned [1]-[3]. By generating adaptive attention weights, self-attention not only highlights key features in the data but also integrates global contextual information, thereby improving the expressive power and computational efficiency of deep learning systems.
For instance, in natural language processing [4]-[6], self-attention has enhanced language understanding and generation by capturing long-range dependencies and contextual information; in computer vision [7]-[9], it allows models to focus on key regions within images to optimize feature extraction; and in recommender systems [10], [11], it improves the accuracy of capturing user behavior and preferences, thereby enhancing the effectiveness of personalized recommendations. Large-scale models such as GPT-4 [12] have further exploited the potential of self-attention, allowing them to address multimodal tasks such as visual question answering, image captioning, and cross-modal reasoning. These developments demonstrate that the self-attention mechanism is a fundamental mechanism of modern deep learning.
Corresponding author: Qinglin Zhao (e-mail: qlzhao@must.edu.mo). Fu Chen, Qinglin Zhao, Li Feng and Haitao Huang are with the Faculty of Innovation Engineering, Macau University of Science and Technology, 999078, China.
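To make the ablation claim concrete, the following is a minimal classical NumPy sketch of the idea behind complex-valued attention: the full inner product between (simulated) quantum state vectors keeps both amplitude and relative phase, whereas a real-valued score such as the squared magnitude discards the phase. This is an illustrative simulation only, not the paper's CLCU circuit; the normalization scheme and all function names are assumptions.

```python
import numpy as np

def complex_attention(queries, keys, values):
    # Full complex similarity <q|k>: retains amplitude AND relative phase,
    # unlike a real-valued score such as |<q|k>|^2.
    scores = queries.conj() @ keys.T                 # (n, n) complex similarities
    weights = scores / np.abs(scores).sum(axis=1, keepdims=True)  # amplitude-normalised
    return weights @ values                          # complex-weighted combination

rng = np.random.default_rng(0)
n, d = 4, 8
q = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
k = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
v = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
out = complex_attention(q, k, v)
print(out.shape)             # (4, 8)
print(np.iscomplexobj(out))  # True
```

A real-valued baseline for the ablation would replace `scores` with `np.abs(scores) ** 2` before normalising, which zeroes out all phase information.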
VMTS: Vision-Assisted Teacher-Student Reinforcement Learning for Multi-Terrain Locomotion in Bipedal Robots
Chen, Fu, Wan, Rui, Liu, Peidong, Zheng, Nanxing, Zhou, Bo
Bipedal robots, owing to their anthropomorphic design, offer substantial potential across various applications, yet their control is hindered by their structural complexity. Most current research focuses on proprioception-based methods, which lack the capability to overcome complex terrain. While visual perception is vital for operation in human-centric environments, its integration further complicates control. Recent reinforcement learning (RL) approaches have shown promise in enhancing legged-robot locomotion, particularly with proprioception-based methods; however, terrain adaptability, especially for bipedal robots, remains a significant challenge, with most research focusing on flat-terrain scenarios. In this paper, we introduce a novel mixture-of-experts teacher-student RL strategy that enhances the performance of vision-based teacher-student policies through a simple yet effective approach. Our method combines terrain-selection strategies with the teacher policy, yielding superior performance compared to traditional models. Additionally, we introduce an alignment loss between the teacher and student networks, rather than enforcing strict similarity, to improve the student's ability to navigate diverse terrains. We validate our approach experimentally on the LimX Dynamics P1 bipedal robot, demonstrating its feasibility and robustness across multiple terrain types.
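The abstract contrasts an alignment loss with strict teacher-student similarity. The paper's exact loss is not given here; one plausible form, sketched below under stated assumptions, penalises directional disagreement between student and teacher latent features (cosine alignment with a slack margin) instead of forcing exact feature equality as an MSE loss would. The function name and margin value are hypothetical.

```python
import numpy as np

def alignment_loss(student, teacher, margin=0.1):
    # Encourage directional agreement between student and teacher latents
    # rather than exact (MSE-style) equality of the feature vectors.
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    cos = np.sum(s * t, axis=1)                           # per-sample cosine similarity
    return float(np.mean(np.maximum(0.0, 1.0 - margin - cos)))  # hinge: zero once aligned

feats = np.array([[1.0, 0.0], [0.0, 1.0]])
print(alignment_loss(feats, feats))   # 0.0 (perfectly aligned)
```

The hinge goes silent once the student is "close enough" in direction, leaving it free to deviate where its (vision-limited) observations warrant a different latent.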
OTO Planner: An Efficient Only Travelling Once Exploration Planner for Complex and Unknown Environments
Zhou, Bo, Lu, Chuanzhao, Pan, Yan, Chen, Fu
Autonomous exploration in complex and cluttered environments is essential for various applications. However, the lack of global heuristic information poses many challenges, and existing exploration methods suffer from repeated paths and considerable computational resource requirements in large-scale environments. To address these issues, this letter proposes an efficient exploration planner that reduces repeated paths in complex environments, hence called the "Only Travelling Once Planner" (OTO Planner). OTO Planner includes fast frontier updating, viewpoint evaluation, and viewpoint refinement. A selective frontier-updating mechanism is designed, saving a large amount of computational resources. In addition, a novel viewpoint evaluation system is devised that reduces repeated paths by detecting enclosed sub-regions. Finally, a viewpoint refinement approach is proposed to merge redundant viewpoints, leading to smoother paths. We conduct extensive simulation and real-world experiments to validate the proposed method. Compared to the state-of-the-art approach, the proposed method reduces exploration time and movement distance by 10%-20% and improves the speed of frontier detection by 6-9 times.
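The selective frontier-updating idea can be sketched on a 2D occupancy grid: a frontier is a free cell bordering unknown space, and restricting the search to a mask of recently updated cells avoids rescanning the whole map. This sketch assumes a simple grid encoding (0 free, 1 occupied, -1 unknown) and hypothetical names; the paper's planner operates on a full 3D representation.

```python
import numpy as np

def frontier_cells(grid, mask=None):
    # A frontier cell is free (0) and 4-adjacent to at least one unknown (-1) cell.
    # `mask` restricts detection to recently updated cells (selective updating).
    free = (grid == 0)
    unknown = (grid == -1)
    near_unknown = np.zeros_like(unknown)
    near_unknown[1:, :]  |= unknown[:-1, :]   # neighbour above
    near_unknown[:-1, :] |= unknown[1:, :]    # neighbour below
    near_unknown[:, 1:]  |= unknown[:, :-1]   # neighbour left
    near_unknown[:, :-1] |= unknown[:, 1:]    # neighbour right
    frontier = free & near_unknown
    if mask is not None:
        frontier &= mask
    return np.argwhere(frontier)

grid = np.array([[0, 0, -1],
                 [0, 1, -1],
                 [0, 0,  0]])    # 0 free, 1 occupied, -1 unknown
print(frontier_cells(grid))      # cells (0,1) and (2,2)
```

With a mask covering only the sensor's recent field of view, the per-step cost scales with the updated region rather than the map size, which is the source of the claimed frontier-detection speedup.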
Quantum Mixed-State Self-Attention Network
Chen, Fu, Zhao, Qinglin, Feng, Li, Chen, Chuangtao, Lin, Yangbin, Lin, Jianhong
The rapid advancement of quantum computing has increasingly highlighted its potential for machine learning, particularly for natural language processing (NLP) tasks. Quantum machine learning (QML) leverages the unique capabilities of quantum computing to offer novel perspectives and methodologies for complex data-processing and pattern-recognition challenges. This paper introduces a novel Quantum Mixed-State Self-Attention Network (QMSAN), which integrates the principles of quantum computing with classical machine-learning algorithms, especially self-attention networks, to enhance efficiency and effectiveness in handling NLP tasks. The QMSAN model employs a quantum attention mechanism based on mixed states, enabling efficient direct estimation of the similarity between queries and keys within the quantum domain and leading to more effective attention-weight acquisition. Additionally, we propose an innovative quantum positional encoding scheme, implemented through fixed quantum gates within the quantum circuit, to enhance the model's accuracy. Experimental validation on various datasets demonstrates that QMSAN outperforms existing quantum and classical models in text classification, achieving significant performance improvements. QMSAN not only significantly reduces the number of parameters but also exceeds classical self-attention networks in performance, showcasing its strong capability in data representation and information extraction. Furthermore, our study investigates the model's robustness in different quantum noise environments, showing that QMSAN possesses commendable robustness to low noise.
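A standard way to score similarity between mixed quantum states, and a plausible reading of the query-key similarity above, is the overlap Tr(ρ_q ρ_k) between density matrices, which on hardware can be estimated with a SWAP test. The sketch below computes it classically with NumPy; it is an illustration of the general overlap measure, not the paper's specific circuit, and the function names are assumptions.

```python
import numpy as np

def density_matrix(state):
    # Density matrix |psi><psi| of a normalised pure state (mixed states
    # would be convex combinations of such matrices).
    state = state / np.linalg.norm(state)
    return np.outer(state, state.conj())

def mixed_state_similarity(rho_q, rho_k):
    # Overlap Tr(rho_q rho_k): 1 for identical pure states, 0 for orthogonal
    # ones; estimable on a quantum device via a SWAP test.
    return float(np.real(np.trace(rho_q @ rho_k)))

rho_a = density_matrix(np.array([1.0, 0.0]))
rho_b = density_matrix(np.array([0.0, 1.0]))
print(mixed_state_similarity(rho_a, rho_a))  # 1.0
print(mixed_state_similarity(rho_a, rho_b))  # 0.0
```

Because the overlap is real and non-negative, it can feed directly into a classical softmax or normalisation step to produce attention weights.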
Improvements on Recommender System based on Mathematical Principles
Chen, Fu, Zou, Junkang, Zhou, Lingfeng, Xu, Zekai, Wu, Zhenyu
In this article, we study how Recommender Systems are implemented and the algorithms they use. We explain these algorithms in terms of their underlying mathematical principles and identify feasible methods for improvement. Probability-based algorithms play a significant role in Recommender Systems, and we describe how they help increase the accuracy and speed of the algorithms. We also illustrate in detail the strengths and weaknesses of two different mathematical distance measures used to describe similarity.
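The trade-off between distance measures for similarity can be shown in a few lines. Cosine similarity is scale-invariant (only the direction of the rating vectors matters), while a Euclidean-distance-based similarity is magnitude-sensitive, so two users with identical tastes but different rating scales score differently. The specific measures compared in the article are not named here, so the pair below is an illustrative assumption.

```python
import numpy as np

def cosine_similarity(a, b):
    # Scale-invariant: depends only on the angle between rating vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_similarity(a, b):
    # Magnitude-sensitive: maps Euclidean distance into (0, 1].
    return float(1.0 / (1.0 + np.linalg.norm(a - b)))

u = np.array([5.0, 4.0, 1.0])
v = np.array([10.0, 8.0, 2.0])   # same taste, twice the rating scale
print(cosine_similarity(u, v))   # 1.0 (collinear vectors)
print(euclidean_similarity(u, u))  # 1.0 (identical vectors)
```

Here cosine similarity judges `u` and `v` identical, whereas the Euclidean measure penalises the scale difference; which behaviour is a strength depends on whether rating scale carries signal.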
Explainable Enterprise Credit Rating via Deep Feature Crossing Network
Guo, Weiyu, Yang, Zhijiang, Wu, Shu, Chen, Fu
Due to their powerful ability to learn high-rank and non-linear features, deep neural networks (DNNs) are being applied to data mining and machine learning in various fields and exhibit higher discrimination performance than conventional methods. However, DNN-based applications are rare in enterprise credit rating tasks because most DNNs follow the "end-to-end" learning paradigm, which outputs high-rank representations of objects and predictive results without any explanation. Users in the financial industry therefore cannot understand how these high-rank representations are generated, what they mean, or how they relate to the raw inputs; consequently, they cannot determine whether the predictions provided by DNNs are reliable and do not trust the predictions of such "black box" models. In this paper, we propose a novel network that explicitly models the enterprise credit rating problem using DNNs and attention mechanisms, realizing explainable enterprise credit ratings. Experimental results obtained on real-world enterprise datasets verify that the proposed approach achieves higher performance than conventional methods and provides insights into individual rating results and the reliability of model training.
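One common way explicit feature crossing yields explanations, and a plausible reading of the mechanism above, is to form second-order crosses of raw features and let softmax attention weights over those crosses double as per-interaction importance scores. The sketch below is a generic illustration under that assumption; the paper's actual architecture, score parameterisation, and names may differ.

```python
import numpy as np

def attentive_crosses(x, a):
    # Second-order crosses x_i * x_j, scored by a learnable per-cross weight
    # `a` (hypothetical); the softmax weights expose which raw-feature
    # interactions drive the rating, serving as the explanation.
    d = len(x)
    idx = [(i, j) for i in range(d) for j in range(i + 1, d)]
    crosses = np.array([x[i] * x[j] for i, j in idx])
    scores = a * crosses
    w = np.exp(scores - scores.max())
    w /= w.sum()                              # softmax -> importance weights
    return float(w @ crosses), dict(zip(idx, w))

x = np.array([1.0, 2.0, 3.0])                 # raw enterprise features
rating_logit, explanation = attentive_crosses(x, np.ones(3))
print(explanation)                            # per-cross importance, sums to 1
```

Because the attention weights are computed over named feature pairs rather than an opaque hidden layer, each prediction comes with a ranking of which input interactions mattered.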