expressibility


Dissecting Quantum Reinforcement Learning: A Systematic Evaluation of Key Components

Lazaro, Javier, Vazquez, Juan-Ignacio, Garcia-Bringas, Pablo

arXiv.org Artificial Intelligence

Parameterised quantum circuit (PQC)-based Quantum Reinforcement Learning (QRL) has emerged as a promising paradigm at the intersection of quantum computing and reinforcement learning (RL). By design, PQCs create hybrid quantum-classical models, but their practical applicability remains uncertain due to training instabilities, barren plateaus (BPs), and the difficulty of isolating the contribution of individual pipeline components. In this work, we dissect PQC-based QRL architectures through a systematic experimental evaluation of three aspects recurrently identified as critical: (i) data embedding strategies, with Data Reuploading (DR) as an advanced approach; (ii) ansatz design, particularly the role of entanglement; and (iii) post-processing blocks after quantum measurement, with a focus on the underexplored Output Reuse (OR) technique. Using a unified PPO-CartPole framework, we perform controlled comparisons between hybrid and classical agents under identical conditions. Our results show that OR, though purely classical, exhibits distinct behaviour in hybrid pipelines, that DR improves trainability and stability, and that stronger entanglement can degrade optimisation, offsetting classical gains. Together, these findings provide controlled empirical evidence of the interplay between quantum and classical contributions, and establish a reproducible framework for systematic benchmarking and component-wise analysis in QRL.
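Data Reuploading, one of the embedding strategies evaluated above, interleaves trainable rotations with repeated encodings of the same input. A minimal single-qubit NumPy sketch of the idea (not the paper's PPO-CartPole pipeline; the weights, biases, and layer count below are illustrative assumptions):

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def reuploading_circuit(x, weights, biases):
    """Apply L layers, each one re-encoding the scalar input x.

    Each layer is Ry(w_l * x + b_l); interleaving the data with
    trainable parameters is the Data Reuploading idea."""
    state = np.array([1.0, 0.0], dtype=complex)  # start in |0>
    for w, b in zip(weights, biases):
        state = ry(w * x + b) @ state
    return state

def prob_one(state):
    """Measurement step: probability of observing |1>."""
    return float(np.abs(state[1]) ** 2)

# Three reuploading layers with fixed illustrative parameters.
w = [1.0, 0.5, 2.0]
b = [0.1, -0.2, 0.3]
p = prob_one(reuploading_circuit(0.7, w, b))
```

Because every layer sees the raw input again, the measured output is a richer (Fourier-like) function of x than a single encoding would give, which is the mechanism behind the trainability gains reported above.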


Impact of Single Rotations and Entanglement Topologies in Quantum Neural Networks

Mordacci, Marco, Amoretti, Michele

arXiv.org Artificial Intelligence

In this work, an analysis of the performance of different Variational Quantum Circuits is presented, investigating how it changes with respect to entanglement topology, adopted gates, and Quantum Machine Learning tasks to be performed. The objective of the analysis is to identify the optimal way to construct circuits for Quantum Neural Networks. In the presented experiments, two types of circuits are used: one with alternating layers of rotations and entanglement, and the other, similar to the first one, but with an additional final layer of rotations. As rotation layers, all combinations of one and two rotation sequences are considered. Four different entanglement topologies are compared: linear, circular, pairwise, and full. Different tasks are considered, namely the generation of probability distributions and images, and image classification. Achieved results are correlated with the expressibility and entanglement capability of the different circuits to understand how these features affect performance.
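The four entanglement topologies compared here differ only in which qubit pairs receive entangling gates. A small helper sketching one common convention for each topology (the paper's exact gate ordering may differ):

```python
def entangling_pairs(n_qubits, topology):
    """Return the ordered (control, target) CNOT pairs for the
    four entanglement topologies compared in the paper."""
    if topology == "linear":
        # Chain: each qubit entangled with its right neighbour.
        return [(i, i + 1) for i in range(n_qubits - 1)]
    if topology == "circular":
        # Linear plus a wrap-around gate closing the ring.
        return [(i, (i + 1) % n_qubits) for i in range(n_qubits)]
    if topology == "pairwise":
        # Two sub-layers: even-odd pairs first, then odd-even pairs.
        even = [(i, i + 1) for i in range(0, n_qubits - 1, 2)]
        odd = [(i, i + 1) for i in range(1, n_qubits - 1, 2)]
        return even + odd
    if topology == "full":
        # All-to-all: every unordered qubit pair once.
        return [(i, j) for i in range(n_qubits)
                for j in range(i + 1, n_qubits)]
    raise ValueError(f"unknown topology: {topology}")
```

The gate counts grow from n-1 (linear) to n(n-1)/2 (full), which is one reason the topologies trade off entangling power against circuit depth and trainability.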


QFFN-BERT: An Empirical Study of Depth, Performance, and Data Efficiency in Hybrid Quantum-Classical Transformers

Kang, Pilsung

arXiv.org Artificial Intelligence

Parameterized quantum circuits (PQCs) have recently emerged as promising components for enhancing the expressibility of neural architectures. In this work, we introduce QFFN-BERT, a hybrid quantum-classical transformer where the feedforward network (FFN) modules of a compact BERT variant are replaced by PQC-based layers. This design is motivated by the dominant parameter contribution of FFNs, which account for approximately two-thirds of the parameters within standard Transformer encoder blocks. While prior studies have primarily integrated PQCs into self-attention modules, our work focuses on the FFN and systematically investigates the trade-offs between PQC depth, expressibility, and trainability. Our final PQC architecture incorporates a residual connection, both $R_Y$ and $R_Z$ rotations, and an alternating entanglement strategy to ensure stable training and high expressibility. Our experiments, conducted on a classical simulator on the SST-2 and DBpedia benchmarks, demonstrate two key findings. First, a carefully configured QFFN-BERT achieves up to 102.0% of the baseline accuracy, surpassing its classical counterpart in a full-data setting while reducing FFN-specific parameters by over 99%. Second, our model exhibits a consistent and competitive edge in few-shot learning scenarios, confirming its potential for superior data efficiency. These results, supported by an ablation study on a non-optimized PQC that failed to learn, confirm that PQCs can serve as powerful and parameter-efficient alternatives to classical FFNs when co-designed with foundational deep learning principles.
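The parameter arithmetic behind the FFN-replacement argument is easy to make concrete. A sketch assuming an illustrative compact-BERT width of d_model = 128 (with the standard d_ff = 4 * d_model) and an assumed 8-qubit, 4-layer PQC with $R_Y$ and $R_Z$ rotations per qubit; these sizes are not taken from the paper:

```python
def ffn_params(d_model, d_ff=None):
    """Classical Transformer FFN: two dense layers with biases."""
    d_ff = d_ff or 4 * d_model
    return d_model * d_ff + d_ff + d_ff * d_model + d_model

def pqc_params(n_qubits, n_layers):
    """PQC with one R_Y and one R_Z angle per qubit per layer."""
    return 2 * n_qubits * n_layers

classical = ffn_params(128)   # assumed compact-BERT width
quantum = pqc_params(8, 4)    # assumed 8 qubits, 4 layers
reduction = 1 - quantum / classical
```

Even with these modest assumed sizes the PQC side carries well under 1% of the classical FFN's parameters, consistent in spirit with the >99% reduction the abstract reports.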


Efficient Quantum Convolutional Neural Networks for Image Classification: Overcoming Hardware Constraints

Röseler, Peter, Schaudt, Oliver, Berg, Helmut, Bauckhage, Christian, Koch, Matthias

arXiv.org Artificial Intelligence

While classical convolutional neural networks (CNNs) have revolutionized image classification, the emergence of quantum computing presents new opportunities for enhancing neural network architectures. Quantum CNNs (QCNNs) leverage quantum mechanical properties and hold potential to outperform classical approaches. However, their implementation on current noisy intermediate-scale quantum (NISQ) devices remains challenging due to hardware limitations. In our research, we address this challenge by introducing an encoding scheme that significantly reduces the input dimensionality. We demonstrate that a primitive QCNN architecture with 49 qubits is sufficient to directly process $28\times 28$ pixel MNIST images, eliminating the need for classical dimensionality reduction pre-processing. Additionally, we propose an automated framework based on expressibility, entanglement, and complexity characteristics to identify the building blocks of QCNNs, parameterized quantum circuits (PQCs). Our approach demonstrates advantages in accuracy and convergence speed with a similar parameter count compared to both hybrid QCNNs and classical CNNs. We validated our experiments on IBM's Heron r2 quantum processor, achieving $96.08\%$ classification accuracy, surpassing the $71.74\%$ benchmark of traditional approaches under identical training conditions. These results represent one of the first implementations of image classification on real quantum hardware and validate the potential of quantum computing in this area.
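The abstract does not spell out the encoding, but the qubit budget is suggestive: $28\times 28 = 784$ pixels onto 49 qubits is a 16-to-1 compression, i.e. one qubit per $4\times 4$ patch. As a purely hypothetical illustration (not the paper's actual scheme), one could map each patch to a single rotation angle:

```python
import numpy as np

def patch_angles(image, patch=4):
    """Compress an image to one rotation angle per patch.

    For a 28x28 image and 4x4 patches this yields 49 angles,
    one per qubit. Mapping a patch to its mean pixel value,
    scaled to [0, pi], is an assumption for illustration only."""
    h, w = image.shape
    angles = np.empty((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            block = image[i * patch:(i + 1) * patch,
                          j * patch:(j + 1) * patch]
            angles[i, j] = np.pi * block.mean()  # pixels in [0, 1]
    return angles.ravel()

# Synthetic 28x28 "image" with pixel values in [0, 1].
img = np.linspace(0, 1, 28 * 28).reshape(28, 28)
theta = patch_angles(img)
```

Whatever the actual mapping, the point of such schemes is that the qubit count scales with the number of patches rather than the number of pixels.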


Enhancing Circuit Trainability with Selective Gate Activation Strategy

Cho, Jeihee, Lee, Junyong, Justice, Daniel, Kim, Shiho

arXiv.org Artificial Intelligence

Quantum computing has shown promise in solving complex problems in domains such as quantum chemistry, optimization, and machine learning, leveraging Variational Quantum Algorithms (VQAs) such as Quantum Approximate Optimization Algorithms (QAOA) (Farhi, Goldstone, and Gutmann 2014; Pagano et al. 2020), Variational Quantum Eigensolvers (VQE) (Kandala et al. 2017; Tilly et al. 2022), and recently, quantum neural networks (QNNs) (Schuld and Killoran 2019; Killoran et al. 2019) as a hybrid quantum-classical framework in the Noisy Intermediate-Scale Quantum (NISQ) era. Techniques such as layerwise training (Skolik et al. 2021) and parameter initialization schemes based on symmetry considerations (Pesah et al. 2021) have been proposed to achieve this. Local cost functions, selective parameter training, and structured initialization methods have shown promise in mitigating trainability challenges without significantly compromising circuit expressibility. Moreover, techniques like symmetric pruning (Wang et al. 2023), which leverage circuit symmetries to reduce the effective parameter space, have ...


Context-aware Multimodal AI Reveals Hidden Pathways in Five Centuries of Art Evolution

Kim, Jin, Lee, Byunghwee, You, Taekho, Yun, Jinhyuk

arXiv.org Artificial Intelligence

The rise of multimodal generative AI is transforming the intersection of technology and art, offering deeper insights into large-scale artwork. Although its creative capabilities have been widely explored, its potential to represent artwork in latent spaces remains underexamined. We use cutting-edge generative AI, specifically Stable Diffusion, to analyze 500 years of Western paintings by extracting two types of latent information with the model: formal aspects (e.g., colors) and contextual aspects (e.g., subject). Our findings reveal that contextual information differentiates between artistic periods, styles, and individual artists more successfully than formal elements. Additionally, using contextual keywords extracted from paintings, we show how artistic expression evolves alongside societal changes. Our generative experiment, infusing prospective contexts into historical artworks, successfully reproduces the evolutionary trajectory of artworks, highlighting the significance of mutual interaction between society and art. This study demonstrates how multimodal AI expands traditional formal analysis by integrating temporal, cultural, and historical contexts.


Reviews: Improved Expressivity Through Dendritic Neural Networks

Neural Information Processing Systems

This paper presents D-Nets, an architecture loosely inspired by the dendrites of biological neurons. In a D-Net, each neuron receives input from the previous layer as the maxpool of linear combinations of disjoint random subsets of that layer's outputs. The authors show that this approach outperforms self-normalizing neural networks and other advanced approaches on the UCI collection of datasets (as well as outperforming simple non-convolutional approaches to MNIST and CIFAR). They provide an intuition that greater fan-in to non-linearities leads to a greater number of linear regions and thus, perhaps, greater expressibility. I am still quite surprised that such a simple method performs so well, but the experimental setup seems sound. For example, how does the optimal number of branches grow with the size of the layer?
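The D-Net neuron the review describes is straightforward to sketch: partition the previous layer's outputs into disjoint random subsets ("branches"), take a linear combination per branch, and max-pool over the branches. A minimal NumPy version (branch biases omitted for brevity):

```python
import numpy as np

def dnet_neuron(x, branch_weights, branch_idx):
    """One dendritic neuron: the max over branches, where each
    branch is a linear combination of a disjoint input subset."""
    return max(float(w @ x[idx])
               for w, idx in zip(branch_weights, branch_idx))

def make_branches(n_inputs, n_branches, rng):
    """Partition the input indices into disjoint random subsets
    and draw a weight vector for each branch."""
    perm = rng.permutation(n_inputs)
    idx = np.array_split(perm, n_branches)
    weights = [rng.standard_normal(len(i)) for i in idx]
    return weights, idx

rng = np.random.default_rng(0)
x = rng.standard_normal(12)
w, idx = make_branches(12, 3, rng)
y = dnet_neuron(x, w, idx)
```

The max over branches is the source of the extra linear regions the reviewer mentions: a neuron with k branches is piecewise linear with up to k pieces even before any subsequent non-linearity.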


Integrated Encoding and Quantization to Enhance Quanvolutional Neural Networks

Bosco, Daniele Lizzio, Portelli, Beatrice, Serra, Giuseppe

arXiv.org Artificial Intelligence

Image processing is one of the most promising applications for quantum machine learning (QML). Quanvolutional Neural Networks with non-trainable parameters are the preferred solution to run on current and near-future quantum devices. The typical input preprocessing pipeline for quanvolutional layers comprises four steps: optional binary quantization of the input, encoding classical data into quantum states, processing the data to obtain the final quantum states, and decoding the quantum states back to classical outputs. In this paper we propose two ways to enhance the efficiency of quanvolutional models. First, we propose a flexible data quantization approach with memoization, applicable to any encoding method. This allows us to increase the number of quantization levels to retain more information, or to lower them to reduce the number of circuit executions. Second, we introduce a new integrated encoding strategy, which combines the encoding and processing steps in a single circuit. This method allows great flexibility in several architectural parameters (e.g., number of qubits, filter size, and circuit depth), making them adjustable to quantum hardware requirements. We compare our proposed integrated model with a classical convolutional neural network and the well-known rotational encoding method on two different classification tasks. The results demonstrate that our proposed encoding exhibits comparable or superior performance to the other models while requiring fewer quantum resources.
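The quantization-with-memoization idea can be illustrated without any quantum backend: patches that quantize to the same key reuse a cached circuit result, so the number of quantization levels directly trades retained information against circuit executions. A sketch using a cheap stand-in for the quantum filter (a real layer would execute a quanvolutional circuit here):

```python
import numpy as np

def quantize(patch, levels):
    """Map each pixel in [0, 1] to one of `levels` discrete values."""
    q = np.minimum((np.asarray(patch) * levels).astype(int), levels - 1)
    return tuple(q.tolist())

def make_memoized_layer(quantum_filter, levels):
    """Wrap an (expensive) filter so patches that quantize to the
    same key reuse a cached result instead of re-running a circuit."""
    cache = {}
    def apply(patch):
        key = quantize(patch, levels)
        if key not in cache:
            cache[key] = quantum_filter(key)
        return cache[key]
    apply.cache = cache
    return apply

# Stand-in for a quanvolutional circuit execution (assumption):
# records each call so cache hits are visible.
calls = []
def fake_filter(key):
    calls.append(key)
    return sum(key) / len(key)

layer = make_memoized_layer(fake_filter, levels=2)
a = layer([0.1, 0.9, 0.2, 0.8])   # quantizes to (0, 1, 0, 1)
b = layer([0.3, 0.7, 0.4, 0.6])   # same key: cache hit, no new call
```

With only two levels, both patches collapse to the same key and the second lookup is free; raising `levels` shrinks the collision rate but increases the number of distinct circuit executions.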


On the Expressibility of the Reconstructional Color Refinement

Arvind, V., Köbler, Johannes, Verbitsky, Oleg

arXiv.org Artificial Intelligence

One of the most basic facts related to the famous Ulam reconstruction conjecture is that the connectedness of a graph can be determined by the deck of its vertex-deleted subgraphs, which are considered up to isomorphism. We strengthen this result by proving that connectedness can still be determined when the subgraphs in the deck are given up to equivalence under the color refinement isomorphism test. Consequently, this implies that connectedness is recognizable by Reconstruction Graph Neural Networks, a recently introduced GNN architecture inspired by the reconstruction conjecture (Cotta, Morris, Ribeiro 2021).
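Color refinement (the 1-dimensional Weisfeiler-Leman test used above) repeatedly replaces each vertex's color with the pair (old color, multiset of neighbours' colors) until the partition stabilizes. A compact implementation that returns the stable color histogram:

```python
from collections import Counter

def color_refinement(adj):
    """Run color refinement on a graph given as an adjacency dict
    and return the histogram of stable vertex colors."""
    colors = {v: 0 for v in adj}
    while True:
        # New signature: own color + sorted multiset of neighbour colors.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # Relabel signatures with small integers for a stable comparison.
        relabel = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new = {v: relabel[sigs[v]] for v in adj}
        if new == colors:
            return Counter(colors.values())
        colors = new

# Path on 3 vertices: the endpoints and the middle get distinct colors.
path = {0: [1], 1: [0, 2], 2: [1]}
hist = color_refinement(path)
```

Two graphs with different stable histograms are certifiably non-isomorphic; the equivalence used in the paper is exactly "same stable coloring", which is coarser than isomorphism.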


Graph Neural Networks for Parameterized Quantum Circuits Expressibility Estimation

Aktar, Shamminuj, Bärtschi, Andreas, Oyen, Diane, Eidenbenz, Stephan, Badawy, Abdel-Hameed A.

arXiv.org Artificial Intelligence

Parameterized quantum circuits (PQCs) are fundamental to quantum machine learning (QML), quantum optimization, and variational quantum algorithms (VQAs). The expressibility of PQCs is a measure that determines their capability to harness the full potential of the quantum state space. It is thus a crucial guidepost when selecting a particular PQC ansatz. However, the existing technique for expressibility computation through statistical estimation requires a large number of samples, which poses significant challenges due to time and computational resource constraints. This paper introduces a novel approach for expressibility estimation of PQCs using Graph Neural Networks (GNNs). We demonstrate the predictive power of our GNN model with a dataset consisting of 25,000 samples from the noiseless IBM QASM Simulator and 12,000 samples from three distinct noisy quantum backends. The model accurately estimates expressibility, with root mean square errors (RMSE) of 0.05 and 0.06 for the noiseless and noisy backends, respectively. We compare our model's predictions with reference circuits [Sim and others, QuTe'2019] and IBM Qiskit's hardware-efficient ansatz sets to further evaluate our model's performance. Our experimental evaluation in noiseless and noisy scenarios reveals a close alignment with ground truth expressibility values, highlighting the model's efficacy. Moreover, our model exhibits promising extrapolation capabilities, predicting expressibility values with low RMSE for circuits with qubit counts outside the training range, despite being trained solely on circuits of at most 5 qubits. This work thus provides a reliable means of efficiently evaluating the expressibility of diverse PQCs on noiseless simulators and hardware.
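The statistical estimator the GNN is trained to replace (Sim et al., 2019) samples fidelities between pairs of randomly parameterized circuit states and measures the KL divergence to the Haar fidelity distribution $P(F) = (N-1)(1-F)^{N-2}$, which is uniform for a single qubit ($N = 2$). A miniature single-qubit NumPy version showing why it is sample-hungry (the Ry-Rz "ansatz" here is an illustrative stand-in for a real many-qubit PQC):

```python
import numpy as np

rng = np.random.default_rng(42)

def state(params):
    """Minimal single-qubit 'ansatz': Ry(a) then Rz(b) on |0>."""
    a, b = params
    psi = np.array([np.cos(a / 2), np.sin(a / 2)], dtype=complex)
    return np.array([np.exp(-1j * b / 2), np.exp(1j * b / 2)]) * psi

def expressibility(n_samples=10000, n_bins=50):
    """KL divergence between the sampled fidelity distribution and
    the Haar distribution; lower means more expressible."""
    fids = np.empty(n_samples)
    for i in range(n_samples):
        p = rng.uniform(0, 2 * np.pi, size=(2, 2))
        fids[i] = np.abs(np.vdot(state(p[0]), state(p[1]))) ** 2
    hist, _ = np.histogram(fids, bins=n_bins, range=(0, 1))
    p_circ = hist / n_samples
    # Haar fidelity distribution for N = 2 is uniform over [0, 1].
    p_haar = np.full(n_bins, 1.0 / n_bins)
    mask = p_circ > 0
    return float(np.sum(p_circ[mask] * np.log(p_circ[mask] / p_haar[mask])))

expr = expressibility()
```

Even this toy version needs thousands of circuit-pair evaluations for a stable histogram; for deep many-qubit ansätze on hardware that cost is exactly what motivates learning a fast GNN surrogate.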