Mourya, Sharan
Contextual Quantum Neural Networks for Stock Price Prediction
Mourya, Sharan, Leipold, Hannes, Adhikari, Bibhas
In this paper, we apply quantum machine learning (QML) to predict the stock prices of multiple assets using a contextual quantum neural network. Our approach captures recent trends to predict future stock price distributions, moving beyond traditional models that fit the entire price history and thereby enhancing adaptability and precision. Exploiting quantum superposition, we introduce a new training technique, the quantum batch gradient update (QBGU), which accelerates standard stochastic gradient descent (SGD) in quantum applications and improves convergence. Building on this, we propose a quantum multi-task learning (QMTL) architecture, the share-and-specify ansatz, which integrates task-specific operators controlled by quantum labels. This allows multiple assets to be trained simultaneously and efficiently on the same quantum circuit and represents a portfolio with only logarithmic overhead in the number of qubits. This architecture is the first of its kind in quantum finance, offering superior predictive power and computational efficiency for multi-asset stock price forecasting. Through extensive experiments on S\&P 500 data for Apple, Google, Microsoft, and Amazon stocks, we demonstrate that our approach not only outperforms quantum single-task learning (QSTL) models but also captures inter-asset correlations, leading to improved prediction accuracy. Our findings highlight the potential of QML in financial applications, paving the way for more advanced, resource-efficient quantum algorithms for stock price prediction and other complex financial modeling tasks.
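To make the share-and-specify idea concrete, the following is a minimal illustrative sketch in PennyLane (not the paper's circuit): a shared variational block acts on the data qubits, while asset-specific rotations are activated only when the label qubits encode the matching asset index in binary, which is where the logarithmic qubit overhead comes from. The gate choices, layer counts, and parameter shapes are assumptions made for the example.

import pennylane as qml
import numpy as np

N_DATA, N_LABEL = 3, 2                      # 2 label qubits index up to 4 assets
dev = qml.device("default.qubit", wires=N_DATA + N_LABEL)
data_wires = list(range(N_DATA))
label_wires = list(range(N_DATA, N_DATA + N_LABEL))

def label_bits(asset):
    # Binary encoding of the asset index onto the label qubits.
    return [int(b) for b in format(asset, f"0{N_LABEL}b")]

def shared_block(theta):
    # Parameters shared across all assets ("share").
    for w in data_wires:
        qml.RY(theta[w], wires=w)
    for w in range(N_DATA - 1):
        qml.CNOT(wires=[w, w + 1])

def specify_block(phi, asset):
    # Asset-specific rotations, active only when the label qubits match the asset ("specify").
    bits = label_bits(asset)
    for w in data_wires:
        qml.ctrl(qml.RY, control=label_wires, control_values=bits)(phi[asset, w], wires=w)

@qml.qnode(dev)
def circuit(x, theta, phi, asset):
    qml.BasisState(np.array(label_bits(asset)), wires=label_wires)
    for w in data_wires:                    # angle-encode recent price features
        qml.RY(x[w], wires=w)
    shared_block(theta)
    specify_block(phi, asset)
    return qml.expval(qml.PauliZ(0))        # read out a price-related statistic

A QBGU-style update would then average the gradients of such circuits over a mini-batch spanning several assets before updating the shared parameters; the exact batching rule used in the paper is not reproduced here.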
Spectral Temporal Graph Neural Network for massive MIMO CSI Prediction
Mourya, Sharan, Reddy, Pavan, Amuru, SaiDhiraj, Kuchi, Kiran Kumar
In the realm of 5G communication systems, the accuracy of Channel State Information (CSI) prediction is vital for optimizing performance. This letter introduces the Spectral-Temporal Graph Neural Network (STEM GNN), which fuses the spatial relationships and temporal dynamics of the wireless channel using the Graph Fourier Transform. We compare the STEM GNN approach with conventional Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) models for CSI prediction. Our findings reveal a significant enhancement in overall communication system performance with STEM GNNs. For instance, in one scenario, the STEM GNN achieves a sum rate of 5.009 bps/Hz, which is $11.9\%$ higher than that of the LSTM and $35\%$ higher than that of the RNN. The spectral-temporal analysis capabilities of STEM GNNs capture intricate patterns often overlooked by traditional models, offering improvements in beamforming, interference mitigation, and ultra-reliable low-latency communication (URLLC).
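As a point of reference for the spectral half of the model, the sketch below shows the Graph Fourier Transform that a spectral-temporal GNN builds on: node features (for example, per-antenna CSI) are projected onto the eigenvectors of the graph Laplacian. The ring-shaped antenna graph and random features are assumptions for illustration only; the paper's actual STEM GNN layers are not reproduced here.

import numpy as np

def graph_fourier_transform(adj: np.ndarray, x: np.ndarray):
    """adj: (N, N) antenna-correlation graph, x: (N, F) node features."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                          # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)   # Laplacian is symmetric
    x_hat = eigvecs.T @ x                    # GFT: project onto graph frequencies
    return eigvals, x_hat

# Toy example: 4 antennas connected in a ring, 2 features (e.g., Re/Im of a CSI tap)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
x = np.random.randn(4, 2)
eigvals, x_hat = graph_fourier_transform(adj, x)

Low graph frequencies (small eigenvalues) capture components that vary smoothly across antennas, while high frequencies capture rapid spatial variation; a spectral-temporal model can then track how these components evolve over time.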
Graph Neural Networks-Based User Pairing in Wireless Communication Systems
Mourya, Sharan, Reddy, Pavan, Amuru, SaiDhiraj, Kuchi, Kiran Kumar
Recently, deep neural networks have emerged as a way to solve NP-hard wireless resource allocation problems in real time. However, multi-layer perceptron (MLP) and convolutional neural network (CNN) architectures, inherited from image processing tasks, are not tailored to wireless network problems and become harder to train and generalize as the network size grows. User pairing is one such essential NP-hard optimization problem in wireless communication systems: it entails selecting users to be scheduled together while minimizing interference and maximizing throughput. In this paper, we propose an unsupervised graph neural network (GNN) approach to efficiently solve the user pairing problem. Our method builds on the Erdős Goes Neural pipeline and significantly outperforms other scheduling methods such as k-means and semi-orthogonal user scheduling (SUS). At 20 dB SNR, the proposed approach achieves a 49% higher sum rate than k-means and a 95% higher sum rate than SUS while consuming minimal time and resources. We also examine the scalability of the proposed method: the model handles dynamic changes in network size without a substantial drop in performance, and it does so without being explicitly trained for larger or smaller networks, a flexibility that CNNs and MLPs cannot offer.
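For intuition, here is a hedged sketch of an unsupervised GNN scheduler in the spirit of the Erdős Goes Neural probabilistic-penalty method: a small graph convolutional network outputs a selection probability per user, and the training loss trades an expected-rate term against an expected pairwise-interference penalty. The graph construction, the rate proxy, and the penalty weight beta are assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class PairingGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, 1)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return torch.sigmoid(self.conv2(h, edge_index)).squeeze(-1)  # P(select user)

def unsupervised_loss(p, rates, edge_index, interf, beta=1.0):
    # Expected utility: per-user rates weighted by selection probability,
    # minus an expected pairwise-interference penalty over interfering edges.
    i, j = edge_index
    expected_rate = (p * rates).sum()
    expected_interf = (p[i] * p[j] * interf).sum()
    return -expected_rate + beta * expected_interf

# Toy usage with random channel features (no real CSI here).
x = torch.randn(8, 16)                       # 8 users, 16-dim channel features
edge_index = torch.randint(0, 8, (2, 20))    # assumed interference graph
rates = torch.rand(8)
interf = torch.rand(edge_index.size(1))
model = PairingGNN(16)
p = model(x, edge_index)
loss = unsupervised_loss(p, rates, edge_index, interf)
loss.backward()

Because the message passing is defined per node and per edge, the same trained weights can be applied to graphs of different sizes, which is the property behind the scalability advantage over CNNs and MLPs noted above.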