
Collaborating Authors

 Goan, Hsi-Sheng


BiSHop: Bi-Directional Cellular Learning for Tabular Data with Generalized Sparse Modern Hopfield Model

arXiv.org Machine Learning

The field of deep learning architectures for tabular data has recently seen rapid advances [Arik and Pfister, 2021, Gorishniy et al., 2021, Huang et al., 2020, Somepalli et al., 2021]. The primary driving force behind this trend is the limitations of the currently dominant approach to tabular data: tree-based methods. While tree-based methods excel at tabular learning, they cannot be integrated with deep learning architectures. The pursuit of deep tabular learning is therefore not only a matter of improving performance but also crucial for bridging this gap. However, a recent tabular benchmark study [Grinsztajn et al., 2022] reveals that tree-based methods still surpass deep learning models, underscoring two main challenges for deep tabular learning, as highlighted by Grinsztajn et al. [2022, Sections 5.3 & 5.4]: (C1) Non-Rotationally Invariant Data Structure: the non-rotationally invariant structure of tabular data weakens the effectiveness of deep learning models whose learning procedures are rotationally invariant.
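To make (C1) concrete, here is a minimal sketch (illustrative only; the column names and the random rotation are assumptions, not taken from the paper) showing how an orthogonal rotation of a tabular feature matrix mixes semantically distinct columns. A rotationally invariant learning procedure treats the original and rotated data identically, so it cannot exploit the column-wise structure that tree-based methods rely on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tabular data: each column has its own meaning and scale
# (e.g., age in years, income in dollars, number of visits).
X = np.column_stack([
    rng.integers(18, 90, size=5),          # age
    rng.normal(50_000, 15_000, size=5),    # income
    rng.poisson(3, size=5),                # visits
]).astype(float)

# Random orthogonal rotation (QR decomposition of a Gaussian matrix).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
X_rot = X @ Q

# After rotation, every "column" is a mixture of age, income, and visits,
# so column-wise inductive biases (per-feature splits, per-feature
# embeddings) no longer correspond to meaningful quantities.
print(X[0])       # original, interpretable row
print(X_rot[0])   # rotated row with columns that lost their meaning
```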


Variational Quantum Reinforcement Learning via Evolutionary Optimization

arXiv.org Artificial Intelligence

Recent advances in classical reinforcement learning (RL) and quantum computation (QC) point to a promising direction: performing RL on a quantum computer. However, potential applications of quantum RL are limited by the number of qubits available on modern quantum devices. Here we present two frameworks for deep quantum RL tasks using gradient-free evolutionary optimization. First, we apply the amplitude encoding scheme to the Cart-Pole problem; second, we propose a hybrid framework in which the quantum RL agents are equipped with a hybrid tensor network-variational quantum circuit (TN-VQC) architecture to handle inputs whose dimension exceeds the number of qubits. This allows us to perform quantum RL on the MiniGrid environment with 147-dimensional inputs. We demonstrate the quantum advantage of parameter saving using amplitude encoding. The hybrid TN-VQC architecture provides a natural way to efficiently compress the input dimension, enabling further quantum RL applications on noisy intermediate-scale quantum devices.
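As an illustration of the amplitude-encoding idea, the sketch below (a minimal example assuming PennyLane as the simulator; the ansatz, layer count, and readout are illustrative choices, not the paper's exact circuit) loads the 4-dimensional Cart-Pole observation into the amplitudes of just 2 qubits and returns Pauli-Z expectation values that a gradient-free evolutionary optimizer could score as action values.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2                      # 2 qubits hold a 2**2 = 4-dimensional state
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def policy_circuit(observation, weights):
    # Amplitude encoding: the 4 Cart-Pole features become the amplitudes
    # of a normalized 2-qubit state.
    qml.AmplitudeEmbedding(observation, wires=range(n_qubits), normalize=True)
    # A generic layered variational ansatz (illustrative, not the paper's).
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # One expectation value per qubit; these can be mapped to action scores.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

n_layers = 2
shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape)

observation = np.array([0.02, -0.01, 0.03, 0.04])   # example Cart-Pole state
print(policy_circuit(observation, weights))          # only 2 * 2 * 3 = 12 parameters
```

The parameter saving comes from the encoding: the state dimension grows exponentially with the qubit count, so the input register, and with it the trainable circuit, stays logarithmic in the observation size.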


Variational Quantum Circuits and Deep Reinforcement Learning

arXiv.org Artificial Intelligence

Recently, machine learning has prevailed in many academic and industrial applications. At the same time, quantum computing, once seen as unrealizable, has been brought to market by several tech giants. However, these machines are not fault-tolerant and cannot execute very deep circuits. It is therefore urgent to design suitable algorithms and applications that can be implemented on such machines. In this work, we demonstrate a novel approach that applies variational quantum circuits to deep reinforcement learning. With the proposed method, we can implement well-known deep reinforcement learning techniques such as experience replay and target networks with variational quantum circuits. In this framework, with an appropriate information-encoding scheme, the possible quantum advantage is a reduction in the number of circuit parameters to $\mathrm{poly}(\log N)$, compared with $\mathrm{poly}(N)$ for a conventional neural network, where $N$ is the dimension of the input vectors. Such an approach can be deployed on near-term noisy intermediate-scale quantum machines.
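For a back-of-the-envelope view of the $\mathrm{poly}(\log N)$ versus $\mathrm{poly}(N)$ claim, the sketch below counts trainable parameters as the input dimension $N$ grows. It assumes a single classical dense layer and a layered variational ansatz with three rotation angles per qubit per layer; these are common but assumed choices, not the specific architectures from the paper.

```python
import math

def classical_params(n_input, hidden=64):
    # One dense layer: weights plus biases, O(N) in the input dimension.
    return n_input * hidden + hidden

def vqc_params(n_input, layers=4):
    # Amplitude encoding needs ceil(log2(N)) qubits; a layered ansatz with
    # three rotation angles per qubit per layer gives O(layers * log N) parameters.
    n_qubits = math.ceil(math.log2(n_input))
    return layers * n_qubits * 3

for n in (4, 147, 1024, 65536):
    print(f"N={n:6d}  classical~{classical_params(n):8d}  vqc~{vqc_params(n):4d}")
```

Even at $N = 65{,}536$, the variational circuit in this toy count needs only a few hundred parameters at most, while the classical layer scales linearly with $N$.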