Quantum Attention for Vision Transformers in High Energy Physics
Tesi, Alessandro, Dahale, Gopal Ramesh, Gleyzer, Sergei, Kong, Kyoungchul, Magorsch, Tom, Matchev, Konstantin T., Matcheva, Katia
arXiv.org Artificial Intelligence
The High Luminosity Large Hadron Collider (HL-LHC) [1], which CERN plans to launch at the end of this decade, is expected to generate an unprecedented volume of data, necessitating advanced computational frameworks and strategies to handle, process, and analyze this immense dataset efficiently. Classical computing resources, while effective, face significant limitations in scaling to the data volumes and computational demands of such high-dimensional tasks. To address this challenge, quantum machine learning (QML) [2, 3] has emerged as a promising approach. Quantum vision transformers (QViTs) [4, 5, 6, 7] have recently been proposed as hybrid architectures that integrate quantum circuits within classical vision transformer (ViT) [8] frameworks to reduce time complexity and improve performance in machine learning tasks involving high-dimensional data. Traditional ViTs employ self-attention mechanisms [9] and multi-layer perceptrons (MLPs) [10] to learn from image data, an approach that has shown promising results in computer vision tasks across various domains.
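The classical self-attention mechanism that ViTs apply to image patches, and that QViTs replace in part with quantum circuits, can be sketched as scaled dot-product attention over patch embeddings. This is an illustrative NumPy sketch, not the authors' implementation; the dimensions and weight initialization are assumptions chosen for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over patch embeddings.

    X: (n_patches, d_model) matrix of flattened image-patch embeddings.
    Wq, Wk, Wv: (d_model, d_model) learned projection matrices.
    In a QViT, these projections (or the attention weights themselves)
    may be computed by parameterized quantum circuits instead.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_patches, n_patches)
    return softmax(scores) @ V           # attention-weighted values

# Toy example: 4 patches with 8-dimensional embeddings.
rng = np.random.default_rng(0)
n_patches, d_model = 4, 8
X = rng.normal(size=(n_patches, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one attended vector per patch
```

The quadratic `(n_patches, n_patches)` score matrix is the cost that motivates seeking reduced time complexity in quantum variants.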
Nov-20-2024