Interpreting Transformers for Jet Tagging
Wang, Aaron, Gandrakota, Abhijith, Ngadiuba, Jennifer, Sahu, Vivekanand, Bhatnagar, Priyansh, Khoda, Elham E, Duarte, Javier
Machine learning (ML) algorithms, particularly attention-based transformer models, have become indispensable for analyzing the vast data generated by particle physics experiments like ATLAS and CMS at the CERN LHC. Particle Transformer (ParT), a state-of-the-art model, leverages particle-level attention to improve jet-tagging tasks, which are critical for identifying the particles produced in proton collisions. This study focuses on interpreting ParT by analyzing attention heat maps and particle-pair correlations on the $\eta$-$\phi$ plane, revealing a binary attention pattern where each particle attends to at most one other particle. At the same time, we observe that ParT shows varying focus on important particles and subjets depending on the decay mode, indicating that the model learns traditional jet substructure observables. These insights enhance our understanding of the model's internal workings and learning process, offering potential avenues for improving the efficiency of transformer architectures in future high-energy physics applications.
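As a rough illustration of this kind of interpretation study, the sketch below plots a single attention head's particle-particle attention matrix and connects each particle to its dominant attention partner on the $\eta$-$\phi$ plane. The attention weights and particle coordinates are random placeholders standing in for values extracted from a trained model (for example, via PyTorch forward hooks); this is an assumed workflow, not the authors' analysis code.

```python
# Hypothetical sketch: visualize one attention head's particle-particle
# attention matrix and its strongest pairs on the eta-phi plane.
# `attn` would normally be extracted from a trained transformer; here it
# is a random, row-normalized placeholder.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 16                                      # number of particles in the jet
attn = rng.random((n, n))
attn /= attn.sum(axis=-1, keepdims=True)    # row-normalize like a softmax output
eta, phi = rng.normal(0, 0.4, n), rng.normal(0, 0.4, n)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Attention heat map: a "binary" pattern shows up as one bright cell per row.
ax1.imshow(attn, cmap="viridis")
ax1.set(xlabel="key particle", ylabel="query particle", title="attention head")

# Draw each particle and a line to the particle it attends to most strongly.
ax2.scatter(eta, phi, s=20, c="k")
for i in range(n):
    j = int(attn[i].argmax())               # dominant attention target of particle i
    ax2.plot([eta[i], eta[j]], [phi[i], phi[j]], "r-", alpha=float(attn[i, j]))
ax2.set(xlabel=r"$\eta$", ylabel=r"$\phi$", title="particle-pair correlations")

plt.tight_layout()
plt.show()
```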
Ultra-low latency recurrent neural network inference on FPGAs for physics applications with hls4ml
Khoda, Elham E, Rankin, Dylan, de Lima, Rafael Teixeira, Harris, Philip, Hauck, Scott, Hsu, Shih-Chieh, Kagan, Michael, Loncar, Vladimir, Paikara, Chaitanya, Rao, Richa, Summers, Sioni, Vernieri, Caterina, Wang, Aaron
Recurrent neural networks have been shown to be effective architectures for many tasks in high energy physics, and thus have been widely adopted. Their use in low-latency environments has, however, been limited as a result of the difficulties of implementing recurrent architectures on field-programmable gate arrays (FPGAs). In this paper we present an implementation of two types of recurrent neural network layers -- long short-term memory and gated recurrent unit -- within the hls4ml framework. We demonstrate that our implementation is capable of producing effective designs for both small and large models, and can be customized to meet specific design requirements for inference latencies and FPGA resources. We show the performance and synthesized designs for multiple neural networks, many of which are trained specifically for jet identification tasks at the CERN Large Hadron Collider.
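For readers unfamiliar with the hls4ml workflow, the sketch below shows how a small Keras LSTM model might be converted to an HLS design with the framework's Keras converter. The toy architecture, layer sizes, FPGA part number, and output directory are illustrative assumptions rather than the networks studied in the paper, and API options may differ between hls4ml versions.

```python
# Minimal sketch of converting a small recurrent model with hls4ml.
# The model, part number, and paths are placeholders for illustration only.
import numpy as np
import hls4ml
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Toy jet-tagging-style model: a sequence of particles -> class scores.
model = Sequential([
    LSTM(20, input_shape=(16, 6)),     # 16 particles, 6 features each
    Dense(5, activation="softmax"),    # 5 jet classes
])

# Derive a per-layer hls4ml configuration (precision, reuse factor) from the model.
config = hls4ml.utils.config_from_keras_model(model, granularity="name")

# Convert to an HLS project; latency/resource trade-offs are steered through
# the configuration (e.g. ReuseFactor) and the chosen FPGA part.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="my_lstm_hls",
    part="xcvu9p-flga2104-2-e",        # placeholder FPGA part
)

# Bit-accurate C simulation to check agreement with the Keras model.
hls_model.compile()
x = np.random.rand(1, 16, 6).astype(np.float32)
print(model.predict(x), hls_model.predict(x))
```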