Hsu, Shih-Chieh
Building Machine Learning Challenges for Anomaly Detection in Science
Campolongo, Elizabeth G., Chou, Yuan-Tang, Govorkova, Ekaterina, Bhimji, Wahid, Chao, Wei-Lun, Harris, Chris, Hsu, Shih-Chieh, Lapp, Hilmar, Neubauer, Mark S., Namayanja, Josephine, Subramanian, Aneesh, Harris, Philip, Anand, Advaith, Carlyn, David E., Ghosh, Subhankar, Lawrence, Christopher, Moreno, Eric, Raikman, Ryan, Wu, Jiaman, Zhang, Ziheng, Adhi, Bayu, Gharehtoragh, Mohammad Ahmadi, Monsalve, Saúl Alonso, Babicz, Marta, Baig, Furqan, Banerji, Namrata, Bardon, William, Barna, Tyler, Berger-Wolf, Tanya, Dieng, Adji Bousso, Brachman, Micah, Buat, Quentin, Hui, David C. Y., Cao, Phuong, Cerino, Franco, Chang, Yi-Chun, Chaulagain, Shivaji, Chen, An-Kai, Chen, Deming, Chen, Eric, Chou, Chia-Jui, Ciou, Zih-Chen, Cochran-Branson, Miles, Choi, Artur Cordeiro Oudot, Coughlin, Michael, Cremonesi, Matteo, Dadarlat, Maria, Darch, Peter, Desai, Malina, Diaz, Daniel, Dillmann, Steven, Duarte, Javier, Duporge, Isla, Ekka, Urbas, Heravi, Saba Entezari, Fang, Hao, Flynn, Rian, Fox, Geoffrey, Freed, Emily, Gao, Hang, Gao, Jing, Gonski, Julia, Graham, Matthew, Hashemi, Abolfazl, Hauck, Scott, Hazelden, James, Peterson, Joshua Henry, Hoang, Duc, Hu, Wei, Huennefeld, Mirco, Hyde, David, Janeja, Vandana, Jaroenchai, Nattapon, Jia, Haoyi, Kang, Yunfan, Kholiavchenko, Maksim, Khoda, Elham E., Kim, Sangin, Kumar, Aditya, Lai, Bo-Cheng, Le, Trung, Lee, Chi-Wei, Lee, JangHyeon, Lee, Shaocheng, van der Lee, Suzan, Lewis, Charles, Li, Haitong, Li, Haoyang, Liao, Henry, Liu, Mia, Liu, Xiaolin, Liu, Xiulong, Loncar, Vladimir, Lyu, Fangzheng, Makarov, Ilya, Mao, Abhishikth Mallampalli Chen-Yu, Michels, Alexander, Migala, Alexander, Mokhtar, Farouk, Morlighem, Mathieu, Namgung, Min, Novak, Andrzej, Novick, Andrew, Orsborn, Amy, Padmanabhan, Anand, Pan, Jia-Cheng, Pandya, Sneh, Pei, Zhiyuan, Peixoto, Ana, Percivall, George, Leung, Alex Po, Purushotham, Sanjay, Que, Zhiqiang, Quinnan, Melissa, Ranjan, Arghya, Rankin, Dylan, Reissel, Christina, Riedel, Benedikt, Rubenstein, Dan, Sasli, Argyro, Shlizerman, Eli, Singh, Arushi, Singh, Kim, Sokol, Eric R., Sorensen, Arturo, Su, Yu, Taheri, Mitra, Thakkar, Vaibhav, Thomas, Ann Mariam, Toberer, Eric, Tsai, Chenghan, Vandewalle, Rebecca, Verma, Arjun, Venterea, Ricco C., Wang, He, Wang, Jianwu, Wang, Sam, Wang, Shaowen, Watts, Gordon, Weitz, Jason, Wildridge, Andrew, Williams, Rebecca, Wolf, Scott, Xu, Yue, Yan, Jianqi, Yu, Jai, Zhang, Yulei, Zhao, Haoran, Zhao, Ying, Zhong, Yibo
Scientific discoveries are often made by finding a pattern or object that was not predicted by the known rules of science. Oftentimes, these anomalous events or objects that do not conform to the norms are an indication that the rules of science governing the data are incomplete, and something new needs to be present to explain these unexpected outliers. The challenge of finding anomalies can be confounding because it requires codifying a complete knowledge of the known scientific behaviors and then projecting these known behaviors onto the data to look for deviations. When utilizing machine learning, this presents a particular challenge: we require that the model not only understand the scientific data perfectly but also recognize when the data are inconsistent and outside the scope of its trained behavior. In this paper, we present three datasets aimed at developing machine learning-based anomaly detection for disparate scientific domains covering astrophysics, genomics, and polar science. We present the different datasets along with a scheme to make machine learning challenges around the three datasets findable, accessible, interoperable, and reusable (FAIR). Furthermore, we present an approach that generalizes to future machine learning challenges, enabling the possibility of larger, more compute-intensive challenges that can ultimately lead to scientific discovery.
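A minimal sketch of the general idea described here (train on "known" behavior, then flag deviations), not the challenge's actual baselines or datasets; the estimator, synthetic data, and scores below are illustrative assumptions:

```python
# Illustrative only: a generic unsupervised anomaly-detection workflow.
# The estimator, synthetic data, and scoring are assumptions, not the
# challenge's actual pipelines or datasets.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Known science": in-distribution background events.
background = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))

# A handful of out-of-distribution events that the background model does not explain.
anomalies = rng.normal(loc=4.0, scale=0.5, size=(20, 8))

# Fit only on the background so the model codifies the "known" behavior...
detector = IsolationForest(n_estimators=200, random_state=0).fit(background)

# ...then score unseen data; lower scores indicate stronger outliers.
test = np.vstack([background[:100], anomalies])
scores = detector.score_samples(test)
print("median background score:", np.median(scores[:100]))
print("median anomaly score:   ", np.median(scores[100:]))
```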
FAIR Universe HiggsML Uncertainty Challenge Competition
Bhimji, Wahid, Calafiura, Paolo, Chakkappai, Ragansu, Chang, Po-Wen, Chou, Yuan-Tang, Diefenbacher, Sascha, Dudley, Jordan, Farrell, Steven, Ghosh, Aishik, Guyon, Isabelle, Harris, Chris, Hsu, Shih-Chieh, Khoda, Elham E, Lyscar, Rémy, Michon, Alexandre, Nachman, Benjamin, Nugent, Peter, Reymond, Mathis, Rousseau, David, Sluijter, Benjamin, Thorne, Benjamin, Ullah, Ihsan, Zhang, Yulei
The FAIR Universe -- HiggsML Uncertainty Challenge focuses on measuring the physics properties of elementary particles with imperfect simulators, where differences in modelling give rise to systematic errors. Additionally, the challenge leverages a large-compute-scale AI platform for sharing datasets, training models, and hosting machine learning competitions. Our challenge brings together the physics and machine learning communities to advance our understanding and methodologies in handling systematic (epistemic) uncertainties within AI techniques.
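A toy illustration of the core problem the challenge addresses, measuring a parameter of interest while accounting for a nuisance (systematic) parameter; the counting model and all numbers below are assumptions, not the challenge's Higgs dataset or metric:

```python
# Toy profile-likelihood fit: a Poisson counting experiment with a Gaussian
# constraint on an uncertain background. Purely illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, norm

n_obs = 130          # observed events (toy)
s_nominal = 30.0     # expected signal yield at mu = 1
b_nominal = 100.0    # nominal background estimate
sigma_b = 10.0       # systematic uncertainty on the background

def nll(params):
    mu, b = params
    expected = mu * s_nominal + b
    # Poisson term for the measurement, Gaussian constraint for the systematic.
    return -poisson.logpmf(n_obs, expected) - norm.logpdf(b, b_nominal, sigma_b)

fit = minimize(nll, x0=[1.0, b_nominal], method="Nelder-Mead")
mu_hat, b_hat = fit.x
print(f"best-fit signal strength mu = {mu_hat:.2f}, profiled background = {b_hat:.1f}")
```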
CaloChallenge 2022: A Community Challenge for Fast Calorimeter Simulation
Krause, Claudius, Giannelli, Michele Faucci, Kasieczka, Gregor, Nachman, Benjamin, Salamani, Dalila, Shih, David, Zaborowska, Anna, Amram, Oz, Borras, Kerstin, Buckley, Matthew R., Buhmann, Erik, Buss, Thorsten, Cardoso, Renato Paulo Da Costa, Caterini, Anthony L., Chernyavskaya, Nadezda, Corchia, Federico A. G., Cresswell, Jesse C., Diefenbacher, Sascha, Dreyer, Etienne, Ekambaram, Vijay, Eren, Engin, Ernst, Florian, Favaro, Luigi, Franchini, Matteo, Gaede, Frank, Gross, Eilam, Hsu, Shih-Chieh, Jaruskova, Kristina, Käch, Benno, Kalagnanam, Jayant, Kansal, Raghav, Kim, Taewoo, Kobylianskii, Dmitrii, Korol, Anatolii, Korcari, William, Krücker, Dirk, Krüger, Katja, Letizia, Marco, Li, Shu, Liu, Qibin, Liu, Xiulong, Loaiza-Ganem, Gabriel, Madula, Thandikire, McKeown, Peter, Melzer-Pellmann, Isabell-A., Mikuni, Vinicius, Nguyen, Nam, Ore, Ayodele, Schweitzer, Sofia Palacios, Pang, Ian, Pedro, Kevin, Plehn, Tilman, Pokorski, Witold, Qu, Huilin, Raikwar, Piyush, Raine, John A., Reyes-Gonzalez, Humberto, Rinaldi, Lorenzo, Ross, Brendan Leigh, Scham, Moritz A. W., Schnake, Simon, Shimmin, Chase, Shlizerman, Eli, Soybelman, Nathalie, Srivatsa, Mudhakar, Tsolaki, Kalliopi, Vallecorsa, Sofia, Yeo, Kyongmin, Zhang, Rui
We present the results of the "Fast Calorimeter Simulation Challenge 2022" -- the CaloChallenge. We study state-of-the-art generative models on four calorimeter shower datasets of increasing dimensionality, ranging from a few hundred voxels to a few tens of thousands of voxels. The 31 individual submissions span a wide range of current popular generative architectures, including Variational AutoEncoders (VAEs), Generative Adversarial Networks (GANs), Normalizing Flows, Diffusion models, and models based on Conditional Flow Matching. We compare all submissions in terms of the quality of generated calorimeter showers, as well as shower generation time and model size. To assess quality, we use a broad range of metrics, including differences in 1-dimensional histograms of observables, KPD/FPD scores, AUCs of binary classifiers, and the log-posterior of a multiclass classifier. The results of the CaloChallenge provide the most complete and comprehensive survey of cutting-edge approaches to calorimeter fast simulation to date. In addition, our work provides a uniquely detailed perspective on the important problem of how to evaluate generative models. As such, the results presented here should be applicable to other domains that use generative AI and require fast and faithful generation of samples in a large phase space.
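A sketch of one evaluation metric of the kind named above, the AUC of a binary classifier trained to separate reference showers from generated ones (an AUC near 0.5 means the generator is hard to distinguish); the data and classifier here are placeholders, not the CaloChallenge datasets or scoring code:

```python
# Classifier-based evaluation of a generative model (toy stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=(4000, 50))    # stand-in reference showers
generated = rng.normal(0.05, 1.1, size=(4000, 50))   # stand-in generated showers

X = np.vstack([reference, generated])
y = np.concatenate([np.zeros(len(reference)), np.ones(len(generated))])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"classifier AUC = {auc:.3f}  (closer to 0.5 is better for the generator)")
```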
Calo-VQ: Vector-Quantized Two-Stage Generative Model in Calorimeter Simulation
Liu, Qibin, Shimmin, Chase, Liu, Xiulong, Shlizerman, Eli, Li, Shu, Hsu, Shih-Chieh
We introduce a novel machine learning method developed for the fast simulation of calorimeter detector response, adapting the vector-quantized variational autoencoder (VQ-VAE). Our model adopts a two-stage generation strategy: initially compressing geometry-aware calorimeter data into a discrete latent space, followed by the application of a sequence model to learn and generate the latent tokens. Extensive experimentation on the CaloChallenge dataset underscores the efficiency of our approach, showcasing a remarkable improvement in generation speed, by a factor of 2000 compared with conventional methods. Remarkably, our model achieves the generation of calorimeter showers within milliseconds. Furthermore, comprehensive quantitative evaluations across various metrics are performed to validate the physics performance of the generated showers.
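A minimal sketch of the vector-quantization step at the heart of any VQ-VAE: each encoder output is snapped to its nearest codebook entry, producing the discrete latent tokens that a second-stage sequence model can learn. The shapes and the random codebook below are illustrative, not the Calo-VQ configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # 512 learnable code vectors of dim 64 (toy)
encoded = rng.normal(size=(100, 64))    # encoder outputs for one shower (toy)

# Nearest-neighbour lookup in the codebook (squared Euclidean distance).
d = ((encoded[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
tokens = d.argmin(axis=1)               # discrete latent tokens
quantized = codebook[tokens]            # quantized latents passed to the decoder

print(tokens[:10])                      # token ids a sequence model would generate
```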
FPGA Deployment of LFADS for Real-time Neuroscience Experiments
Liu, Xiaohan, Chen, ChiJui, Huang, YanLun, Yang, LingChi, Khoda, Elham E, Chen, Yihui, Hauck, Scott, Hsu, Shih-Chieh, Lai, Bo-Cheng
Large-scale recordings of neural activity are providing new opportunities to study neural population dynamics. A powerful method for analyzing such high-dimensional measurements is to deploy an algorithm that learns the low-dimensional latent dynamics. LFADS (Latent Factor Analysis via Dynamical Systems) is a deep learning method for inferring latent dynamics from high-dimensional neural spiking data recorded simultaneously in single trials. This method has shown remarkable performance in modeling complex brain signals with an average inference latency of milliseconds. As our capacity to record from many neurons simultaneously grows exponentially, it is becoming crucial to deploy these algorithms with low-latency inference. To improve the real-time processing ability of LFADS, we introduce an efficient implementation of the LFADS models on Field-Programmable Gate Arrays (FPGAs). Our implementation shows an inference latency of 41.97 $\mu$s for processing the data in a single trial on a Xilinx U55C.
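A highly simplified sketch of the LFADS idea: an RNN encoder summarizes a spike train into an initial condition, a generator RNN unrolls the latent dynamics, and low-dimensional factors parameterize Poisson firing rates. The dimensions and the deterministic structure below are assumptions; the real LFADS model is variational and considerably richer:

```python
import torch
import torch.nn as nn

class TinyLFADS(nn.Module):
    def __init__(self, n_neurons=50, enc_dim=64, gen_dim=64, n_factors=8):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, enc_dim, batch_first=True)
        self.to_g0 = nn.Linear(enc_dim, gen_dim)
        self.generator = nn.GRU(1, gen_dim, batch_first=True)    # input-free generator
        self.to_factors = nn.Linear(gen_dim, n_factors)
        self.to_rates = nn.Linear(n_factors, n_neurons)

    def forward(self, spikes):                     # spikes: (batch, time, neurons)
        _, h = self.encoder(spikes)
        g0 = torch.tanh(self.to_g0(h[-1]))         # initial condition of the dynamics
        steps = spikes.new_zeros(spikes.size(0), spikes.size(1), 1)
        g, _ = self.generator(steps, g0.unsqueeze(0))
        factors = self.to_factors(g)               # low-dimensional latent factors
        rates = torch.exp(self.to_rates(factors))  # Poisson firing rates
        return factors, rates

model = TinyLFADS()
factors, rates = model(torch.randn(2, 100, 50))
print(factors.shape, rates.shape)                  # (2, 100, 8) (2, 100, 50)
```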
Ultra Fast Transformers on FPGAs for Particle Physics Experiments
Jiang, Zhixing, Yin, Dennis, Khoda, Elham E, Loncar, Vladimir, Govorkova, Ekaterina, Moreno, Eric, Harris, Philip, Hauck, Scott, Hsu, Shih-Chieh
This work introduces a highly efficient implementation of the transformer architecture on a Field-Programmable Gate Array (FPGA) by using the \texttt{hls4ml} tool. Given the demonstrated effectiveness of transformer models in addressing a wide range of problems, their application in experimental triggers within particle physics becomes a subject of significant interest. In this work, we have implemented critical components of a transformer model, such as multi-head attention and softmax layers. To evaluate the effectiveness of our implementation, we have focused on a particle physics jet flavor tagging problem, employing a public dataset. We recorded latency under 2 $\mu$s on the Xilinx UltraScale+ FPGA, which is compatible with hardware trigger requirements at the CERN Large Hadron Collider experiments.
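For reference, a floating-point NumPy sketch of the two components named above, softmax and multi-head scaled-dot-product attention. An FPGA implementation via hls4ml would replace these with fixed-point, pipelined logic, but the arithmetic being approximated is the same; the toy sizes and random weights are assumptions, not the deployed model:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)       # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    n_tokens, d_model = x.shape
    d_head = d_model // n_heads
    q = (x @ Wq).reshape(n_tokens, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ Wk).reshape(n_tokens, n_heads, d_head).transpose(1, 0, 2)
    v = (x @ Wv).reshape(n_tokens, n_heads, d_head).transpose(1, 0, 2)
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))
    out = (scores @ v).transpose(1, 0, 2).reshape(n_tokens, d_model)
    return out @ Wo

rng = np.random.default_rng(0)
d_model, n_heads = 32, 4                           # toy sizes
W = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4)]
y = multi_head_attention(rng.normal(size=(16, d_model)), *W, n_heads=n_heads)
print(y.shape)                                     # (16, 32)
```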
Reconstruction of Unstable Heavy Particles Using Deep Symmetry-Preserving Attention Networks
Fenton, Michael James, Shmakov, Alexander, Okawa, Hideki, Li, Yuji, Hsiao, Ko-Yang, Hsu, Shih-Chieh, Whiteson, Daniel, Baldi, Pierre
Reconstructing unstable heavy particles requires sophisticated techniques to sift through the large number of possible permutations for assignment of detector objects to the underlying partons. An approach based on a generalized attention mechanism, symmetry preserving attention networks (Spa-Net), has been previously applied to top quark pair decays at the Large Hadron Collider which produce only hadronic jets. Here we extend the Spa-Net architecture to consider multiple input object types, such as leptons, as well as global event features, such as the missing transverse momentum. In addition, we provide regression and classification outputs to supplement the parton assignment. We explore the performance of the extended capability of Spa-Net in the context of semi-leptonic decays of top quark pairs as well as top quark pairs produced in association with a Higgs boson. We find significant improvements in the power of three representative studies: a search for ttH, a measurement of the top quark mass, and a search for a heavy Z' decaying to top quark pairs. We present ablation studies to provide insight on what the network has learned in each case.
Low Latency Edge Classification GNN for Particle Trajectory Tracking on FPGAs
Huang, Shi-Yu, Yang, Yun-Chen, Su, Yu-Ru, Lai, Bo-Cheng, Duarte, Javier, Hauck, Scott, Hsu, Shih-Chieh, Hu, Jin-Xuan, Neubauer, Mark S.
In-time particle trajectory reconstruction in the Large Hadron Collider is challenging due to the high collision rate and numerous particle hits. Using Graph Neural Networks (GNNs) on FPGAs has enabled superior accuracy with flexible trajectory classification. However, existing GNN architectures have inefficient resource usage and insufficient parallelism for edge classification. This paper introduces a resource-efficient GNN architecture on FPGAs for low-latency particle tracking. The modular architecture facilitates design scalability to support large graphs. Leveraging the geometric properties of hit detectors further reduces graph complexity and resource usage. Our results on a Xilinx UltraScale+ VU9P demonstrate 1625x and 1574x performance improvements over CPU and GPU, respectively.
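A minimal, generic edge-classification GNN for hit graphs, included to illustrate the task: node features are updated by aggregating neighbour messages, and an edge MLP scores whether an edge connects hits from the same track. The layer sizes and the single message-passing step are assumptions and do not reproduce the paper's FPGA design:

```python
import torch
import torch.nn as nn

class EdgeClassifierGNN(nn.Module):
    def __init__(self, node_dim=3, hidden=32):
        super().__init__()
        self.msg_mlp = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + hidden, hidden), nn.ReLU())
        self.edge_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

    def forward(self, x, edge_index):   # x: (n_nodes, node_dim); edge_index: (2, n_edges)
        src, dst = edge_index
        msgs = self.msg_mlp(torch.cat([x[src], x[dst]], dim=-1))
        agg = torch.zeros(x.size(0), msgs.size(-1)).index_add_(0, dst, msgs)
        h = self.node_mlp(torch.cat([x, agg], dim=-1))
        return torch.sigmoid(self.edge_mlp(torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)

model = EdgeClassifierGNN()
hits = torch.randn(100, 3)                # toy (r, phi, z)-like hit coordinates
edges = torch.randint(0, 100, (2, 300))   # toy candidate edges
print(model(hits, edges).shape)           # per-edge probabilities, shape (300,)
```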
SPANet: Generalized Permutationless Set Assignment for Particle Physics using Symmetry Preserving Attention
Shmakov, Alexander, Fenton, Michael James, Ho, Ta-Wei, Hsu, Shih-Chieh, Whiteson, Daniel, Baldi, Pierre
The creation of unstable heavy particles at the Large Hadron Collider is the most direct way to address some of the deepest open questions in physics. Collisions typically produce variable-size sets of observed particles with inherent ambiguities that complicate the assignment of observed particles to the decay products of the heavy particles. Current strategies for tackling these challenges in the physics community ignore the physical symmetries of the decay products, consider all possible assignment permutations, and do not scale to complex configurations. Attention-based deep learning methods for sequence modelling have achieved state-of-the-art performance in natural language processing, but they lack built-in mechanisms to deal with the unique symmetries found in physical set-assignment problems. We introduce a novel method for constructing symmetry-preserving attention networks which reflect the problem's natural invariances to efficiently find assignments without evaluating all permutations. This general approach is applicable to arbitrarily complex configurations and significantly outperforms current methods, improving reconstruction efficiency between 19\% and 35\% on typical benchmark problems while decreasing inference time by two to five orders of magnitude on the most complex events, making many important and previously intractable cases tractable. A full code repository containing a general library, the specific configuration used, and a complete dataset release is available at https://github.com/Alexanders101/SPANet
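A toy illustration of the symmetry idea behind this approach: the joint assignment score for two interchangeable decay products is explicitly symmetrized, so the network never has to learn that invariance from data. The embeddings and bilinear scoring form below are random placeholders, not the trained SPA-Net weights or its attention mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
n_jets, d = 8, 16
jets = rng.normal(size=(n_jets, d))    # per-jet embeddings (toy)
W = rng.normal(size=(d, d))            # bilinear scoring form (toy)

raw = jets @ W @ jets.T                # raw pairwise assignment scores
sym = 0.5 * (raw + raw.T)              # enforce score(i, j) == score(j, i)
np.fill_diagonal(sym, -np.inf)         # a jet cannot be assigned to both slots

i, j = np.unravel_index(np.argmax(sym), sym.shape)
print(f"assign jets {i} and {j} to the two interchangeable partons")
```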
Permutationless Many-Jet Event Reconstruction with Symmetry Preserving Attention Networks
Fenton, Michael James, Shmakov, Alexander, Ho, Ta-Wei, Hsu, Shih-Chieh, Whiteson, Daniel, Baldi, Pierre
Top quarks, produced in large numbers at the Large Hadron Collider, have a complex detector signature and require special reconstruction techniques. The most common decay mode, the "all-jet" channel, results in a 6-jet final state which is particularly difficult to reconstruct in $pp$ collisions due to the large number of permutations possible. We present a novel approach to this class of problem, based on neural networks using a generalized attention mechanism, that we call Symmetry Preserving Attention Networks (SPA-Net). We train one such network to identify the decay products of each top quark unambiguously and without combinatorial explosion as an example of the power of this technique. This approach significantly outperforms existing state-of-the-art methods, correctly assigning all jets in $93.0\%$ of $6$-jet, $87.8\%$ of $7$-jet, and $82.6\%$ of $\geq 8$-jet events respectively.