Wu, Wenjing
Brain-inspired sparse training enables Transformers and LLMs to perform as fully connected
Zhang, Yingtao, Zhao, Jialin, Wu, Wenjing, Liao, Ziheng, Michieli, Umberto, Cannistraci, Carlo Vittorio
This study extends current knowledge on applying brain-inspired network science principles to the training of artificial neural networks (ANNs) with sparse connectivity. Dynamic sparse training (DST) can reduce the computational demands of ANNs, but it struggles to maintain peak performance at high sparsity levels. Cannistraci-Hebb training (CHT) is a brain-inspired method for growing connectivity in DST. CHT leverages gradient-free, topology-driven link regrowth, which has shown an ultra-sparse (1% connectivity or lower) advantage over fully connected networks across various tasks. Yet, CHT suffers from two main drawbacks: (i) its time complexity is O(Nd^3), where N is the number of nodes and d the node degree, so it can be applied only to ultra-sparse networks; (ii) it rigidly selects the top link prediction scores, which is inappropriate for the early training epochs, when the network still presents unreliable connections. We propose a GPU-friendly approximation of the CH link predictor, which reduces the computational complexity to O(N^3), enabling a fast implementation of CHT in large-scale models. We also introduce the Cannistraci-Hebb training soft rule (CHTs), which adopts a sampling strategy for both link removal and regrowth, balancing exploration and exploitation of the network topology. To further improve performance, we integrate CHTs with a sigmoid gradual density decay schedule (CHTss). Empirical results show that, using only 1% of the connections, CHTs outperforms fully connected MLPs on visual classification tasks while compressing some networks to fewer than 30% of their nodes. Using 5% of the connections, CHTss outperforms fully connected networks on two Transformer-based machine translation tasks. Using 30% of the connections, CHTss outperforms other dynamic sparse training methods in language modeling and surpasses its fully connected counterpart in zero-shot evaluations.
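To make the two CHTs/CHTss ingredients concrete, below is a minimal Python sketch of (a) a sigmoid gradual density decay schedule and (b) soft, probability-based selection of links for removal and regrowth instead of a rigid top-k cut. All function names, the softmax-based sampling, and the specific sigmoid parametrization are illustrative assumptions for exposition, not the paper's exact implementation; the regrowth scores stand in for the approximate CH link-prediction scores.

```python
import numpy as np

def sigmoid_density_schedule(step, total_steps, d_start=1.0, d_final=0.30, k=10.0):
    """Hypothetical sigmoid gradual density decay: interpolate the connection
    density from d_start to d_final along a sigmoid ramp over training."""
    t = step / total_steps                      # training progress in [0, 1]
    s = 1.0 / (1.0 + np.exp(-k * (t - 0.5)))    # smooth ramp from ~0 to ~1
    return d_start + (d_final - d_start) * s

def soft_select(scores, k, temperature=1.0, rng=None):
    """Soft (probabilistic) selection of k items: rather than taking the rigid
    top-k scores, sample k indices without replacement with probabilities given
    by a softmax over the scores. Higher scores are favored, but lower-scoring
    candidates keep a nonzero chance, balancing exploitation and exploration."""
    rng = rng or np.random.default_rng()
    p = np.exp((scores - scores.max()) / temperature)
    p /= p.sum()
    return rng.choice(len(scores), size=k, replace=False, p=p)

# Toy usage on one sparse layer (all quantities illustrative).
rng = np.random.default_rng(0)
weights = rng.normal(size=100)            # weights of currently active links
regrowth_scores = rng.random(size=100)    # e.g. approximate CH link-prediction scores
n_swap = 10
removed = soft_select(-np.abs(weights), n_swap, rng=rng)   # softly prune weak links
regrown = soft_select(regrowth_scores, n_swap, rng=rng)    # softly regrow promising links
print(sigmoid_density_schedule(step=500, total_steps=1000))
```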
Machine Learning in High Energy Physics Community White Paper
Albertsson, Kim, Altoe, Piero, Anderson, Dustin, Andrews, Michael, Espinosa, Juan Pedro Araque, Aurisano, Adam, Basara, Laurent, Bevan, Adrian, Bhimji, Wahid, Bonacorsi, Daniele, Calafiura, Paolo, Campanelli, Mario, Capps, Louis, Carminati, Federico, Carrazza, Stefano, Childers, Taylor, Coniavitis, Elias, Cranmer, Kyle, David, Claire, Davis, Douglas, Duarte, Javier, Erdmann, Martin, Eschle, Jonas, Farbin, Amir, Feickert, Matthew, Castro, Nuno Filipe, Fitzpatrick, Conor, Floris, Michele, Forti, Alessandra, Garra-Tico, Jordi, Gemmler, Jochen, Girone, Maria, Glaysher, Paul, Gleyzer, Sergei, Gligorov, Vladimir, Golling, Tobias, Graw, Jonas, Gray, Lindsey, Greenwood, Dick, Hacker, Thomas, Harvey, John, Hegner, Benedikt, Heinrich, Lukas, Hooberman, Ben, Junggeburth, Johannes, Kagan, Michael, Kane, Meghan, Kanishchev, Konstantin, Karpiński, Przemysław, Kassabov, Zahari, Kaul, Gaurav, Kcira, Dorian, Keck, Thomas, Klimentov, Alexei, Kowalkowski, Jim, Kreczko, Luke, Kurepin, Alexander, Kutschke, Rob, Kuznetsov, Valentin, Köhler, Nicolas, Lakomov, Igor, Lannon, Kevin, Lassnig, Mario, Limosani, Antonio, Louppe, Gilles, Mangu, Aashrita, Mato, Pere, Meenakshi, Narain, Meinhard, Helge, Menasce, Dario, Moneta, Lorenzo, Moortgat, Seth, Neubauer, Mark, Newman, Harvey, Pabst, Hans, Paganini, Michela, Paulini, Manfred, Perdue, Gabriel, Perez, Uzziel, Picazio, Attilio, Pivarski, Jim, Prosper, Harrison, Psihas, Fernanda, Radovic, Alexander, Reece, Ryan, Rinkevicius, Aurelius, Rodrigues, Eduardo, Rorie, Jamal, Rousseau, David, Sauers, Aaron, Schramm, Steven, Schwartzman, Ariel, Severini, Horst, Seyfert, Paul, Siroky, Filip, Skazytkin, Konstantin, Sokoloff, Mike, Stewart, Graeme, Stienen, Bob, Stockdale, Ian, Strong, Giles, Thais, Savannah, Tomko, Karen, Upfal, Eli, Usai, Emanuele, Ustyuzhanin, Andrey, Vala, Martin, Vallecorsa, Sofia, Verzetti, Mauro, Vilasís-Cardona, Xavier, Vlimant, Jean-Roch, Vukotic, Ilija, Wang, Sean-Jiun, Watts, Gordon, Williams, Michael, Wu, Wenjing, Wunsch, Stefan, Zapata, Omar
Machine learning is an important research area in particle physics, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document, we discuss promising future research and development areas in machine learning in particle physics, with a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia, and industry, and training of the particle physics community in data science. The main objective of the document is to connect and motivate these research and development areas with the physics drivers of the High-Luminosity Large Hadron Collider and future neutrino experiments, and to identify the resource needs for their implementation. Additionally, we identify areas where collaboration with external communities will be of great benefit.