Prabhat
Learning from learning machines: a new generation of AI technology to meet the needs of science
Pion-Tonachini, Luca, Bouchard, Kristofer, Martin, Hector Garcia, Peisert, Sean, Holtz, W. Bradley, Aswani, Anil, Dwivedi, Dipankar, Wainwright, Haruko, Pilania, Ghanshyam, Nachman, Benjamin, Marrone, Babetta L., Falco, Nicola, Prabhat, Arnold, Daniel, Wolf-Yadlin, Alejandro, Powers, Sarah, Climer, Sharlee, Jackson, Quinn, Carlson, Ty, Sohn, Michael, Zwart, Petrus, Kumar, Neeraj, Justice, Amy, Tomlin, Claire, Jacobson, Daniel, Micklem, Gos, Gkoutos, Georgios V., Bickel, Peter J., Cazier, Jean-Baptiste, Müller, Juliane, Webb-Robertson, Bobbie-Jo, Stevens, Rick, Anderson, Mark, Kreutz-Delgado, Ken, Mahoney, Michael W., Brown, James B.
We outline emerging opportunities and challenges to enhance the utility of AI for scientific discovery. The distinct goals of AI for industry versus the goals of AI for science create tension between identifying patterns in data versus discovering patterns in the world from data. If we address the fundamental challenges associated with "bridging the gap" between domain-driven scientific models and data-driven AI learning machines, then we expect that these AI models can transform hypothesis generation, scientific discovery, and the scientific process itself.
MeshfreeFlowNet: A Physics-Constrained Deep Continuous Space-Time Super-Resolution Framework
Jiang, Chiyu Max, Esmaeilzadeh, Soheil, Azizzadenesheli, Kamyar, Kashinath, Karthik, Mustafa, Mustafa, Tchelepi, Hamdi A., Marcus, Philip, Prabhat, Anandkumar, Anima
From a numerical perspective, resolving the wide range of spatiotemporal scales within such physical systems is challenging, since extremely small spatial and temporal numerical stencils would be required. In order to alleviate the computational burden of fully resolving such a wide range of spatial and temporal scales, multiscale computational approaches have been developed. For instance, in the subsurface flow problem, the main idea of the multiscale approach is to build a set of operators that map between the unknowns associated with the computational cells in a fine grid and the unknowns on a coarser grid. The operators are computed numerically by solving localized flow problems. The multiscale basis functions have subgrid-scale resolutions. We propose MeshfreeFlowNet, a novel deep-learning-based super-resolution framework to generate continuous (grid-free) spatiotemporal solutions from the low-resolution inputs. While being computationally efficient, MeshfreeFlowNet accurately recovers the fine-scale quantities of interest. MeshfreeFlowNet allows for: (i) the output to be sampled at all spatiotemporal resolutions, (ii) a set of Partial Differential Equation (PDE) constraints to be imposed, and (iii) training on fixed-size inputs on arbitrarily sized spatiotemporal domains owing to its fully convolutional encoder.
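To make the PDE-constraint idea concrete, the following is a minimal sketch (assuming PyTorch) of a grid-free decoder that can be queried at arbitrary spatiotemporal points and penalized on the residual of an illustrative heat-equation constraint via automatic differentiation. The network, latent context, PDE, and loss weighting are stand-ins, not the MeshfreeFlowNet architecture.

```python
import torch
import torch.nn as nn

class ContinuousDecoder(nn.Module):
    """Toy grid-free decoder: maps (t, x, y) query points plus a latent
    context vector to a scalar field value u(t, x, y)."""
    def __init__(self, latent_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, latent):
        return self.net(torch.cat([coords, latent], dim=-1))

def pde_residual_loss(model, coords, latent, diffusivity=0.1):
    """Penalize the residual of an illustrative heat-equation constraint
    u_t - k * (u_xx + u_yy) = 0 at sampled spatiotemporal points."""
    coords = coords.clone().requires_grad_(True)
    u = model(coords, latent)
    grads = torch.autograd.grad(u.sum(), coords, create_graph=True)[0]
    u_t, u_x, u_y = grads[:, 0], grads[:, 1], grads[:, 2]
    u_xx = torch.autograd.grad(u_x.sum(), coords, create_graph=True)[0][:, 1]
    u_yy = torch.autograd.grad(u_y.sum(), coords, create_graph=True)[0][:, 2]
    residual = u_t - diffusivity * (u_xx + u_yy)
    return (residual ** 2).mean()

# Usage: combine data fidelity at supervised points with the PDE penalty.
model = ContinuousDecoder()
coords = torch.rand(256, 3)     # (t, x, y) query points in [0, 1]^3
latent = torch.rand(256, 32)    # context from a low-resolution encoder (assumed)
targets = torch.rand(256, 1)    # high-resolution supervision (illustrative)
loss = nn.functional.mse_loss(model(coords, latent), targets) \
       + 0.1 * pde_residual_loss(model, coords, latent)
loss.backward()
```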
Track Seeding and Labelling with Embedded-space Graph Neural Networks
Choma, Nicholas, Murnane, Daniel, Ju, Xiangyang, Calafiura, Paolo, Conlon, Sean, Farrell, Steven, Prabhat, Cerati, Giuseppe, Gray, Lindsey, Klijnsma, Thomas, Kowalkowski, Jim, Spentzouris, Panagiotis, Vlimant, Jean-Roch, Spiropulu, Maria, Aurisano, Adam, Hewes, V, Tsaris, Aristeidis, Terao, Kazuhiro, Usher, Tracy
To address the unprecedented scale of HL-LHC data, the Exa.TrkX project is investigating a variety of machine learning approaches to particle track reconstruction. The most promising of these solutions, graph neural networks (GNNs), process the event as a graph that connects track measurements (detector hits corresponding to nodes) with candidate line segments between the hits (corresponding to edges). Detector information can be associated with nodes and edges, enabling a GNN to propagate the embedded parameters around the graph and predict node-, edge-, and graph-level observables. Previously, message-passing GNNs have shown success in predicting doublet likelihood, and we report here updates on the state-of-the-art architectures for this task. In addition, the Exa.TrkX project has investigated innovations in both graph construction and embedded representations, in an effort to achieve fully learned end-to-end track finding. Hence, we present a suite of extensions to the original model, with encouraging results for hit-graph classification. We also explore improved performance obtained by constructing graphs from learned representations that contain non-linear metric structure, allowing for efficient clustering and neighborhood queries of data points. We demonstrate how this framework fits in with both traditional clustering pipelines and GNN approaches. The embedded graphs feed into high-accuracy doublet and triplet classifiers, or can be used as an end-to-end track classifier by clustering in an embedded space. A set of post-processing methods improves performance with knowledge of the detector physics. Finally, we present numerical results on the TrackML particle tracking challenge dataset, where our framework shows favorable results in both seeding and track finding.
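A minimal sketch (assuming PyTorch) of the embedded-space graph-construction step described above: a small network maps hit features into a metric space, and candidate edges are formed by brute-force k-nearest-neighbor queries there. The feature dimensions, network, and value of k are illustrative assumptions, not the Exa.TrkX implementation.

```python
import torch
import torch.nn as nn

class HitEmbedder(nn.Module):
    """Toy embedding network: maps raw hit features (e.g. r, phi, z) into a
    metric space in which hits from the same track should cluster."""
    def __init__(self, in_dim=3, emb_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, hits):
        return nn.functional.normalize(self.net(hits), dim=-1)

def knn_edges(embeddings, k=8):
    """Build candidate edges with brute-force k-nearest-neighbor queries in the
    learned embedding space (graph construction itself is not differentiated here)."""
    emb = embeddings.detach()
    dists = torch.cdist(emb, emb)                        # (N, N) pairwise distances
    dists.fill_diagonal_(float("inf"))                   # exclude self-edges
    nbrs = dists.topk(k, largest=False).indices          # (N, k) neighbor indices
    src = torch.arange(emb.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)], dim=0)   # (2, N*k) edge index

# Usage on dummy hits: the resulting edge list would feed a doublet/triplet
# classifier or a GNN, as in the pipeline the abstract describes.
hits = torch.rand(1000, 3)
edge_index = knn_edges(HitEmbedder()(hits), k=8)
```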
Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale
Baydin, Atılım Güneş, Shao, Lei, Bhimji, Wahid, Heinrich, Lukas, Meadows, Lawrence, Liu, Jialin, Munk, Andreas, Naderiparizi, Saeid, Gram-Hansen, Bradley, Louppe, Gilles, Ma, Mingfei, Zhao, Xiaohui, Torr, Philip, Lee, Victor, Cranmer, Kyle, Prabhat, Wood, Frank
Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models. However, applications to science remain limited because of the impracticability of rewriting complex scientific simulators in a PPL, the computational cost of inference, and the lack of scalable implementations. To address these limitations, we present a novel PPL framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo (MCMC) and deep-learning-based inference compilation (IC) engines for tractable inference. To guide IC inference, we perform distributed training of a dynamic 3DCNN-LSTM architecture with a PyTorch-MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global minibatch size of 128k, achieving a performance of 450 Tflop/s through enhancements to PyTorch. We demonstrate a Large Hadron Collider (LHC) use case with the C++ Sherpa simulator and achieve the largest-scale posterior inference in a Turing-complete PPL.
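The inference-compilation idea can be sketched with a toy simulator (assuming PyTorch): (latent, observation) pairs are drawn from the simulator's prior, and a neural proposal q(z | x) is trained to maximize their log-probability. The toy simulator, proposal network, and training settings below are illustrative stand-ins, not the Etalumis system or its execution protocol.

```python
import torch
import torch.nn as nn

def toy_simulator(n):
    """Stand-in for a scientific simulator: latent z ~ N(0, 1), observation
    x ~ N(z, 0.5). Real simulators are black boxes coupled through the
    execution protocol; this toy exists only so the training loop runs."""
    z = torch.randn(n, 1)
    x = z + 0.5 * torch.randn(n, 1)
    return z, x

class ProposalNet(nn.Module):
    """Amortized proposal q(z | x): predicts a Gaussian over the latent."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, x):
        mean, log_std = self.net(x).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

# Inference-compilation-style training: draw (z, x) pairs from the simulator's
# prior and maximize log q(z | x), so the proposal learns to invert the simulator.
proposal = ProposalNet()
opt = torch.optim.Adam(proposal.parameters(), lr=1e-3)
for step in range(1000):
    z, x = toy_simulator(256)
    loss = -proposal(x).log_prob(z).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# At inference time, q(z | x_obs) proposes latents that are reweighted against
# the simulator's joint density (importance weights omitted in this sketch).
```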
Enforcing Statistical Constraints in Generative Adversarial Networks for Modeling Chaotic Dynamical Systems
Wu, Jin-Long, Kashinath, Karthik, Albert, Adrian, Chirila, Dragos, Prabhat, Xiao, Heng
Simulating complex physical systems often involves solving partial differential equations (PDEs) with some closures due to the presence of multi-scale physics that cannot be fully resolved. Therefore, reliable and accurate closure models for unresolved physics remain an important requirement for many computational physics problems, e.g., turbulence simulation. Recently, several researchers have adopted generative adversarial networks (GANs), a novel paradigm of training machine learning models, to generate solutions of PDE-governed complex systems without having to numerically solve these PDEs. However, GANs are known to be difficult to train and likely to converge to local minima, where the generated samples do not capture the true statistics of the training data. In this work, we present a statistically constrained generative adversarial network that enforces covariance constraints from the training data, which results in an improved machine-learning-based emulator that captures the statistics of the training data generated by solving fully resolved PDEs. We show that such statistical regularization leads to better performance than standard GANs, measured by (1) the constrained model's ability to more faithfully emulate certain physical properties of the system and (2) the significantly reduced (by up to 80%) training time to reach the solution. We exemplify this approach on Rayleigh-Bénard convection, a turbulent flow system that is an idealized model of the Earth's atmosphere. With the growth of high-fidelity simulation databases of physical systems, this work suggests great potential as an alternative to explicit modeling of closures or parameterizations for unresolved physics, which are known to be a major source of uncertainty in simulating multi-scale physical systems, e.g., turbulence or the Earth's climate.
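A hedged sketch of a covariance-style statistical constraint (assuming PyTorch): the penalty below matches the sample covariance of generated fields to that of the training data and would be added, with a tunable weight, to a standard generator loss. It is a generic covariance-matching formulation for illustration, not necessarily the paper's exact constraint.

```python
import torch

def covariance(batch):
    """Sample covariance of a batch of flattened fields, shape (N, D)."""
    centered = batch - batch.mean(dim=0, keepdim=True)
    return centered.t() @ centered / (batch.size(0) - 1)

def covariance_penalty(fake, real):
    """Statistical constraint: penalize the mismatch between the covariance of
    generated samples and that of the training data (Frobenius norm)."""
    fake = fake.flatten(start_dim=1)
    real = real.flatten(start_dim=1)
    return (covariance(fake) - covariance(real)).pow(2).sum().sqrt()

# In a GAN training loop, the generator loss would gain a weighted term, e.g.
#   g_loss = adversarial_loss + lam * covariance_penalty(fake_batch, real_batch)
# where `lam` trades off adversarial fidelity against the statistical constraint.
fake_batch = torch.randn(64, 1, 32, 32)   # generator output (illustrative shapes)
real_batch = torch.randn(64, 1, 32, 32)   # snapshots from resolved simulations
penalty = covariance_penalty(fake_batch, real_batch)
```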
Spherical CNNs on Unstructured Grids
Jiang, Chiyu "Max", Huang, Jingwei, Kashinath, Karthik, Prabhat, Marcus, Philip, Niessner, Matthias
We present an efficient convolution kernel for Convolutional Neural Networks (CNNs) on unstructured grids using parameterized differential operators while focusing on spherical signals such as panorama images or planetary signals. To this end, we replace conventional convolution kernels with linear combinations of differential operators that are weighted by learnable parameters. Differential operators can be efficiently estimated on unstructured grids using one-ring neighbors, and learnable parameters can be optimized through standard back-propagation. As a result, we obtain extremely efficient neural networks that match or outperform state-of-the-art network architectures in terms of performance but with a significantly lower number of network parameters. We evaluate our algorithm in an extensive series of experiments on a variety of computer vision and climate science tasks, including shape classification, climate pattern segmentation, and omnidirectional image semantic segmentation. Overall, we present (1) a novel CNN approach on unstructured grids using parameterized differential operators for spherical signals, and (2) we show that our unique kernel parameterization allows our model to achieve the same or higher accuracy with significantly fewer network parameters.
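To illustrate the kernel parameterization, here is a minimal sketch (assuming PyTorch) of a convolution assembled as a learnable linear combination of fixed differential operators. It uses finite-difference stencils on a regular grid purely for simplicity, whereas the paper estimates the operators on unstructured grids from one-ring neighbors; the stencils and channel counts are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffOpConv2d(nn.Module):
    """Convolution built from a learnable linear combination of differential
    operators (identity, d/dx, d/dy, Laplacian), here realized as fixed
    finite-difference stencils on a regular grid."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        ident = torch.tensor([[0., 0, 0], [0, 1, 0], [0, 0, 0]])
        ddx   = torch.tensor([[0., 0, 0], [-0.5, 0, 0.5], [0, 0, 0]])
        ddy   = torch.tensor([[0., -0.5, 0], [0, 0, 0], [0, 0.5, 0]])
        lap   = torch.tensor([[0., 1, 0], [1, -4, 1], [0, 1, 0]])
        self.register_buffer("ops", torch.stack([ident, ddx, ddy, lap]))  # (4, 3, 3)
        # One learnable coefficient per (output channel, input channel, operator).
        self.coeffs = nn.Parameter(torch.randn(out_ch, in_ch, 4) * 0.1)

    def forward(self, x):
        # Assemble the effective kernel as a weighted sum of the fixed operators.
        kernel = torch.einsum("oik,khw->oihw", self.coeffs, self.ops)
        return F.conv2d(x, kernel, padding=1)

# Usage: a drop-in replacement for a 3x3 convolution with fewer parameters
# (4 coefficients per channel pair instead of 9 free weights).
layer = DiffOpConv2d(in_ch=3, out_ch=16)
out = layer(torch.randn(2, 3, 64, 64))   # -> (2, 16, 64, 64)
```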
Graph Neural Networks for IceCube Signal Classification
Choma, Nicholas, Monti, Federico, Gerhardt, Lisa, Palczewski, Tomasz, Ronaghi, Zahra, Prabhat, Bhimji, Wahid, Bronstein, Michael M., Klein, Spencer R., Bruna, Joan
Tasks involving the analysis of geometric (graph- and manifold-structured) data have recently gained prominence in the machine learning community, giving birth to a rapidly developing field of geometric deep learning. In this work, we leverage graph neural networks to improve signal detection in the IceCube neutrino observatory. The IceCube detector array is modeled as a graph, where vertices are sensors and edges are a learned function of the sensors' spatial coordinates. As only a subset of IceCube's sensors is active during a given observation, we note the adaptive nature of our GNN, wherein computation is restricted to the input signal support. We demonstrate the effectiveness of our GNN architecture on a task of classifying IceCube events, where it outperforms both a traditional physics-based method and classical 3D convolutional neural networks.
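A minimal sketch (assuming PyTorch) of one message-passing layer in this spirit: edge weights are a function of pairwise sensor distances with a learnable length scale, and computation runs only over the sensors active in a given event. The Gaussian edge function, feature dimensions, and normalization are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpatialGraphLayer(nn.Module):
    """One message-passing layer whose edge weights are a function of pairwise
    sensor distances (a Gaussian kernel with a learnable length scale here)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.log_sigma = nn.Parameter(torch.zeros(1))

    def forward(self, feats, coords):
        # feats: (N_active, F) signals on active sensors only
        # coords: (N_active, 3) positions of those sensors
        d2 = torch.cdist(coords, coords).pow(2)
        adj = torch.exp(-d2 / (2 * self.log_sigma.exp() ** 2))
        adj = adj / adj.sum(dim=-1, keepdim=True)        # row-normalize edge weights
        return torch.relu(self.lin(adj @ feats))         # aggregate, then transform

# Usage: computation is restricted to the sensors active in a given event,
# mirroring the adaptive support described in the abstract.
active_coords = torch.rand(120, 3)   # positions of hit sensors (illustrative)
active_feats = torch.rand(120, 4)    # e.g. charge and timing features per sensor
layer = SpatialGraphLayer(in_dim=4, out_dim=16)
node_repr = layer(active_feats, active_coords)   # -> (120, 16)
```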
Efficient Probabilistic Inference in the Quest for Physics Beyond the Standard Model
Baydin, Atilim Gunes, Heinrich, Lukas, Bhimji, Wahid, Gram-Hansen, Bradley, Louppe, Gilles, Shao, Lei, Prabhat, Cranmer, Kyle, Wood, Frank
We present a novel framework that enables efficient probabilistic inference in large-scale scientific models by allowing the execution of existing domain-specific simulators as probabilistic programs, resulting in highly interpretable posterior inference. Our framework is general purpose and scalable, and is based on a cross-platform probabilistic execution protocol through which an inference engine can control simulators in a language-agnostic way. We demonstrate the technique in particle physics, on a scientifically accurate simulation of the tau lepton decay, which is a key ingredient in establishing the properties of the Higgs boson. High-energy physics has a rich set of simulators based on quantum field theory and the interaction of particles in matter. We show how to use probabilistic programming to perform Bayesian inference in these existing simulator codebases directly, in particular conditioning on observable outputs from a simulated particle detector to directly produce an interpretable posterior distribution over decay pathways. Inference efficiency is achieved via inference compilation where a deep recurrent neural network is trained to parameterize proposal distributions and control the stochastic simulator in a sequential importance sampling scheme, at a fraction of the computational cost of Markov chain Monte Carlo sampling.
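The sequential-importance-sampling step can be sketched on a toy model with known densities (assuming PyTorch): latents are drawn from an amortized proposal and reweighted by prior times likelihood over proposal. The toy densities and proposal parameters below are assumed for illustration; in the paper, the proposal is produced by the trained recurrent network controlling the simulator.

```python
import torch
from torch.distributions import Normal

# Toy generative model standing in for a simulator: z ~ N(0, 1), x ~ N(z, 0.5).
prior = Normal(0.0, 1.0)
def likelihood(z):
    return Normal(z, 0.5)

# An amortized proposal q(z | x) as inference compilation would produce; its
# parameters are simply assumed here rather than learned.
def proposal(x_obs):
    return Normal(0.8 * x_obs, 0.6)

def importance_posterior_mean(x_obs, n_particles=10_000):
    """Self-normalized importance sampling: weight proposal draws by
    prior * likelihood / proposal, then average the latents."""
    q = proposal(x_obs)
    z = q.sample((n_particles,))
    log_w = prior.log_prob(z) + likelihood(z).log_prob(x_obs) - q.log_prob(z)
    w = torch.softmax(log_w, dim=0)        # normalized importance weights
    return (w * z).sum()

x_obs = torch.tensor(1.5)
print(importance_posterior_mean(x_obs))    # analytic posterior mean here is 1.2
```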
Approximate Inference for Constructing Astronomical Catalogs from Images
Regier, Jeffrey, Miller, Andrew C., Schlegel, David, Adams, Ryan P., McAuliffe, Jon D., Prabhat
We present a new, fully generative model for constructing astronomical catalogs from optical telescope image sets. Each pixel intensity is treated as a Poisson random variable with a rate parameter that depends on the latent properties of stars and galaxies. These latent properties are themselves random, with scientific prior distributions constructed from large ancillary datasets. We compare two procedures for posterior inference: Markov chain Monte Carlo (MCMC) and variational inference (VI). MCMC excels at quantifying uncertainty while VI is 1000x faster. Both procedures outperform the current state-of-the-art method for measuring celestial bodies' colors, shapes, and morphologies. On a supercomputer, the VI procedure efficiently uses 665,000 CPU cores (1.3 million hardware threads) to construct an astronomical catalog from 50 terabytes of images.
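A minimal sketch (assuming PyTorch) of the Poisson pixel model: each pixel's expected count is a background level plus source fluxes spread by an illustrative Gaussian point-spread function, and the log-likelihood is differentiable with respect to the latent properties, which is what MCMC or VI would operate on. All shapes, the PSF, and the parameters are toy stand-ins for the paper's full star/galaxy model.

```python
import torch
from torch.distributions import Poisson

def pixel_rates(flux, positions, grid_xy, psf_width=1.5, background=10.0):
    """Expected photon count per pixel: background plus each source's flux
    spread by a circular Gaussian PSF (an illustrative stand-in)."""
    d2 = ((grid_xy[:, None, :] - positions[None, :, :]) ** 2).sum(-1)  # (P, S)
    psf = torch.exp(-d2 / (2 * psf_width ** 2)) / (2 * torch.pi * psf_width ** 2)
    return background + psf @ flux                                      # (P,)

def log_likelihood(image, flux, positions, grid_xy):
    """Each observed pixel intensity is Poisson with a rate that depends on
    the latent source properties, as in the abstract."""
    rates = pixel_rates(flux, positions, grid_xy)
    return Poisson(rates).log_prob(image).sum()

# Usage on a tiny synthetic image; gradients with respect to the latent fluxes
# support either MCMC (e.g. HMC) or variational inference over the latents.
side = 16
ys, xs = torch.meshgrid(torch.arange(side), torch.arange(side), indexing="ij")
grid_xy = torch.stack([xs, ys], dim=-1).reshape(-1, 2).float()
positions = torch.tensor([[4.0, 5.0], [11.0, 9.0]])       # latent source positions
flux = torch.tensor([200.0, 80.0], requires_grad=True)    # latent fluxes
image = torch.poisson(pixel_rates(flux, positions, grid_xy).detach())
log_likelihood(image, flux, positions, grid_xy).backward()  # gradient w.r.t. flux
```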
ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events
Racah, Evan, Beckham, Christopher, Maharaj, Tegan, Kahou, Samira Ebrahimi, Prabhat, Pal, Christopher
The detection and identification of extreme weather events in large-scale climate simulations is an important problem for risk management, informing governmental policy decisions, and advancing our basic understanding of the climate system. Recent work has shown that fully supervised convolutional neural networks (CNNs) can yield acceptable accuracy for classifying well-known types of extreme weather events when large amounts of labeled data are available. However, many different types of spatially localized climate patterns are of interest, including hurricanes, extra-tropical cyclones, weather fronts, and blocking events, among others. Existing labeled data for these patterns can be incomplete in various ways, such as covering only certain years or geographic areas and having false negatives. This type of climate data therefore poses a number of interesting machine learning challenges. We present a multichannel spatiotemporal CNN architecture for semi-supervised bounding box prediction and exploratory data analysis. We demonstrate that our approach is able to leverage temporal information and unlabeled data to improve the localization of extreme weather events. Further, we explore the representations learned by our model in order to better understand this important data. We present a dataset, ExtremeWeather, to encourage machine learning research in this area and to help facilitate further work in understanding and mitigating the effects of climate change. The dataset is available at extremeweatherdataset.github.io and the code is available at https://github.com/eracah/hur-detect.
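A hedged sketch (assuming PyTorch) of a semi-supervised spatiotemporal architecture in this spirit: a 3D-convolutional encoder-decoder reconstructs climate clips (so unlabeled data still provide signal) while a small head regresses bounding boxes on labeled samples. The channel counts, box parameterization, and loss weighting are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class SpatiotemporalAE(nn.Module):
    """Toy 3D-convolutional encoder-decoder: encoder features feed a
    box-regression head (supervised), while the decoder reconstructs the
    input clip (unsupervised)."""
    def __init__(self, channels=16, n_boxes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, channels, 4, stride=2, padding=1),
        )
        self.box_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, n_boxes * 5),
        )  # per box: (x, y, w, h, confidence) -- illustrative parameterization

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.box_head(z)

# Semi-supervised objective: reconstruction on all samples, box regression only
# where labels exist (illustrative weighting, not the paper's exact loss).
model = SpatiotemporalAE()
clip = torch.randn(2, 16, 8, 64, 64)   # (batch, variables, time, lat, lon)
boxes = torch.rand(2, 4 * 5)           # dummy targets for labeled samples
recon, pred = model(clip)
loss = nn.functional.mse_loss(recon, clip) + nn.functional.mse_loss(pred, boxes)
```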