Scientific Computing: Overviews
Synergizing Deep Learning and Full-Waveform Inversion: Bridging Data-Driven and Theory-Guided Approaches for Enhanced Seismic Imaging
Zerafa, Christopher, Galea, Pauline, Sebu, Cristiana
This review explores the integration of deep learning (DL) with full-waveform inversion (FWI) for enhanced seismic imaging and subsurface characterization. It covers FWI and DL fundamentals, geophysical applications (velocity estimation, deconvolution, tomography), and challenges (model complexity, data quality). The review also outlines future research directions, including hybrid, generative, and physics-informed models for improved accuracy, efficiency, and reliability in subsurface property estimation. The synergy between DL and FWI has the potential to transform geophysics, providing new insights into Earth's subsurface.
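At its core, FWI iteratively updates a subsurface velocity model by minimizing the misfit between simulated and recorded waveforms, which is exactly the kind of gradient-based loop that DL toolchains automate. As a purely illustrative sketch (not code from this review), here is a 1-D toy FWI in JAX, where the grid sizes, source wavelet, and plain normalized gradient-descent update are all our own assumptions:

```python
import jax
import jax.numpy as jnp

nx, nt, dx, dt = 200, 1000, 10.0, 1e-3        # toy scales: 2 km line, 1 s record
src_ix, rec_ix = 20, 140
t = jnp.arange(nt) * dt
src = jnp.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))   # Gaussian source wavelet

def forward(c):
    """Second-order FD propagation of u_tt = c(x)^2 u_xx with periodic BCs."""
    def step(carry, s):
        u_prev, u = carry
        lap = (jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1)) / dx**2
        u_next = 2.0 * u - u_prev + (c * dt) ** 2 * lap
        u_next = u_next.at[src_ix].add(s * dt**2)      # inject the wavelet
        return (u, u_next), u_next[rec_ix]             # record at the receiver
    _, seis = jax.lax.scan(step, (jnp.zeros(nx), jnp.zeros(nx)), src)
    return seis

c_true = jnp.full(nx, 2000.0).at[80:120].set(2400.0)   # hidden high-velocity zone
d_obs = forward(c_true)                                # synthetic "observed" data

misfit = lambda c: jnp.sum((forward(c) - d_obs) ** 2)

c = jnp.full(nx, 2000.0)                               # smooth starting model
for _ in range(20):                                    # plain gradient descent
    g = jax.grad(misfit)(c)
    c = c - 25.0 * g / (jnp.abs(g).max() + 1e-20)      # ~25 m/s normalized step
```

Real FWI replaces the toy pieces with 2-D/3-D solvers, absorbing boundaries, and preconditioned or regularized updates, but the misfit-gradient-update structure is the same one the hybrid DL approaches build on.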
How to Build a Quantum Supercomputer: Scaling from Hundreds to Millions of Qubits
Mohseni, Masoud, Scherer, Artur, Johnson, K. Grace, Wertheim, Oded, Otten, Matthew, Aadit, Navid Anjum, Alexeev, Yuri, Bresniker, Kirk M., Camsari, Kerem Y., Chapman, Barbara, Chatterjee, Soumitra, Dagnew, Gebremedhin A., Esposito, Aniello, Fahim, Farah, Fiorentino, Marco, Gajjar, Archit, Khalid, Abdullah, Kong, Xiangzhou, Kulchytskyy, Bohdan, Kyoseva, Elica, Li, Ruoyu, Lott, P. Aaron, Markov, Igor L., McDermott, Robert F., Pedretti, Giacomo, Rao, Pooja, Rieffel, Eleanor, Silva, Allyson, Sorebo, John, Spentzouris, Panagiotis, Steiner, Ziv, Torosov, Boyan, Venturelli, Davide, Visser, Robert J., Webb, Zak, Zhan, Xin, Cohen, Yonatan, Ronagh, Pooya, Ho, Alan, Beausoleil, Raymond G., Martinis, John M.
In the span of four decades, quantum computation has evolved from an intellectual curiosity to a potentially realizable technology. Today, small-scale demonstrations of quantum algorithmic primitives on hundreds of physical qubits and proof-of-principle error correction on a single logical qubit have become possible. Nevertheless, despite significant progress and excitement, the path toward a full-stack scalable technology is largely unknown. There are significant outstanding challenges in quantum hardware, fabrication, software architecture, and algorithms that are either unresolved or overlooked. These issues could seriously undermine the arrival of utility-scale quantum computers for the foreseeable future. Here, we provide a comprehensive review of these scaling challenges. We show how the road to scaling could be paved by adopting existing semiconductor technology to build much higher-quality qubits, employing system engineering approaches, and performing distributed quantum computation within heterogeneous high-performance computing infrastructures. These opportunities for research and development could unlock certain promising applications, in particular, efficient quantum simulation and learning of quantum data generated by natural or engineered quantum systems. To estimate the true cost of such promises, we provide a detailed resource and sensitivity analysis for classically hard quantum chemistry calculations on surface-code error-corrected quantum computers, given current, target, and desired hardware specifications based on superconducting qubits and accounting for a realistic distribution of errors. Furthermore, we argue that, to tackle industry-scale classical optimization and machine learning problems in a cost-effective manner, heterogeneous quantum-probabilistic computing with custom-designed accelerators should be considered as a complementary path toward scalability.
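For intuition about where the "millions of qubits" scale comes from, the widely used surface-code heuristic p_L ≈ A (p / p_th)^((d+1)/2) already tells much of the story. The sketch below applies it with illustrative round-number constants (A = 0.1, p_th = 1%, roughly 2d² physical qubits per logical patch); it is a back-of-envelope stand-in, not the paper's detailed, error-distribution-aware resource analysis:

```python
# Standard surface-code scaling heuristic: p_L ~ A * (p / p_th)^((d+1)/2).
# A = 0.1 and p_th = 1e-2 are illustrative round numbers, not fitted values.
A, P_TH = 0.1, 1e-2

def logical_error_rate(p, d):
    """Logical error rate per code cycle of one distance-d surface-code patch."""
    return A * (p / P_TH) ** ((d + 1) / 2)

def min_distance(p, target):
    """Smallest odd distance whose per-cycle logical error rate meets the target."""
    d = 3
    while logical_error_rate(p, d) > target:
        d += 2
    return d

def footprint(p, target, n_logical):
    """Total physical qubits, assuming ~2 d^2 per logical patch (data + ancillas)."""
    d = min_distance(p, target)
    return d, n_logical * 2 * d**2

d, n_phys = footprint(p=1e-3, target=1e-12, n_logical=1000)
print(f"distance {d}: ~{n_phys:,} physical qubits")   # distance 21: ~882,000
```

At a physical error rate of 10⁻³ and a 10⁻¹² per-cycle target, even this crude model puts a thousand logical qubits near the million-physical-qubit mark, consistent with the scale the title highlights.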
Differentiable Programming for Differential Equations: A Review
Sapienza, Facundo, Bolibar, Jordi, Schäfer, Frank, Groenke, Brian, Pal, Avik, Boussange, Victor, Heimbach, Patrick, Hooker, Giles, Pérez, Fernando, Persson, Per-Olof, Rackauckas, Christopher
The differentiable programming paradigm is a cornerstone of modern scientific computing. It refers to numerical methods for computing the gradient of a numerical model's output with respect to its inputs or parameters. Many scientific models are based on differential equations, where differentiable programming plays a crucial role in calculating model sensitivities, inverting model parameters, and training hybrid models that combine differential equations with data-driven approaches. Furthermore, recognizing the strong synergies between inverse methods and machine learning offers the opportunity to establish a coherent framework applicable to both fields. Differentiating functions based on the numerical solution of differential equations is non-trivial. Numerous methods based on a wide variety of paradigms have been proposed in the literature, each with pros and cons specific to the type of problem investigated. Here, we provide a comprehensive review of existing techniques to compute derivatives of numerical solutions of differential equations. We first discuss the importance of gradients of solutions of differential equations in a variety of scientific domains. Second, we lay out the mathematical foundations of the various approaches and compare them with each other. Third, we cover the computational considerations and explore the solutions available in modern scientific software. Last but not least, we provide best practices and recommendations for practitioners. We hope that this work accelerates the fusion of scientific models and data and fosters a modern approach to scientific modelling.
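The simplest of the paradigms surveyed is discretize-then-optimize: unroll an explicit solver and apply reverse-mode automatic differentiation to the whole computation. Here is a minimal JAX sketch, using a toy linear ODE of our own choosing so the gradient can be checked analytically:

```python
import jax
import jax.numpy as jnp

def rk4_solve(theta, u0=1.0, t1=1.0, n=100):
    """Explicit RK4 for the toy ODE du/dt = -theta * u on [0, t1]."""
    f = lambda u: -theta * u
    dt = t1 / n
    def step(u, _):
        k1 = f(u)
        k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2)
        k4 = f(u + dt * k3)
        return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0, None
    u, _ = jax.lax.scan(step, u0, None, length=n)
    return u

loss = lambda theta: rk4_solve(theta) ** 2    # scalar functional of the solution
g = jax.grad(loss)(0.5)                       # reverse-mode AD through the solver
# analytic check: u(1) = exp(-theta), so d(u(1)^2)/dtheta = -2 exp(-2 theta)
print(g, -2 * jnp.exp(-2 * 0.5))
```

Unrolled reverse-mode AD stores the whole trajectory, which is exactly the memory-versus-stability trade-off that motivates the other families of methods (continuous adjoints, forward sensitivities) the review compares.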
Simulation Intelligence: Towards a New Generation of Scientific Methods
Lavin, Alexander, Zenil, Hector, Paige, Brooks, Krakauer, David, Gottschlich, Justin, Mattson, Tim, Anandkumar, Anima, Choudry, Sanjay, Rocki, Kamil, Baydin, Atılım Güneş, Prunkl, Carina, Isayev, Olexandr, Peterson, Erik, McMahon, Peter L., Macke, Jakob, Cranmer, Kyle, Zhang, Jiaxin, Wainwright, Haruko, Hanuka, Adi, Veloso, Manuela, Assefa, Samuel, Zheng, Stephan, Pfeffer, Avi
The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence (SI), for short. We argue the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offers immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use-cases for human-machine teaming and automated science.
Randomized Algorithms for Scientific Computing (RASC)
Buluc, Aydin, Kolda, Tamara G., Wild, Stefan M., Anitescu, Mihai, DeGennaro, Anthony, Jakeman, John, Kamath, Chandrika, Kannan, Ramakrishnan, Lopes, Miles E., Martinsson, Per-Gunnar, Myers, Kary, Nelson, Jelani, Restrepo, Juan M., Seshadhri, C., Vrabie, Draguna, Wohlberg, Brendt, Wright, Stephen J., Yang, Chao, Zwart, Peter
Randomized algorithms have propelled advances in artificial intelligence and represent a foundational research area in advancing AI for Science. Future advancements in DOE Office of Science priority areas such as climate science, astrophysics, fusion, advanced materials, combustion, and quantum computing all require randomized algorithms for surmounting challenges of complexity, robustness, and scalability. This report summarizes the outcomes of the workshop "Randomized Algorithms for Scientific Computing (RASC)," held virtually across four days in December 2020 and January 2021.
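As one concrete example of the report's subject matter, randomized sketching reduces a large matrix factorization to a small projected problem. Below is a minimal randomized SVD in the style of Halko, Martinsson, and Tropp; the matrix sizes and oversampling parameter are illustrative choices of ours:

```python
import jax
import jax.numpy as jnp

def randomized_svd(key, A, k, oversample=10):
    """Rank-k randomized SVD via a Gaussian range-finder (two passes over A)."""
    Omega = jax.random.normal(key, (A.shape[1], k + oversample))  # test matrix
    Q, _ = jnp.linalg.qr(A @ Omega)       # orthonormal basis for the sampled range
    B = Q.T @ A                           # small (k + oversample) x n problem
    Uh, s, Vt = jnp.linalg.svd(B, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k]

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
A = jax.random.normal(k1, (500, 80)) @ jax.random.normal(k2, (80, 400))  # rank 80
U, s, Vt = randomized_svd(k3, A, k=80)
print(jnp.linalg.norm(A - (U * s) @ Vt) / jnp.linalg.norm(A))  # ~1e-6 in float32
```

The Gaussian test matrix captures the range of A with high probability, so the dominant cost is two passes over A rather than a full dense SVD, which is the kind of complexity-versus-robustness trade the report examines across application areas.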
Seamlessly scaling HPC and AI initiatives with HPE leading-edge technology
Accelerate your HPC and AI workloads with new products, advanced technologies, and services from HPE. A growing number of commercial businesses are implementing HPC solutions to derive actionable business insights, run higher-performance applications, and gain a competitive advantage. In fact, according to Hyperion Research, the HPC market exceeded expectations, growing 6.8% in 2018, with continued growth expected through 2023. Complexities abound as HPC becomes more pervasive across industries and markets, especially as you adopt, scale, and optimize HPC and AI workloads. HPE is in lockstep with you along your AI journey, helping you get started with your AI transformation and scale more quickly, saving time and resources.
Larry Smarr on Supercomputing and the Human Brain (Singularity University)
Larry Smarr discusses the state of the art in supercomputing, focusing on how current computation compares to the human brain and when supercomputers will surpass human processing power. Current supercomputers are estimated to match the processing power of the human visual cortex and are projected to reach the computational ability of the whole human brain within the next twenty years.