NERSC
Machine learning explores materials science questions and solves difficult search problems
Using computing resources at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (Berkeley Lab), researchers at Argonne National Laboratory have explored important materials science questions and demonstrated progress in using machine learning to solve difficult search problems. By adapting a machine-learning algorithm of the kind used in game-playing systems such as AlphaGo, the researchers developed force fields for nanoclusters of 54 elements across the periodic table, a dramatic step toward understanding the clusters' unique properties and a proof of concept for the search method. The team published its results in Nature Communications in January. Depending on their scale (bulk systems larger than 100 nanometers versus nanoclusters smaller than 100 nanometers), materials can display dramatically different properties, including altered optical and magnetic behavior, discrete energy levels, and enhanced photoluminescence. These properties may lend themselves to new scientific and industrial applications, and scientists can study them by developing force fields, computational models that estimate the potential energies between atoms in a molecule and between molecules, for each element or compound (a toy force-field sketch appears below).
- Energy (1.00)
- Leisure & Entertainment > Games (0.56)
- Energy (0.77)
- Information Technology (0.51)
- Education > Educational Setting (0.31)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Data Science > Data Mining (0.52)
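For readers unfamiliar with the term, a force field is essentially a function that maps atomic positions to a potential energy. The sketch below is not the Argonne team's model (the article does not specify its functional form); it is a minimal Python illustration using the classic Lennard-Jones pair potential, with parameter values roughly typical of argon assumed purely for demonstration.

```python
import itertools
import math

# Assumed Lennard-Jones parameters, roughly those of argon; the actual
# force fields fit in the study are element-specific and more elaborate.
EPSILON = 0.0103  # well depth in eV
SIGMA = 3.40      # zero-crossing distance in angstroms

def lj_pair(r):
    """Lennard-Jones energy (eV) of two atoms a distance r apart."""
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPSILON * (sr6 * sr6 - sr6)

def cluster_energy(positions):
    """Potential energy of a nanocluster as a sum over all atom pairs."""
    return sum(lj_pair(math.dist(p, q))
               for p, q in itertools.combinations(positions, 2))

# A toy 3-atom cluster near the equilibrium spacing of about 1.12 * SIGMA.
atoms = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (1.9, 3.3, 0.0)]
print(f"cluster energy: {cluster_energy(atoms):.4f} eV")
```

Fitting a force field then amounts to searching for parameters (here, EPSILON and SIGMA) that reproduce reference data, which is the kind of difficult search problem the team's game-inspired algorithm targets.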
Researchers have switched on the world's fastest AI supercomputer
Researchers have switched on the world's fastest AI supercomputer, delivering nearly four exaFLOPS of AI performance to more than 7,000 researchers. Perlmutter, officially dedicated today at the National Energy Research Scientific Computing Center (NERSC), will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources, and much more. The system comprises 6,159 NVIDIA A100 Tensor Core GPUs, making it the largest A100-powered system in the world. More than two dozen applications are being readied to be among the first to run on the system, which is based at Lawrence Berkeley National Lab. In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date.
- Energy > Renewable (0.72)
- Government > Regional Government > North America Government > United States Government (0.54)
Codeplay inks landmark deal with U.S. government to enable next-generation supercomputer
The National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory, in collaboration with the Argonne Leadership Computing Facility, is partnering with UK-based Codeplay Software to enhance compiler capabilities for NVIDIA GPUs. The collaboration will help NERSC and ALCF users, along with researchers across the high-performance computing community, produce high-performance applications that are portable across compute architectures from multiple vendors. Today, most artificial intelligence software, including software for cars, is developed using graphics processors designed for video games, according to Codeplay. The company provides tools designed to let software be accelerated by graphics processors or the latest specialized AI processors. NERSC supercomputers are used by researchers working in areas as diverse as alternative energy, the environment, high-energy and nuclear physics, advanced computing, materials science and chemistry.
- North America > United States (0.40)
- Europe > United Kingdom (0.26)
- Energy (1.00)
- Government > Regional Government > North America Government > United States Government (0.40)
A Novice's Guide to Hyperparameter Optimization at Scale
Despite the tremendous success of machine learning (ML), modern algorithms still depend on a variety of free, non-trainable hyperparameters. Ultimately, our ability to select quality hyperparameters governs the performance of a given model. In the past, and to some extent still today, hyperparameters were selected by hand through trial and error. An entire field, known as hyperparameter optimization (HPO), is dedicated to improving this selection process. Inherently, HPO requires testing many different hyperparameter configurations, and as a result it can benefit tremendously from massively parallel resources like the Perlmutter system we are building at the National Energy Research Scientific Computing Center (NERSC); a minimal random-search sketch appears below.
- Information Technology > Artificial Intelligence > Machine Learning (0.57)
- Information Technology > Software (0.41)
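To make the HPO idea concrete, here is a minimal random-search sketch in Python. The objective function, search space, and parameter names are all invented for illustration; the article names no specific framework, and production HPO on a system like Perlmutter would distribute these independent trials across many nodes.

```python
import math
import random

# Stand-in for an expensive training-plus-validation run; real HPO would
# train a model here. This toy score peaks near lr = 1e-3 and 4 layers.
def validation_score(learning_rate, batch_size, num_layers):
    return -abs(math.log10(learning_rate) + 3) - 0.1 * abs(num_layers - 4)

# Hypothetical search space: each entry draws one random value.
search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),
    "batch_size": lambda: random.choice([32, 64, 128, 256]),
    "num_layers": lambda: random.randint(1, 8),
}

def sample_config():
    return {name: draw() for name, draw in search_space.items()}

# Trials are independent, which is why HPO parallelizes so well: on an
# HPC system each evaluation could run on its own GPU or node.
trials = [sample_config() for _ in range(50)]
results = [(validation_score(**cfg), cfg) for cfg in trials]
best_score, best_cfg = max(results, key=lambda pair: pair[0])
print(f"best score {best_score:.3f} with config {best_cfg}")
```

More sophisticated HPO methods (Bayesian optimization, early-stopping schemes) replace the independent random draws with smarter proposals, but the parallel structure is the same.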
Etalumis 'Reverses' Simulations to Reveal New Science
Scientists have built simulations to help explain behavior in the real world, including models of disease transmission and prevention, autonomous vehicles, and the climate, as well as searches for the fundamental secrets of the universe. But how to interpret vast volumes of experimental data in terms of these detailed simulations remains a key challenge. Probabilistic programming offers a solution, essentially reverse-engineering the simulation, but the technique has long been limited by the need to rewrite the simulation in custom computer languages and by the intense computing power required (a toy illustration of the idea appears below). To address this challenge, a multinational collaboration of researchers using computing resources at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) has developed the first probabilistic programming framework capable of controlling existing simulators and running at large scale on HPC platforms. The system, called Etalumis ("simulate" spelled backwards), was developed by a group of scientists from the University of Oxford, the University of British Columbia (UBC), Intel, New York University, CERN, and NERSC as part of a Big Data Center project.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.57)
- North America > United States > New York (0.25)
- North America > Canada > British Columbia (0.25)
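The phrase "reverse-engineering the simulation" can be made concrete with a toy example. The sketch below is not Etalumis itself, which couples to existing large-scale simulators; it is a minimal rejection-sampling illustration, with an invented one-line simulator and observation, of how running a simulator forward many times lets you infer which hidden inputs could have produced an observed output.

```python
import random
import statistics

# Hypothetical black-box simulator: a latent rate produces a noisy reading.
# In Etalumis the simulator is real scientific code; this stand-in just
# adds Gaussian noise to its input.
def simulator(rate):
    return rate + random.gauss(0.0, 1.0)

observed = 7.3  # the measurement we want to explain (invented)

# Rejection-style inference: draw latent inputs from a prior, run the
# simulator forward, and keep inputs whose outputs land near the data.
# Probabilistic programming frameworks automate and generalize this idea.
accepted = []
for _ in range(100_000):
    rate = random.uniform(0.0, 20.0)  # prior over the latent input
    if abs(simulator(rate) - observed) < 0.2:
        accepted.append(rate)

print(f"posterior mean rate ~ {statistics.mean(accepted):.2f} "
      f"({len(accepted)} accepted samples)")
```

Naive rejection like this wastes most of its simulator runs, which hints at why inference over real scientific simulators demands both smarter algorithms and HPC-scale compute.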
Deep Learning Stretches Up to Scientific Supercomputers
Machine learning, a form of artificial intelligence, enjoys unprecedented success in commercial applications, but its use in high-performance computing for science has been limited. Why? Advanced machine learning tools weren't designed for the big data sets used to study stars and planets. A team from Intel, the National Energy Research Scientific Computing Center (NERSC), and Stanford changed that, achieving a peak rate of between 11.73 and 15.07 petaflops (single precision) when running its data set on the Cori supercomputer.
- Energy (0.78)
- Government > Regional Government (0.35)
Machine Learning Sifts & Searches Complex Scientific Data
As scientific datasets increase in both size and complexity, the ability to label, filter and search this deluge of information has become a laborious, time-consuming and sometimes impossible task without the help of automated tools enabled by machine learning. With this in mind, a team of researchers from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley is developing innovative machine learning tools to pull contextual information from scientific datasets and automatically generate metadata tags for each file. Scientists can then search these files via Science Search, a web-based search engine for scientific data that the Berkeley team is building. As a proof of concept, the team is working with staff at Berkeley Lab's Molecular Foundry to demonstrate Science Search on images captured by the facility's instruments; a toy sketch of the auto-tagging idea appears below. A beta version of the platform has been made available to Foundry researchers.
- Energy (0.89)
- Education > Educational Setting > Higher Education (0.35)
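As a rough illustration of automatic metadata tagging, the sketch below ranks the words in each file's surrounding text by a simple TF-IDF score and keeps the top few as tags. The filenames and descriptions are invented, and the real Science Search pipeline is far richer, combining machine learning models with ranking over many kinds of context.

```python
import math
import re
from collections import Counter

# Invented stand-ins for files at a user facility and their textual context.
documents = {
    "scan_001.tif": "TEM image of gold nanoparticles on a carbon support",
    "scan_002.tif": "TEM image of silver nanowires at high magnification",
    "notes_001.txt": "synthesis notes for the gold nanoparticle batch",
}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

# Document frequency: how many files mention each term at least once.
doc_freq = Counter()
for text in documents.values():
    doc_freq.update(set(tokenize(text)))

n_docs = len(documents)

def top_tags(text, k=3):
    """Rank a file's terms by TF-IDF and return the top k as metadata tags."""
    tf = Counter(tokenize(text))
    scores = {t: c * math.log(n_docs / doc_freq[t]) for t, c in tf.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

for name, text in documents.items():
    print(name, "->", top_tags(text))
```

Terms that appear in every file score zero and fall to the bottom, so the surviving tags are the ones that actually distinguish a file, which is what makes them useful for search.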
Researchers use machine learning to search science data
As scientific datasets increase in both size and complexity, the ability to label, filter and search this deluge of information has become a laborious, time-consuming and sometimes impossible task without the help of automated tools. With this in mind, a team of researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley is developing innovative machine learning tools to pull contextual information from scientific datasets and automatically generate metadata tags for each file. Scientists can then search these files via Science Search, a web-based search engine for scientific data that the Berkeley team is building. As a proof of concept, the team is working with staff at the Department of Energy's (DOE) Molecular Foundry, located at Berkeley Lab, to demonstrate Science Search on images captured by the facility's instruments. A beta version of the platform has been made available to Foundry researchers.
- Energy (0.90)
- Education > Educational Setting > Higher Education (0.35)
Berkeley Lab researchers use machine learning to search science data
[Image: screenshot of the Science Search interface, showing an image search for nanoparticles.] As scientific datasets increase in both size and complexity, the ability to label, filter and search this deluge of information has become a laborious, time-consuming and sometimes impossible task without the help of automated tools. With this in mind, a team of researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley is developing innovative machine learning tools to pull contextual information from scientific datasets and automatically generate metadata tags for each file. Scientists can then search these files via Science Search, a web-based search engine for scientific data that the Berkeley team is building.
- Energy (0.49)
- Education > Educational Setting > Higher Education (0.35)