SMART Algorithm Makes Beamline Data Collection Smarter

#artificialintelligence

Pictured: a synthetic test function in two dimensions that is continuous and smooth. The "data deluge" in scientific research stems in large part from the growing sophistication of experimental instrumentation and from optimization tools -- often built on machine- and deep-learning methods -- for analyzing increasingly large data sets. But equally important for improving scientific productivity is the optimization of data collection -- aka "data taking" -- methods. Toward this end, Marcus Noack, a postdoctoral scholar at Lawrence Berkeley National Laboratory in the Center for Advanced Mathematics for Energy Research Applications (CAMERA), and James Sethian, director of CAMERA and Professor of Mathematics at UC Berkeley, have been working with beamline scientists at Brookhaven National Laboratory to develop and test SMART (Surrogate Model Autonomous Experiment), a mathematical method that enables autonomous experimental decision making without human interaction. SMART and its application in experiments at Brookhaven's National Synchrotron Light Source II (NSLS-II) are described in "A Kriging-Based Approach to Autonomous Experimentation with Applications to X-Ray Scattering," published in Scientific Reports.
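At its core, a Kriging (Gaussian process) surrogate lets the algorithm choose the next measurement at the point where the model is least certain, removing the human from the decision loop. The sketch below is a minimal illustration of that selection step, not the SMART implementation itself: the function names, the fixed length scale, and the random candidate grid are all assumptions made for this example.

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential (Kriging) covariance between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def posterior_variance(x_train, x_cand, noise=1e-6):
    """GP posterior variance at candidate points, given measured locations."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k = rbf(x_train, x_cand)
    # var(x*) = k(x*, x*) - k*^T K^-1 k*, with k(x*, x*) = 1 for this kernel
    return 1.0 - np.einsum('ij,ji->i', k.T, np.linalg.solve(K, k))

rng = np.random.default_rng(0)
measured = rng.uniform(size=(5, 2))       # five initial measurements in 2-D
candidates = rng.uniform(size=(200, 2))   # possible next instrument settings
next_idx = np.argmax(posterior_variance(measured, candidates))
next_point = candidates[next_idx]         # most uncertain location: measure here next
```

After each new measurement, the surrogate is refit and the loop repeats, so the experiment concentrates data taking where the model knows least.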


Autonomous Materials Discovery Driven by Gaussian Process Regression with Inhomogeneous Measurement Noise and Anisotropic Kernels

arXiv.org Machine Learning

A majority of experimental disciplines face the challenge of exploring large and high-dimensional parameter spaces in search of new scientific discoveries. Materials science is no exception; the wide variety of synthesis, processing, and environmental conditions that influence material properties gives rise to particularly vast parameter spaces. Recent advances have led to an increase in efficiency of materials discovery by increasingly automating the exploration processes. Methods for autonomous experimentation have become more sophisticated recently, allowing for multi-dimensional parameter spaces to be explored efficiently and with minimal human intervention, thereby liberating the scientists to focus on interpretations and big-picture decisions. Gaussian process regression (GPR) techniques have emerged as the method of choice for steering many classes of experiments. We have recently demonstrated the positive impact of GPR-driven decision-making algorithms on autonomously steering experiments at a synchrotron beamline. However, due to the complexity of the experiments, GPR often cannot be used in its most basic form, but rather has to be tuned to account for the special requirements of the experiments. Two requirements seem to be of particular importance, namely inhomogeneous measurement noise (input dependent or non-i.i.d.) and anisotropic kernel functions, which are the two concepts that we tackle in this paper. Our synthetic and experimental tests demonstrate the importance of both concepts for experiments in materials science and the benefits that result from including them in the autonomous decision-making process.
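The two concepts the abstract highlights can both be expressed as small changes to a basic GP: per-dimension length scales in the kernel (anisotropy, for parameters with very different natural scales) and a per-measurement noise variance on the covariance diagonal (inhomogeneous, input-dependent noise). The following numpy sketch illustrates both under invented settings; the function names, length scales, noise levels, and test function are assumptions for this example, not the paper's code.

```python
import numpy as np

def aniso_rbf(a, b, lengths):
    """RBF kernel with one length scale per input dimension (anisotropic)."""
    d2 = (((a[:, None, :] - b[None, :, :]) / lengths) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

def gp_predict(x_train, y_train, x_test, lengths, noise_var):
    """GP posterior mean/variance with per-point (inhomogeneous) noise.

    noise_var is a vector: each measurement carries its own variance,
    added to the matching diagonal entry of the training covariance.
    """
    K = aniso_rbf(x_train, x_train, lengths) + np.diag(noise_var)
    k = aniso_rbf(x_train, x_test, lengths)
    mean = k.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum('ij,ji->i', k.T, np.linalg.solve(K, k))
    return mean, var

rng = np.random.default_rng(1)
X = rng.uniform(size=(30, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 1]   # varies quickly in x0, slowly in x1
noise = np.full(30, 1e-4)
noise[:5] = 1e-2                          # first five measurements are noisier
lengths = np.array([0.2, 1.0])            # shorter length scale along x0
mu, var = gp_predict(X, y, X, lengths, noise)
```

Noisier measurements are automatically trusted less (the model does not interpolate them exactly), and the anisotropic length scales let the surrogate track a direction in which the response varies rapidly without overfitting the slow direction.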


Layered Graphene with a Twist Displays Unique Quantum Confinement Effects in 2-D

#artificialintelligence

Understanding how electrons move in 2-D layered material systems could lead to advances in quantum computing and communication. Scientists studying two different configurations of bilayer graphene--the two-dimensional (2-D), atom-thin form of carbon--have detected electronic and optical interlayer resonances. In these resonant states, electrons bounce back and forth between the two atomic planes in the 2-D interface at the same frequency. By characterizing these states, they found that twisting one of the graphene layers by 30 degrees relative to the other, instead of stacking the layers directly on top of each other, shifts the resonance to a lower energy. From this result, just published in Physical Review Letters, they deduced that the distance between the two layers increased significantly in the twisted configuration, compared to the stacked one.


Machine Learning Enhances Light-Beam Performance at the ALS

#artificialintelligence

This image shows the profile of an electron beam at Berkeley Lab's Advanced Light Source synchrotron, represented as pixels measured by a charge-coupled device (CCD) sensor. When stabilized by a machine-learning algorithm, the beam has a horizontal size of 49 microns (root mean squared) and a vertical size of 48 microns (root mean squared). Demanding experiments require that the corresponding light-beam size be stable on time scales ranging from less than a second to hours to ensure reliable data. Synchrotron light sources are powerful facilities that produce light in a variety of "colors," or wavelengths – from the infrared to X-rays – by accelerating electrons to emit light in controlled beams. Synchrotrons like the Advanced Light Source at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) allow scientists to explore samples in a variety of ways using this light, in fields ranging from materials science, biology, and chemistry to physics and environmental science.
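The RMS sizes quoted above are second moments of the intensity distribution recorded on the CCD. As a rough illustration of how such numbers are extracted from pixel data, here is a numpy sketch on a synthetic Gaussian beam; the pixel size, beam width, and function name are assumptions for this example, not ALS values.

```python
import numpy as np

def rms_sizes(image, pixel_um):
    """Horizontal/vertical RMS beam sizes (microns) from a CCD intensity image."""
    w = image / image.sum()                      # normalize to a weight distribution
    ys, xs = np.indices(image.shape)
    mx, my = (w * xs).sum(), (w * ys).sum()      # intensity-weighted centroid
    sx = np.sqrt((w * (xs - mx) ** 2).sum()) * pixel_um
    sy = np.sqrt((w * (ys - my) ** 2).sum()) * pixel_um
    return sx, sy

# Synthetic round Gaussian beam: sigma = 10 px at 5 um/px -> 50-micron RMS size
ys, xs = np.indices((200, 200))
beam = np.exp(-((xs - 100) ** 2 + (ys - 100) ** 2) / (2 * 10.0 ** 2))
sx, sy = rms_sizes(beam, pixel_um=5.0)           # both come out near 50 microns
```

A feedback algorithm watching these two numbers frame by frame can then correct drifts before they degrade an experiment's data.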


Machine Learning Enhances Light-Beam Performance at the Advanced Light Source

#artificialintelligence

Synchrotron light sources are powerful facilities that produce light in a variety of "colors," or wavelengths--from the infrared to X-rays--by accelerating electrons to emit light in controlled beams. Synchrotrons like the Advanced Light Source at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) allow scientists to explore samples in a variety of ways using this light, in fields ranging from materials science, biology, and chemistry to physics and environmental science. Researchers have found ways to upgrade these machines to produce more intense, focused, and consistent light beams that enable new and more complex, detailed studies across a broad range of sample types. Many of these synchrotron facilities deliver different types of light for dozens of simultaneous experiments. And small tweaks that enhance light-beam properties at individual beamlines can feed back into the overall light-beam performance across the entire facility.