Applying just a bit of strain to a piece of semiconductor or other crystalline material can deform the orderly arrangement of atoms in its structure enough to cause dramatic changes in its properties, such as the way it conducts electricity, transmits light, or conducts heat. Now, a team of researchers at MIT and in Russia and Singapore has found ways to use artificial intelligence to help predict and control these changes, potentially opening new avenues of research on advanced materials for future high-tech devices.

The findings appeared in early February in the Proceedings of the National Academy of Sciences, in a paper authored by Ju Li, MIT professor of nuclear science and engineering and of materials science and engineering; Ming Dao, MIT principal research scientist; and Zhe Shi, MIT graduate student; along with Evgenii Tsymbalov and Alexander Shapeev of the Skolkovo Institute of Science and Technology in Russia, and Subra Suresh, the Vannevar Bush Professor Emeritus and former dean of engineering at MIT and current president of Nanyang Technological University in Singapore.

Already, based on earlier work at MIT, some degree of elastic strain has been incorporated in some silicon processor chips. Even a 1 percent change in the structure can in some cases improve a device's speed by 50 percent, by allowing electrons to move through the material faster.
Scientists know that applying a bit of strain can dramatically alter a crystalline material's properties, but finding the right strain is another matter when the possibilities are virtually limitless. There may be a straightforward solution, though: let AI do the heavy lifting. An international team of researchers has devised a way for machine learning to find the strains that achieve the best results. Their neural-network algorithm predicts how the direction and degree of strain will affect a key property governing a semiconductor's efficiency, removing the need for educated guesses from humans. The technology could lead to semiconductor-based devices that are far more powerful than usual with only minor structural changes.
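The idea described above, training a fast model to predict a property from the applied strain and then searching the strain space for the best value, can be sketched in a few lines. Everything here is illustrative: the `bandgap` function is a made-up toy stand-in for real physics, and a simple quadratic fit stands in for the researchers' neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "physics": bandgap (eV) as a smooth function of the six
# Voigt strain components. An illustrative stand-in, not a real model.
def bandgap(strain):
    return 1.1 - 2.0 * strain[..., 0] + 5.0 * np.sum(strain**2, axis=-1)

# Sample random strains in the +/-3% range and fit a quadratic surrogate,
# standing in for the neural network the researchers trained.
X = rng.uniform(-0.03, 0.03, size=(500, 6))
y = bandgap(X)

def features(X):
    return np.hstack([np.ones((len(X), 1)), X, X**2])

w, *_ = np.linalg.lstsq(features(X), y, rcond=None)

def surrogate(X):
    return features(X) @ w

# Screen a large pool of candidate strains for the smallest predicted gap;
# the surrogate makes this cheap compared with direct simulation of each one.
pool = rng.uniform(-0.03, 0.03, size=(20000, 6))
best = pool[np.argmin(surrogate(pool))]
print("best strain:", np.round(best, 4))
```

The payoff is the same as in the article: once the surrogate is trained, evaluating thousands of candidate strain states costs almost nothing, so the search no longer depends on human intuition.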
Over the past five years, rapid progress in photovoltaic technology has been further accelerated by materials called perovskites. They require only common ingredients and relatively easy manufacturing methods, holding out the possibility of cheap thin-film cells on a variety of surfaces or combined with silicon in large panels. In the laboratory, small-area cells made with these materials already feature solar-conversion efficiencies as high as 22%, rivaling those of traditional silicon solar cells.
Scientists have developed a way to make diamond bend like rubber. The breakthrough, albeit seen on an extremely small scale, could pave the way for devices made from ultra-strong yet flexible diamond-based materials. In the study, an international team of researchers found that tiny nanoscale diamond needles, measuring just a few micrometers tall, could bend by as much as 9 percent without snapping, and then reverted to their original shape afterward.
Materials scientists are increasingly adopting machine learning (ML) for potentially important decisions such as the discovery, development, optimization, synthesis, and characterization of materials. However, despite ML's impressive performance in commercial applications, several unique challenges arise when applying ML to materials science. In this context, the contributions of this work are twofold. First, we identify common pitfalls of existing ML techniques when learning from underrepresented/imbalanced materials data. Specifically, we show that with imbalanced data, standard methods for assessing the quality of ML models break down and lead to misleading conclusions. Furthermore, we find that a model's own confidence score cannot be trusted, and that model-introspection methods (using simpler models) do not help, as they result in a loss of predictive performance (a reliability-explainability trade-off). Second, to overcome these challenges, we propose a general-purpose, explainable, and reliable machine-learning framework. Specifically, we propose a novel pipeline that employs an ensemble of simpler models to reliably predict material properties. We also propose a transfer-learning technique and show that the performance loss due to the models' simplicity can be overcome by exploiting correlations among different material properties. We further propose a new evaluation metric and a trust score to better quantify confidence in the predictions. To improve interpretability, we add a rationale-generator component to the framework that provides both model-level and decision-level explanations. Finally, we demonstrate the versatility of our technique on two applications: (1) predicting the properties of crystalline compounds, and (2) identifying potentially stable novel solar-cell materials.
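The core idea of the proposed pipeline, an ensemble of simple models whose agreement doubles as a trust score, can be illustrated with a minimal sketch. This is not the authors' actual framework: the dataset is synthetic, ridge regression stands in for their "simpler models", and the trust score here is just a hypothetical function of ensemble spread.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset standing in for a materials table: 5 descriptors per compound
# mapped to one target property (e.g. formation energy). Purely synthetic.
X = rng.normal(size=(300, 5))
true_w = np.array([1.0, -0.5, 0.3, 0.0, 2.0])
y = X @ true_w + 0.1 * rng.normal(size=300)

def fit_ridge(X, y, lam=1e-2):
    # Closed-form ridge regression: one "simple model" in the ensemble.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Bootstrap ensemble: each simple model sees a resampled view of the data.
K = 10
models = [fit_ridge(X[idx], y[idx])
          for idx in (rng.integers(0, len(X), len(X)) for _ in range(K))]

def predict_with_trust(x):
    preds = np.array([x @ w for w in models])
    # Ensemble mean is the prediction; a trust score in (0, 1] shrinks as
    # the ensemble members disagree (high spread -> low trust).
    return preds.mean(), 1.0 / (1.0 + preds.std())

pred, trust = predict_with_trust(X[0])
```

The design choice this illustrates is the one the abstract argues for: rather than asking a single complex model for a confidence score it cannot be trusted to give, confidence is derived from how much independently trained simple models agree on each prediction.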