How Machine Learning Could Predict Rare Disastrous Events – Like Earthquakes or Pandemics

#artificialintelligence

Researchers from Brown University and MIT have developed a framework that combines advanced machine learning with sequential sampling techniques to predict rare events without the need for large data sets. When it comes to predicting disasters brought on by extreme events (think earthquakes, pandemics, or "rogue waves" that could destroy coastal structures), computational modeling faces an almost insurmountable challenge: statistically speaking, these events are so rare that there is simply not enough data on them for predictive models to accurately forecast when they will happen next. The team suggests that it does not have to be that way. In a study published in Nature Computational Science, the researchers explain how they combined statistical algorithms that require less data for accurate predictions with a powerful machine learning technique developed at Brown University.
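
The article does not spell out the algorithmic details, but the core idea of pairing a data-efficient surrogate model with sequential sampling can be sketched in a few lines. The sketch below uses a Gaussian-process surrogate from scikit-learn and a toy acquisition rule that favors candidate points that are both uncertain and likely extreme; the simulated system and the acquisition rule are illustrative stand-ins, not the authors' actual method.

    # Minimal sketch of sequential sampling for rare-event prediction:
    # a surrogate model is refit as each new, maximally informative
    # sample is added, so few expensive evaluations are needed overall.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_simulation(x):
        # Stand-in for a costly solver; the spike mimics a rare extreme event.
        return np.sin(3 * x) + 2.0 * np.exp(-50.0 * (x - 0.6) ** 2)

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(4, 1))            # tiny initial data set
    y = expensive_simulation(X).ravel()
    candidates = np.linspace(0, 1, 200).reshape(-1, 1)

    for _ in range(10):                           # sequential-sampling loop
        gp = GaussianProcessRegressor().fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        # Acquisition favors points that are both uncertain and likely extreme.
        score = mean + 2.0 * std
        x_next = candidates[np.argmax(score)].reshape(1, 1)
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_simulation(x_next).ravel())

    print("largest response found:", y.max())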


Enhancing computational fluid dynamics with machine learning - Nature Computational Science

#artificialintelligence

Machine learning is rapidly becoming a core technology for scientific computing, with numerous opportunities to advance the field of computational fluid dynamics. Here we highlight some of the areas of highest potential impact, including accelerating direct numerical simulations, improving turbulence closure modeling, and developing enhanced reduced-order models. We also discuss emerging areas of machine learning that are promising for computational fluid dynamics, as well as potential limitations that should be taken into account. Machine learning has already been used to accelerate the simulation of fluid dynamics, but despite recent developments in this field there are still challenges for the community to address, a fact that creates research opportunities.
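
As a concrete illustration of one of these directions, here is a minimal, hypothetical sketch of data-driven turbulence closure modeling: a small neural network is trained to map resolved-flow features to an eddy-viscosity correction. The features, the synthetic training target, and all constants are invented for illustration and do not come from the paper.

    # Sketch: learning a turbulence closure term from data.  A small network
    # maps resolved-flow features (here: synthetic strain-rate magnitude and
    # wall distance, both invented stand-ins) to an eddy-viscosity target.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n = 2000
    strain = rng.uniform(0, 10, n)        # |S|, resolved strain-rate magnitude
    wall_dist = rng.uniform(0, 1, n)      # normalized distance to the wall
    features = np.column_stack([strain, wall_dist])

    # Synthetic "ground truth" closure: a Smagorinsky-like eddy viscosity
    # with a wall-damping factor, used only to generate training targets.
    target = 0.01 * strain * (1 - np.exp(-wall_dist / 0.1))

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0)
    model.fit(features, target)

    # In a solver loop, the learned closure would replace the algebraic model:
    print(model.predict([[5.0, 0.5]]))    # predicted eddy viscosity at one cell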


Challenges and opportunities in quantum machine learning - Nature Computational Science

#artificialintelligence

At the intersection of machine learning and quantum computing, quantum machine learning has the potential to accelerate data analysis, especially for quantum data, with applications in quantum materials, biochemistry and high-energy physics. Nevertheless, challenges remain regarding the trainability of quantum machine learning models. Here we review current methods and applications for quantum machine learning. We highlight differences between quantum and classical machine learning, with a focus on quantum neural networks and quantum deep learning. Finally, we discuss opportunities for quantum advantage with quantum machine learning. Quantum machine learning has become an essential tool for processing and analyzing the growing amount of quantum data. Despite recent progress, many challenges remain to be addressed, along with myriad future avenues for research.
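
To make the notion of a quantum neural network concrete, the sketch below simulates a one-qubit variational circuit in plain NumPy: the input is encoded as a rotation, a trainable rotation follows, and the Pauli-Z expectation value serves as the model output, trained with the parameter-shift rule. This is a pedagogical toy under those assumptions, not a method from the review.

    # Sketch of a minimal "quantum neural network", simulated classically.
    import numpy as np

    def ry(theta):
        # Single-qubit rotation about the Y axis.
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    def circuit(x, theta):
        # |0> -> data-encoding rotation -> trainable rotation -> measure <Z>.
        state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
        return state[0] ** 2 - state[1] ** 2

    def grad(theta, xs, ys):
        # Parameter-shift rule: exact derivative of each expectation value.
        g = 0.0
        for x, y in zip(xs, ys):
            df = (circuit(x, theta + np.pi / 2)
                  - circuit(x, theta - np.pi / 2)) / 2
            g += 2 * (circuit(x, theta) - y) * df
        return g / len(xs)

    xs = np.array([0.1, 0.4, 2.5, 2.9])       # toy inputs
    ys = np.array([1.0, 1.0, -1.0, -1.0])     # toy labels

    theta = 0.0
    for _ in range(200):                      # plain gradient descent
        theta -= 0.4 * grad(theta, xs, ys)

    print("theta:", round(theta, 3),
          "predictions:", [round(circuit(x, theta), 2) for x in xs])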


Artificial Intelligence Discovers Alternative Physics

#artificialintelligence

Image: latent embeddings from the framework, colored by physical state variables.

A new Columbia University AI program observed physical phenomena and uncovered relevant variables, a necessary precursor to any physics theory. But the variables it discovered were unexpected. Energy, mass, velocity: these three variables make up Einstein's iconic equation E = mc². But how did Albert Einstein know about these concepts in the first place?


Using metric learning to identify the lab-of-origin of engineered DNA - Nature Computational Science

#artificialintelligence

Determining the origin of engineered DNA can help to foster responsible innovation within the biotechnology community. A convolutional neural network that learns distances between engineered DNA sequences and the labs that could have created them is used to accurately predict the lab-of-origin.
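
A minimal sketch of the metric-learning idea, assuming a triplet-loss setup in PyTorch: a small 1-D convolutional encoder embeds one-hot DNA sequences so that sequences from the same lab land close together. The architecture, dimensions, and random toy batch are illustrative assumptions, not the paper's actual network.

    # Sketch of metric learning for sequence attribution: a small 1-D CNN
    # embeds one-hot DNA sequences, and a triplet loss pulls sequences from
    # the same (hypothetical) lab together while pushing other labs apart.
    import torch
    import torch.nn as nn

    class SeqEncoder(nn.Module):
        def __init__(self, embed_dim=16):
            super().__init__()
            self.conv = nn.Conv1d(4, 8, kernel_size=5)  # 4 channels: A, C, G, T
            self.fc = nn.Linear(8, embed_dim)

        def forward(self, x):                  # x: (batch, 4, seq_len)
            h = torch.relu(self.conv(x)).mean(dim=2)    # global average pool
            return nn.functional.normalize(self.fc(h), dim=1)  # unit norm

    encoder = SeqEncoder()
    loss_fn = nn.TripletMarginLoss(margin=0.5)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    # Toy batch: anchor and positive from one lab, negative from another.
    anchor, positive, negative = (torch.rand(32, 4, 100) for _ in range(3))
    opt.zero_grad()
    loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    opt.step()
    # At prediction time, a query sequence is assigned to the lab whose
    # reference embeddings lie closest in this learned metric space.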


Opportunities for neuromorphic computing algorithms and applications - Nature Computational Science

#artificialintelligence

With the end of Moore's law approaching and Dennard scaling over, the computing community is increasingly looking at new technologies to enable continued performance improvements. Neuromorphic computers are one such technology. The term neuromorphic was coined by Carver Mead in the late 1980s, and at that time referred primarily to mixed analogue–digital implementations of brain-inspired computing. As the field has evolved, however, and with the advent of large-scale funding opportunities for brain-inspired computing systems such as the DARPA SyNAPSE program and the European Union's Human Brain Project, the term has come to encompass a wider variety of hardware implementations. We define neuromorphic computers as non-von Neumann computers whose structure and function are inspired by brains and that are composed of neurons and synapses. Von Neumann computers, by contrast, are composed of separate CPUs and memory units, with data and instructions stored in the latter.
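
The neuron-and-synapse compute model can be made concrete with a few lines of NumPy: a leaky integrate-and-fire layer in which the synaptic weights (the memory) and the membrane updates (the computation) live in the same place, in contrast to the separate CPU and memory of a von Neumann machine. All constants here are arbitrary toy values.

    # Sketch: the neuron-and-synapse model that neuromorphic hardware
    # implements, as a leaky integrate-and-fire (LIF) simulation.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 20, 5
    weights = rng.normal(0, 0.5, (n_out, n_in))  # synapses: the "memory"
    v = np.zeros(n_out)                          # membrane potentials
    tau, threshold = 0.9, 1.0

    for t in range(100):
        spikes_in = (rng.random(n_in) < 0.1).astype(float)  # random input spikes
        v = tau * v + weights @ spikes_in        # leak + synaptic integration
        fired = v >= threshold                   # neurons that spike this step
        v[fired] = 0.0                           # reset after firing
        if fired.any():
            print(f"t={t:3d} spikes from neurons {np.flatnonzero(fired)}")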


Moving towards reproducible machine learning - Nature Computational Science

#artificialintelligence

An important step when constructing a model is the collection and selection of the datasets, as the quality of the model greatly depends on the quality and characteristics of the data. The data collection process needs to be properly discussed and reported, as there can be biases (intentional and/or unintentional) with regard to the selected data sources. Any identified biases, and attempts to mitigate them, should also be properly discussed, so that other researchers are aware of the limitations when using the reported models. If synthetic data are used, the data generation process, including any assumptions made, needs to be described in detail. Raw datasets are rarely used as-is, since they often contain inconsistencies, errors, and outliers that can ultimately affect the quality of the model. In addition, data might need to be converted to a specific format and representation in order to be used with a specific model.
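
One way to act on these recommendations is to make every cleaning decision parameterized, seeded, and logged, so that other researchers can audit and rerun the pipeline. The sketch below is a hypothetical example of such a documented preprocessing step; the column name and thresholds are invented for illustration.

    # Sketch of a reproducible preprocessing step: each decision is recorded
    # in a report that can be shipped alongside the trained model.
    import json
    import pandas as pd

    SEED = 42  # fixed seed: any synthetic or sampled data becomes regenerable

    def preprocess(df: pd.DataFrame) -> tuple[pd.DataFrame, dict]:
        report = {"rows_in": len(df), "seed": SEED}

        df = df.drop_duplicates()
        report["duplicates_removed"] = report["rows_in"] - len(df)

        # Outlier handling: document the rule itself, not just its effect.
        lo, hi = df["value"].quantile([0.01, 0.99])
        df = df[df["value"].between(lo, hi)]
        report["outlier_rule"] = (
            f"kept 1st-99th percentile of 'value' [{lo:.3g}, {hi:.3g}]")

        report["rows_out"] = len(df)
        return df, report

    df = pd.DataFrame({"value": [1.0, 1.0, 2.0, 3.0, 250.0]})
    clean, report = preprocess(df)
    print(json.dumps(report, indent=2))   # audit log for other researchers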