If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Inherent batch-to-batch variability, aging, and contamination are major factors contributing to variability in oil-field cement-slurry performance. Of particular concern are problems encountered when a slurry is formulated with one cement sample and used with a batch having different properties. Such variability imposes a heavy burden on performance testing and is often a major factor in operational failure. We describe methods that allow the identification, characterization, and prediction of the variability of oil-field cements. Our approach involves predicting cement compositions, particle-size distributions, and thickening-time curves from the diffuse reflectance infrared Fourier transform spectrum of neat cement powders.
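The core of such an approach is a regression from a measured spectrum to a property of interest. As a minimal sketch of that idea, and not the study's actual method (chemometric models such as PLS are more typical), here is a ridge regression from synthetic "spectra" to a synthetic property; all data, dimensions, and the regularization constant are made up for illustration:

```python
import numpy as np

# Hypothetical sketch: predict a cement property (e.g., a phase fraction)
# from an infrared spectrum via ridge regression. Spectra are synthetic.
rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 40, 200

# A sparse "true" relationship between absorbance and the property
true_coef = rng.normal(size=n_wavenumbers) * (rng.random(n_wavenumbers) < 0.05)
X = rng.normal(size=(n_samples, n_wavenumbers))        # DRIFT spectra
y = X @ true_coef + 0.01 * rng.normal(size=n_samples)  # property + noise

# Ridge regression: w = (X^T X + lam*I)^-1 X^T y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(n_wavenumbers), X.T @ y)

pred = X @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

In practice one would validate on held-out cement batches, since the whole point is coping with batch-to-batch variability rather than fitting one batch well.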
In this article, we first survey the three major types of computer music systems based on AI techniques: (1) compositional, (2) improvisational, and (3) performance systems. Representative examples of each type are briefly described. Then, we look in more detail at the problem of endowing the resulting performances with the expressiveness that characterizes human-generated music. This is one of the most challenging aspects of computer music, and one that has only recently been addressed. The main problem in modeling expressiveness is to grasp the performer's "touch," that is, the knowledge applied when performing a score.
Philip Sheppard has recorded solo cello albums, composed more than 60 soundtracks, and adapted 206 national anthems for Olympic Games medal ceremonies. He knows every corner of the famed Abbey Road Studios. His work is the process of creating music, but sometimes he just wants to listen, preferably while walking in the woods. During one of those walks, in 2016, he didn't want to be bothered choosing what music to play. Instead he imagined some sort of magical accompaniment piped into his earphones that would dynamically reflect his surroundings and his mood, a literal soundtrack for his saunter.
Over the last century, scientists have developed methods to map the structures within the Earth's crust, in order to identify resources such as oil reserves, geothermal sources, and, more recently, reservoirs where excess carbon dioxide could potentially be sequestered. They do so by tracking seismic waves that are produced naturally by earthquakes or artificially via explosives or underwater air guns. The way these waves bounce and scatter through the Earth can give scientists an idea of the type of structures that lie beneath the surface. There is a narrow range of seismic waves -- those that occur at low frequencies of around 1 hertz -- that could give scientists the clearest picture of underground structures spanning wide distances. But these waves are often drowned out by Earth's noisy seismic hum, and are therefore difficult to pick up with current detectors.
Researchers have developed a new machine learning algorithm that can determine whether ancient excrement was deposited by a human or a dog. Bones and artifacts are great, but ancient poop can offer archaeologists tremendous insights, too -- insights into dietary patterns, parasite evolution and more. The only problem is that it can be hard to identify the owner of really old feces. Specifically, scientists have trouble differentiating between ancient human and dog feces. Dogs have been hanging out around humans for thousands of years.
It's great to see more research and more datasets on complex QA and reasoning tasks. Whereas last year we saw a surge of multi-hop reading comprehension datasets (e.g., HotpotQA), this year at ICLR there is a strong line-up of papers dedicated to studying compositionality and logical complexity: and here KGs are of great help! Keysers et al. study how to measure compositional generalization of QA models, i.e., when train and test splits operate on the same set of entities (broadly, logical atoms), but the composition of such atoms is different. The authors design a new large KGQA dataset, CFQ (Compositional Freebase Questions), comprising about 240K questions built from 35K SPARQL query patterns. Several fascinating points: 1) the questions are annotated with EL Description Logic (yes, those were the times around 2005 when DL mostly meant Description Logic, not Deep Learning); 2) as the dataset is positioned towards semantic parsing, all questions already have linked Freebase IDs (URIs), so you don't need to plug in your favourite Entity Linking system (like ElasticSearch).
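To make the idea of a compositional split concrete, here is a deliberately simplified sketch (CFQ's actual splits are built with a more sophisticated divergence-based method): the same atoms appear in both splits, but whole query patterns held out for testing never occur in training. All questions and patterns below are invented:

```python
# Hypothetical mini-dataset: each question is tagged with its query pattern.
questions = [
    {"q": "Who directed Inception?",       "pattern": "directed(?m, ?p)"},
    {"q": "Who directed Alien?",           "pattern": "directed(?m, ?p)"},
    {"q": "Who wrote and directed Alien?", "pattern": "wrote & directed"},
    {"q": "Who wrote and directed Heat?",  "pattern": "wrote & directed"},
]

def compositional_split(items, held_out_patterns):
    """Split so that held-out patterns never appear in training,
    even though the entities and predicates they combine do."""
    train = [x for x in items if x["pattern"] not in held_out_patterns]
    test = [x for x in items if x["pattern"] in held_out_patterns]
    return train, test

train, test = compositional_split(questions, {"wrote & directed"})
```

A model that merely memorizes patterns fails on such a split; one that composes known atoms generalizes.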
How smart is artificial intelligence and how fast is it advancing? In a recent post I mentioned machine learning research that taught an algorithm to master 40-year-old video games. If Atari 1982 video games are the peak of 2020 research, how hard is it to train an algorithm to play Pong, which came out in 1972? Machine learning can master the game in about 250 lines of code. In fact, Pong is one of the most popular ways of teaching reinforcement learning theory and practice.
The current tsunami of deep learning (the hyper-vitamined return of artificial neural networks) applies not only to traditional statistical machine learning tasks such as prediction and classification (e.g., for weather prediction and pattern recognition); it has already conquered other areas, such as translation. A growing area of application is the generation of creative content, in particular music, the topic of this paper. The motivation is to use the capacity of modern deep learning techniques to automatically learn musical styles from arbitrary musical corpora and then to generate musical samples from the estimated distribution, with some degree of control over the generation. This article provides a survey of music generation based on deep learning techniques. After a short introduction to the topic illustrated by a recent example, the article analyses some early works from the late 1980s using artificial neural networks for music generation and shows how their pioneering contributions foreshadowed current techniques. Then, we introduce a conceptual framework to analyze the various concepts and dimensions involved. Various examples of recent systems are introduced and analyzed to illustrate the variety of concerns and techniques.
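The estimate-a-distribution-then-sample loop at the core of the survey can be illustrated far below deep-learning scale with a first-order Markov chain over note names (the corpus here is invented; real systems learn far richer distributions over pitch, duration, and more):

```python
import random
from collections import Counter, defaultdict

# Toy "corpus" of note names; a real corpus would be MIDI or scores.
corpus = ["C", "E", "G", "E", "C", "E", "G", "C", "G", "E", "C"]

# Estimate the distribution: count note-to-note transitions
transitions = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    transitions[a][b] += 1

def sample_melody(start, length, rng):
    """Generate by repeatedly sampling the next note from the
    estimated transition distribution."""
    melody, note = [start], start
    for _ in range(length - 1):
        notes, weights = zip(*transitions[note].items())
        note = rng.choices(notes, weights=weights)[0]
        melody.append(note)
    return melody

melody = sample_melody("C", 8, random.Random(0))
```

Swapping the transition table for a trained neural network, and note names for a richer event vocabulary, gives the family of systems the survey covers.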
The computation and memory needed for Convolutional Neural Network (CNN) inference can be reduced by pruning weights from the trained network. Pruning is guided by a pruning saliency, which heuristically approximates the change in the loss function associated with the removal of specific weights. Many pruning signals have been proposed, but the performance of each heuristic depends on the particular trained network. This leaves the data scientist with a difficult choice. When using any one saliency metric for the entire pruning process, we run the risk of the metric assumptions being invalidated, leading to poor decisions being made by the metric. Ideally we could combine the best aspects of different saliency metrics. However, despite an extensive literature review, we are unable to find any prior work on composing different saliency metrics. The chief difficulty lies in combining the numerical output of different saliency metrics, which are not directly comparable. We propose a method to compose several primitive pruning saliencies, to exploit the cases where each saliency measure does well. Our experiments show that the composition of saliencies avoids many poor pruning choices identified by individual saliencies. In most cases our method finds better selections than even the best individual pruning saliency.
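Since raw saliency scores live on incomparable scales, one simple way to combine them (a hedged sketch, not necessarily the paper's composition method) is to convert each metric to ranks and aggregate the ranks; the two metrics and all values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=10)  # stand-in for trained CNN weights
grads = rng.normal(size=10)    # stand-in for loss gradients w.r.t. weights

magnitude = np.abs(weights)        # magnitude saliency
taylor = np.abs(weights * grads)   # first-order Taylor saliency

def ranks(scores):
    # rank 0 = least salient (best candidate for pruning)
    return np.argsort(np.argsort(scores))

# Compose by summing ranks: incomparable raw scores become comparable
combined = ranks(magnitude) + ranks(taylor)
prune_idx = np.argsort(combined)[:3]  # prune the 3 least-salient weights
```

A weight is then pruned only when both heuristics agree it matters little, avoiding cases where one metric's assumptions fail.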
Structure is the most basic and important property of crystalline solids; it determines directly or indirectly most materials characteristics. However, predicting the crystal structure of solids remains a formidable and not fully solved problem. Standard theoretical tools for this task are computationally expensive and at times inaccurate. Here we present an alternative approach utilizing machine learning for crystal structure prediction. We developed a tool called Crystal Structure Prediction Network (CRYSPNet) that can predict the Bravais lattice, space group, and lattice parameters of an inorganic material based only on its chemical composition. CRYSPNet consists of a series of neural network models, using as inputs predictors that aggregate the properties of the elements constituting the compound. It was trained and validated on more than 100,000 entries from the Inorganic Crystal Structure Database. The tool demonstrates robust predictive capability and outperforms alternative strategies by a large margin. Made available to the public (at https://github.com/AuroraLHT/cryspnet), it can be used both as an independent prediction engine and as a method to generate candidate structures for further computational and/or experimental validation.
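The "predictors aggregating elemental properties" step can be sketched as follows; this is an illustrative featurization, not CRYSPNet's actual feature set, and the tiny element-property table holds made-up stand-in values:

```python
import numpy as np

# Hypothetical per-element properties: [atomic radius (pm), electronegativity]
element_props = {
    "Na": [186.0, 0.93],
    "Cl": [100.0, 3.16],
    "Ti": [147.0, 1.54],
    "O":  [60.0, 3.44],
}

def featurize(composition):
    """composition: dict element -> molar amount.
    Returns composition-weighted mean plus elementwise max/min of each
    property -- a fixed-length vector a classifier could consume."""
    fracs = np.array(list(composition.values()), dtype=float)
    fracs /= fracs.sum()  # normalize to molar fractions
    props = np.array([element_props[e] for e in composition])
    weighted_mean = fracs @ props
    return np.concatenate([weighted_mean, props.max(axis=0), props.min(axis=0)])

features = featurize({"Ti": 1, "O": 2})  # TiO2
```

Feeding such vectors to a neural network classifier over Bravais lattices (and regressors for lattice parameters) gives the overall shape of the approach.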