"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
RADIUS guest contributor Gary Grossman currently leads the Edelman AI Center of Excellence. In that role, he led development of the 2019 Edelman Artificial Intelligence Survey. Just how important is artificial intelligence (AI)? Microsoft's Chief Envisioning Officer, Dave Coplin, recently said that AI is "the most important technology that anybody on the planet is working on today." A PwC report estimates that global GDP will be 14 percent higher in 2030 as a result of AI -- the equivalent of $15.7 trillion, more than the current output of China and India combined.
Summary: Forrester has just released "The Forrester New Wave: Automation-Focused Machine Learning Solutions, Q2 2019," a report on leading stand-alone automated machine learning platforms. It is our first good side-by-side comparison of these vendors, though you may also want to consider some platforms that were not included. You know a segment has come of age when major review publications like Gartner and Forrester publish a study on it.
"AI art", or more precisely art created with neural networks, has recently started to receive broad media coverage in newspapers (New York Times), magazines (The Atlantic), and countless blogs. Combined with the ongoing general "AI hype" and multiple recent museum and gallery exhibitions, this coverage has produced the impression of a new star rising in the art world: that of machine-generated art. It has also led to the popularization of an ever-growing list of philosophical questions surrounding the use of computers for the creation of art. This brief article provides a pragmatic evaluation of the new genre of AI art from the perspective of art history. It attempts to show that most of the philosophical questions commonly cited as unique issues of AI art have been addressed before with respect to previous iterations of generative art, starting in the late 1950s. In other words: while AI art has certainly produced novel and interesting works, from an art-historical perspective it is not the revolution it is portrayed to be.
Imagine you're sleeping, and you hear strange noises from your front lawn. You're very sleepy, so you hypothesize that the strange noises are being made by a hungry dinosaur. You think to yourself, 'this is exactly what I would hear if there were a dinosaur on my front lawn' -- the likelihood of the evidence given your hypothesis is high. But as you think more about it, you realize that the prior probability of there actually being a dinosaur on your front lawn is extremely low, whereas strange noises from the front lawn have many mundane causes and so are fairly common on their own. You exhale as you realize that the posterior probability of your original hypothesis -- a dinosaur on the lawn -- given the evidence is extremely low.
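The dinosaur reasoning above is just Bayes' theorem. A minimal sketch, with made-up probabilities chosen purely for illustration:

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
# The probabilities below are invented for the dinosaur example.

def posterior(prior, likelihood, evidence):
    """Posterior probability of hypothesis H given evidence E."""
    return likelihood * prior / evidence

p_dinosaur = 1e-9            # prior: a dinosaur on the lawn is absurdly unlikely
p_noise_given_dino = 0.99    # likelihood: a dinosaur would certainly make noise
p_noise = 0.05               # strange lawn noises happen fairly often anyway

p = posterior(p_dinosaur, p_noise_given_dino, p_noise)
print(f"P(dinosaur | noise) = {p:.2e}")  # still vanishingly small
```

Even though the likelihood term is near 1, the tiny prior dominates, which is exactly why you can go back to sleep.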
Over recent years, neural networks have come to play an increasingly central role in natural language processing. Owing in large part to milestones such as word embeddings, and to the explosion of chatbots powered by language models built, at least in part, on neural networks, advances in the domain are arriving increasingly quickly. Trying to keep up with them can be difficult. That's where today's spotlighted resource comes in. NLP Overview: Modern Deep Learning Techniques Applied to Natural Language Processing is a living resource maintained by Elvis Saravia and Soujanya Poria -- with a major part of the project having been directly borrowed from the work of Young et al. (2017), as the resource maintainers note.
Almost two years ago, I paused to think about the future of AI and wrote down some "predictions" about where I thought the field was going. One of those forecasts concerned reaching general intelligence within several years, not through a super-powerful 100-layer deep learning algorithm, but rather through something called collective intelligence. However, except for very obvious applications (e.g., drones), I have not read or seen any big development in the field, so I thought I would dig into it to check what is currently going on. As part of the AI Knowledge Map, then, I will look here not only at Swarm Intelligence (SI) but more generally at Distributed AI, which also includes Agent-Based Modeling (ABM) and Multi-Agent Systems (MAS). Let's start from the broader classification.
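To make the ABM idea concrete, here is a toy agent-based model -- a hypothetical illustration of my own, not taken from any of the systems above -- in which agents following a purely local rule produce a global outcome, the kind of emergent behavior Swarm Intelligence and ABM study:

```python
import random

def step(positions):
    """Each agent moves halfway toward the mean position of all the others.

    No agent knows the global state; each reacts only to its neighbors,
    yet the population converges to consensus (an emergent property).
    """
    n = len(positions)
    new_positions = []
    for i, x in enumerate(positions):
        others_mean = sum(p for j, p in enumerate(positions) if j != i) / (n - 1)
        new_positions.append(x + 0.5 * (others_mean - x))
    return new_positions

random.seed(0)
agents = [random.uniform(-10, 10) for _ in range(5)]
for _ in range(20):
    agents = step(agents)

spread = max(agents) - min(agents)
print(f"spread after 20 steps: {spread:.6f}")  # agents end up nearly coincident
```

The point is not the specific rule but the pattern: simple local interactions, no central controller, and a coherent collective result.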
"If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don't know how to make the cake." So said Yann LeCun. By 2016, LeCun had begun to hedge on his use of the term "unsupervised learning": at NIPS 2016, he started to use the even more nebulous term "predictive learning". I have always had trouble with the use of the term "Unsupervised Learning". In 2017, I predicted that unsupervised learning would not progress much, saying "there seems to be a massive conceptual disconnect as to how exactly it should work" and that it was the "dark matter" of machine learning.
Abstract: Machine learning encompasses a broad range of algorithms and modeling tools used for a vast array of data-processing tasks, and it has entered most scientific disciplines in recent years. We review, in a selective way, recent research on the interface between machine learning and the physical sciences. This includes conceptual developments in machine learning (ML) motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross-fertilization between the two fields. After giving a basic notion of machine learning methods and principles, we describe examples of how statistical physics is used to understand methods in ML. We then move on to describe applications of ML methods in particle physics and cosmology, quantum many-body physics, quantum computing, and chemical and material physics. We also highlight research and development into novel computing architectures aimed at accelerating ML.
Text embeddings representing natural language documents in a semantic vector space can be used for document retrieval via nearest-neighbor lookup. To study the feasibility of neural models specialized for retrieval in a semantically meaningful way, we suggest the use of the Stanford Question Answering Dataset (SQuAD) in an open-domain question answering context, where the first task is to find paragraphs useful for answering a given question. First, we compare the quality of various text-embedding methods on retrieval performance, giving an extensive empirical comparison of various non-augmented base embeddings with and without IDF weighting. Our main result is that training deep residual neural models specifically for retrieval purposes can yield significant gains when they are used to augment existing embeddings. We also establish that deeper models are superior for this task. The best baseline embeddings, augmented by our learned neural approach, improve the system's top-1 paragraph recall by 14%.
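The retrieval setup described above -- embed paragraphs and questions, then pick the nearest paragraph -- can be sketched with a deliberately simple stand-in: IDF-weighted bag-of-words vectors and cosine similarity. This is an illustrative toy, not the paper's learned neural embeddings:

```python
import math
from collections import Counter

def idf_weights(paragraphs):
    """Inverse document frequency per word: rare words get higher weight."""
    n = len(paragraphs)
    doc_freq = Counter()
    for p in paragraphs:
        doc_freq.update(set(p.lower().split()))
    return {w: math.log(n / doc_freq[w]) for w in doc_freq}

def embed(text, idf):
    """IDF-weighted bag-of-words vector (a sparse dict of word -> weight)."""
    vec = Counter()
    for w in text.lower().split():
        vec[w] += idf.get(w, 0.0)
    return vec

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question, paragraphs):
    """Return the index of the paragraph nearest to the question."""
    idf = idf_weights(paragraphs)
    q = embed(question, idf)
    return max(range(len(paragraphs)),
               key=lambda i: cosine(q, embed(paragraphs[i], idf)))

paragraphs = [
    "the eiffel tower is located in paris france",
    "machine learning models require large datasets",
    "the great wall of china spans thousands of kilometers",
]
best = retrieve("where is the eiffel tower", paragraphs)
print(best)
```

Swapping `embed` for a learned neural encoder while keeping the nearest-neighbor lookup unchanged is the shape of the augmentation the abstract describes.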