"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
It feels as though 2019 has gone by in a flash. That said, it has been a year in which we have seen great advancement in AI application methods and technical discovery, paving the way for future development. We are incredibly grateful to have had the leading minds in AI & Deep Learning present their latest work at our summits in San Francisco, Boston, Montreal and more, so we thought we would share thirty of our highlight videos with you, as we think everybody needs to see them! We were delighted to be joined by Dawn at the Deep Reinforcement Learning Summit in June of 2019, presenting the latest industry research on Secure Deep Reinforcement Learning, covering the lessons learnt in the lead-up to her presentation, the current challenges facing advancement, and the future direction her research is set to take. You can see Dawn's full presentation from June here. Reinforcement Learning is something of a hotbed for research; this year alone we have seen several presentations that have broken down the ins and outs of RL. That said, Doina's talk just last month gave us some new angles on the latest algorithmic developments.
The field of machine learning has experienced significant growth in the past two decades as new algorithms and techniques have been developed and new research and applications have emerged. This series reflects the latest advances and applications in machine learning and pattern recognition through the publication of a broad range of reference works, textbooks, and handbooks. The inclusion of concrete examples, applications, and methods is highly encouraged. The scope of the series includes, but is not limited to, titles in the areas of machine learning, pattern recognition, computational intelligence, robotics, computational/statistical learning theory, natural language processing, computer vision, game AI, game theory, neural networks, and computational neuroscience. We are also willing to consider other relevant topics, such as machine learning applied to bioinformatics or cognitive science, which might be proposed by potential contributors.
Earlier this year, artificial intelligence yielded a practical insight: people like to drink coffee in the morning, so workplaces should find efficient ways to serve coffee. That raised a question that's surprisingly deep -- and can cost serious money to ignore: Is AI actually necessary for this problem? It's a question that remains largely unasked in Silicon Valley today. We think it's worth asking. To be sure, modern data products owe a lot of their success to artificial intelligence. Well-considered AI unlocks entirely new types of data-driven insights and cuts the time and money needed for manual data analysis.
There is a famous scene in the movie "Harry Potter and the Half‐Blood Prince": A student has been cursed, investigations are under way. All at once, Harry shouts "It was Malfoy." McGonagall replies "This is a very serious accusation, Potter." "Indeed," agrees Snape and continues "Your evidence?" Harry immediately responds, "I just know."
Recent developments in high-throughput profiling of individual neurons have spurred data-driven exploration of the idea that there exist natural groupings of neurons, referred to as cell types. The promise of this idea is that the immense complexity of brain circuits can be reduced and effectively studied by means of interactions between cell types. While clustering of neuron populations based on a particular data modality can be used to define cell types, such definitions are often inconsistent across different characterization modalities. We pose this issue of cross-modal alignment as an optimization problem and develop an approach based on coupled training of autoencoders as a framework for such analyses. We apply this framework to a Patch-seq dataset consisting of transcriptomic and electrophysiological profiles for the same set of neurons to study consistency of representations across modalities, and evaluate cross-modal data prediction ability.
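The coupled-autoencoder objective can be sketched as follows. This is not the authors' code: it uses toy linear encoders/decoders, random data, and an illustrative coupling weight `lam`, purely to show how a reconstruction term per modality is combined with a penalty on disagreement between the two latent codes for the same neuron.

```python
# Minimal sketch (assumed, not the paper's implementation) of a
# coupled-autoencoder loss: one autoencoder per modality, plus a
# coupling term that aligns the two latent representations.
import numpy as np

rng = np.random.default_rng(0)
n, d_t, d_e, k = 8, 20, 10, 3    # neurons, gene dims, ephys dims, latent dims

X_t = rng.normal(size=(n, d_t))  # transcriptomic profiles (toy data)
X_e = rng.normal(size=(n, d_e))  # electrophysiological profiles (toy data)

# Linear encoder/decoder weights for each modality, randomly initialized.
W_enc_t = rng.normal(size=(d_t, k)); W_dec_t = rng.normal(size=(k, d_t))
W_enc_e = rng.normal(size=(d_e, k)); W_dec_e = rng.normal(size=(k, d_e))

def coupled_loss(lam=1.0):
    z_t = X_t @ W_enc_t              # latent code from transcriptomics
    z_e = X_e @ W_enc_e              # latent code from electrophysiology
    recon_t = np.mean((X_t - z_t @ W_dec_t) ** 2)   # reconstruction, modality 1
    recon_e = np.mean((X_e - z_e @ W_dec_e) ** 2)   # reconstruction, modality 2
    coupling = np.mean((z_t - z_e) ** 2)            # cross-modal alignment term
    return recon_t + recon_e + lam * coupling

print(coupled_loss())  # scalar objective to be minimized during training
```

In the paper's setting the encoders and decoders are deep networks trained jointly; the key idea carried over here is only the structure of the objective.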
We review the current state of automatic differentiation (AD) for array programming in machine learning (ML), including the different approaches such as operator overloading (OO) and source transformation (ST) used for AD, graph-based intermediate representations for programs, and source languages. Based on these insights, we introduce a new graph-based intermediate representation (IR) which specifically aims to efficiently support fully-general AD for array programming. Unlike existing dataflow programming representations in ML frameworks, our IR naturally supports function calls, higher-order functions and recursion, making ML models easier to implement. The ability to represent closures allows us to perform AD using ST without a tape, making the resulting derivative (adjoint) program amenable to ahead-of-time optimization using tools from functional language compilers, and enabling higher-order derivatives. Lastly, we introduce a proof of concept compiler toolchain called Myia which uses a subset of Python as a front end.
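To make the operator-overloading (OO) approach concrete, here is a minimal reverse-mode AD sketch in pure Python: each arithmetic operation records its inputs and local gradients, and a backward pass propagates adjoints through that record. This is the kind of runtime "tape" that source transformation (ST), as pursued in Myia, avoids; the class and traversal here are illustrative, not Myia's design.

```python
# Minimal sketch of reverse-mode AD via operator overloading.
# Each op stores (parent, local_gradient) pairs; backward() pushes
# adjoints from the output back to the inputs. The simple stack
# traversal is correct for this small expression; a general
# implementation would process nodes in reverse topological order.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        self.grad = 1.0
        stack = [self]
        while stack:
            node = stack.pop()
            for parent, local in node.parents:
                parent.grad += local * node.grad
                stack.append(parent)

x = Var(3.0)
y = Var(4.0)
z = x * y + x       # z = x*y + x
z.backward()
print(x.grad)       # dz/dx = y + 1 = 5.0
print(y.grad)       # dz/dy = x = 3.0
```

ST-based systems instead generate the adjoint program as code ahead of time, which is what makes the compiler optimizations and higher-order derivatives mentioned above tractable.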
The design of neural network architectures is an important component for achieving state-of-the-art performance with machine learning systems across a broad array of tasks. Much work has endeavored to design and build architectures automatically through clever construction of a search space paired with simple learning algorithms. Recent progress has demonstrated that such meta-learning methods may exceed scalable human-invented architectures on image classification tasks. An open question is the degree to which such methods may generalize to new domains. In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing, person-part segmentation, and semantic image segmentation.
We have all heard about image style transfer: extracting the style from a famous painting and applying it to another image is a task that has been achieved with a number of different methods. Generative Adversarial Networks (GANs for short) are also being used on images for generation, image-to-image translation and more. On the surface, you might think that audio is completely different from images, and that all the techniques explored for image-related tasks can't be applied to sounds. But what if we could find a way to convert audio signals to image-like 2-dimensional representations? This kind of sound representation is what we call a "spectrogram", and it is the key that will allow us to make use of algorithms specifically designed to work with images for our audio-related task.
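The audio-to-image conversion described above can be sketched with a short-time Fourier transform (STFT) in plain NumPy. The frame length and hop size below are illustrative choices, not values prescribed by any particular method.

```python
# Sketch: turn a 1-D audio signal into a 2-D (time x frequency)
# magnitude spectrogram via a windowed short-time Fourier transform.
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    window = np.hanning(frame_len)                 # taper each frame
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Magnitude of the one-sided FFT: shape (n_frames, frame_len//2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a 440 Hz tone sampled at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (61, 129): an image-like 2-D array
```

Each row is one time frame and each column one frequency bin, so the result can be fed to image-oriented models; a log scale and a mel filterbank are common refinements in practice.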
Learning and memory in the brain are implemented by complex, time-varying changes in neural circuitry. The computational rules according to which synaptic weights change over time are the subject of much research, and are not precisely understood. Until recently, limitations in experimental methods have made it challenging to test hypotheses about synaptic plasticity on a large scale. However, as such data become available and these barriers are lifted, it becomes necessary to develop analysis techniques to validate plasticity models. Here, we present a highly extensible framework for modeling arbitrary synaptic plasticity rules on spike train data in populations of interconnected neurons.
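One concrete example of a plasticity rule such a framework might evaluate is pair-based spike-timing-dependent plasticity (STDP). The sketch below is an assumed illustration, not the paper's framework: the amplitudes and time constant are conventional illustrative values, and the rule simply sums an exponential kernel over all pre/post spike pairs.

```python
# Sketch of pair-based STDP on two spike trains (times in ms):
# pre-before-post pairs potentiate the synapse, post-before-pre
# pairs depress it, with exponentially decaying influence.
import numpy as np

def stdp_weight_change(pre_spikes, post_spikes,
                       a_plus=0.01, a_minus=0.012, tau=20.0):
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:        # pre fired before post: potentiation
                dw += a_plus * np.exp(-dt / tau)
            elif dt < 0:      # post fired before pre: depression
                dw -= a_minus * np.exp(dt / tau)
    return dw

# Pre consistently fires 5 ms before post, so the net change is positive.
pre = np.array([10.0, 50.0, 90.0])
post = pre + 5.0
print(stdp_weight_change(pre, post))  # > 0: net potentiation
```

Validating a candidate rule then amounts to asking how well weight trajectories produced by rules like this one explain the recorded population activity.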