"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
Explainable machine learning seeks to provide various stakeholders with insights into model behavior via feature importance scores, counterfactual explanations, and influential samples, among other techniques. Recent advances in this line of work, however, have not been accompanied by surveys of how organizations use these techniques in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that the majority of deployments are not for end users affected by the model but for machine learning engineers, who use explainability to debug the model itself. There is thus a gap between explainability in practice and the goal of public transparency: explanations primarily serve internal stakeholders rather than external ones.
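Feature importance, the first technique named above, can be illustrated with a self-contained sketch of permutation importance: shuffle one feature column and measure how much the model's error grows. The toy model, data, and function names below are illustrative assumptions, not taken from the study.

```python
import random

# Hypothetical toy "model", used only to illustrate permutation importance:
# feature 0 carries all the signal, feature 1 carries none.
def model(row):
    return 3.0 * row[0] + 0.0 * row[1]

def mse(X, y):
    # Mean squared error of the model on a dataset.
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    """Importance = error increase after shuffling one feature column."""
    rng = random.Random(seed)
    baseline = mse(X, y)
    col = [r[feature] for r in X]
    rng.shuffle(col)  # break the feature's relationship to the target
    X_perm = [list(r) for r in X]
    for r, v in zip(X_perm, col):
        r[feature] = v
    return mse(X_perm, y) - baseline

X = [[float(i), float(i % 2)] for i in range(20)]
y = [model(r) for r in X]  # labels generated by the toy model itself

# The informative feature gets a large score; the irrelevant one gets zero.
print(permutation_importance(X, y, 0), permutation_importance(X, y, 1))
```

Shuffling the irrelevant column leaves predictions unchanged, so its importance is exactly zero here; in practice one averages over several shuffles.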
This post expands on the NAACL 2019 tutorial on Transfer Learning in NLP. The tutorial was organized by Matthew Peters, Swabha Swayamdipta, Thomas Wolf, and me. In this post, I highlight key insights and takeaways and provide updates based on recent work. The slides, a Colaboratory notebook, and code of the tutorial are available online. For an overview of what transfer learning is, have a look at this blog post. Transfer learning is a means to extract knowledge from a source setting and apply it to a different target setting. In the span of little more than a year, transfer learning in the form of pretrained language models has become ubiquitous in NLP and has contributed to the state of the art on a wide range of tasks.
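The core idea, extracting knowledge from a source setting and applying it to a different target setting, can be sketched with a deliberately tiny stand-in for a pretrained model. The linear model and the two tasks below are illustrative assumptions, not part of the tutorial:

```python
def train(w, data, lr=0.1, epochs=50):
    """One-parameter linear regression via gradient descent.
    Stands in for both pretraining and fine-tuning."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

source = [(x, 2.0 * x) for x in (1, 2, 3)]  # source task: y = 2.0x
target = [(x, 2.1 * x) for x in (1, 2, 3)]  # related target task: y = 2.1x

w_pretrained = train(0.0, source)                   # "pretrain" on the source setting
w_transfer = train(w_pretrained, target, epochs=5)  # brief fine-tuning on the target
w_scratch = train(0.0, target, epochs=5)            # same small budget, cold start

# Transfer should land closer to the target solution than training from scratch.
print(abs(w_transfer - 2.1) < abs(w_scratch - 2.1))
```

The same recipe scales up: replace the scalar weight with pretrained language-model parameters and the five epochs with a short fine-tuning run.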
Historically, we carried out content moderation using third-party vendors, but with the increasing volume of images (and text content), we started to automate as much of this work as possible with the help of machine learning models. In the next few sections, we provide an overview of our modeling framework, data collection, and evaluation frameworks. One challenge we faced when we started this project was the lack of labeled data with granular categories for user-generated content. In the past, Expedia teams labeled content using crowd-sourcing, but in many cases we found that images had only been labeled as approved or rejected, without specifying the reason. This meant we lacked the training data to tell models why an image was rejected (an image can be rejected because it has low quality, because it contains identifiable children, or for many other reasons).
Recent years have seen rising interest in developing AI algorithms for real-world big-data domains ranging from autonomous cars to personalized assistants. At the core of these algorithms are architectures that combine deep neural networks, for approximating the underlying multidimensional state spaces, with reinforcement learning, for controlling agents that learn to operate in those state spaces toward a given objective. The talk will first outline notable past and future efforts in deep reinforcement learning and identify fundamental problems that this technology has been struggling to overcome. Toward mitigating these problems (and opening up an alternative path to general artificial intelligence), I will then summarize a brain-computing model of intelligence, rooted in the latest findings in neuroscience. The talk will conclude with an overview of recent research efforts in the field of multi-agent systems, aimed at providing future teams of humans and agents with the tools they need to safely coexist.
Artificial intelligence has been a trending technology for quite a few years now. You must have heard a lot about it in tech news and blogs. There are various predictions about the future of artificial intelligence, but have you ever been curious about its initial stages? In contemporary times, AI, along with its subsets machine learning and deep learning, is driving innovation in the software industry. In fact, the appeal of AI is such that 41 percent of consumers expect it to change their lives in the future.
The sooner fraud detection occurs, the better: the likelihood of further losses is lower, potential recoveries are higher, and security issues can be addressed more rapidly. Catching fraud at an early stage, though, is more difficult than detecting it later, and requires specific techniques. Packed with numerous real-world examples, Fraud Analytics Using Descriptive, Predictive, and Social Network Techniques authoritatively shows you how to put historical data to work against fraud. Authors Bart Baesens, Véronique Van Vlasselaer, and Wouter Verbeke expertly discuss the use of unsupervised learning, supervised learning, and social network learning techniques across a wide variety of fraud applications, such as insurance fraud, credit card fraud, anti-money laundering, healthcare fraud, telecommunications fraud, click fraud, and tax evasion. This book provides the essential guidance you need to examine fraud patterns in historical data in order to detect fraud early in the process.
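As a minimal illustration of the unsupervised side of that toolbox (a generic sketch, not an example from the book), a z-score rule flags transactions that sit far from the bulk of the data. The threshold and amounts below are made up:

```python
import statistics

def flag_outliers(amounts, z_threshold=2.5):
    """Flag amounts whose z-score exceeds the threshold.
    A simple unsupervised anomaly check; robust (median-based)
    scores are usually preferred in real fraud pipelines."""
    mu = statistics.mean(amounts)
    sigma = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

# Hypothetical transaction amounts: nine ordinary ones and one anomaly.
txns = [12.5, 9.9, 11.2, 10.4, 13.1, 10.8, 9500.0, 12.0, 11.6, 10.1]
print(flag_outliers(txns))  # → [9500.0]
```

Note that a single extreme value inflates the standard deviation and can mask itself at stricter thresholds, which is one reason the book pairs descriptive techniques like this with predictive and social network methods.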
The program opens with four days of tutorials that will provide an introduction to major themes of the entire program and the four workshops. The goal is to build a foundation for the participants of this program who have diverse scientific backgrounds. The tutorials will focus on the theoretical and conceptual foundations of machine learning, as well as several of the application areas that will be discussed during the program. For those participating in the long program, please plan to attend Opening Day on September 4, 2019, as well. Others may participate in Opening Day by invitation from the organizing committee.
The last few years have seen an explosion of interest in quantum machine learning to accelerate scientific discovery in a range of fields, from quantum computing to the development of new materials and medicines. That effort deepened in July as researchers from industry and academia gathered for the week-long workshop "Machine Learning for Quantum Design" at Perimeter Institute. Conference co-organizer Roger Melko said the conference demonstrated the remarkable progress researchers have made in just a few years since the previous gathering of its kind at Perimeter. "We first had this conference on quantum machine learning three years ago, and it was largely blue-sky proposals and ideas back then," he said. "Now, the scientists here are actually implementing those ideas. The field is changing fast and the pace of that change is accelerating."
When I started learning RL three years ago, it was really hard to get practical information about the methods and how they could be implemented. Sparse blog posts about individual methods, and theoretical papers without code examples, were the only sources of knowledge. Getting something to experiment with took a lot of time and effort, spent fighting weird bugs and puzzling over the cryptic math in papers. With the rising popularity of RL, the situation has improved slightly, but there is still a lack of a structured overview of modern deep RL methods with a unified code base. This book fills the gap between theory and practice, providing a structured overview of recent RL methods, with clear examples written in a uniform style.
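In that hands-on spirit, here is a hedged sketch of one classic method such books cover: tabular Q-learning on a toy five-state corridor. The environment and hyperparameters are my own illustrative choices, not taken from the book:

```python
import random

N_STATES = 5            # states 0..4, goal at state 4
ACTIONS = (0, 1)        # 0 = move left, 1 = move right

def step(s, a):
    """Deterministic corridor dynamics: reward 1 only on reaching the goal."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

def q_learning(episodes=300, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[s][act])
            s2, r, done = step(s, a)
            # Q-learning update toward the bootstrapped target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# After training, "right" should dominate "left" in every non-terminal state.
print(all(Q[s][1] > Q[s][0] for s in range(N_STATES - 1)))
```

Because the environment is deterministic, the Q-values settle near their fixed point (Q(s, right) = 0.9^(3-s)), so the learned greedy policy heads straight for the goal.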