The Growing Role of Machine Learning in Fraud Detection

#artificialintelligence

Machine learning (ML) can detect fraud quickly, saving organizations and consumers time and money when implemented correctly. As organizations grapple with how to keep up with consumers during the Covid-19 pandemic, they are also dealing with an evolving digital landscape, with online payment fraud losses alone set to exceed $206 billion between 2021 and 2025. While machine learning can deliver substantial savings, it also comes with some initial challenges. The key to any accurate machine learning model is the input data: not only does enough historical data need to exist for the model to derive an accurate representation, but that data also needs to be accessible.
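
To make the supervised approach such articles describe concrete, here is a minimal sketch: a classifier trained on labeled historical transactions. The file name and feature columns are hypothetical placeholders, not names from the article.

```python
# A minimal sketch of supervised fraud detection on labeled historical
# transactions, using scikit-learn. "transactions.csv" and the feature
# columns are hypothetical placeholders for a real transaction table.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical historical data: one row per transaction, with a binary
# "is_fraud" label produced by past investigations or chargebacks.
df = pd.read_csv("transactions.csv")
X = df[["amount", "merchant_risk_score", "hour_of_day", "txns_last_24h"]]
y = df["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# class_weight="balanced" compensates for fraud being a rare class.
model = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=0
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```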


Machine Learning: Harnessing the Predictive Power of Computers

#artificialintelligence

Machine learning has worked its way into our daily lives, from voice assistants like Siri and Alexa to traffic apps that guide us around gridlock, cars that drive themselves, and news stories that pop up on our social media feeds. Researchers in the University of Maryland's College of Computer, Mathematical, and Natural Sciences work at the forefront of machine learning technology, where computers analyze data to identify patterns and make decisions with minimal human intervention. These faculty members are using machine learning for applications that touch many aspects of our lives--from weather prediction and health care to transportation, finance and wildlife conservation. Along the way, they are advancing the science of exactly how computers learn. The shift from a cash economy to one reliant on electronic transactions has left many consumers feeling vulnerable to identity theft and bank fraud. And it's no wonder--in 2018, the Federal Trade Commission received over 440,000 reports of identity theft, largely from stolen credit card and social security numbers. For any consumer, that figure is concerning.


The pandemic has changed how criminals hide their cash--and AI tools are trying to sniff it out

MIT Technology Review

The pandemic has forced criminal gangs to come up with new ways to move money around. In turn, this has upped the stakes for anti-money laundering (AML) teams tasked with detecting suspicious financial transactions and following them back to their source. Key to their strategies are new AI tools. While some larger, older financial institutions have been slower to adapt their rule-based legacy systems, smaller, newer firms are using machine learning to look out for anomalous activity, whatever it might be. It is hard to assess the exact scale of the problem.
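
A minimal sketch of what "looking out for anomalous activity, whatever it might be" can mean in practice: an unsupervised isolation forest scores transactions by how easily they separate from the bulk of the data. The synthetic features below stand in for real engineered transaction features; this illustrates the general technique, not any particular firm's system.

```python
# A minimal sketch of unsupervised anomaly detection with an isolation
# forest. The synthetic features stand in for engineered per-transaction
# features (amount, velocity, counterparty risk, and so on).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(10_000, 4))  # typical activity
odd = rng.normal(loc=6.0, scale=1.0, size=(10, 4))         # a few outliers
X = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.001, random_state=0).fit(X)
scores = detector.decision_function(X)  # lower score = more anomalous
flagged = np.argsort(scores)[:10]       # most anomalous cases, for analysts
print(flagged)                          # should land mostly in the odd rows
```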


Top 100 Artificial Intelligence Companies 2020

#artificialintelligence

As artificial intelligence has become a growing force in business, today's top AI companies are leaders in this emerging technology. Often leveraging cloud computing, AI companies mix and match myriad technologies. Foremost among these is machine learning, but today's leading AI firms deploy technologies ranging from predictive analytics to business intelligence to data warehouse tools to deep learning. Entire industries are being reshaped by AI. RPA companies have completely shifted their platforms. AI in healthcare is changing patient care in numerous – and major – ways. AI companies are attracting massive investment from venture capital firms and giants like Microsoft and Google. Academic AI research is growing, as are AI job openings across a multitude of industries. All of this is documented in the AI Index, produced by Stanford University's Human-Centered AI Institute. Consulting giant Accenture believes AI has the potential to boost rates of profitability by an average of 38 percent and could lead to an economic boost of $14 trillion in additional gross value added (GVA) by 2035. In truth, artificial intelligence holds not just possibilities but a plethora of risks. "It will have a huge economic impact but also change society, and it's hard to make strong predictions, but clearly job markets will be affected," said Yoshua Bengio, a professor at the University of Montreal and head of the Montreal Institute for Learning Algorithms. To keep up with the AI market, we have updated our list of top AI companies playing a key role in shaping the future of AI. We feature artificial intelligence companies that are commercially successful as well as those that have invested significantly in artificial intelligence. In the years ahead, AI companies are forecast to see exponential growth in deep learning, machine learning and natural language processing.


Deep Learning for Anomaly Detection: A Survey

arXiv.org Machine Learning

Anomaly detection is an important problem that has been well studied within diverse research areas and application domains. The aim of this survey is twofold: first, we present a structured and comprehensive overview of research methods in deep learning-based anomaly detection; second, we review the adoption of these methods for anomaly detection across various application domains and assess their effectiveness. We have grouped state-of-the-art deep anomaly detection techniques into categories based on their underlying assumptions and the approach adopted. Within each category, we outline the basic anomaly detection technique along with its variants, and present the key assumptions used to differentiate between normal and anomalous behavior. For each category, we also present the advantages and limitations of the techniques and discuss their computational complexity in real application domains. Finally, we outline open research issues and the challenges faced when adopting deep anomaly detection techniques for real-world problems.
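
As an illustration of one family of techniques such surveys cover, here is a minimal reconstruction-based detector: an autoencoder trained on (mostly) normal data reconstructs normal points well, so a high reconstruction error serves as an anomaly score. The architecture, sizes, and data are illustrative, not taken from the survey.

```python
# A minimal reconstruction-based detector in PyTorch: an autoencoder trained
# on (mostly) normal data reconstructs normal points well, so a high
# reconstruction error serves as an anomaly score. Sizes are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
X_train = torch.randn(5_000, 20)  # stand-in for normal training data

model = nn.Sequential(
    nn.Linear(20, 8), nn.ReLU(),  # encoder: compress to a bottleneck
    nn.Linear(8, 20),             # decoder: reconstruct the input
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X_train), X_train)
    loss.backward()
    opt.step()

def anomaly_score(x: torch.Tensor) -> torch.Tensor:
    # Per-sample reconstruction error; in practice compared to a threshold.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

print(anomaly_score(X_train[:3]))             # low errors: looks normal
print(anomaly_score(torch.randn(3, 20) * 5))  # inflated inputs score higher
```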


Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives

Neural Information Processing Systems

In this paper we propose a novel method that provides contrastive explanations justifying the classification of an input by a black box classifier such as a deep neural network. Given an input we find what should be minimally and sufficiently present (viz. important object pixels in an image) to justify its classification and analogously what should be minimally and necessarily absent (viz. certain background pixels). We argue that such explanations are natural for humans and are used commonly in domains such as health care and criminology. What is minimally but critically absent is an important part of an explanation, which, to the best of our knowledge, has not been explicitly identified by current explanation methods that explain predictions of neural networks. We validate our approach on three real datasets obtained from diverse domains; namely, a handwritten digits dataset MNIST, a large procurement fraud dataset and a brain activity strength dataset. In all three cases, we witness the power of our approach in generating precise explanations that are also easy for human experts to understand and evaluate.
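
Here is a simplified sketch of the pertinent-negative idea, assuming a differentiable classifier: search for a small, sparse, non-negative addition to the input that flips the prediction, so that the absence of those features helps justify the original classification. This illustrates the concept only; it is not the authors' exact optimization, which uses additional regularization, and the function and stand-in model below are hypothetical.

```python
# A simplified search for a "pertinent negative": a small, sparse,
# non-negative addition delta such that x + delta is classified differently,
# so the absence of those features helps justify the original prediction.
# This is a sketch of the idea only, not the authors' exact optimization.
import torch

def pertinent_negative(model, x, orig_class, steps=300, lr=0.05, l1=1e-2):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x + delta.clamp(min=0))  # only *add* absent features
        other = logits.clone()
        other[:, orig_class] = float("-inf")
        # Hinge term: zero once another class outscores the original one.
        flip = (logits[:, orig_class] - other.max(dim=1).values).clamp(min=0)
        loss = flip.sum() + l1 * delta.abs().sum()  # keep the addition sparse
        loss.backward()
        opt.step()
    return delta.detach().clamp(min=0)

# Illustrative usage with a stand-in linear "classifier"; a real use would
# pass a trained MNIST model and a flattened image.
net = torch.nn.Linear(784, 10)
x = torch.rand(1, 784)
pred = net(x).argmax(dim=1).item()
delta = pertinent_negative(net, x, pred)
print(pred, "->", net(x + delta).argmax(dim=1).item())
```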


Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives

arXiv.org Artificial Intelligence

In this paper we propose a novel method that provides contrastive explanations justifying the classification of an input by a black box classifier such as a deep neural network. Given an input we find what should be minimally and sufficiently present (viz. important object pixels in an image) to justify its classification and analogously what should be minimally and necessarily absent (viz. certain background pixels). We argue that such explanations are natural for humans and are used commonly in domains such as health care and criminology. What is minimally but critically absent is an important part of an explanation, which, to the best of our knowledge, has not been explicitly identified by current explanation methods that explain predictions of neural networks. We validate our approach on three real datasets obtained from diverse domains; namely, a handwritten digits dataset MNIST, a large procurement fraud dataset and a brain activity strength dataset. In all three cases, we witness the power of our approach in generating precise explanations that are also easy for human experts to understand and evaluate.


How the machine 'thinks': Understanding opacity in machine learning algorithms

#artificialintelligence

This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These are just some examples of the mechanisms of classification to which the personal and trace data we generate are subject every day in network-connected, advanced capitalist societies. These mechanisms all frequently rely on computational algorithms, and in many cases on machine learning algorithms, to do this work. In this article, I draw a distinction between three forms of opacity: (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully. The analysis in this article gets inside the algorithms themselves. I cite existing literatures in computer science and known industry practices (as they are publicly presented), and do some testing and manipulation of code as a form of lightweight code audit. I argue that recognizing the distinct forms of opacity that may be coming into play in a given application is key to determining which of a variety of technical and non-technical solutions could help to prevent harm. Opacity seems to be at the very heart of new concerns about 'algorithms' among legal scholars and social scientists. The algorithms in question operate on data; using this data as input, they produce an output, specifically a classification. They are opaque in the sense that the recipient of the algorithm's output (the classification decision) rarely has any concrete sense of how or why that particular classification was arrived at from the inputs.


A Comprehensive Survey of Data Mining-based Fraud Detection Research

arXiv.org Artificial Intelligence

This survey paper categorises, compares, and summarises almost all published technical and review articles on automated fraud detection from the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of the data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers many more technical articles and is the only one, to the best of our knowledge, that proposes alternative data and solutions from related domains.