As more retail, restaurant, and transportation payments go cashless – not to mention the growth of online shopping – wallets loaded with legal tender may become a thing of the past. According to 2018 research by BigCommerce, a software vendor and provider of Square payment processing solutions, 51 percent of Americans think that online shopping is the best option. Last year, 1.66 billion people worldwide bought goods online, and the number of digital buyers is expected to exceed 2.14 billion. Unfortunately, growing sales may mean not only greater revenue but also bigger losses to fraud.
Even though these numbers are rough estimates rather than exact measurements, they are based on evidence, and they indicate both the scale of the phenomenon and the need for organizations and governments to actively fight and prevent fraud with every means at their disposal. They also suggest that investing in fraud detection and prevention systems is likely worthwhile, since a significant financial return can be made. Estimating that return, however, is not straightforward: it requires assessing the total cost of ownership of the analytical models, the full impact of fraud on the organization, and the total utility of fraud detection and investigation. The Total Cost of Ownership (TCO) of a fraud analytical model is the cost of owning and operating the model over its expected lifetime, from inception to retirement. It should cover both quantitative and qualitative costs, and it is a key input to strategic decisions about how to optimally invest in fraud analytics.
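The ROI estimate described above can be sketched as a back-of-the-envelope calculation. This is an illustrative sketch only: the helper name `fraud_roi` and all dollar figures are hypothetical, and a real assessment would itemize the qualitative and quantitative TCO components rather than use a single number.

```python
# Hypothetical back-of-the-envelope ROI for a fraud analytics investment.

def fraud_roi(prevented_losses: float, tco: float) -> float:
    """Return on investment: net benefit divided by total cost of ownership."""
    return (prevented_losses - tco) / tco

# Assumed figures: lifetime TCO (licenses, infrastructure, analysts,
# maintenance) of $400k, against $1.5M in fraud losses prevented over
# the model's lifetime.
roi = fraud_roi(prevented_losses=1_500_000, tco=400_000)
print(f"ROI: {roi:.0%}")  # 275%
```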
All things change, and fraud detection systems must adapt over time, so ongoing monitoring of machine learning fraud detection systems is imperative for success. As populations and the underlying data shift, the inputs a system was trained on no longer match what it sees in production, and overall performance degrades. This isn't unique to machine learning; rule-based systems face the same challenge. But newer machine learning methods can adapt to new and previously unidentified patterns as the underlying data changes, which eliminates some, though not all, of the retraining and evaluation work.
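One common way to monitor the population shift described above is the Population Stability Index (PSI), which compares the distribution of a model input at training time against its distribution in production. The sketch below is illustrative, not from the text; the function name and the rule-of-thumb thresholds in the comments are conventions, not part of any specific product.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a current sample."""
    # Bin edges taken from the baseline (training-time) distribution
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    # Clip production values into the baseline range so outliers
    # fall into the outermost bins instead of being dropped
    a_frac = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual)
    # Guard against empty bins before taking logs
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)
drifted = rng.normal(0.5, 1, 10_000)  # the population has shifted

# Rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift
print(psi(baseline, baseline[:5_000]))  # low: same population
print(psi(baseline, drifted))           # high: retraining may be needed
```

When the PSI of key inputs crosses a chosen threshold, that is the signal to re-evaluate, and possibly retrain, the model.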
Fraud Analytics Using Descriptive, Predictive, and Social Network Techniques is an authoritative guidebook for setting up a comprehensive fraud detection analytics solution. Early detection is a key factor in mitigating fraud damage, but it involves more specialized techniques than detecting fraud at the more advanced stages. This invaluable guide details both the theory and technical aspects of these techniques, and provides expert insight into streamlining implementation. Coverage includes data gathering, preprocessing, model building, and post-implementation, with comprehensive guidance on various learning techniques and the data types utilized by each. These techniques are effective for fraud detection across industry boundaries, including applications in insurance fraud, credit card fraud, anti-money laundering, healthcare fraud, telecommunications fraud, click fraud, tax evasion, and more, giving you a highly practical framework for fraud prevention.
Webinar: Tuesday, February 13, 1:00 pm ET / 10:00 am PT. Register now.

Building predictive applications allows companies to respond to new threats and take advantage of developing opportunities. But executing these applications against high-volume event streams with sub-second latency requires a powerful combination of machine learning and streaming analytics. In this webinar, you'll learn how to create and evaluate new machine learning models with DataRobot and deploy them within the SQLstream Blaze streaming analytics engine, so you can identify risk in real time and prevent fraud as it happens rather than after the fact.

In this 45-minute webinar, you'll discover how automated machine learning and streaming analytics provide:
- Automated machine learning models that anyone can create
- Rapid deployment against incoming, high-volume events with extremely low latency
- The ability to update those models seamlessly, with no downtime
- Deep transparency, including prediction reason codes, to enable rapid, targeted investigations

Speakers:
- Greg Michaelson, PhD - Head of DataRobot Labs
- David Hickman - Senior Director, Product Marketing, SQLstream
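The general pattern the webinar describes, scoring each incoming event against a pre-trained model as it arrives rather than in a nightly batch, can be sketched generically. To be clear, this is NOT the DataRobot or SQLstream API; every name below (`Event`, `score_stream`, the feature fields, the toy model) is a hypothetical stand-in for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator, Tuple

@dataclass
class Event:
    event_id: str
    amount: float
    merchant_risk: float  # hypothetical pre-computed feature

def score_stream(events: Iterable[Event],
                 model: Callable[[Event], float],
                 threshold: float = 0.8) -> Iterator[Tuple[str, float, bool]]:
    """Yield (event_id, fraud_score, flagged) for each event as it arrives."""
    for event in events:
        score = model(event)
        yield event.event_id, score, score >= threshold

# Stand-in for a trained model: any callable returning a fraud probability
toy_model = lambda e: min(1.0, 0.001 * e.amount * e.merchant_risk)

stream = [Event("tx-1", 25.0, 0.2), Event("tx-2", 950.0, 0.9)]
for event_id, score, flagged in score_stream(stream, toy_model):
    if flagged:
        print(f"ALERT {event_id}: score={score:.2f}")
```

In a production deployment, `events` would be fed by the streaming engine and the alert would trigger an investigation workflow instead of a print statement.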