The e-learning course starts by refreshing the basic concepts of the analytics process model: data preprocessing, analytics, and post-processing. We then discuss decision trees, ensemble methods (bagging, boosting, random forests), neural networks, support vector machines (SVMs), Bayesian networks, survival analysis, social networks, and the monitoring and backtesting of analytical models. Throughout the course, we refer extensively to our industry and research experience. The e-learning course consists of more than 20 hours of video, with each segment lasting five minutes on average. Quizzes are included to facilitate understanding of the material.
Even though these numbers are rough estimates rather than exact measurements, they are based on evidence and do indicate the importance and impact of the phenomenon, and hence the need for organizations and governments to actively fight and prevent fraud with every means at their disposal. They also suggest that investing in fraud detection and prevention systems is likely worthwhile, since a significant financial return on investment can be achieved. However, estimating the return on investment of analytical approaches to fighting fraud is not straightforward: it requires assessing the total cost of ownership of the analytical models, the full impact of fraud on the organization, and the total utility of fraud detection and investigation. The Total Cost of Ownership (TCO) of a fraud analytical model refers to the cost of owning and operating the analytical model over its expected lifetime, from inception to retirement. It should consider both quantitative and qualitative costs and is a key input for strategic decisions about how to optimally invest in fraud analytics.
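The TCO and ROI reasoning above can be sketched as a simple calculation. All figures and cost categories below are hypothetical illustrations chosen for the example, not estimates from the text; a real assessment would also have to weigh the qualitative costs mentioned above.

```python
# A minimal sketch of a TCO / ROI calculation for a fraud analytical model.
# All numbers and category names are hypothetical, for illustration only.
acquisition_costs = {            # one-off costs at inception
    "software_licenses": 50_000,
    "hardware": 20_000,
    "model_development": 80_000,
}
ownership_costs = {              # recurring costs per year of operation
    "maintenance_and_monitoring": 15_000,
    "staff": 60_000,
}
years = 5                        # expected lifetime, inception to retirement

# TCO: everything spent on the model from inception to retirement.
tco = sum(acquisition_costs.values()) + years * sum(ownership_costs.values())

# Hypothetical annual benefit: fraud losses avoided thanks to the model.
detected_fraud_per_year = 150_000
total_benefit = years * detected_fraud_per_year

roi = (total_benefit - tco) / tco  # return on investment over the lifetime

print(f"TCO over {years} years: {tco:,}")  # TCO over 5 years: 525,000
print(f"ROI: {roi:.0%}")                   # ROI: 43%
```

Even this toy version makes the point from the text concrete: the recurring ownership costs (here 375,000 over five years) dominate the one-off acquisition costs, so the full lifetime, not just the initial investment, must be considered.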
Professor Bart Baesens is a professor at KU Leuven (Belgium) and a lecturer at the University of Southampton (United Kingdom). He has done extensive research on analytics, customer relationship management, web analytics, fraud detection, and credit risk management. His findings have been published in well-known international journals (e.g., Machine Learning, Management Science, IEEE Transactions on Neural Networks, IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Evolutionary Computation, Journal of Machine Learning Research, …) and presented at leading international conferences. He is also the author of the books Credit Risk Management: Basic Concepts (Oxford University Press, 2008) and Analytics in a Big Data World (Wiley, 2014).
On November 15th, my credit risk analytics course will become available as an e-learning course. For more information, send me an email at Bart.Baesens@gmail.com. Bart Baesens holds a master's degree in Business Engineering (option: Management Informatics) and a PhD in Applied Economic Sciences from KU Leuven (Belgium). He is currently an associate professor at KU Leuven and a guest lecturer at the University of Southampton (United Kingdom). He has done extensive research on data mining and its applications.
This article is based on our upcoming book Principles of Database Management: The Practical Guide to Storing, Managing and Analyzing Big and Small Data (www.pdbmbook.com); see also our corresponding YouTube channel with free video lectures. Relational database management systems (RDBMSs) pay a lot of attention to data consistency and compliance with a formal database schema. New data, or modifications to existing data, are not accepted unless they satisfy the constraints represented in this schema in terms of data types, referential integrity, and so on. Moreover, the way in which an RDBMS coordinates its transactions guarantees that the entire database is consistent at all times, in line with the well-known ACID properties: atomicity, consistency, isolation, and durability.
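The schema constraints and transactional guarantees described above can be sketched with SQLite via Python's standard library; SQLite is an illustrative choice here, and the account table, CHECK constraint, and transfer function are invented for the example, but any ACID-compliant RDBMS behaves analogously.

```python
import sqlite3

# Illustrative schema: a CHECK constraint plays the role of a rule that the
# database refuses to violate, just like data types or referential integrity.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO account (id, balance) VALUES (1, 100), (2, 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money between accounts: either both updates happen or neither."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
        return True
    except sqlite3.IntegrityError:
        return False  # constraint violated: the whole transaction was undone

print(transfer(conn, 1, 2, 30))    # True: balances become 70 and 80
print(transfer(conn, 1, 2, 500))   # False: would drive account 1 negative
print(conn.execute("SELECT balance FROM account ORDER BY id").fetchall())
# [(70,), (80,)] -- the failed transfer left no partial update behind
```

The second transfer shows atomicity and consistency in action: the debit from account 1 would violate the CHECK constraint, so the RDBMS rolls back the entire transaction rather than leaving the credit to account 2 applied on its own.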