
Collaborating Authors: Simitsis, Alkis


FRAUD-RLA: A new reinforcement learning adversarial attack against credit card fraud detection

arXiv.org Artificial Intelligence

Adversarial attacks pose a significant threat to data-driven systems, and researchers have spent considerable resources studying them. Despite its economic relevance, this trend has largely overlooked the issue of credit card fraud detection. To address this gap, we propose a new threat model that demonstrates the limitations of existing attacks and highlights the necessity to investigate new approaches. We then design a new adversarial attack for credit card fraud detection, employing reinforcement learning to bypass classifiers. This attack, called FRAUD-RLA, is designed to maximize the attacker's reward by optimizing the exploration-exploitation tradeoff and working with significantly less required knowledge than …

The main works [10, 11] attack the same realistic fraud detection engine, called BankSealer [9]. In both works, the authors rightfully consider domain-specific challenges generally absent from other adversarial works, such as the intricate feature-engineering process performed in fraud detection. However, they operate under the assumption that fraudsters can access the customers' transaction history. As the authors point out, this may be achieved by introducing malware into the victims' devices. However, this considerably increases the difficulty of performing any attack, as fraudsters must first compromise the customer's device and observe past transaction history, which is a significantly more complex undertaking than stealing or cloning a card.
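The excerpt leaves the attack's mechanics to the paper, but the core idea, trading exploration against exploitation while querying a black-box detector, can be sketched as a multi-armed bandit. The Python sketch below is illustrative only and is not the authors' FRAUD-RLA implementation: `classifier_accepts` is a hypothetical stand-in for the defender's accept/reject oracle, and the small set of candidate transactions (`ARMS`) is invented for the example.

```python
import random

# Hypothetical stand-in for the defender's black-box fraud classifier:
# returns True if the transaction is accepted (i.e., not flagged as fraud).
# The attacker only observes this accept/reject feedback; no scores,
# gradients, or customer transaction history are available.
def classifier_accepts(transaction):
    amount, hour = transaction
    # Toy rule: large amounts at unusual hours are (noisily) flagged.
    p_flag = min(0.95, amount / 500.0) * (0.9 if hour < 6 else 0.3)
    return random.random() > p_flag

# Candidate fraudulent transactions the attacker can issue (the bandit "arms").
ARMS = [(amount, hour) for amount in (50, 150, 400) for hour in (3, 14)]

def epsilon_greedy_attack(n_cards=500, epsilon=0.1):
    """One attempt per stolen card: explore or exploit candidate transactions."""
    pulls = [0] * len(ARMS)
    successes = [0] * len(ARMS)
    total_reward = 0.0
    for _ in range(n_cards):
        if random.random() < epsilon or not any(pulls):
            arm = random.randrange(len(ARMS))  # explore a random candidate
        else:
            # Exploit: pick the highest observed acceptance rate so far.
            arm = max(range(len(ARMS)),
                      key=lambda a: successes[a] / pulls[a] if pulls[a] else 0.0)
        amount, hour = ARMS[arm]
        accepted = classifier_accepts((amount, hour))
        pulls[arm] += 1
        successes[arm] += int(accepted)
        if accepted:
            total_reward += amount  # reward = amount successfully stolen
    return total_reward, list(zip(ARMS, pulls, successes))

if __name__ == "__main__":
    random.seed(0)
    reward, stats = epsilon_greedy_attack()
    print(f"total reward: {reward:.0f}")
    for arm, n_pulls, n_ok in stats:
        print(f"transaction {arm}: {n_ok}/{n_pulls} accepted")
```

An epsilon-greedy policy is only the simplest instance of the tradeoff; a full RL formulation would replace it with a learned policy, but the accept/reject feedback loop it relies on is the same limited-knowledge interface the threat model above argues for.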


Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives

arXiv.org Artificial Intelligence

The data economy relies on data-driven systems, and complex machine learning applications are fueled by them. Unfortunately, machine learning models are exposed to fraudulent activities and adversarial attacks, which threaten their security and trustworthiness. In the last decade or so, research interest in adversarial machine learning has grown significantly, revealing how learning applications can be severely impacted by effective attacks. Although early results in adversarial machine learning indicate the huge potential of the approach in specific domains such as image processing, there is still a gap, in both the research literature and practice, regarding how to generalize adversarial techniques to other domains and applications. Fraud detection is a critical defense mechanism for the data economy, as it is for other applications, and it poses several challenges for machine learning. In this work, we describe how attacks against fraud detection systems differ from other applications of adversarial machine learning, and propose a number of interesting directions to bridge this gap.
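One difference this abstract alludes to, the intricate feature engineering also noted in the FRAUD-RLA excerpt above, is easy to make concrete. In the sketch below (all names and features are hypothetical, not taken from the paper), the classifier never consumes the raw transaction the attacker controls; it consumes aggregates computed over a per-card history the attacker typically cannot read.

```python
from statistics import mean

def engineered_features(history, new_tx):
    """Map a raw transaction plus per-card history to the model's inputs."""
    amounts = [tx["amount"] for tx in history] + [new_tx["amount"]]
    return {
        "amount": new_tx["amount"],
        "ratio_to_mean": round(new_tx["amount"] / mean(amounts), 2),
        "tx_last_24h": sum(tx["hours_ago"] <= 24 for tx in history) + 1,
    }

# The fraudster fully controls the raw fields of the transaction it issues...
crafted_tx = {"amount": 120.0, "hours_ago": 0}

# ...but the model's inputs also depend on the hidden per-card history,
# so the same crafted transaction yields different feature vectors per card.
quiet_card = [{"amount": 100.0, "hours_ago": h} for h in (30, 50, 90)]
active_card = [{"amount": 10.0, "hours_ago": h} for h in (1, 3, 5)]
print(engineered_features(quiet_card, crafted_tx))
print(engineered_features(active_card, crafted_tx))
```

Because `ratio_to_mean` and `tx_last_24h` differ across cards for the identical crafted transaction, an adversarial point found in feature space does not correspond to a single raw transaction the fraudster can actually issue, which is one way attacks on fraud detection differ from, say, pixel-space attacks on image classifiers.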