FRAUD-RLA: A new reinforcement learning adversarial attack against credit card fraud detection
Daniele Lunghi, Yannick Molinghen, Alkis Simitsis, Tom Lenaerts, Gianluca Bontempi – arXiv.org Artificial Intelligence
Adversarial attacks pose a significant threat to data-driven systems, and researchers have spent considerable resources studying them. Despite its economic relevance, this trend largely overlooked the issue of credit card fraud detection. To address this gap, we propose a new threat model that demonstrates the limitations of existing attacks and highlights the necessity to investigate new approaches. We then design a new adversarial attack for credit card fraud detection, employing reinforcement learning to bypass classifiers. This attack, called FRAUD-RLA, is designed to maximize the attacker's reward by optimizing the exploration-exploitation tradeoff and working with significantly less required knowledge than…

The main works [10, 11] attack the same realistic fraud detection engine called BankSealer [9]. In both works, the authors rightfully consider domain-specific challenges generally absent in other adversarial works, such as the intricate feature engineering process performed in fraud detection. However, they operate under the assumption that fraudsters can access the customers' transaction history. As the authors point out, this may be achieved through the introduction of malware into the victim's devices. However, this considerably increases the difficulty of performing any attack, as fraudsters must first compromise the customer's device and observe past transaction history, which constitutes a significantly more complex undertaking than stealing or cloning a card.
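To make the exploration-exploitation framing concrete, the sketch below poses classifier evasion as a simple reinforcement learning (bandit-style) problem in which the attacker observes only accept/reject feedback on each submitted transaction. This is an illustrative sketch, not the paper's FRAUD-RLA algorithm: the synthetic environment, the epsilon-greedy policy, and names such as `FraudDetectorEnv` are assumptions introduced for the example.

```python
# Illustrative sketch (NOT the paper's FRAUD-RLA algorithm): classifier evasion
# framed as an exploration-exploitation problem with only accept/reject feedback.
import numpy as np
from sklearn.linear_model import LogisticRegression


class FraudDetectorEnv:
    """Black-box environment (assumed for illustration): the attacker submits a
    transaction and only learns whether the hidden detector accepts it (reward 1)
    or flags it as fraud (reward 0)."""

    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        # Train a hidden detector on synthetic data; the attacker never sees it.
        X = rng.normal(size=(2000, 4))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # 1 = fraud
        self.detector = LogisticRegression().fit(X, y)

    def submit(self, transaction):
        # Reward 1 if the fraudulent transaction is accepted, else 0.
        return int(self.detector.predict(transaction.reshape(1, -1))[0] == 0)


def epsilon_greedy_attack(env, candidate_actions, episodes=500, eps=0.1, seed=0):
    """Toy bandit attacker: each action is a candidate transaction template;
    the attacker balances exploring new templates against exploiting the one
    with the highest estimated acceptance rate so far."""
    rng = np.random.default_rng(seed)
    n = len(candidate_actions)
    counts = np.zeros(n)
    values = np.zeros(n)  # running estimate of acceptance rate per action
    total_reward = 0
    for _ in range(episodes):
        if rng.random() < eps:
            a = int(rng.integers(n))     # explore a random template
        else:
            a = int(np.argmax(values))   # exploit the best template so far
        r = env.submit(candidate_actions[a])
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
        total_reward += r
    return total_reward, values


if __name__ == "__main__":
    env = FraudDetectorEnv()
    rng = np.random.default_rng(1)
    # Candidate fraudulent transactions the attacker can try (assumed here).
    actions = [rng.normal(size=4) for _ in range(10)]
    reward, estimates = epsilon_greedy_attack(env, actions)
    print(f"accepted {reward} fraudulent transactions; "
          f"estimated acceptance rates: {np.round(estimates, 2)}")
```

The point of the sketch is the feedback loop: the attacker needs no access to the detector's parameters, training data, or the victim's transaction history, only the binary outcome of each attempted fraud, which is the weaker knowledge assumption the abstract and the threat-model discussion above emphasize.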
arXiv.org Artificial Intelligence
Feb-4-2025
- Genre:
- Research Report (1.00)
- Industry:
- Energy > Oil & Gas > Upstream (0.34)
- Information Technology > Security & Privacy (1.00)
- Law Enforcement & Public Safety > Fraud (1.00)
- Technology: