Adversarial Attacks on Deep Learning-Based False Data Injection Detection in Differential Relays

Ahmad Mohammad Saber, Aditi Maheshwari, Amr Youssef, Deepa Kundur

arXiv.org Artificial Intelligence 

However, none have considered the dual challenge of attacking both DL-based detection models and triggering the physical relay operation, as is required for attacks on LCDRs. To our knowledge, no prior work has investigated the vulnerabilities of DL-based FDIA detection systems in LCDRs to adversarial attacks, despite the critical role LCDRs play in line protection. This problem also introduces a unique additional set of objectives and constraints that must be satisfied to design successful adversarial attacks against the LCDR. For instance, for an adversarial attack to succeed, it must not only deceive the DLS but also trigger the LCDR to trip, adding complexity beyond scenarios where decision-making relies solely on a machine-learning model. Here, the adversarial samples must be misclassified by the DLS as faults, since the attacker's primary objective is to cause the LCDR to trip unnecessarily in the absence of a real fault.

Moreover, the problem is constrained by the requirement that only features derived from remote measurements can be manipulated, while local measurement features remain unchanged. Local measurements, being closely tied to the relay, are difficult to manipulate because they are transmitted directly over copper wires, whereas remote measurements, which traverse longer distances and potentially vulnerable communication media, offer a broader attack surface. This distinction highlights the need for robust detection systems capable of withstanding targeted adversarial attacks. Finally, for LCDRs, such robust detection systems must not degrade the LCDR's ability to detect actual faults.
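The remote-only constraint described above can be expressed as a binary feature mask applied to the adversarial perturbation, so that gradient-based attack steps never alter local-measurement features. The following is a minimal, hedged sketch using a one-step FGSM-style update against a simple logistic classifier; the model, feature split, and step size are illustrative assumptions, not the paper's actual attack or detection scheme:

```python
import numpy as np

def masked_fgsm(x, w, b, target, eps, remote_mask):
    """One FGSM-style step toward `target`, restricted to remote features.

    x           -- feature vector (local + remote measurements)
    w, b        -- weights/bias of an illustrative logistic "fault" classifier
    target      -- desired label (1.0 = fault, the attacker's goal)
    eps         -- perturbation magnitude
    remote_mask -- 1 for remote-measurement features, 0 for local ones
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))          # classifier's fault probability
    grad = (p - target) * w               # d(BCE loss)/dx for label=target
    # Step down the loss toward the target class, but zero out the step on
    # local features: the attacker cannot touch hardwired local measurements.
    return x - eps * np.sign(grad) * remote_mask

# Toy example: first 3 features "local", last 3 "remote".
rng = np.random.default_rng(0)
x = rng.normal(size=6)
w = rng.normal(size=6)
remote_mask = np.array([0, 0, 0, 1, 1, 1], dtype=float)

x_adv = masked_fgsm(x, w, 0.0, target=1.0, eps=0.1, remote_mask=remote_mask)
```

After the step, the local features are bit-identical to the originals while the classifier's fault score has moved toward the attacker's target, mirroring the constraint that only remotely transmitted measurements are on the attack surface.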