AI Could Make Cyberattacks More Dangerous, Harder to Detect


Researchers warn that hackers could weaponize artificial intelligence (AI) to conceal and accelerate cyberattacks, and potentially escalate their damage.

IBM researchers last month demonstrated "DeepLocker," AI-powered malware designed to hide its damaging payload until it reaches a specific victim, identifying its target through indicators such as facial recognition, voice recognition, and geolocation. IBM's Marc Stoecklin said that with DeepLocker, "AI becomes the decision maker to determine when to unlock the malicious behavior."

Meanwhile, the Stevens Institute of Technology's Giuseppe Ateniese has investigated generative adversarial networks (GANs), in which two neural networks compete with each other, as a way to defeat safeguards such as passwords. He designed a GAN trained on leaked passwords found online; the model learns the patterns in those passwords and narrows down likely guesses far faster than brute-force attacks.
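The DeepLocker gating idea described above can be illustrated with a minimal sketch. This is not IBM's implementation: the target attribute is a hypothetical byte string standing in for DeepLocker's neural-network-derived features, the payload is a harmless string, and XOR with a hash-derived keystream stands in for a real cipher. The point is only that the payload stays opaque until the observed attribute reproduces the key.

```python
import hashlib

def derive_keystream(attribute: bytes, length: int) -> bytes:
    # Stretch a hash of the observed target attribute into a keystream.
    # (A real design would use a proper cipher; XOR keeps the demo tiny.)
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(attribute + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def lock(payload: bytes, target_attribute: bytes) -> bytes:
    ks = derive_keystream(target_attribute, len(payload))
    return bytes(p ^ k for p, k in zip(payload, ks))

# XOR is its own inverse, so unlocking reuses lock() with the observed attribute.
unlock = lock

secret = b"benign demo payload"
blob = lock(secret, b"target-face-embedding")  # hypothetical attribute value

print(unlock(blob, b"target-face-embedding") == secret)  # matching target
print(unlock(blob, b"another-machine") == secret)        # wrong target
```

Because the key is derived from the target's own attributes rather than stored in the binary, a defender inspecting the locked blob has nothing to reverse-engineer until the trigger condition is actually met.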
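The password-guessing advantage described above can be sketched without training an actual GAN. In this illustration, a simple frequency model over a toy "leaked" corpus stands in for the GAN generator (an assumption purely for demonstration; the corpus values are made up), showing why pattern-ranked guessing reaches a real password in a handful of attempts while exhaustive brute force must wade through an enormous search space.

```python
from collections import Counter

# Toy "leaked password" corpus (hypothetical values, not real breach data).
leaked = ["password", "123456", "qwerty", "password", "letmein",
          "123456", "password"]

def ranked_guesses(corpus):
    # Guess the most common leaked passwords first. A frequency model is a
    # crude stand-in for a GAN generator sampling high-probability passwords.
    for pw, _count in Counter(corpus).most_common():
        yield pw

def attempts_to_crack(guesses, target):
    # Count how many guesses it takes to hit the target password.
    for i, guess in enumerate(guesses, start=1):
        if guess == target:
            return i
    return None

smart = attempts_to_crack(ranked_guesses(leaked), "letmein")
print(smart)  # found within a handful of guesses

# Naive brute force over lowercase strings would enumerate every candidate
# of length 1..6 before reaching any 7-character guess like "letmein":
brute_floor = sum(26 ** n for n in range(1, 7))
print(brute_floor)
```

A trained generator does the same thing at scale: instead of counting exact repeats, it generalizes the structure of leaked passwords (common words, digit suffixes, substitutions) to rank unseen candidates.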
