deeplocker
Are these the edge-case trends of AI in 2020? - Tech Wire Asia
Artificial intelligence (AI) continues to hold its title as the top buzzword of enterprise tech, but its appeal is well-founded. We now seem to be shifting from the era of businesses simply talking about AI to actually getting hands-on, exploring the ways it can be used to tackle real-world challenges. AI is increasingly providing a solution to problems old and new; yet while the technology is proving itself incredibly powerful, not all of its potential is positive. Here, we explore some of the more edge-case applications of AI taking place this year. Advances in deep learning continue to make deepfakes more realistic.
- Europe > United Kingdom (0.05)
- Asia > China (0.05)
- Information Technology > Security & Privacy (1.00)
- Energy (1.00)
- Media (0.96)
Exploring the edge cases of artificial intelligence in 2020 - TechHQ
Artificial intelligence (AI) is at the top of the buzzword bingo reel in the world of tech, and for good reason. We're seemingly shifting from the era of businesses (and the public) talking about AI and marvelling at its mysterious power, to wondering how it can best be used to tackle real-world challenges day to day. That said, with the fine-tuning of the technology come increasing attempts to exploit some of its frailties. So just how will the world harness, advance and protect AI technology within the year to come? Here are a few of the more edge-case applications of AI taking place.
- Europe > United Kingdom (0.05)
- Asia > China (0.05)
- Information Technology > Security & Privacy (1.00)
- Energy (1.00)
- Media (0.96)
AI Against AI - Blog - Connected World
What comes to mind when you think of deepfakes? A report by CB Insights got me thinking the other day about deepfakes and their impact on AI (artificial intelligence), quantum, and more. In case you didn't know, the term "deepfake" combines the expressions "deep learning" and "fake," and that's what we're talking about with next-gen hack tactics using AI. There are a lot of market numbers about the AI-as-a-Service market, AI in financial services, AI in the medical sector, AI in the automotive market, AI in marketing, and AI at the edge. There's just so much to discuss when it comes to AI, and we talk about it relatively frequently in an attempt to cover it from all sides.
How Artificial Intelligence Will Shape the Future of Malware
As we move into the future, the prospect of AI-driven systems becomes more appealing. Artificial Intelligence will help us make decisions, power our smart cities, and--unfortunately--infect our computers with nasty strains of malware. Let's explore what the future of AI means for malware. When we use the term "AI-driven malware," it's easy to imagine a Terminator-style case of an AI "gone rogue" and causing havoc. In reality, a malicious AI-controlled program wouldn't be sending robots back through time; it would be sneakier than that.
How AI can be used for Malicious Purposes - Deep Instinct
In recent years, deep learning and machine learning have gained traction in so many areas that have a direct positive effect on our lives, as well as in complex tasks such as computer vision (image recognition), machine translation, and natural language processing. And like so many other technologies that are changing our lives for the better, it also has the destructive potential to change them for the worse; there is no reason why it won't be used for malicious activities as well. Up until now, we haven't seen AI used for malicious activity in cybersecurity, due to the high costs, the lack of skills, and the limited tools available. But just like with any other technology, it's a matter of time before it happens in cybersecurity. Think about what would happen if attackers started using the power of deep learning and machine learning to their advantage.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.57)
The security threats of neural networks and deep learning algorithms
History shows that cybersecurity threats evolve along with new technological advances. Relational databases brought SQL injection attacks, web scripting programming languages spurred cross-site scripting attacks, IoT devices ushered in new ways to create botnets, and the internet in general opened a Pandora's box of digital security ills. Social media created new ways to manipulate people through micro-targeted content delivery and made it easier to gather information for phishing attacks. And bitcoin enabled the delivery of crypto-ransomware attacks. The point is, every new technology entails new security threats that were previously unimaginable.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.90)
The Looming Rise of AI-Powered Malware
In the past two years, we've learned that machine learning algorithms can manipulate public opinion, cause fatal car crashes, create fake porn, and manifest extremely sexist and racist behavior. And now, the cybersecurity threats of deep learning and neural networks are emerging. We're just beginning to catch glimpses of a future in which cybercriminals trick neural networks into making fatal mistakes and use deep learning to hide their malware and find their targets among millions of users. Part of the challenge of securing artificial intelligence applications lies in the fact that it's hard to explain how they work; even the people who create them are often hard-pressed to make sense of their inner workings. But unless we prepare ourselves for what is to come, we'll learn to appreciate and react to these threats the hard way.
- Asia > Middle East > Iran (0.05)
- North America > United States > Michigan (0.05)
- North America > United States > California > Alameda County > Berkeley (0.05)
- Asia > Middle East > Israel (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.70)
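The notion of tricking a neural network into a fatal mistake can be made concrete with a toy adversarial example. The sketch below is not taken from any of the articles above; it uses an invented linear "malware detector" with made-up weights, and applies a fast-gradient-sign-style perturbation that nudges each input feature against the sign of its weight until the verdict flips:

```python
import math

# Toy linear "malware detector": score > 0.5 means "malicious".
# Weights and the input sample are invented for illustration only.
weights = [0.9, -0.4, 0.7, 0.2]
bias = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def classify(x):
    """Return the model's probability that x is malicious."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return sigmoid(z)

# A sample the detector flags as malicious (score above 0.5).
x = [1.0, 0.2, 0.8, 0.5]

# FGSM-style evasion: move each feature *against* the sign of its
# weight, so every small step lowers the malicious score.
eps = 0.6
x_adv = [xi - eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

print(classify(x))      # above 0.5: flagged
print(classify(x_adv))  # below 0.5: slips past the detector
```

Real attacks work the same way against deep networks, except the gradient has to be estimated through many layers (or probed through repeated queries when the model is a black box).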
AI Could Make Cyberattacks More Dangerous, Harder to Detect
Researchers warn that hackers could weaponize artificial intelligence (AI) to conceal and accelerate cyberattacks, and potentially escalate their damage. IBM researchers last month demonstrated "DeepLocker," AI-powered malware designed to hide its damaging payload until it reaches a specific victim, identifying its target with indicators like facial recognition, voice recognition, and geolocation. IBM's Marc Stoecklin said that with DeepLocker, "AI becomes the decision maker to determine when to unlock the malicious behavior." Meanwhile, the Stevens Institute of Technology's Giuseppe Ateniese has investigated the use of generative adversarial networks (GANs), which pit two neural networks against each other to learn realistic patterns that can defeat safeguards like passwords; he designed a GAN that fed leaked passwords found online into an AI model to analyze their patterns and narrow down likely passwords faster than brute-force attacks.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.91)
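A full GAN is beyond a short sketch, but the core idea in Ateniese's password work (learning the statistical patterns of leaked passwords and sampling likely candidates rather than brute-forcing) can be illustrated with a much simpler character-level Markov model. This is a stand-in technique, not the GAN itself, and the tiny "leak" below is invented:

```python
import random
from collections import defaultdict

# Stand-in for a leaked-password corpus (invented examples).
leaked = ["password1", "pass1234", "sunshine1", "password123", "passw0rd"]

START, END = "^", "$"

# Learn first-order character transition counts from the leak.
transitions = defaultdict(list)
for pw in leaked:
    chars = [START] + list(pw) + [END]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def sample_candidate(rng, max_len=16):
    """Sample one password guess by walking the learned transitions."""
    out, cur = [], START
    while len(out) < max_len:
        cur = rng.choice(transitions[cur])
        if cur == END:
            break
        out.append(cur)
    return "".join(out)

rng = random.Random(0)
candidates = {sample_candidate(rng) for _ in range(200)}
# Guesses mirror the leak's patterns ("pass...", trailing digits)
# instead of being uniform brute-force strings.
```

A generative model like Ateniese's PassGAN plays the same role as the transition table here, but captures far richer structure, which is why it narrows the search space faster than exhaustive guessing.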
Security experts create DeepLocker - the AI-based malware
The past 100 years have seen an incredible rise in technological advancement, and Artificial Intelligence is part of it. While humans strive to make their lives easier, using self-driving cars or relying on Cortana, Alexa and Siri to do their daily tasks, these computing technologies can also be used for far worse purposes. While others worry about machines taking over the world and destroying humanity, security researchers at IBM considered a far more likely scenario in the near future and created DeepLocker, an AI-powered malware that is capable of using evasive techniques to obfuscate its presence and avoid security software entirely. The most notorious malware, like WannaCry, Trickbot, and Zeus, devastated highly influential organizations, caused millions in damages, and disrupted the work of vital sectors like hospitals all over the world. While such attacks can be prevented by using safety measures and adequate security software, AI-based malware could result in an attack the world has never seen before.
Researchers Developed Artificial Intelligence-Powered Stealthy Malware
Artificial Intelligence (AI) has been seen as a potential solution for automatically detecting and combating malware, stopping cyber attacks before they affect any organization. However, the same technology can also be weaponized by threat actors to power a new generation of malware that can evade even the best cyber-security defenses and infect a computer network, or launch an attack, only when the target's face is detected by the camera. To demonstrate this scenario, security researchers at IBM Research came up with DeepLocker, a new breed of "highly targeted and evasive" attack tool powered by AI, which conceals its malicious intent until it reaches a specific victim. According to the IBM researchers, DeepLocker flies under the radar without being detected and "unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition." In contrast to the "spray and pray" approach of traditional malware, researchers believe this kind of stealthy AI-powered malware is particularly dangerous because, like nation-state malware, it could infect millions of systems without being detected. The malware can hide its malicious payload in benign carrier applications, such as video conferencing software, to avoid detection by most antivirus and malware scanners until it reaches specific victims, who are identified via indicators such as voice recognition, facial recognition, geolocation and other system-level features. "What is unique about DeepLocker is that the use of AI makes the 'trigger conditions' to unlock the attack almost impossible to reverse engineer," the researchers explain. "The malicious payload will only be unlocked if the intended target is reached."
To demonstrate DeepLocker's capabilities, the researchers designed a proof of concept, camouflaging the well-known WannaCry ransomware in a video conferencing app so that it remained undetected by security tools, including antivirus engines and malware sandboxes. With the built-in triggering condition, DeepLocker did not unlock and execute the ransomware on the system until it recognized the face of the target, which can be matched using publicly available photos of the target. "Imagine that this video conferencing application is distributed and downloaded by millions of people, which is a plausible scenario nowadays on many public platforms."
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.49)
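IBM has not released DeepLocker's code, but the reason its trigger is "almost impossible to reverse engineer" can be sketched conceptually: the payload ships encrypted, and the decryption key is derived from the trigger attribute itself, so an analyst who lacks the real target's attributes has nothing from which to recover the key. Everything below is an invented illustration, not IBM's implementation; a fixed byte string stands in for a noisy face-recognition embedding, which a real system would have to map to a stable key:

```python
import hashlib

def keystream_xor(data, key):
    """XOR data with a SHA-256-derived keystream (toy cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Stand-in for the intended target's face-recognition output.
target_attribute = b"face-embedding-of-intended-target"

# The key is never stored in the binary: it is derived from the
# trigger input, so static analysis of the malware alone cannot
# recover it or the payload it protects.
key = hashlib.sha256(target_attribute).digest()
payload = b"PAYLOAD: pretend ransomware goes here"
encrypted = keystream_xor(payload, key)

def try_unlock(observed_attribute):
    """Derive a candidate key from the observed attribute and attempt
    decryption; return the payload only if it decrypts correctly."""
    candidate = hashlib.sha256(observed_attribute).digest()
    decrypted = keystream_xor(encrypted, candidate)
    # A real scheme would verify an authenticated tag; checking a
    # known marker keeps the sketch short.
    return decrypted if decrypted.startswith(b"PAYLOAD:") else None
```

With the wrong input, `try_unlock` yields only meaningless bytes; the payload appears nowhere in cleartext until the matching attribute is observed, which mirrors the "unlocked only if the intended target is reached" behavior the IBM researchers describe.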