Thwarting Side-Channel Attacks

Communications of the ACM

The same attributes that give deep learning its ability to tell images apart are helping attackers break into the cryptoprocessors built into integrated circuits that were meant to improve their security. The same technology may provide the tools that let chip designers find effective countermeasures, but it faces an uphill struggle. Side-channel attacks have been a concern for decades; they have been used to hack smartcard-based payment systems and pay-TV decoders, as well as in espionage. Yet the rise of the Internet of Things (IoT) and edge systems, and their use in large-scale, commercially sensitive applications, makes such attacks a growing worry for chipmakers. The innate connectivity of IoT devices means that success in obtaining private encryption keys from them may open up network access on cloud-based systems that rely on their data.
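To make the idea of key recovery from physical leakage concrete, here is a minimal sketch of a classic correlation power analysis, a statistical precursor of the deep-learning attacks the article describes. It is a simulation under assumed conditions, not the article's method: a hypothetical device is assumed to leak the Hamming weight of (plaintext XOR key byte) plus Gaussian noise, and the attacker recovers the key byte by correlating predicted leakage against the noisy traces. All names (`recover_key`, `SECRET_KEY_BYTE`) and the leakage model are illustrative assumptions.

```python
import math
import random

random.seed(0)
SECRET_KEY_BYTE = 0x3C  # the secret the simulated "device" holds

def hamming_weight(x):
    return bin(x).count("1")

# Simulate power traces: the device leaks the Hamming weight of
# (plaintext XOR key) plus Gaussian noise -- a standard leakage model.
plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [hamming_weight(p ^ SECRET_KEY_BYTE) + random.gauss(0, 1.0)
          for p in plaintexts]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def recover_key(plaintexts, traces):
    """For each of the 256 key guesses, predict the leakage and keep the
    guess whose prediction correlates best with the observed traces."""
    best = max(range(256),
               key=lambda g: abs(pearson(
                   [hamming_weight(p ^ g) for p in plaintexts], traces)))
    return best
```

With 2,000 simulated traces the correct guess correlates far more strongly than any wrong one, so `recover_key(plaintexts, traces)` returns the secret byte; the same statistical principle, scaled up with learned models, is what makes the attacks in the article effective.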


Machine learning security: why security is important in ML

#artificialintelligence

Machine learning security is software security for machine learning systems. Like other types of software, machine learning software is at risk of security breaches and cyberattacks. Although machine learning has been around even longer than computer security, its security risks remain among the least understood. In recent years, hackers have been working hard to figure out all the potential attacks an ML system could fall victim to, so that engineers know which risks to plan for and cover in their machine learning security plan.


The Four Benefits of Data Mining: Google's Side of the Data Story

#artificialintelligence

Whenever you use a free application, website, or service, the companies behind it gain large amounts of information about you, then group you with other users of similar ages and interests so that the package can be sold to advertisers. This process, called data mining, is how Google generated a staggering $134.81 billion in advertising revenue in 2019 alone. With advertising accounting for over 70% of Google's revenue, the company has little option but to convince us that we should not merely tolerate its data collection and mining but accept it for its many advantages. Your phone is your personal assistant, and the more information about you it is fed, the more it can do for you. Would you care that your data is being collected if Google could use it to make things easier for you?


The Incident Response Challenge 2020 -- Results and Solutions Announced

#artificialintelligence

In April 2020, Cynet launched the world's first Incident Response Challenge to test and reward the skills of incident response professionals. The Challenge consisted of 25 incidents of increasing difficulty, all inspired by real-life scenarios, that required participants to go beyond the textbook solution and think outside the box. Over 2,500 IR professionals competed to be recognized as the top incident responders. Now that the competition is over (the challenge website remains open for anyone who wants to practice solving the challenges), Cynet has made the detailed solutions available as a free resource for knowledge and inspiration. The thought process and detailed steps for solving each challenge will serve as a training aid and knowledge base for incident responders.


Top 10 Digital Transformation Trends For 2021

#artificialintelligence

No one could have predicted where 2020 would take us: The last six months alone have produced more digital transformation than the last decade, with every transformation effort already underway finding itself accelerated, and at scale. While many of my digital transformation predictions from a year ago benefited from this shift, others were displaced by more urgent needs, like 24/7 secure and reliable connectivity. What does this mean for 2021? Will core technologies like AI and data analytics still dominate headlines, or will we see newer, previously emerging technologies take the lead? Only time will tell, but here are my top ten digital transformation predictions for 2021.


Limitations

#artificialintelligence

AI algorithms, namely machine learning and deep learning algorithms, are powerful tools. However, they suffer from limitations that require human analysts to work collaboratively with AI tools. In this post, we will look at the most important shortcomings of artificial intelligence in the cybersecurity domain. Though the benefits are greater, AI also has limits [4]. Cybercriminals are creative and come up with new ways to conduct cyberattacks.


To simplify AI regulation, use the GDPR's high-risk criteria

#artificialintelligence

First, the two cumulative criteria proposed by the Commission will inevitably be incomplete, leaving some applications out. That's the tradeoff for simple rules – they miss the mark in a small but significant number of cases. To work properly, simple rules must be supplemented by a general catch-all category for other high-risk applications that would not qualify under the two-criteria test. If you add a catch-all test (which would be necessary in our view), the goal of legal certainty would be largely defeated. Second, the "high risk" criterion will interfere with other legal concepts and thresholds that already apply to AI applications.


AI + Automation -- The Future of Cybersecurity

#artificialintelligence

Artificial intelligence and automation should be used in cyber threat detection to increase security and efficiency and to help organizations be proactive, seeing threats in advance and keeping their infrastructure and data safe. As organizations venture into smarter, more innovative products, they depend on critical data that is under constant threat. A breach of critical data can put an organization and its customers at serious risk. A combination of AI and automation can be leveraged to counter these threats and provide insight into obscure and malicious activity on systems, networks, and infrastructure. In 2017, the average number of breached records by country was 24,089.


AI and Machine Learning Algorithms are Increasingly being Used to Identify Fraudulent Transactions, Cybersecurity Professional Explains

#artificialintelligence

The retail banking sector has been hit with numerous scams over the past few years. Cybercriminals are now also increasingly going after much larger corporate accounts with sophisticated malware and phishing attacks, according to Beate Zwijnenberg, chief information security officer at ING Group. Zwijnenberg recommends using advanced AI defense systems to identify potentially fraudulent transactions that may not be immediately recognizable to human analysts. Financial institutions across the globe have been spending heavily to deal with serious cybersecurity threats. To date, they have relied on static, rules-based verification processes to identify suspicious activity.
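The contrast between static rules and learned baselines can be sketched with a minimal, hypothetical anomaly check: fit a per-account baseline from historical transaction amounts, then flag amounts that deviate sharply from it. The helper names (`fit_baseline`, `is_suspicious`), the z-score threshold, and the sample data are illustrative assumptions, not ING's actual system.

```python
import statistics

def fit_baseline(amounts):
    """'Learn' a per-account baseline (mean, stdev) from past amounts,
    instead of hard-coding a static rule like 'flag anything over $1000'."""
    return statistics.mean(amounts), statistics.stdev(amounts)

def is_suspicious(amount, baseline, z_threshold=3.0):
    """Flag a transaction whose amount deviates from the account's own
    baseline by more than z_threshold standard deviations."""
    mean, stdev = baseline
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

# Hypothetical account history and two incoming transactions.
history = [42.0, 55.5, 48.0, 60.0, 51.2, 47.9, 53.3, 49.5]
baseline = fit_baseline(history)
print(is_suspicious(52.0, baseline))    # typical amount -> False
print(is_suspicious(5000.0, baseline))  # extreme outlier -> True
```

Real fraud systems replace this single-feature z-score with models over many features (merchant, geography, timing), but the design choice is the same: thresholds are derived from each account's observed behavior rather than fixed rules.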