Security & Privacy

Security Think Tank: Artificial intelligence will be no silver bullet for security


Undoubtedly, artificial intelligence (AI) can help organisations tackle an expanding threat landscape and a widening set of vulnerabilities as criminals become more sophisticated. However, AI is no silver bullet when it comes to protecting assets, and organisations should be thinking about cyber augmentation rather than the automation of cyber security alone. Areas where AI can currently be deployed include training a system to recognise even the subtlest behaviours of ransomware and malware before they enter the system, and then isolating them from it. Other examples include automated phishing and data-theft detection, which are extremely helpful because they enable a real-time response. Context-aware behavioural analytics are also interesting, offering the possibility of immediately spotting a change in user behaviour that could signal an attack.

Secure Collaborative XGBoost on Encrypted Data


Training a machine learning model requires a large quantity of high-quality data. One way to achieve this is to combine data from many different organizations or data owners. But data owners are often unwilling to share their data with each other due to privacy concerns, which can stem from business competition or from regulatory compliance. The question is: how can we mitigate such privacy concerns? Secure collaborative learning enables many data owners to build robust models on their collective data, without revealing that data to each other.
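The core idea behind collaborative gradient-boosted trees can be sketched in a few lines: each data owner computes only local gradient histograms, and just those aggregates, never the raw rows, are combined to choose a tree split. The sketch below is illustrative (the function names and the two-owner setup are invented for this example, not taken from the system described above), and the aggregation runs in the clear for clarity; in a real secure deployment that step would operate on encrypted data or inside a hardware enclave.

```python
# Sketch of histogram-aggregation split finding across data owners.
# Assumption: simplified single-feature, first-order-gradient variant of the
# XGBoost-style gain formula; real systems use many features and hessians.

NUM_BINS = 4

def local_histogram(rows, lo, hi):
    """Per-owner summary: (gradient sum, count) per feature bin."""
    width = (hi - lo) / NUM_BINS
    hist = [[0.0, 0] for _ in range(NUM_BINS)]
    for x, grad in rows:
        b = min(int((x - lo) / width), NUM_BINS - 1)
        hist[b][0] += grad
        hist[b][1] += 1
    return hist

def merge_histograms(hists):
    """Coordinator combines the per-owner summaries (no raw rows needed)."""
    merged = [[0.0, 0] for _ in range(NUM_BINS)]
    for h in hists:
        for b in range(NUM_BINS):
            merged[b][0] += h[b][0]
            merged[b][1] += h[b][1]
    return merged

def best_split(merged, lam=1.0):
    """Pick the bin boundary with the highest XGBoost-style gain."""
    total_g = sum(g for g, _ in merged)
    total_n = sum(n for _, n in merged)
    best_gain, best_bin = float("-inf"), None
    gl = nl = 0
    for b in range(NUM_BINS - 1):
        gl += merged[b][0]
        nl += merged[b][1]
        gr, nr = total_g - gl, total_n - nl
        gain = gl**2 / (nl + lam) + gr**2 / (nr + lam) - total_g**2 / (total_n + lam)
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_bin

# Two owners with disjoint private rows of (feature value, gradient).
owner_a = [(0.1, -1.0), (0.2, -1.0), (0.9, 1.0)]
owner_b = [(0.3, -1.0), (0.8, 1.0), (0.95, 1.0)]
hists = [local_histogram(rows, 0.0, 1.0) for rows in (owner_a, owner_b)]
split_bin = best_split(merge_histograms(hists))
print("split after bin:", split_bin)  # the pooled split neither owner finds alone
```

The privacy-relevant point is in `merge_histograms`: the coordinator sees only binned gradient sums and counts, which is exactly the quantity a secure scheme would protect with encryption or an enclave.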

Security Think Tank: Get your house in order before deploying AI


AI and ML offer great promise when it comes to organisational security measures. But a predictive security stance may be some way off for many businesses, and the belief that AI or ML will dissolve existing poor practices or protocols is as widespread as it is erroneous. Before really talking about AI and ML, we must talk about bias and the impact it has on the quality of outcomes from either technology. Bias will simply double down on whatever practice or protocol is in place and reinforce it, good or bad. You don't have to look far for an example of how things can go wrong when the bias problem is ignored: Amazon was forced to scrap its experimental AI recruitment tool after the system eventually decided that the best candidates for its roles were almost exclusively men.

Iran nuclear site fire hit centrifuge facility, analysts say

FOX News

Secretary of State Mike Pompeo seized on a U.N. report confirming Iranian weapons were used to attack Saudi Arabia in September and were part of an arms shipment seized months ago off Yemen's coast; State Department correspondent Rich Edson reports. A fire and an explosion struck a centrifuge production plant above Iran's underground Natanz nuclear enrichment facility early Thursday, analysts said. The site, one of the most tightly guarded in the Islamic Republic after earlier acts of sabotage there, was built underground to withstand enemy airstrikes and was previously targeted by the Stuxnet computer virus. The Atomic Energy Organization of Iran sought to downplay the fire, calling it an "incident" that affected only an under-construction "industrial shed," spokesman Behrouz Kamalvandi said. However, both Kamalvandi and Iranian nuclear chief Ali Akbar Salehi rushed to Natanz after the fire. The blaze threatened to rekindle wider tensions across the Middle East, similar to the escalation in January after a U.S. drone strike killed a top Iranian general in Baghdad and Tehran launched a retaliatory ballistic missile attack targeting American forces in Iraq. While offering no cause for Thursday's blaze, Iran's state-run IRNA news agency published a commentary addressing the possibility of sabotage by enemy nations such as Israel and the U.S., following other recent explosions in the country.

Deep learning-based attack detection AI engine


TOKYO, June 30, 2020 /PRNewswire-PRWeb/ -- About Cyneural: While cyber-attack defenses generally respond by detecting specific patterns, or "signatures," that indicate malicious access, complex or unknown attacks that use AI or bots can be difficult to detect, or can produce false positives. This is why cyber-attack defenses also need the flexibility of technologies such as AI. Against this backdrop, Cyber Security Cloud (CSC) developed its own attack detection AI engine, Cyneural, in August 2019. Cyneural uses a feature extraction engine built on the knowledge cultivated through CSC's research on web access and various attack methods, and it builds multiple types of training models to detect not only common attacks but also unknown cyber-attacks and false positives at higher speed. About Cyneural in Shadankun and WafCharm: Since developing Cyneural, CSC has operated it using the large amount of data the company holds.
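The general pattern described above, extracting numeric features from web requests and scoring them against a model fitted on benign traffic so that novel attacks surface as anomalies rather than needing a signature per variant, can be sketched with the standard library. This is a toy illustration under stated assumptions (the features, thresholds, and function names are invented for this example and are not CSC's actual Cyneural engine):

```python
# Minimal feature-extraction + anomaly-scoring sketch for web requests.
# Assumption: three hand-picked features and a z-score model; a production
# engine would use far richer features and trained classifiers.
import math
import re
from urllib.parse import unquote

def extract_features(request_path: str):
    """Turn a raw request path into a small numeric feature vector."""
    decoded = unquote(request_path)          # undo %XX URL encoding first
    specials = sum(1 for c in decoded if c in "<>'\";()|&")
    keywords = len(re.findall(r"(?i)union|select|script|\.\./", decoded))
    return [len(decoded), specials, keywords]

def fit(benign_paths):
    """Per-feature mean and standard deviation from known-benign traffic."""
    vectors = [extract_features(p) for p in benign_paths]
    stats = []
    for i in range(len(vectors[0])):
        col = [v[i] for v in vectors]
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)
        stats.append((mean, math.sqrt(var) or 1.0))  # avoid div-by-zero
    return stats

def anomaly_score(stats, path):
    """Sum of absolute z-scores; higher means less like benign traffic."""
    return sum(abs(x - m) / s for x, (m, s) in zip(extract_features(path), stats))

benign = ["/index.html", "/products?id=12", "/img/logo.png", "/about"]
model = fit(benign)
print(anomaly_score(model, "/search?q=shoes"))                        # low
print(anomaly_score(model, "/item?id=1%27%20UNION%20SELECT%20pass"))  # high
```

The point of the sketch is the division of labour: the feature extractor encodes domain knowledge about what attacks look like, while the fitted model lets requests that resemble no benign traffic stand out even when no signature exists for them.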

How AI and Blockchain Will Be The Future Of Cybersecurity - IntelligentHQ


As businesses, governments and consumers rely on digital systems for most of their daily operations, the risk of those systems being hacked increases. The more technologies they adopt, the greater the hazards they face. In fact, new solutions designed to ease businesses' daily operations, such as artificial intelligence in operating systems and the huge databases behind IT software, bring even more complexity to an already convoluted world. However, these new technologies can also become their strongest allies: if properly developed and embraced, they can deliver new layers of security that build up a strong shield of protection against hackers.

Security Think Tank: AI cyber attacks will be a step-change for criminals


Whether or not your organisation suffers a cyber attack has long been considered a case of 'when, not if', with cyber attacks having a huge impact on organisations. In 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than $654bn. In 2019, this had risen to 4.1 billion exposed records. While the use of artificial intelligence (AI) and machine learning as a primary offensive tool in cyber attacks is not yet mainstream, its use and capabilities are growing more sophisticated. In time, cyber criminals will inevitably take advantage of AI, and such a move will increase threats to digital security as well as the volume and sophistication of cyber attacks.

Using the power of machine learning to detect cyber attacks - Fintech News


As the world becomes increasingly digital, we are unlocking more value and growth than ever before. However, a challenge that governments, enterprises and individuals leveraging technology constantly face is the growing threat of cyber attacks that looms large over us. Cyber security solutions provider SonicWall's 2019 report revealed 10.52 billion malware attacks in 2018, a 217% increase in IoT attacks and 391,689 new attack variants identified. What's more, cyber criminals today are evolving with technology and upping their game. Such incidents don't just have the potential to bring businesses to a standstill; they can also inflict serious damage on a company's resources and reputation.

Artificial Intelligence, Augmented Reality & Automation: Technology For Change


Melvin Greer is Chief Data Scientist, Americas, Intel Corporation. He is responsible for building Intel's data science platform through graph analytics, machine learning and cognitive computing to accelerate transformation of data into a strategic asset for Public Sector and commercial enterprises. His systems and software engineering experience has resulted in patented inventions in Cloud Computing, Synthetic Biology and IoT Bio-sensors for edge analytics. He significantly advances the body of knowledge in basic research and critical, highly advanced engineering and scientific disciplines. Mr. Greer is a member of the American Association for the Advancement of Science (AAAS) and U.S. National Academy of Science, Engineering and Medicine, GUIRR.

Center for Security Research and Education announces seed grant awardees


CSRE is providing a total of $300,000 in funding for the projects, with an additional $300,000 in matching and supplemental funding from other colleges, departments, and institutes. "Today's challenges to global, national, and individual security are numerous and complex," said CSRE Director James W. Houck, "and we are delighted to support these innovative and exciting initiatives." CSRE was established in 2017 to promote interdisciplinary research and education to protect people, infrastructure and institutions from the broad range of threats and hazards confronting society today. Contributing units include the Provost and Office of the Senior Vice President for Research, as well as the colleges of Agricultural Sciences, Earth and Mineral Sciences, Engineering, Information Sciences and Technology, and the Liberal Arts; Penn State Law and the School of International Affairs; Penn State Harrisburg; Applied Research Laboratory; Institute for Computational and Data Sciences; Institutes of Energy and the Environment; Huck Institutes of the Life Sciences; and the Social Sciences Research Institute. In its first three years, CSRE has provided over $633,000 in funding, augmented by an additional $581,000 from contributing units, to a total of 39 seed projects and faculty fellowships and hosted a number of guest speakers, workshops and other events.