Security Practice


Enhancing Cloud Security through Topic Modelling

Saleh, Sabbir M., Madhavji, Nazim, Steinbacher, John

arXiv.org Artificial Intelligence

Protecting cloud applications is critical in an era where security threats are increasingly sophisticated and persistent. Continuous Integration and Continuous Deployment (CI/CD) pipelines are particularly vulnerable, making innovative security approaches essential. This research explores the application of Natural Language Processing (NLP) techniques, specifically Topic Modelling, to analyse security-related text data and anticipate potential threats. We focus on Latent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic Analysis (PLSA) to extract meaningful patterns from data sources, including logs, reports, and deployment traces. Using the Gensim framework in Python, these methods categorise log entries into security-relevant topics (e.g., phishing, encryption failures). The identified topics are leveraged to highlight patterns indicative of security issues across CI/CD's continuous stages (build, test, deploy). This approach introduces a semantic layer that supports early vulnerability recognition and contextual understanding of runtime behaviours.


Uplifted Attackers, Human Defenders: The Cyber Offense-Defense Balance for Trailing-Edge Organizations

Murphy, Benjamin, Stone, Twm

arXiv.org Artificial Intelligence

Advances in AI are widely understood to have implications for cybersecurity. Articles have emphasized the effect of AI on the cyber offense-defense balance, and commentators can be found arguing that cyber will privilege either attackers or defenders. For defenders, arguments are often made that AI will enable solutions like formal verification of all software--and for some well-equipped companies, this may be true. This conversation, however, does not match the reality for most companies. "Trailing-edge organizations," as we term them, rely heavily on legacy software, poorly staff security roles, and struggle to implement best practices like rapid deployment of security patches. These decisions may be the result of corporate inertia, but may also be the result of a seemingly rational calculation: attackers may not bother targeting a firm due to a lack of economic incentives, so underinvestment in defense will not be punished. This approach to security may have been sufficient prior to the development of AI systems, but it is unlikely to remain viable in the near future. We argue that continuing improvements in AI's capabilities pose additional risks on two fronts. First, increased usage of AI will alter the economics of the marginal cyberattack and expose these trailing-edge organizations to more attackers, more frequently. Second, AI's advances will enable attackers to develop exploits and launch attacks earlier than they can today--meaning that it is insufficient for these companies to attain parity with today's leading defenders; they must instead aim for faster remediation timelines and more resilient software. The situation today portends a dramatically increased number of attacks in the near future. Moving forward, we offer a range of solutions for both organizations and governments to improve the defensive posture of firms that lag behind their peers today.


Assumptions to Evidence: Evaluating Security Practices Adoption and Their Impact on Outcomes in the npm Ecosystem

Zahan, Nusrat, Rahman, Imranur, Williams, Laurie

arXiv.org Artificial Intelligence

Practitioners often struggle with the overwhelming number of security practices outlined in cybersecurity frameworks for risk mitigation. Given limited budget, time, and resources, practitioners want to prioritize the adoption of security practices based on empirical evidence. The goal of this study is to assist practitioners and policymakers in making informed decisions on which security practices to adopt by evaluating the relationship between software security practices adoption and security outcome metrics. To do this, we analyzed the adoption of security practices and their impact on security outcome metrics across 145K npm packages. We selected the OpenSSF Scorecard metrics to automatically measure the adoption of security practices in npm GitHub repositories. We also investigated project-level security outcome metrics: the number of open vulnerabilities (Vul_Count), mean time to remediate (MTTR) vulnerabilities in dependencies, and mean time to update (MTTU) dependencies. We conducted regression and causal analysis using 11 Scorecard metrics and the aggregated Scorecard score (computed by aggregating individual security practice scores) as predictors and Vul_Count, MTTR, and MTTU as target variables. Our findings reveal that aggregated adoption of security practices is associated with 5.2 fewer vulnerabilities, 216.8 days faster MTTR, and 52.3 days faster MTTU. Repository characteristics have an impact on security practice effectiveness: repositories with high security practice adoption, especially those that are mature, actively maintained, and large, with many contributors, few dependencies, and high download volumes, tend to exhibit better outcomes compared to smaller or inactive repositories.
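The style of analysis the abstract describes, regressing a security outcome on a practice-adoption score, can be illustrated with a toy ordinary-least-squares fit. The data points below are entirely invented; the actual study runs regression and causal analysis over 145K real npm packages and 11 Scorecard metrics.

```python
# Sketch of the study's style of analysis: regress a security outcome
# (here, open vulnerability count) on an aggregated Scorecard score.
# All (score, vulnerability) pairs are hypothetical, for illustration only.
from statistics import mean

# (aggregated Scorecard score, open vulnerability count) per fictional repo
data = [(2.0, 14), (3.5, 11), (5.0, 9), (6.5, 6), (8.0, 4), (9.5, 2)]

xs = [score for score, _ in data]
ys = [vulns for _, vulns in data]

# Ordinary least squares for y = a + b*x.
x_bar, y_bar = mean(xs), mean(ys)
b = sum((x - x_bar) * (y - y_bar) for x, y in data) / \
    sum((x - x_bar) ** 2 for x in xs)
a = y_bar - b * x_bar

# A negative slope means higher practice adoption is associated with
# fewer open vulnerabilities, mirroring the paper's direction of effect.
print(f"slope = {b:.2f} vulnerabilities per unit of Scorecard score")
```

Note that, as the paper's pairing of regression with causal analysis implies, a negative slope alone shows association, not causation; confounders such as repository maturity must be controlled for separately.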


MLSMM: Machine Learning Security Maturity Model

Jedrzejewski, Felix, Fucci, Davide, Adamov, Oleksandr

arXiv.org Artificial Intelligence

Assessing the maturity of security practices during the development of Machine Learning (ML) based software components has not received as much attention as it has in traditional software development. In this Blue Sky idea paper, we propose an initial Machine Learning Security Maturity Model (MLSMM), which organizes security practices along the ML development lifecycle and, for each, establishes three levels of maturity. We envision MLSMM as a step towards closer collaboration between industry and academia.


Six Security Considerations for Machine Learning Solutions - Microsoft Community Hub

#artificialintelligence

Model Theft: Because models represent a significant investment in Intellectual Property, they can be a valuable target for theft. And like other software assets, they are tangible and can be stolen. Model theft happens when a model is taken outright from a storage location or re-created through deliberate query manipulation. An example of this type of attack was demonstrated by a research team at UC Berkeley who used public endpoints to re-create language models with near-production, state-of-the-art translation quality. The researchers were then able to degrade the performance and erode the integrity of the original machine learning model using data input techniques (see Data Poisoning above).
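The "deliberate query manipulation" style of theft can be illustrated with a toy example: an attacker who only sees a prediction endpoint's labels can still recover the model's behaviour. The victim below is a hypothetical one-parameter threshold classifier, far simpler than the translation models in the UC Berkeley work, but the query-only access pattern is the same.

```python
# Sketch of model extraction via query access: the attacker never sees the
# victim's parameters, only its predicted labels, yet recovers a functional
# copy. The hidden threshold and the binary-search strategy are illustrative.

HIDDEN_THRESHOLD = 0.37  # known only to the victim model


def victim_classify(x):
    # The public endpoint: returns a label, never the internal threshold.
    return 1 if x >= HIDDEN_THRESHOLD else 0


# Attacker: binary-search the decision boundary using only label queries.
lo, hi = 0.0, 1.0
for _ in range(40):  # 40 queries narrow the boundary to ~1e-12
    mid = (lo + hi) / 2
    if victim_classify(mid) == 1:
        hi = mid
    else:
        lo = mid

stolen_threshold = (lo + hi) / 2
print(f"recovered threshold ~= {stolen_threshold:.6f}")
```

Real extraction attacks query rich models over many inputs and train a surrogate on the responses, but the economics are the same: each query leaks a little of the victim's decision surface, which is why endpoints often rate-limit or monitor anomalous query patterns.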


Mental health and prayer apps have some of the worst privacy protections, study claims

Daily Mail - Science & tech

Mental health and prayer apps have some of the worst privacy protections, a Mozilla study claims, finding they 'track and share' users' intimate thoughts and feelings. The findings, released to coincide with May's Mental Health Awareness Month, were published as part of the annual Mozilla 'Privacy Not Included' guide. The researchers examined privacy and security practices for 32 mental health and prayer apps on iOS and Android, including Talkspace, Better Help, Calm and Glorify. The six worst offenders, according to Mozilla (that is, those with the very worst privacy and security), were Better Help, Youper, Woebot, Better Stop Suicide, Pray.com, and Talkspace. 'Their flaws entail incredibly vague and messy privacy policies, sharing personal information with third parties, and even collecting chat transcripts,' Mozilla said.


Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security

#artificialintelligence

What is new is the growing ubiquity of AI in large organizations. In fact, by the end of this year, I believe nearly every type of large organization will find AI-based cybersecurity tools indispensable. Artificial intelligence is many things to many people. One fairly neutral definition is that it's a branch of computer science that focuses on intelligent behavior, such as learning and problem solving. Now that cybersecurity AI is mainstream, it's time to stop treating AI like some kind of magic pixie dust that solves every problem and start understanding its everyday necessity in the new cybersecurity landscape.


Security Guidelines for cloud-native Chatbots

#artificialintelligence

Welcome to the era of digital transformation. The growing need for quick, easy and effective communication has resulted in – you guessed it – the rise of chatbots. Chatbots are set to become cornerstones of customer engagement over the web. To list a few use cases, they provide prompt customer support, answer product FAQs, and make reservations for cabs, hotels, and more. All of this has a direct impact on customer satisfaction and makes online branding channels stickier.


The Role Of Cognitive RPA In The Insurance Industry

#artificialintelligence

Cognitive robotic process automation (RPA) is a fast-evolving field of computing and an emerging form of business process automation (BPA) technology. It involves automating many internal and external customer journeys through software "bots." RPA started roughly 20 years ago as a rudimentary screen-scraping tool: technology used to eliminate the repetitive data entry and form-filling that human operators once did the bulk of. For example, the software could copy data from one source to another on a computer screen. Imagine a finance clerk handling invoice processes by filling in specific fields on the screen. Early RPA was able to take this function off the clerk's plate by automating that invoice processing.