Even if your grad has finally made it through college, that doesn't mean they're ready to step out into the real world with no help. They'll not only have to find a job but also might need a little help living on their own, taking on more responsibility and being more of an adult in general. That includes having better security practices, dressing smarter and, if they're lucky enough to find their own apartment, making their new place feel more like home. Here are a few gadgets that could help ease the transition into "adulthood." It's not a terribly sexy subject, but keeping your online data safe should be a priority for everyone, including your new grad.
A quick web search for "chatbots and security" brings up results warning you about the security risks of using these virtual agents. Dig a little deeper, however, and you'll find that this artificial intelligence (AI) technology could actually help address many work-from-home cybersecurity challenges -- such as secure end-to-end encryption and user authentication -- and ensure that your organization continues to prove its data privacy compliance with less direct oversight. While many companies rely on chatbots to answer customer questions or step through a process, that same service can be used to help employees connect with security professionals as they work remotely, allowing many security problems to be resolved as efficiently as they would be if the security team were able to come directly to their colleagues' desks. Between 2005 and 2018, the number of remote workers grew by 173 percent, 11 percent faster than the rest of the workforce, according to Global Workplace Analytics. And as more employees and management experience the benefits of working from home, more people will demand the opportunity.
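To make the idea concrete, here is a minimal sketch of the kind of security help-desk chatbot described above: a keyword-based triage bot that answers common remote-work questions instantly and hands everything else to a human analyst. The intents, keywords, and canned responses are hypothetical examples, not any vendor's actual product.

```python
# Toy security help-desk chatbot: route a remote employee's question to a
# canned answer when possible, otherwise escalate to a security analyst.
# All intents/keywords/responses below are illustrative assumptions.

INTENTS = {
    "phishing": ["phishing", "suspicious email", "spam link"],
    "password": ["password", "locked out", "reset"],
    "vpn": ["vpn", "remote access", "tunnel"],
}

RESPONSES = {
    "phishing": "Forward the message to the security team and do not click any links.",
    "password": "Use the self-service portal to reset; MFA will be required.",
    "vpn": "Check that your VPN client is up to date, then retry.",
    None: "Connecting you with a security analyst for a closer look.",
}

def classify(message: str):
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    return None  # unknown question -> human escalation

def reply(message: str) -> str:
    return RESPONSES[classify(message)]
```

A production bot would use intent classification rather than keyword matching, but the routing structure (self-service first, analyst as fallback) is the point of the example.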
Security teams are dealing with a stream of warnings about failed login attempts, possible phishing emails and potential malware threats, among other challenges. There are concerns over authorized use -- who has permission to do what, when they will access it and why -- and issues around private data generated by staff and consumers. Around 26 percent of these alerts are false positives, according to Neustar. Some require no action, others are an easy fix and a small percentage actually require IT intervention. The result is hardly surprising: alert fatigue.
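One common response to alert fatigue is automated triage: score each alert and only surface the ones that genuinely need a human. The sketch below illustrates the idea with made-up thresholds; real systems use far richer signals than severity and repeat counts.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # e.g. "login", "email", "endpoint"
    severity: int      # 1 (informational) .. 5 (critical)
    repeat_count: int  # times this signature fired recently

def triage(alert: Alert) -> str:
    """Assign each alert to a queue. Thresholds are illustrative only."""
    # Very noisy, low-severity signatures are auto-closed as probable
    # false positives (the kind of alerts behind the ~26% figure above).
    if alert.severity <= 1 and alert.repeat_count > 10:
        return "auto-close"
    if alert.severity >= 4:
        return "escalate"   # requires IT/security intervention
    return "review"         # easy fix or no action needed
```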
Many enterprises are using artificial intelligence (AI) technologies as part of their overall security strategy, but results are mixed on the post-deployment usefulness of AI in cybersecurity settings. This trend is supported by a new white paper from Osterman Research titled "The State of AI in Cybersecurity: The Benefits, Limitations and Evolving Questions." According to the study, which included responses from 400 organizations with more than 1,000 employees, 73 percent of organizations have implemented security products that incorporate at least some level of AI. However, 46 percent agree that rules creation and implementation are burdensome, and 25 percent said they do not plan to implement additional AI-enabled security solutions in the future. These findings may indicate that AI is still in the early stages of practical use and its true potential is still to come.
With the rapid advances in computing and information technologies, traditional access control models have become inadequate at capturing the fine-grained and expressive security requirements of newly emerging applications. An attribute-based access control (ABAC) model provides a more flexible approach for addressing the authorization needs of complex and dynamic systems. While organizations are interested in employing newer authorization models, migrating to such models poses a significant challenge. Many large-scale businesses need to grant authorization to user populations that are potentially distributed across disparate and heterogeneous computing environments, each of which may have its own access control model. Manually developing a single policy framework for an entire organization is tedious, costly, and error-prone. In this paper, we present a methodology for automatically learning ABAC policy rules from the access logs of a system to simplify the policy development process. The proposed approach employs an unsupervised learning algorithm to detect patterns in access logs and extract ABAC authorization rules from these patterns. In addition, we present two policy improvement algorithms, rule pruning and policy refinement, to generate a higher-quality mined policy. Finally, we implement a prototype of the proposed approach to demonstrate its feasibility.
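The core idea of mining ABAC rules from logs can be sketched crudely: treat each permitted access as a set of attribute/value pairs plus an action, count how often each attribute combination co-occurs with each action, and keep the frequent combinations as candidate rules. This is not the paper's algorithm, just a frequency-counting stand-in for its unsupervised pattern-detection step; the log entries and the support threshold are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Each log entry: (user attributes, resource attributes, action, allowed)
LOGS = [
    ({"role": "doctor", "dept": "cardio"}, {"type": "record", "dept": "cardio"}, "read", True),
    ({"role": "doctor", "dept": "cardio"}, {"type": "record", "dept": "cardio"}, "write", True),
    ({"role": "nurse", "dept": "cardio"}, {"type": "record", "dept": "cardio"}, "read", True),
    ({"role": "nurse", "dept": "cardio"}, {"type": "record", "dept": "neuro"}, "read", False),
]

def mine_rules(logs, min_support=2):
    """Keep (attribute-combination, action) patterns that appear in at
    least min_support permitted accesses, as candidate ABAC rules."""
    counts = Counter()
    for user, res, action, allowed in logs:
        if not allowed:
            continue  # mine only from granted accesses
        attrs = tuple(sorted({**{f"user.{k}": v for k, v in user.items()},
                              **{f"res.{k}": v for k, v in res.items()}}.items()))
        # every subset of the entry's attributes is a candidate condition
        for r in range(1, len(attrs) + 1):
            for combo in combinations(attrs, r):
                counts[(combo, action)] += 1
    return {pattern for pattern, n in counts.items() if n >= min_support}
```

A real miner would also generalize and prune the candidate set (the paper's rule pruning and policy refinement steps) rather than keep every frequent subset.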
IRONSCALES, the pioneer of self-learning email security, today announced that it has won Cyber Defense Magazine's Infosec Award in the category of Most Innovative Artificial Intelligence and Machine Learning application. In addition, IRONSCALES also revealed today that it has won two 'Gold' awards from the Info Security Products Guide Global Excellence Awards in the categories of Artificial Intelligence in Security and Incident Analysis & Response. These awards continue momentum from 2019, in which IRONSCALES won a total of six awards, including the distinction as the Best Anti-Phishing Security Solution and Innovation in Email Security. "IRONSCALES' philosophy has always been that in order to make a dent in what has become the global phishing epidemic, real-time human intelligence combined with technology that leverages artificial intelligence and machine learning is required to protect against the rapid scale of new phishing attacks," said Eyal Benishti, IRONSCALES founder and CEO. "Our team has worked tirelessly to build an email security platform that is both seamless to use and incredibly powerful and effective. I thank the judges for recognizing our intuition and technological achievements, our thousands of customers for believing in our product and, of course, our dedicated team for pushing the limits to build the anti-phishing solution of tomorrow, today."
Machine learning (ML) has made tremendous progress during the past decade and is being adopted in various critical real-world applications. However, recent research has shown that ML models are vulnerable to multiple security and privacy attacks. In particular, backdoor attacks against ML models have recently raised a lot of concern. A successful backdoor attack can cause severe consequences, such as allowing an adversary to bypass critical authentication systems. Current backdooring techniques rely on adding static triggers (with fixed patterns and locations) to ML model inputs. In this paper, we propose the first class of dynamic backdooring techniques: Random Backdoor, Backdoor Generating Network (BaN), and conditional Backdoor Generating Network (c-BaN). Triggers generated by our techniques can have random patterns and locations, which reduces the efficacy of current backdoor detection mechanisms. In particular, BaN and c-BaN are the first two schemes that algorithmically generate triggers, relying on a novel generative network. Moreover, c-BaN is the first conditional backdooring technique: given a target label, it can generate a target-specific trigger. Both BaN and c-BaN are essentially general frameworks that give the adversary the flexibility to further customize backdoor attacks. We extensively evaluate our techniques on three benchmark datasets: MNIST, CelebA, and CIFAR-10. Our techniques achieve almost perfect attack performance on backdoored data with a negligible utility loss. We further show that our techniques can bypass current state-of-the-art defense mechanisms against backdoor attacks, including Neural Cleanse, ABS, and STRIP.
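The contrast between static and dynamic triggers can be illustrated with a toy version of the Random Backdoor idea: stamp a randomly generated patch at a random location and relabel the poisoned samples to the target class. This is not the authors' code (BaN and c-BaN use a generative network to produce triggers); the patch size and poison rate are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_backdoor(image: np.ndarray, trigger_size: int = 4) -> np.ndarray:
    """Stamp a random-pattern trigger at a random location (toy
    'Random Backdoor'; real dynamic triggers vary per input too)."""
    h, w = image.shape[:2]
    trigger = rng.random((trigger_size, trigger_size))  # random pattern
    y = rng.integers(0, h - trigger_size + 1)           # random location
    x = rng.integers(0, w - trigger_size + 1)
    poisoned = image.copy()
    poisoned[y:y + trigger_size, x:x + trigger_size] = trigger
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1):
    """Backdoor a fraction of the training set and relabel to the target."""
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = random_backdoor(images[i])
        labels[i] = target_label
    return images, labels
```

Because the trigger's pattern and position differ on every call, defenses that search for a single fixed trigger (the weakness the paper exploits) have a harder target than with classic static backdoors.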
There is a lot of excitement around AI for SecOps. From a market perspective, AI in cybersecurity is projected to grow at a CAGR of 23.3% between 2019 and 2026 to exceed $38B. On the physical security front, the AI-powered video analytics market, driven primarily by security and safety, is projected to grow at a CAGR of 22.3% between 2018 and 2025 to reach $4.5B. From a value perspective, securityintelligence.com has an insightful article titled "Artificial Intelligence (AI) and Security: A Match Made in the SOC," where it says: "In summary, when security analysts partner with artificial intelligence, the benefits include streamlined threat detection, investigation and response processes, increased productivity, and improved job satisfaction -- analysts spend more time doing what they enjoy most and the cost of security breaches decreases." It is a well-known fact that the talent war is real in security operations.
Fintech, or financial technology, is the industry that delivers traditional financial services through modern technology. The industry has existed for a long time but has grown significantly in the past few years. New cryptocurrencies and payment solutions are surfacing, and the industry is projected to reach $309.98 billion at an annual growth rate of 24.8% through 2022. All this growth is driven by the enormous value fintech delivers to consumers around the world. Because fintech competes with a decades-old industry, the need to retain loyal customers is especially important.
IBM Cloud Identity now features AI-based adaptive access capabilities that help continually assess employee or consumer user risk levels when accessing applications and services. The solution escalates suspicious user interactions for further authentication, while those identified as lower risk are "fast tracked" so they can access the applications and services they need. With data breaches on the rise, traditional means of securing access, like passwords, are often not enough to prevent unauthorized access. The rise of credential-stuffing attacks, where a malicious actor obtains a list of credentials and tests them at various other sites using a bot, demonstrates that many password combinations have been leaked. Considering the number of programs and passwords that employees manage between their professional and personal lives, it is increasingly important that new security measures do not hinder the user experience.
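The adaptive-access pattern described above, score each login's risk and then allow, step up, or deny, can be sketched as follows. The signals, weights, and thresholds here are invented for illustration; IBM's product derives risk from AI-based behavioral analysis, not a hand-written rule table like this.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool       # device seen before for this account
    usual_location: bool     # login from a typical location
    failed_attempts: int     # recent failures for this account
    impossible_travel: bool  # geo-velocity anomaly

def risk_score(a: LoginAttempt) -> int:
    """Toy additive risk score; all weights are illustrative."""
    score = 0
    if not a.known_device:
        score += 30
    if not a.usual_location:
        score += 20
    score += min(a.failed_attempts, 5) * 10  # credential-stuffing signal
    if a.impossible_travel:
        score += 40
    return score

def access_decision(a: LoginAttempt) -> str:
    s = risk_score(a)
    if s >= 60:
        return "deny"
    if s >= 30:
        return "step-up-mfa"  # escalate suspicious interactions
    return "allow"            # low risk: fast-track the user
```

The key design point matches the article: low-risk users pass through without friction, and extra authentication is demanded only when the score crosses a threshold.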