If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Since its inception, the Internet of Things (IoT) has grown at a steady pace, but it is finally positioned to break into the mainstream. Demonstrating this growth, a quarter of businesses now use IoT technology, compared to just 13% in 2014. And this expansion is only set to continue, with IoT underpinning a growing host of new technologies, including driverless cars and smart homes. However, as IoT continues to proliferate, security becomes a crucial concern, with a number of high-profile cyberattacks demonstrating the vulnerability of IoT devices.
While these technical skills are certainly important, we're also now looking more holistically at candidates to test their abilities to think critically and creatively as well as to uncover new solutions. As we face new and unprecedented challenges in cyber protection, it's critical that cyber leaders hire team members who think outside the box, have intellectual curiosity, employ bold thinking, and are natural problem solvers. Protecting an organization against advanced cyber threats requires innovative thinking and techniques; capabilities across people, process, and technology are needed to properly defend against sophisticated attackers such as nation states. Cyber threats will continue to evolve, as will the techniques described above that enable cyber resiliency.

Ariel Weintraub is currently the Head of Enterprise Cyber Security at MassMutual. She first joined MassMutual in the fall of 2019 as the Head of Security Operations & Engineering, responsible for the Global Security Operations Center, Security Engineering, Security Intelligence, and Identity & Access Management. Prior to joining MassMutual, Ariel served as Senior Director of Data & Access Security within Cybersecurity Operations at TIAA, where she led a three-year business transformation program to position IAM as a digital business enabler. Before TIAA, she was Global Head of Vulnerability Management at BNY Mellon and part of the Threat & Vulnerability Management practice at PricewaterhouseCoopers (PwC).
Stopping ransomware has become a priority for many organizations, so they are turning to artificial intelligence (AI) and machine learning (ML) as their defenses of choice. However, threat actors are also turning to AI and ML to launch their attacks. One attack in particular, data poisoning, corrupts the training data that defensive ML models depend on. Like any other technology, AI is a double-edged sword.
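To make the idea concrete, here is a minimal sketch of a poisoning attack. Everything here is invented for illustration (a toy 1-D nearest-centroid classifier and hand-picked data points, not any real product's model): the attacker injects mislabeled training points so that a previously correct model starts misclassifying.

```python
# Toy nearest-centroid classifier on a single feature; all values are
# hypothetical, chosen only to illustrate data poisoning.
def fit(points, labels):
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}  # per-class centroid

def predict(centroids, x):
    # Assign x to the class whose centroid is nearest.
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Clean training data: class 0 clusters at small values, class 1 at large ones.
X, y = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0], [0, 0, 0, 1, 1, 1]
clean = fit(X, y)

# Poisoning: the attacker injects points deep in class-1 territory but
# labels them class 0, dragging the class-0 centroid toward class 1.
X_p = X + [11.0] * 6
y_p = y + [0] * 6
poisoned = fit(X_p, y_p)

print(predict(clean, 8.0), predict(poisoned, 8.0))  # 1 0
```

After poisoning, the class-0 centroid moves from 1.0 to about 7.67, so a genuine class-1 sample at 8.0 is now misclassified; real attacks apply the same principle to far larger models.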
We often hear about the positive aspects of artificial intelligence (AI) in security: the way it can predict what customers need through data and deliver a custom result. When the darker side of AI is discussed, the conversation often centers on data privacy. Other conversations in this area veer into science fiction, where the AI acts of its own volition: "Open the pod bay doors, HAL." But a concerning trend is emerging in the real world: an increase in AI-enabled cyberattacks. Cybersecurity experts are becoming more concerned about these attacks, both now and in the near future.
As part of Microsoft's research into ways to use machine learning and AI to improve security defenses, the company has released an open source attack toolkit to let researchers create simulated network environments and see how they fare against attacks. Microsoft 365 Defender Research released CyberBattleSim, which creates a network simulation and models how threat actors can move laterally through the network looking for weak points. When building the attack simulation, enterprise defenders and researchers create various nodes on the network and indicate which services are running, which vulnerabilities are present, and what type of security controls are in place. Automated agents, representing threat actors, are deployed in the attack simulation to randomly execute actions as they try to take over the nodes. "The simulated attacker's goal is to take ownership of some portion of the network by exploiting these planted vulnerabilities. While the simulated attacker moves through the network, a defender agent watches the network activity to detect the presence of the attacker and contain the attack," the Microsoft 365 Defender Research Team wrote in a post discussing the project.
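The setup Microsoft describes (nodes with planted vulnerabilities, a randomly acting attacker agent moving laterally) can be caricatured in a few lines. This is an illustrative toy under assumed names and rules, not CyberBattleSim's actual API:

```python
import random

# Hypothetical miniature of the idea behind CyberBattleSim: each node lists
# its planted vulnerabilities and its network links; a random attacker agent
# tries to take ownership of nodes reachable from its foothold.
NETWORK = {
    "workstation": {"vulns": {"phishing"}, "links": ["fileserver"]},
    "fileserver": {"vulns": {"smb_exploit"}, "links": ["database"]},
    "database": {"vulns": set(), "links": []},  # hardened: no planted vulns
}

def attack(seed, steps=100):
    rng = random.Random(seed)
    owned = {"workstation"}  # attacker's initial foothold
    for _ in range(steps):
        src = rng.choice(sorted(owned))
        targets = NETWORK[src]["links"]
        if not targets:
            continue
        dst = rng.choice(targets)
        # Lateral movement succeeds only if the target has a planted vuln.
        if NETWORK[dst]["vulns"]:
            owned.add(dst)
    return owned

print(sorted(attack(0)))  # the hardened "database" node resists takeover
```

Even this toy shows the point of the simulation: the attacker's reach is bounded by which vulnerabilities defenders left in place, which is exactly what the automated agents let researchers measure at scale.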
Sophisticated cyber attacks have plagued many high-profile businesses. To remain aware of the fast-evolving threat landscape, open-source Cyber Threat Intelligence (OSCTI) has received growing attention from the community. Commonly, knowledge about threats is spread across a vast number of OSCTI reports. Despite the pressing need for high-quality OSCTI, existing OSCTI gathering and management platforms have primarily focused on isolated, low-level Indicators of Compromise (IOCs), while higher-level concepts (e.g., adversary tactics, techniques, and procedures) and their relationships have been overlooked; these contain essential knowledge about threat behaviors that is critical to uncovering the complete threat scenario. To bridge the gap, we propose SecurityKG, a system for automated OSCTI gathering and management. SecurityKG collects OSCTI reports from various sources, uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors, and constructs a security knowledge graph. SecurityKG also provides a UI that supports various types of interactivity to facilitate knowledge graph exploration.
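As a rough illustration of the extraction step (the report text, regexes, and "uses" relation below are invented; SecurityKG's real pipeline applies far richer AI/NLP techniques), one can pull actor and technique mentions out of a report and link them as knowledge-graph triples:

```python
import re

# Illustrative sketch, not SecurityKG's actual pipeline: extract ATT&CK-style
# technique IDs and an actor mention from a report, then link them as
# (subject, relation, object) triples for a tiny knowledge graph.
REPORT = ("APT29 used spearphishing (T1566) for initial access and "
          "later relied on PowerShell (T1059) for execution.")

def extract_triples(text):
    actor = re.search(r"\bAPT\d+\b", text)       # e.g. "APT29"
    techniques = re.findall(r"\bT\d{4}\b", text)  # e.g. "T1566"
    if not actor:
        return []
    return [(actor.group(), "uses", t) for t in techniques]

print(extract_triples(REPORT))
# [('APT29', 'uses', 'T1566'), ('APT29', 'uses', 'T1059')]
```

Triples like these are what a knowledge graph stores and a UI can render; the hard part, which the paper's AI/NLP components address, is extracting them reliably from free-form prose rather than from patterns this regular.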
The formidable power of the digital economy, i.e., economic activity that results from online interactions between users and businesses, has the potential to provide India with $1 trillion in economic value by 2025. Companies across multiple industries, such as the financial sector, are eager to reap the benefits of the digital economy. Enterprising institutions increasingly seek to adopt modern tools and techniques, such as artificial intelligence (AI)-enabled applications, in order to tap into this mountain of economic potential. AI can process large amounts of information very quickly, and financial institutions will start adopting AI-enabled tools to make accurate risk assessments, detect insider trading, and streamline daily operations. However, researchers have also demonstrated how exploiting vulnerabilities in certain AI models can adversely affect the final performance of a system.
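The kind of model-level vulnerability those researchers point to can be sketched with a toy linear risk scorer. The weights, threshold, and features below are all hypothetical: an attacker who can probe the model nudges each feature against the sign of its weight (the direction that lowers a linear score fastest) until a risky transaction slips under the flagging threshold.

```python
# Illustrative evasion attack on an assumed linear fraud scorer;
# nothing here comes from a real financial system.
WEIGHTS = [0.8, -0.2, 0.5]   # hypothetical learned weights
THRESHOLD = 1.0              # scores above this are flagged as risky

def score(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def evade(x, eps=0.4):
    # Shift each feature by eps against the sign of its weight: for a
    # linear model this is the steepest direction to reduce the score.
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(WEIGHTS, x)]

x = [1.0, 0.5, 1.0]          # flagged transaction: score = 1.2
adv = evade(x)               # perturbed transaction: score = 0.6
print(score(x) > THRESHOLD, score(adv) > THRESHOLD)  # True False
```

Small, targeted input changes flip the model's decision while the transaction itself barely changes, which is why adversarial robustness matters before AI-enabled tools make risk assessments unsupervised.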
Bias and susceptibility to misinformation were evident during the 2016 US Presidential election and have plagued much of President Trump's first four years in office. The term "fake news," which years ago would have been considered absurd, is now part of our cultural vernacular. Allegations of foreign-state actors interfering with US elections and conspiracy theories related to COVID-19 have divided a culture, communities, friends, and even families. Social media has become a platform that propagates both real and fake news and has confounded the next generation of fact checkers and truth seekers dedicated to vetting accurate content. In recent years, the emergence of fake news has also brought the concept of the "deep fake" into the public spotlight.
Mobile devices are popular with hackers because they're designed for quick responses based on minimal contextual information. Verizon's 2020 Data Breach Investigations Report (DBIR) found that hackers are succeeding with integrated email, SMS and link-based attacks across social media aimed at stealing passwords and privileged access credentials. With a growing number of breaches originating on mobile devices according to Verizon's Mobile Security Index 2020, combined with Merkle's Digital Marketing Report Q4 2019 finding that 83% of all social media visits in the United States occur on mobile devices, applying machine learning to harden mobile threat defense deserves a place on any CISO's priority list today. Google's use of machine learning to thwart the skyrocketing number of phishing attacks occurring during the Covid-19 pandemic provides insight into the scale of these threats. During a typical week in April of this year, Google's Gmail Security team saw 18 million daily malware and phishing emails related to Covid-19.
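Production filters like Google's rely on large ML models trained on billions of messages, but the underlying idea can be reduced to a toy keyword scorer (the word list, weights, and subjects below are invented for illustration and are not how Gmail works):

```python
# Minimal illustrative phishing scorer: weight suspicious words and flag
# subjects whose total weight crosses a threshold. Real systems learn these
# weights from data instead of hard-coding them.
SUSPICIOUS = {"urgent": 2.0, "verify": 1.5, "password": 1.5, "covid": 1.0}
FLAG_AT = 3.0

def phishing_score(subject):
    return sum(SUSPICIOUS.get(w, 0.0) for w in subject.lower().split())

def is_phishing(subject):
    return phishing_score(subject) >= FLAG_AT

print(is_phishing("URGENT verify your password now"))   # True
print(is_phishing("meeting notes for tuesday"))         # False
```

A trained model plays the same role as `SUSPICIOUS` and `FLAG_AT`, except the weights come from labeled examples and cover far richer signals (sender reputation, links, attachments) than subject-line words.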
Much of this can be attributed to the openings businesses leave for cybercriminals to exploit quickly. While the conventional cybersecurity approach has benefited many, cybersecurity without cyber-intelligence and the necessary awareness can leave security professionals off guard against more complicated and novel threats. Furthermore, with limited cybersecurity resources, businesses need to prioritise their efforts to strengthen their cyber posture effectively; however, many organisations have no anchor point or guiding principle to begin with. With cyber-intelligence inputs missing from cybersecurity capabilities such as incident management, vulnerability management, risk assessment and brand monitoring, businesses end up running their security practice in silos instead of as an integrated whole. Thus, in an attempt to revolutionise the cyber threat visibility and intelligence market, CYFIRMA, a cyber analytics startup, helps businesses understand the relevance of the current threat landscape.