Visa has introduced a new suite of security services designed to protect merchants and users from the latest security threats, according to a release. The new features are meant to help stop and contain payment fraud and to protect the payments ecosystem. There will be no cost for Visa clients; the company said it is one of the many benefits available to Visa merchants and financial institutions. "Cybercriminals attempt to bypass traditional defenses by stealing credentials, harvesting data, obtaining privileged access and attacking trusted third-party supply chains," said RL Prasad, senior vice president of payments systems risk for Visa. "Visa's new payment security capabilities combine payment and cyber intelligence, insights and learnings from breach investigations, and law enforcement engagement to help financial institutions and merchants solve the most critical security challenges."
The rapid progress in artificial intelligence, smart devices, and smart cities promises to revolutionise the way we work, live, and connect. However, recent scandals surrounding the handling of user data have prompted a wave of privacy concerns. The smarter a city gets, the more it can keep tabs on our every move. Likewise, with connected home devices and digital assistants picking up our daily activities and queries, the potential for privacy breaches is endless. Europe's pioneering General Data Protection Regulation (GDPR) is one of several attempts by governments to mitigate widespread shortfalls in the protection of customer data by companies and governments alike.
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. Cybersecurity, a huge industry worth over $100 billion, is rife with buzzwords. Cybersecurity companies often (pretend to) use new state-of-the-art technologies to attract customers and sell their solutions. Naturally, with artificial intelligence in one of its craziest hype cycles, we're seeing plenty of solutions that claim to use machine learning, deep learning and other AI-related technologies to automatically secure the networks and digital assets of their clients. But contrary to what many companies profess, machine learning is not a silver bullet that will automatically protect individuals and organizations against security threats, says Ilia Kolochenko, CEO of ImmuniWeb, a company that uses AI to test the security of web and mobile applications.
Allied Market Research recently published a report, titled, "Artificial Intelligence Chip Market by Chip Type (GPU, ASIC, FPGA, CPU, and others), Application (Natural Language Processing (NLP), Robotic, Computer Vision, Network Security, and Others), Technology (System-on-Chip, System-in-Package, Multi-chip Module, and Others), Processing Type (Edge and Cloud), and Industry Vertical (Media & Advertising, BFSI, IT & Telecom, Retail, Healthcare, Automotive & Transportation, and Others): Global Opportunity Analysis and Industry Forecast, 2019-2025". According to the report, the global AI chip market was pegged at $6.64 billion in 2018 and is projected to attain $91.18 billion by 2025, registering a colossal CAGR of 45.2% during the forecast period. Rise in demand for smart homes & smart cities, surge in investments in AI startups, and advent of quantum computing have boosted the growth of the global AI chip market. However, dearth of skilled workforce hampers the market growth. On the contrary, rapid adoption of AI chips in the emerging countries and development of smart robots are expected to create numerous opportunities in the near future.
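The report's headline growth rate can be sanity-checked with the standard compound annual growth rate formula. The sketch below uses the figures quoted in the summary; treating 2018 as the base year and 2025 as the end year (seven compounding periods) is an assumption, since the report does not spell out its calculation.

```python
# Hypothetical check of the report's growth math, using the figures
# quoted in the Allied Market Research summary. The choice of 2018 as
# the base year is an assumption for illustration.
def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` compounding periods."""
    return (end_value / start_value) ** (1 / years) - 1

# $6.64 billion in 2018 growing to $91.18 billion by 2025.
growth = cagr(6.64, 91.18, 2025 - 2018)
print(f"Implied CAGR: {growth:.1%}")  # roughly 45%, in line with the cited 45.2%
```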
Cyber defense must constantly adapt to keep pace with threats that grow more sophisticated as technology advances. Though many people may not be entirely familiar with machine learning and what it has to offer, it is already making an impact on their daily lives, shaping a safer, more efficient world to come. Although the idea of machine learning is not exactly new, it has experienced its biggest growth in the past decade due to increased interest. So, what exactly is machine learning and how can it help cyber defense?
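One simple answer: machine learning can learn what "normal" activity looks like and flag deviations. The toy sketch below illustrates the idea with a z-score anomaly check on invented login counts; real systems use far richer features and models, and everything here (data, threshold) is made up for illustration.

```python
# A toy sketch of one way machine learning aids cyber defense: learning
# a behavioral baseline and flagging anomalies. All data and thresholds
# below are invented for illustration.
import statistics

# Daily login counts for a user over two weeks (hypothetical baseline).
baseline = [12, 14, 11, 13, 15, 12, 14, 13, 11, 12, 14, 13, 15, 12]

def is_anomalous(observation, history, z_threshold=3.0):
    """Flag an observation whose z-score against history exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = abs(observation - mean) / stdev
    return z > z_threshold

print(is_anomalous(13, baseline))  # normal activity -> False
print(is_anomalous(90, baseline))  # possible account takeover -> True
```

A production system would learn the baseline continuously and combine many signals (time of day, location, device), but the core idea is the same: model normal behavior, then alert on statistical outliers.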
Just as vitamin deficiency and poor hygiene can make the human body seriously ill, unsecured access points and poor cyber hygiene can lead to cyberattacks in hyper-connected workplaces. The likelihood of cyberattacks grows as the volume of data feeding into networks balloons. Organizations live in fear of becoming victims of cyberattacks and are willing to spend heavily on cybersecurity tools and services. According to IDC research, "Organizations will spend $101.6 billion on cybersecurity software, services, and hardware by 2020." Leading organizations integrate dozens of security products into their environments, yet they still fear being exposed and vulnerable.
Scalable cognitive solutions that meet these objectives must leverage machine learning to manage the vast amount of data produced by security sensors. Security expertise, data science and the math behind machine learning are all essential to developing the complex mechanisms, timing and features of machine learning systems. Thus, taking the right action by creating security tactics enabled by machine learning depends on three things: resources, confidence in the science, and actionability. In this paper, Jon Ramsey, Secureworks CTO, shares his vision of machine learning and how these three factors can enable "Smart Security" to benefit CIOs and businesses.
High-profile data breaches of enterprise companies and large government agencies gain a lot of news coverage. But does the lack of reporting on small and medium-sized businesses mean that, for them, the cybersecurity risk is much smaller? SMBs hold information and credentials that are genuinely valuable to cybercriminals, including: employee and customer records, access to business financial information such as bank accounts, and access to larger companies and their networks through the supply chain.
There is a lively debate all over the world regarding AI's perceived "black box" problem. Most fundamentally, if a machine can be taught to learn on its own, how does it explain its conclusions? This issue comes up most frequently in the context of how to address possible algorithmic bias. One way to address it is to mandate a right to a human decision, as in the General Data Protection Regulation's (GDPR) Article 22. Here in the United States, Senators Wyden and Booker propose in the Algorithmic Accountability Act that companies be compelled to conduct impact assessments.
As privacy regulations like GDPR and the California Consumer Privacy Act proliferate, more startups are looking to help companies comply. Enter Preclusio, a member of the Y Combinator Summer 2019 class, which has developed a machine learning-fueled solution to help companies adhere to these privacy regulations. "We have a platform that is deployed on-prem in our customer's environment, and helps them identify what data they're collecting, how they're using it, where it's being stored and how it should be protected. We help companies put together this broad view of their data, and then we continuously monitor their data infrastructure to ensure that this data continues to be protected," company co-founder and CEO Heather Wade told TechCrunch. She says that the company made a deliberate decision to keep the solution on-prem.
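The data-discovery step Wade describes (identifying what personal data a company collects and where it lives) can be illustrated with a minimal pattern-matching scan. The sketch below is a hypothetical toy, not Preclusio's actual approach: the regex patterns and record fields are invented, and a real platform would combine many detection techniques, including machine learning classifiers.

```python
# A hypothetical sketch of a data-discovery check: scanning records for
# personal-data categories. The patterns and field names are invented
# for illustration and are not any vendor's actual implementation.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(record):
    """Return the set of PII categories found in a record's values."""
    found = set()
    for value in record.values():
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(str(value)):
                found.add(label)
    return found

record = {"name": "Jane Doe", "contact": "jane@example.com", "id": "123-45-6789"}
print(find_pii(record))  # flags both 'email' and 'ssn'
```

Once fields like these are inventoried, a compliance platform can continuously monitor where such records flow and whether their storage meets the regulation's protection requirements.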