Cyber attackers currently focus most of their efforts on manipulating existing artificial intelligence (AI) systems for malicious purposes rather than on creating new attacks that use machine learning. That is the key finding of a report by the Sherpa consortium, an EU-funded project founded in 2018 to study the impact of AI on ethics and human rights, backed by 11 organisations across six countries, including the UK. However, the report notes that attackers already have access to machine learning techniques, and AI-enabled cyber attacks will soon be a reality, according to Mikko Hypponen, chief research officer at IT security company F-Secure, a member of the Sherpa consortium. The continuing game of "cat and mouse" between attackers and defenders will reach a whole new level once both sides are using AI, Hypponen said, and defenders will have to adapt quickly as soon as the first AI-enabled attacks emerge. Despite the claims of some security suppliers, however, Hypponen told Computer Weekly in a recent interview that no criminal groups appear to be using AI to conduct cyber attacks yet.
From large enterprises to small and midsize businesses, every organization has felt the impact of cyber threats at some point, and a new generation of automated cyber attacks stands to widen that impact considerably. The digital age has made an online presence a necessity for every business: most business processes, data storage, and data exchange are now handled digitally, and data has become such a significant asset that companies have begun monetizing it.
A new report details a dystopian near future in which a technology we have created produces an unreality that our cognitive abilities will struggle to distinguish from reality. Titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," the report was authored by 26 experts from 14 institutions, including Oxford University's Future of Humanity Institute, Cambridge University's Centre for the Study of Existential Risk, Elon Musk's OpenAI, and the Electronic Frontier Foundation. The authors confined their analysis to the near future: this is not some Jetsons-style society that our grandchildren will have to deal with, but an evolving threat that everyone will soon have to fight back against.
Machine learning (ML) over distributed data is relevant to a variety of domains. Existing approaches, such as federated learning, compose the outputs computed by a group of devices at a central aggregator and run multi-round algorithms to generate a globally shared model. Unfortunately, such approaches are susceptible to a variety of attacks, including model poisoning, which is made substantially worse in the presence of sybils. In this paper we first evaluate the vulnerability of federated learning to sybil-based poisoning attacks. We then describe FoolsGold, a novel defense to this problem that identifies poisoning sybils based on the diversity of client contributions in the distributed learning process. Unlike prior work, our system does not assume that the attackers are in the minority, requires no auxiliary information outside of the learning process, and makes fewer assumptions about clients and their data. In our evaluation we show that FoolsGold exceeds the capabilities of existing state-of-the-art approaches to countering ML poisoning attacks. Our results hold for a variety of conditions, including different distributions of data, varying poisoning targets, and various attack strategies.
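The defense described above can be sketched in a few lines: because sybils pushing the same poisoning objective submit unusually similar updates, an aggregator can compute pairwise cosine similarities between clients' accumulated update histories and down-weight near-duplicates. The Python sketch below illustrates that idea; it is an assumption-laden reimplementation written for this summary, not the authors' code, and the example history matrix, the pardoning rescale, and the clipping constants are choices made here for illustration.

```python
import numpy as np

def foolsgold_weights(histories: np.ndarray) -> np.ndarray:
    """Per-client aggregation weights from each client's summed update
    history (one row per client). Illustrative sketch of the FoolsGold
    idea: sybils pursuing a shared poisoning objective submit unusually
    similar updates over time, so near-duplicate clients are down-weighted.
    """
    n = len(histories)
    # Pairwise cosine similarity between client update histories.
    unit = histories / np.clip(
        np.linalg.norm(histories, axis=1, keepdims=True), 1e-12, None)
    cs = unit @ unit.T
    np.fill_diagonal(cs, 0.0)
    max_cs = cs.max(axis=1)

    # "Pardoning": soften the penalty for honest clients that merely
    # resemble a sybil, relative to how sybil-like each party looks.
    for i in range(n):
        for j in range(n):
            if i != j and max_cs[j] > max_cs[i] > 0:
                cs[i, j] *= max_cs[i] / max_cs[j]

    wv = np.clip(1.0 - cs.max(axis=1), 0.0, 1.0)
    wv = wv / max(wv.max(), 1e-12)  # best-behaved client keeps full weight
    # Logit squashing widens the gap between sybils and honest clients.
    wv = np.clip(wv, 1e-6, 1.0 - 1e-6)
    return np.clip(np.log(wv / (1.0 - wv)) + 0.5, 0.0, 1.0)

# Two honest clients with distinct update directions, plus two sybils
# submitting identical poisoned updates (values are illustrative).
hist = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 1.0],
    [0.5, 0.5, 1.0],
])
weights = foolsgold_weights(hist)  # honest clients keep high weight, sybils near zero
```

Because the defense keys on update similarity rather than majority voting, it needs no bound on the attacker's share of clients, which is consistent with the abstract's claim that FoolsGold does not assume attackers are in the minority.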