Cybercriminals today are extremely organized and often take advantage of social trends to deliver weaponized bundles used to launch attacks against victims. These bundles are typically delivered via phishing emails or malicious websites that use misinformation to target fears and uncertainty. In recent months, for example, threat intelligence researchers have observed an evolution in ransomware attacks targeting those most impacted by COVID-19, such as hospitals and health care providers. In fact, 41 hospitals announced ransomware attacks during the first half of 2020. Ransomware gangs, typically associated with well-established and known criminal organizations, are also evolving their tactics for extortion, including publicly shaming victim organizations and threatening to publish files to the internet or auction off PII (personally identifiable information) to the highest bidder.
Rosario Cammarota, Matthias Schunter, Anand Rajan, Fabian Boemer, Ágnes Kiss, Amos Treiber, Christian Weinert, Thomas Schneider, Emmanuel Stapf, Ahmad-Reza Sadeghi, Daniel Demmler, Huili Chen, Siam Umar Hussain, Sadegh Riazi, Farinaz Koushanfar, Saransh Gupta, Tajan Simunic Rosing, Kamalika Chaudhuri, Hamid Nejatollahi, Nikil Dutt, Mohsen Imani, Kim Laine, Anuj Dubey, Aydin Aysu, Fateme Sadat Hosseini, Chengmo Yang, Eric Wallace, Pamela Norton
In this work, we provide an industry research view for approaching the design, deployment, and operation of trustworthy Artificial Intelligence (AI) inference systems. Such systems provide customers with timely, informed, and customized inferences to aid their decisions, while at the same time utilizing appropriate security protection mechanisms for AI models. Additionally, such systems should also use Privacy-Enhancing Technologies (PETs) to protect customers' data at all times. To approach the subject, we start by introducing trends in AI inference systems. We continue by elaborating on the relationship between Intellectual Property (IP) and private data protection in such systems. Regarding the protection mechanisms, we survey the security and privacy building blocks instrumental in designing, building, deploying, and operating private AI inference systems. For example, we highlight opportunities and challenges in AI systems using trusted execution environments combined with more recent advances in cryptographic techniques to protect data in use. Finally, we outline areas of further development that require the global collective attention of industry, academia, and government researchers to sustain the operation of trustworthy AI inference systems.
Even with the government's Do Not Call list, spammers are still able to get through to millions of mobile phones each year. Not only are they annoying, but some people fall victim to their scams and give out personal information. To avoid this pitfall, you could decide to ignore all incoming phone calls or even try to screen them yourself, but these tactics are time-consuming and come with risks. Let an app like CallHero do the work for you by screening incoming calls and automatically blocking those that are spam. CallHero is part digital bouncer and part artificial-intelligence secretary, answering your calls when you don't want to.
In May of 2017, a nasty cyber attack hit more than 200,000 computers in 150 countries over the course of just a few days. Dubbed "WannaCry," it exploited a vulnerability that was first discovered by the National Security Agency (NSA) and later stolen and disseminated online. It worked like this: After successfully breaching a computer, WannaCry encrypted that computer's files and rendered them unreadable. In order to recover their imprisoned material, targets of the attack were told they needed to purchase special decryption software. Guess who sold that software? The so-called "ransomware" siege affected individuals as well as large organizations, including the U.K.'s National Health Service, Russian banks, Chinese schools, Spanish telecom giant Telefonica and the U.S.-based delivery service FedEx.
Protecting the networks of tomorrow is set to be a challenging domain due to increasing cyber security threats and widening attack surfaces created by the Internet of Things (IoT), increased network heterogeneity, increased use of virtualisation technologies and distributed architectures. This paper proposes SDS (Software Defined Security) as a means to provide an automated, flexible and scalable network defence system. SDS harnesses current advances in machine learning to design a CNN (Convolutional Neural Network) using NAS (Neural Architecture Search) to detect anomalous network traffic. SDS can be applied to an intrusion detection system to create a more proactive and end-to-end defence for a 5G network. To test this approach, normal and anomalous network flows from a simulated environment have been collected and analyzed with a CNN. The results from this method are promising, as the model identified benign traffic with a 100% accuracy rate and anomalous traffic with a 96.4% detection rate. This demonstrates the effectiveness of network flow analysis for a variety of common malicious attacks and also provides a viable option for detection of encrypted malicious network traffic.
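The abstract does not specify the CNN architecture the authors searched for, but the general idea of classifying a fixed-length vector of flow statistics with a small convolutional model can be sketched as follows. This is a minimal, untrained NumPy illustration with hypothetical feature dimensions and random weights, not the paper's actual model:

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid-mode 1D convolution over a flow-feature vector.

    x: (n_features,), kernels: (n_kernels, k), bias: (n_kernels,)
    Returns (n_windows, n_kernels) feature maps.
    """
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return windows @ kernels.T + bias

def flow_score(x, kernels, bias, w_out, b_out):
    """Tiny CNN forward pass: conv -> ReLU -> global max pool -> sigmoid."""
    h = np.maximum(conv1d(x, kernels, bias), 0.0)  # ReLU activation
    pooled = h.max(axis=0)                         # global max pool per filter
    logit = pooled @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-logit))            # P(flow is anomalous)

rng = np.random.default_rng(0)
n_features, n_kernels, k = 16, 4, 3                # hypothetical sizes
kernels = rng.normal(size=(n_kernels, k))          # random (untrained) weights
bias = np.zeros(n_kernels)
w_out = rng.normal(size=n_kernels)
b_out = 0.0

# Stand-in for one flow's normalized statistics (packet counts, bytes, etc.)
flow = rng.normal(size=n_features)
p = flow_score(flow, kernels, bias, w_out, b_out)
```

In a real pipeline, the weights would be trained on labeled benign and anomalous flows, and NAS would search over choices such as kernel size and filter count.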
Data protection and privacy have been discussed nonstop as more and more people come to realize just how much personal information they are sharing through the countless apps and websites they regularly visit. It's no longer so surprising to see products you've talked about with friends or concerts you've searched on Google promptly appear as advertisements in your social media feeds. And that has many people concerned. Recent government initiatives such as the EU's General Data Protection Regulation (GDPR) are designed to protect individuals' data privacy, with a core concept being "the right to be forgotten." The bad news is, it's generally difficult to revoke things that have already been shared online or to properly delete such data.
Deep within the encrypted bowels of the dark Web, beyond the reach of regular search engines, hackers and cybercriminals are brazenly trading a new breed of digital fakes. Yet unlike AI-generated deepfake audio and video, which embarrass the likes of politicians and celebrities by making them appear to say or do things they never would, this new breed of imitators is aimed squarely at relieving us of our hard-earned cash. Comprising highly detailed fake user profiles known as digital doppelgängers, these entities convincingly mimic numerous facets of our digital device IDs, alongside many of our tell-tale online behaviors when conducting transactions and e-shopping. The result: credit card fraudsters can use these doppelgängers to attempt to evade the machine-learning-based anomaly-detecting antifraud measures upon which banks and payments service providers have come to rely. It is proving to be big criminal business: many tens of thousands of doppelgängers are now being sold on the dark Web.
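To make the anomaly-detection side of this concrete: antifraud systems score how far a new transaction's features (amount, time of day, device fingerprint, and so on) deviate from a customer's history, which is exactly what a doppelgänger tries to mimic. Below is a deliberately simplified z-score sketch with invented feature names and synthetic data; production systems use far richer models:

```python
import numpy as np

def anomaly_score(history, candidate):
    """RMS per-feature z-score of a candidate transaction vs. history.

    history: (n_transactions, n_features), candidate: (n_features,)
    Higher scores mean the transaction looks less like the customer.
    """
    mu = history.mean(axis=0)
    sigma = history.std(axis=0) + 1e-9       # avoid division by zero
    z = (candidate - mu) / sigma             # per-feature deviation
    return float(np.sqrt((z ** 2).mean()))

rng = np.random.default_rng(1)
# Hypothetical features: amount ($), local hour, device-fingerprint
# similarity to the customer's usual device (1.0 = exact match).
history = rng.normal(loc=[200.0, 14.0, 1.0],
                     scale=[50.0, 3.0, 0.1],
                     size=(500, 3))
usual = np.array([210.0, 15.0, 0.98])        # looks like the real customer
spoofed = np.array([950.0, 3.0, 0.35])       # odd amount, odd hour, odd device
```

A convincing doppelgänger defeats this by keeping every feature, including the device fingerprint, close to the customer's historical distribution, so the score stays low.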
The vital role that cybersecurity plays in protecting our privacy, rights, freedoms, and everything up to and including our physical safety will be more prominent than ever during 2020. More and more of our vital infrastructure is coming online and vulnerable to digital attacks, data breaches involving the leak of personal information are becoming larger and more frequent, and there's an increasing awareness of political interference and state-sanctioned cyberattacks. The importance of cybersecurity is undoubtedly a growing matter of public concern. We put our faith in technology to solve many of the problems we are facing, both on a global and personal scale. But as the world becomes increasingly connected, the opportunities for bad guys to take advantage for profit or political ends inevitably increase.
Since its invention in 1970, email has undergone very little change. Its ease of use has made it the most common method of business communication, with 3.7 billion users worldwide. Simultaneously, it has become the most targeted intrusion point for cybercriminals, with devastating outcomes. When initially envisioned, email was built for connectivity. Network communication was in its early days, and merely creating a digital alternative for mailboxes was revolutionary and difficult enough.
For the past few years, we've shared predictions each December on what we believe will be the top ten technology policy issues for the year ahead. As this year draws to a close, we are looking out a bit further. It gives us all an opportunity to reflect upon the past ten years and consider what the 2020s may bring. As we concluded in our book, Tools and Weapons: The Promise and the Peril of the Digital Age, "Technology innovation is not going to slow down. The work to manage it needs to speed up." Digital technology has gone longer with less regulation than virtually any major technology before it. This dynamic is no longer sustainable, and the tech sector will need to step up and exercise more responsibility while governments catch up by modernizing tech policies. In short, the 2020s will bring sweeping regulatory changes to the world of technology.

Tech is at a crossroads, and to consider why, it helps to start with the changes in technology itself. The 2010s saw four trends intersect, collectively transforming how we work, live and learn. Continuing advances in computational power made more ambitious technical scenarios possible both for devices and servers, while cloud computing made these advances more accessible to the world. Like the invention of the personal computer itself, cloud computing was as important economically as it was technically. The cloud allows organizations of any size to tap into massive computing and storage capacity on demand, paying for the computing they need without the outlay of capital expenses. More powerful computers and cloud economics combined to create the third trend, the explosion of digital data.