Results


Artificial Intelligence Update

#artificialintelligence

These advances will create a network where almost every device can be simultaneously connected, enabling technologies not possible today. Governments and private entities are just beginning to invest in the technology, and projections suggest commercial availability around 2030. But given 6G's anticipated ubiquity and potential to change the landscape, we would be wise to begin learning about it now. Artificial intelligence ("AI") represents a new frontier in the global economy: some estimates say it could add up to $15.7 trillion to the worldwide economy by 2030. Increases in computing power and innovations in computer science have fueled AI innovation.


100 critical IT policies every company needs, ready for download

ZDNet

Whether you're writing corporate policies for business workers or university policies for faculty and staff, crafting an effective IT policy can be a daunting and expensive task. You could spend hours writing a policies and procedures manual yourself, but consider how much your time is worth. According to job site Glassdoor, the average salary of an IT Director in the U.S. is over $140,000 (depending on geographic location, company, education, etc.). If it takes you one work day to write an IT policy, that single policy costs you $536 ($67 x 8 hours). Don't have time to write a business or university policy?
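The back-of-the-envelope math above can be sketched as follows; the 2,080-hour work year (52 weeks x 40 hours) is an assumption, and figures are rounded as in the article.

```python
# Rough in-house cost of writing one IT policy, per the article's figures.
ANNUAL_SALARY = 140_000     # average U.S. IT Director salary (Glassdoor)
HOURS_PER_YEAR = 52 * 40    # assumed 2,080-hour work year
HOURS_PER_POLICY = 8        # one full work day

hourly_rate = ANNUAL_SALARY / HOURS_PER_YEAR    # ~$67/hour
policy_cost = round(hourly_rate) * HOURS_PER_POLICY

print(f"Hourly rate: ~${hourly_rate:.0f}, one policy: ${policy_cost}")
```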


Automating the GDPR Compliance Assessment for Cross-border Personal Data Transfers in Android Applications

arXiv.org Artificial Intelligence

Abstract-- The General Data Protection Regulation (GDPR) aims to ensure that all personal data processing activities are fair and transparent for European Union (EU) citizens, regardless of whether these are carried out within the EU or anywhere else. To this end, it sets strict requirements for transferring personal data outside the EU. However, checking these requirements is a daunting task for supervisory authorities, particularly in the mobile app domain due to the huge number of apps available and their dynamic nature. In this paper, we propose a fully automated method to assess the compliance of mobile apps with the GDPR requirements for cross-border personal data transfers. We have applied the method to the 10,080 top free apps from the Google Play Store. The results reveal that there is still a very significant gap between what app providers and third-party recipients do in practice and what is intended by the GDPR: a substantial 56% of analysed apps are potentially non-compliant with the GDPR cross-border transfer requirements. The distributed nature of today's digital systems and services not only facilitates the collection of personal data from individuals anywhere, but also their transfer to different countries around the world [1]. This raises potential risks to the privacy of individuals, as the organizations sending and receiving personal data can be subject to different data protection laws and, therefore, may not offer an equivalent level of protection. Data collected by apps may be sent across the world [1] or shared between chains of third-party service providers [6], even without the app developer's knowledge [7]. Second, apps are distributed through global stores, enabling app providers to easily reach markets and users beyond their country of residence. In this context, there is a need for constant vigilance by the various stakeholders, including app developers and supervisory authorities.
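As a rough illustration of the kind of automated check the paper proposes (the actual method is far more involved), one core step is matching the destination country of each observed data flow against the EU/EEA and the countries holding an EU adequacy decision. A minimal sketch, with deliberately partial country lists and made-up input data; the function name and input format are hypothetical:

```python
# Hypothetical sketch: flag an app's outbound personal-data flows whose
# destination country is outside the EU/EEA and lacks an EU adequacy
# decision. Both country sets below are illustrative and incomplete.
EEA = {"DE", "FR", "ES", "IT", "NL", "IE", "NO", "IS", "LI"}
ADEQUACY = {"CH", "JP", "CA", "NZ", "KR", "GB", "IL", "UY", "AR"}

def flag_transfers(flows):
    """flows: list of (recipient_domain, country_code) pairs observed in
    the app's network traffic. Returns the destinations that would need
    further scrutiny under the GDPR cross-border transfer rules."""
    return [(domain, cc) for domain, cc in flows
            if cc not in EEA and cc not in ADEQUACY]

# Made-up example traffic for one app:
observed = [("ads.example.com", "US"),
            ("cdn.example.eu", "DE"),
            ("metrics.example.io", "SG")]
print(flag_transfers(observed))
```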


Remember What You Want to Forget: Algorithms for Machine Unlearning

arXiv.org Artificial Intelligence

We study the problem of forgetting datapoints from a learnt model. In this setting, the learner first receives a dataset $S$ drawn i.i.d. from an unknown distribution, and outputs a predictor $w$ that performs well on unseen samples from that distribution. However, at some point in the future, any training sample $z \in S$ can request to be unlearned, thus prompting the learner to modify its output predictor while still ensuring the same accuracy guarantees. In our work, we initiate a rigorous study of machine unlearning in the population setting, where the goal is to maintain performance on the unseen test loss. For the setting of convex losses, we provide an unlearning algorithm that can delete up to $O(n/d^{1/4})$ samples, where $n$ is the dataset size and $d$ is the problem dimension. In comparison, differentially private learning (which implies unlearning) in general only guarantees deletion of $O(n/d^{1/2})$ samples. This shows that unlearning is at least polynomially more efficient than learning privately in terms of the dependence on $d$ in the deletion capacity.
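The dimension dependence above can be made concrete with a quick calculation. The constants hidden in the O(.) bounds are ignored, so the numbers below only illustrate the d^{1/4} versus d^{1/2} scaling, not actual deletion capacities; n and d are arbitrary example values.

```python
# Illustrative comparison of deletion-capacity scaling (constants ignored):
# unlearning allows ~ n / d^{1/4} deletions, DP learning ~ n / d^{1/2}.
n, d = 100_000, 10_000

unlearning_capacity = n / d**0.25   # ~ n / d^{1/4}
dp_capacity = n / d**0.5            # ~ n / d^{1/2}

# The gap between the two grows as d^{1/4} with the dimension.
print(unlearning_capacity, dp_capacity, unlearning_capacity / dp_capacity)
```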


Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web APIs under Deepfake Impersonation Attack

arXiv.org Artificial Intelligence

Recently, significant advancements have been made in face recognition technologies using Deep Neural Networks. As a result, companies such as Microsoft, Amazon, and Naver offer highly accurate commercial face recognition web services for diverse applications to meet end-user needs. Naturally, however, such technologies are threatened persistently, as virtually any individual can quickly implement impersonation attacks. In particular, these attacks can be a significant threat for authentication and identification services, which heavily rely on their underlying face recognition technologies' accuracy and robustness. Despite its gravity, the robustness of commercial web APIs against deepfake abuse has not yet been thoroughly investigated. This work provides a measurement study on the robustness of black-box commercial face recognition APIs against Deepfake Impersonation (DI) attacks, using celebrity recognition APIs as an example case study. We use five deepfake datasets, two of which we created and plan to release. More specifically, we measure attack performance based on two scenarios (targeted and non-targeted) and further analyze the differing system behaviors using fidelity, confidence, and similarity metrics. Accordingly, we demonstrate how vulnerable face recognition technologies from popular companies are to DI attacks, achieving maximum success rates of 78.0% and 99.9% for targeted (i.e., precise match) and non-targeted (i.e., match with any celebrity) attacks, respectively. Moreover, we propose practical defense strategies to mitigate DI attacks, reducing the attack success rates to as low as 0% and 0.02% for targeted and non-targeted attacks, respectively.
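The two success metrics can be sketched in a few lines; this is not the paper's evaluation code, and the probe format, function name, and mock responses are all made up for illustration.

```python
# Hypothetical sketch of the two attack-success metrics described above.
# A "response" is the celebrity name a recognition API returns for one
# deepfake probe (None if no celebrity is matched); all data is mocked.

def success_rates(probes):
    """probes: list of (target_name, api_response) pairs."""
    targeted = sum(r == t for t, r in probes)             # exact match
    non_targeted = sum(r is not None for _, r in probes)  # any celebrity
    n = len(probes)
    return targeted / n, non_targeted / n

mock = [("Alice A", "Alice A"),   # targeted hit
        ("Bob B", "Carol C"),     # wrong celebrity: non-targeted hit only
        ("Dana D", None)]         # no match at all
print(success_rates(mock))
```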


How Online Privacy Issues Will Shape Future Use Of Artificial Intelligence In Advertising

#artificialintelligence

Privacy restrictions are pushing many marketers toward the use of artificial intelligence in order to deliver more targeted messages. The trend toward a greater focus on privacy issues has been going on for some time and is starting to come to a head. Growing restrictions on the sharing and merging of data on individuals have led advertisers to look for effective ways to target and reach consumers, including behavioral targeting supplemented by artificial intelligence (AI). At a time when privacy regulations are fragmented, sometimes confusing, and still changing, it is critically important for marketers to monitor the regulatory environment. Against this backdrop, I interviewed Sheri Bachstein, IBM's Global Head of Watson Advertising, to get her insights and predictions on the future of privacy regulation and how it will affect advertisers, particularly as regards the use of AI, and came away with three major takeaways: The European Union's General Data Protection Regulation and the California Consumer Privacy Act are already leading to the devaluation of traditional third-party cookies and changing the way many advertisers do business.



The SolarWinds Body Count Now Includes NASA and the FAA

WIRED

Some blasts from the past surfaced this week, including revelations that a Russia-linked hacking group has repeatedly targeted the US electrical grid, along with oil and gas utilities and other industrial firms. Notably, the group has ties to the notorious industrial-control GRU hacking group Sandworm. Meanwhile, researchers revealed evidence this week that an elite NSA hacking tool for Microsoft Windows, known as EpMe, fell into the hands of Chinese hackers in 2014, years before that same tool then leaked in the notorious Shadow Brokers dump of NSA tools. WIRED got an inside look at how the video game hacker Empress has become so powerful and skilled at cracking the digital rights management software that lets video game makers, ebook publishers, and others control the content you buy from them. And the increasingly popular, but still invite-only, audio-based social media platform Clubhouse continues to struggle with security and privacy missteps. If you want something relaxing to take your mind off all of this complicated and concerning news, though, check out the new generation of Opte, an art piece that depicts the evolution and growth of the internet from 1997 to today.


The Morning After: 'Cyberpunk 2077' runs into another delay

Engadget

Cyberpunk 2077's woes have continued long after the game launched, with all the issues that entailed. CD Projekt Red announced yesterday that we'll have to wait until the second half of March for the next big patch. The developer cited the recent ransomware hack as the major culprit -- it initially planned to launch the 1.2 patch in February. As you're probably aware, February ends this week. The news is especially frustrating for PS5 owners, as the game hasn't returned to the PlayStation Store since it was pulled.


Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS

arXiv.org Artificial Intelligence

Cyber Physical Systems (CPS) are characterized by their ability to integrate the physical and information (cyber) worlds. Their deployment in critical infrastructure has demonstrated a potential to transform the world. However, harnessing this potential is limited by their critical nature and the far-reaching effects of cyber attacks on humans, infrastructure, and the environment. Cyber concerns in CPS arise from the process of sending information from sensors to actuators over the wireless communication medium, which widens the attack surface. Traditionally, CPS security has been investigated from the perspective of preventing intruders from gaining access to the system using cryptography and other access control techniques, and most research has therefore focused on the detection of attacks in CPS. However, in a world of increasing adversaries, it is becoming more difficult to totally prevent adversarial attacks on CPS, hence the need to focus on making CPS resilient. Resilient CPS are designed to withstand disruptions and remain functional despite the operation of adversaries. One of the dominant methodologies explored for building resilient CPS depends on machine learning (ML) algorithms. However, drawing on recent research in adversarial ML, we posit that ML algorithms for securing CPS must themselves be resilient. This paper therefore comprehensively surveys the interactions between resilient CPS using ML and resilient ML when applied in CPS. The paper concludes with a number of research trends and promising future research directions. With this paper, readers can gain a thorough understanding of recent advances in ML-based security and securing ML for CPS, countermeasures, and research trends in this active research area.