Collaborating Authors

gard


Goodbye cartoon breasts, hello sweat stains: the feminist reinvention of Tomb Raider

The Guardian

Hot on the heels of that Oasis reunion comes news of the return of another 90s icon – Lara Croft. She bounds back on to our screens with a new animated series, still sporting that holy triumvirate of classic ponytail, backpack and combat boots. From the get-go she's performing seemingly impossible feats in the name of archaeology: she outswims a ravenous crocodile, and uses her signature blend of parkour and gymnastics to avoid a pit of sharp spikes. But this isn't the Tomb Raider star quite as you might remember her. The eponymous star of Netflix's Tomb Raider: The Legend of Lara Croft – voiced by Agent Carter's Hayley Atwell – looks different to how she appeared in the original games.


DARPA Launches Program to Build AI Resiliency Against Adversaries

#artificialintelligence

The Department of Defense's (DoD) Defense Advanced Research Projects Agency (DARPA) announced the launch of its Guaranteeing AI Robustness Against Deception (GARD) program, which is designed to develop new defenses against adversarial attacks on machine learning (ML) models. The program aims to respond to adversarial AI by developing a testbed to characterize different ML defenses and assess their applicability. Researchers on the program have created resources and virtual tools for the community to be able to test and verify the effectiveness of existing and emerging ML defense models. "Other technical communities – like cryptography – have embraced transparency and found that if you are open to letting people take a run at things, the technology will improve," GARD program manager Bruce Draper said in the announcement. "With GARD, we are taking a page from cryptography and are striving to create a community to facilitate the open exchange of ideas, tools, and technologies that can help researchers test and evaluate their ML defenses. Our goal is to raise the bar on existing evaluation efforts, bringing more sophistication and maturation to the field."
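In essence, the testbed is a shared evaluation harness: run a candidate defense against a battery of attacks and report how accuracy holds up under each. Below is a minimal Python sketch of that evaluation loop; the `predict` and attack callables are hypothetical stand-ins for illustration, not part of GARD's actual tooling.

```python
import numpy as np

def evaluate_defense(predict, attacks, x, y):
    """Score a defended model on clean data and under each attack.

    predict : callable mapping a batch of inputs to predicted labels
    attacks : dict of name -> callable(x, y) returning perturbed inputs
    x, y    : held-out evaluation inputs and ground-truth labels
    """
    report = {"clean": float(np.mean(predict(x) == y))}
    for name, attack in attacks.items():
        x_adv = attack(x, y)                                 # craft adversarial inputs
        report[name] = float(np.mean(predict(x_adv) == y))   # robust accuracy
    return report
```

A defense designed around one specific attack would show up in such a report as a single strong entry surrounded by weak ones, which is exactly the failure mode the program wants evaluations to expose.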


Intel & DARPA's Project On Making Object Detection Resilient Against Attacks

#artificialintelligence

Intel, along with the Georgia Institute of Technology (Georgia Tech), recently obtained a multimillion-dollar deal from the Defense Advanced Research Projects Agency (DARPA) in the US. As per the four-year contract, both will work on the 'Guaranteeing Artificial Intelligence (AI) Robustness against Deception' – or GARD – program for DARPA. According to Intel, it is the main contractor in the multimillion-dollar joint deal, which is targeted at improving cybersecurity defences against deception and spoofing attacks on machine learning (ML) systems. Spoofing attacks can alter and imperil the interpretation of data by the ML algorithms used in an autonomous system. Military systems are particularly vulnerable to such attacks, which can put extremely sensitive information at risk and potentially compromise the systems themselves.


DARPA snags Intel to lead its machine learning security tech – TechCrunch

#artificialintelligence

Chip maker Intel has been chosen to lead a new initiative launched by the U.S. military's research wing, DARPA, aimed at improving cyber-defenses against deception attacks on machine learning models. Machine learning is a kind of artificial intelligence that allows systems to improve over time with new data and experiences. One of its most common use cases today is object recognition, such as taking a photo and describing what's in it. That can help those with impaired vision to know what's in a photo if they can't see it, for example, but it also can be used by other computers, such as autonomous vehicles, to identify what's on the road. But deception attacks, although rare, can meddle with machine learning algorithms.


Maritime autonomous surface ships on the horizon

#artificialintelligence

Gard's mission is: Together we enable sustainable maritime development. To deliver on this mission, we explore and support the development of emerging technologies including maritime autonomous surface ships. The Nordic countries are leading the way in this area and we are proud to be collaborating with Yara International (Yara) and their newly established company Yara Birkeland AS, which is developing the well-known Norwegian autonomous logistics project, YARA BIRKELAND. Construction of the zero-emission autonomous containership has already begun. When the ship enters service in early 2020, she will be operated by an onboard crew while the autonomous systems are being tested and certified safe.


Defending Against Adversarial Artificial Intelligence

#artificialintelligence

Today, machine learning (ML) is coming into its own, ready to serve mankind in a diverse array of applications – from highly efficient manufacturing, medicine and massive information analysis to self-driving transportation, and beyond. However, if misapplied, misused or subverted, ML holds the potential for great harm – this is the double-edged sword of machine learning. "Over the last decade, researchers have focused on realizing practical ML capable of accomplishing real-world tasks and making them more efficient," said Dr. Hava Siegelmann, program manager in DARPA's Information Innovation Office (I2O). "But, in a very real way, we've rushed ahead, paying little attention to vulnerabilities inherent in ML platforms – particularly in terms of altering, corrupting or deceiving these systems." In a commonly cited example, ML used by a self-driving car was tricked by visual alterations to a stop sign. While a human viewing the altered sign would have no difficulty interpreting its meaning, the ML erroneously interpreted the stop sign as a 45 mph speed limit posting. In a real-world attack like this, the self-driving car would accelerate through the stop sign, potentially causing a disastrous outcome. This is just one of many recently discovered attacks applicable to virtually any ML application. To get ahead of this acute safety challenge, DARPA created the Guaranteeing AI Robustness against Deception (GARD) program. GARD aims to develop a new generation of defenses against adversarial deception attacks on ML models. Current defense efforts were designed to protect against specific, pre-defined adversarial attacks, but remained vulnerable to attacks outside their design parameters when tested. GARD seeks to approach ML defense differently – by developing broad-based defenses that address the numerous possible attacks in a given scenario. "There is a critical need for ML defense as the technology is increasingly incorporated into some of our most critical infrastructure.
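The stop-sign failure arises because a small, deliberately chosen perturbation of the input can push it across a model's decision boundary. The fast gradient sign method (FGSM) is the textbook example of crafting such a perturbation; below is a minimal NumPy sketch for a linear softmax classifier, offered as an illustration of this class of attack rather than the exact method used against the sign.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps=0.1):
    """One-step FGSM attack on a linear softmax classifier.

    Nudges input x by eps in the direction that most increases the
    cross-entropy loss for the true label y: x_adv = x + eps * sign(dL/dx).
    """
    p = softmax(W @ x + b)   # predicted class probabilities
    p[y] -= 1.0              # dL/dlogits for cross-entropy loss
    grad_x = W.T @ p         # chain rule back to the input
    return x + eps * np.sign(grad_x)
```

Because every input component moves by at most eps, the perturbed input can look unchanged to a human while the model's prediction flips – the asymmetry the stop-sign example exploits.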


Noise Statistics Oblivious GARD For Robust Regression With Sparse Outliers

Kallummil, Sreejith, Kalyani, Sheetal

arXiv.org Machine Learning

Linear regression models contaminated by Gaussian noise (inliers) and possibly unbounded sparse outliers are common in many signal processing applications. Sparse recovery inspired robust regression (SRIRR) techniques are shown to deliver high quality estimation performance in such regression models. Unfortunately, most SRIRR techniques assume a priori knowledge of noise statistics like the inlier noise variance or outlier statistics like the number of outliers. Both inlier and outlier noise statistics are rarely known a priori, and this limits the efficient operation of many SRIRR algorithms. This article proposes a novel noise statistics oblivious algorithm called residual ratio thresholding GARD (RRT-GARD) for robust regression in the presence of sparse outliers. RRT-GARD is developed by modifying the recently proposed noise statistics dependent greedy algorithm for robust de-noising (GARD). Both finite sample and asymptotic analytical results indicate that RRT-GARD performs nearly as well as GARD with a priori knowledge of noise statistics. Numerical simulations on real and synthetic data sets also point to the highly competitive performance of RRT-GARD.
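To make the abstract concrete: GARD treats outlier identification as a greedy selection problem, flagging one observation per iteration and refitting. The NumPy sketch below shows this schematic form under the stated model of Gaussian inliers plus sparse outliers; the residual-norm stopping rule it uses is precisely the noise-statistics-dependent step that RRT-GARD replaces with a residual-ratio test. This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def gard(X, y, k_max, tol):
    """Schematic GARD: greedy robust regression with sparse outliers.

    Model: y = X b + inlier noise + sparse outliers. Each pass flags the
    observation with the largest residual as an outlier by appending its
    indicator column, then refits least squares on the augmented matrix.
    The tol-based stop is the a priori noise knowledge RRT-GARD removes.
    """
    n, p = X.shape
    support = []                                    # observations flagged as outliers
    while True:
        A = np.hstack([X, np.eye(n)[:, support]])   # augmented design [X, E_support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef                            # residual; zero on flagged rows
        if np.linalg.norm(r) <= tol or len(support) >= k_max:
            return coef[:p], support                # regression coefs, outlier set
        support.append(int(np.argmax(np.abs(r))))   # flag worst-fit observation
```

Roughly speaking, RRT-GARD instead monitors how the residual norm shrinks between successive iterations and stops when that ratio crosses a data-dependent threshold, so neither tol nor the outlier count needs to be known in advance.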


Tuning Free Orthogonal Matching Pursuit

Kallummil, Sreejith, Kalyani, Sheetal

arXiv.org Machine Learning

Orthogonal matching pursuit (OMP) is a widely used compressive sensing (CS) algorithm for recovering sparse signals in noisy linear regression models. The performance of OMP depends on its stopping criterion (SC). SCs for OMP discussed in the literature typically assume knowledge of either the sparsity of the signal to be estimated, $k_0$, or the noise variance $\sigma^2$, both of which are unavailable in many practical applications. In this article we develop a modified version of OMP called tuning free OMP (TF-OMP) which does not require an SC. TF-OMP is proved to accomplish successful sparse recovery under the usual assumptions on restricted isometry constants (RIC) and the mutual coherence of the design matrix. TF-OMP is numerically shown to deliver highly competitive performance in comparison with OMP having a priori knowledge of $k_0$ or $\sigma^2$. The greedy algorithm for robust de-noising (GARD) is an OMP-like algorithm proposed for efficient estimation in classical overdetermined linear regression models corrupted by sparse outliers. However, GARD requires knowledge of the inlier noise variance, which is difficult to estimate. We also produce a tuning free algorithm (TF-GARD) for efficient estimation in the presence of sparse outliers by extending the operating principle of TF-OMP to GARD. TF-GARD is numerically shown to achieve performance comparable to that of the existing implementation of GARD.
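For reference, plain OMP with the sparsity-based stopping criterion the abstract mentions looks roughly as follows in NumPy; knowing $k_0$ (or, alternatively, $\sigma^2$ for a residual-norm stop) is exactly the tuning requirement TF-OMP removes. The sketch is illustrative, not the paper's implementation.

```python
import numpy as np

def omp(Phi, y, k0):
    """Orthogonal matching pursuit, stopping after k0 atoms.

    Greedily picks the column of Phi most correlated with the current
    residual, then refits least squares on all selected columns so the
    residual stays orthogonal to the chosen atoms.
    """
    support, r = [], y.copy()
    for _ in range(k0):
        j = int(np.argmax(np.abs(Phi.T @ r)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef          # orthogonalized residual
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef                       # embed into the full sparse vector
    return x_hat, support
```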