adversary


Anyone Can Download An Autonomous 'Research Robot' From The Air Force Research Laboratory

#artificialintelligence

Dr. Benji Maruyama is the Air Force Research Laboratory team lead for Autonomous Materials and the Autonomous Research System, also known as ARES. ARES OS, an open-source software program, is now available online as a free download. In the fight to prevail over America's adversaries by out-innovating them - a fight which has all the hallmarks of a Cold War despite President Biden's assertions to the contrary - increasing the speed at which physical lab experiments can be done and iterated is vital. Air Force Research Laboratory (AFRL) scientist Dr. Benji Maruyama reminds his peers and the public that "Research is a painfully slow process. Being in a lab and doing experiments takes lots of time."


Synthetic Data Does Not Reliably Protect Privacy, Researchers Claim

#artificialintelligence

A new research collaboration between France and the UK casts doubt on growing industry confidence that synthetic data can resolve the privacy, quality and availability issues (among other issues) that threaten progress in the machine learning sector. Among several key points addressed, the authors assert that synthetic data modeled from real data retains enough of the genuine information to provide no reliable protection from inference and membership attacks, which seek to deanonymize data and re-associate it with actual people. Furthermore, the individuals most at risk from such attacks, including those with critical medical conditions or high hospital bills (in the case of medical record anonymization), are, through the 'outlier' nature of their condition, most likely to be re-identified by these techniques. 'Given access to a synthetic dataset, a strategic adversary can infer, with high confidence, the presence of a target record in the original data.' The paper also notes that differentially private synthetic data, which obscures the signature of individual records, does indeed protect individuals' privacy, but only by significantly crippling the usefulness of the information retrieval systems that use it.
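To make the attack surface concrete, the sketch below shows a toy "distance to closest record" membership heuristic against a leaky synthetic dataset. It is an illustrative stand-in, not the attack evaluated in the paper; the Gaussian data, the 5% calibration quantile, and the deliberately leaky "generator" are all assumptions made for the example.

```python
import numpy as np

def dcr_membership_score(synthetic: np.ndarray, record: np.ndarray) -> float:
    """Distance to the closest synthetic record; smaller values hint that the
    target record influenced the generator (toy heuristic only)."""
    return float(np.min(np.linalg.norm(synthetic - record, axis=1)))

def infer_membership(synthetic, target, reference_population, threshold_quantile=0.05):
    # Calibrate the decision threshold on records known NOT to be in the training data.
    ref_scores = [dcr_membership_score(synthetic, r) for r in reference_population]
    threshold = np.quantile(ref_scores, threshold_quantile)
    return dcr_membership_score(synthetic, target) < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 8))
    synthetic = train + rng.normal(scale=0.05, size=train.shape)  # leaky "generator"
    outsiders = rng.normal(size=(200, 8))
    print(infer_membership(synthetic, train[0], outsiders))           # likely True
    print(infer_membership(synthetic, outsiders[0], outsiders[1:]))   # likely False
```

Outlier records sit far from the rest of the data, so their synthetic echoes are easier to single out with exactly this kind of distance test, which is the intuition behind the paper's finding about at-risk individuals.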


In Artificial Intelligence, 'We Need To Be More Precise': Lt. Gen. O'Brien - Breaking Defense

#artificialintelligence

A soldier wears virtual reality glasses. Illustration created by NIWC Pacific. AFA: Beyond throwing around "artificial intelligence" as a buzzword during briefings, the Air Force needs to communicate more clearly within its own ranks and to industry about what it wants in AI capabilities, a top Air Force intelligence officer said. "I'm in the Pentagon, so I see a lot of PowerPoint presentations, and I see a lot of slides saying 'we're going to use some AI'" to solve a problem, Lt. Gen. Mary O'Brien said. "But we need to be more precise. Sometimes we say we want AI, but what we describe to industry is an automation tool, or a visualization tool, or [some technology] without training data."


Vulnerabilities May Slow Air Force's Adoption of Artificial Intelligence

#artificialintelligence

The Air Force needs to better prepare to defend AI programs and algorithms from adversaries that may seek to corrupt training data, the service's deputy chief of staff for intelligence, surveillance, reconnaissance and cyber effects said Wednesday. "There's an assumption that once we develop the AI, we have the algorithm, we have the training data, it's giving us whatever it is we want it to do, that there's no risk. There's no threat," said Lt. Gen. Mary F. O'Brien, the Air Force's deputy chief of staff for intelligence, surveillance, reconnaissance and cyber effects operations. That assumption could be costly to future operations. Speaking at the Air Force Association's Air, Space and Cyber conference, O'Brien said that while deployed AI is still in its infancy, the Air Force should prepare for the possibility of adversaries using the service's own tools against the United States.


La veille de la cybersécurité

#artificialintelligence

As researchers and engineers race to develop new artificial intelligence systems for the U.S. military, they must consider how the technology could lead to accidents with catastrophic consequences. In a startling, but fictitious, scenario, analysts at the Center for Security and Emerging Technology -- which is part of Georgetown University's Walsh School of Foreign Service -- lay out a potential doomsday storyline with phantom missile launches. In the scenario, U.S. Strategic Command relies on a new missile defense system's algorithms to detect attacks from adversaries. The system can quickly and autonomously trigger an interceptor to shoot down enemy missiles which might be armed with nuclear warheads. "One day, unusual atmospheric conditions over the Bering Strait create an unusual glare on the horizon," the report imagined.


Army researchers seek to provide more data to soldiers through two projects

#artificialintelligence

The U.S. Army Research Lab made breakthroughs this summer on two neural network projects that could assist commanders' decision-making on the battlefield and provide soldiers' health information through fibers in their uniforms. The advancements come as the U.S. military is preparing for data-driven battle, in which gobs of data are transmitted across the battlespace, processed and used in a commander's decision-making. Neural networks are a combination of algorithms that work together to recognize patterns in data through a process similar to that of the human brain. The first project aims to provide battle commanders with a tool that uses neural networks to quantify uncertainty in data analysis. Researchers associated with the Army Research Lab created a new framework for neural network processing that would use artificial intelligence to provide confidence ratings.
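The article does not describe the Army Research Lab framework itself. As one hedged illustration of how a neural network can emit confidence ratings alongside its predictions, the sketch below uses Monte Carlo dropout, a common uncertainty-quantification technique; the network size, dropout rate, and number of stochastic passes are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Tiny network with dropout so stochastic forward passes differ."""
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_confidence(model: nn.Module, x: torch.Tensor, passes: int = 30):
    """Monte Carlo dropout: average the softmax over stochastic forward passes
    and report the spread as a rough confidence rating."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(passes)])
    return probs.mean(dim=0), probs.std(dim=0)

if __name__ == "__main__":
    model = SmallClassifier(n_features=10, n_classes=3)
    x = torch.randn(4, 10)
    mean_probs, uncertainty = predict_with_confidence(model, x)
    print(mean_probs.argmax(dim=-1), uncertainty.max(dim=-1).values)
```

A high standard deviation across passes signals a prediction a commander should treat with caution, which is the kind of quantified-uncertainty output the article describes at a high level.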


Aegis: A Trusted, Automatic and Accurate Verification Framework for Vertical Federated Learning

arXiv.org Artificial Intelligence

Vertical federated learning (VFL) leverages various privacy-preserving algorithms, e.g., homomorphic encryption or secret-sharing-based SecureBoost, to ensure data privacy. However, these algorithms all assume a semi-honest security model, which raises concerns in real-world applications. In this paper, we present Aegis, a trusted, automatic, and accurate verification framework to verify the security of VFL jobs. Aegis is separated from local parties to ensure the security of the framework. Furthermore, it automatically adapts to evolving VFL algorithms by defining the VFL job as a finite state machine to uniformly verify different algorithms and reproduce the entire job to provide more accurate verification. We implement and evaluate Aegis with different threat models on financial and medical datasets. Evaluation results show that: 1) Aegis can detect 95% of the threat models, and 2) it provides fine-grained verification results within 84% of the total VFL job time.
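The Aegis implementation is not reproduced in this abstract, so the sketch below only illustrates the general idea of modeling a job as a finite state machine and checking an observed execution trace against it. The state names and allowed transitions are hypothetical, not the states Aegis actually defines for VFL algorithms.

```python
from enum import Enum, auto

class VFLState(Enum):
    INIT = auto()
    KEY_EXCHANGE = auto()
    ENCRYPTED_AGGREGATION = auto()
    GRADIENT_UPDATE = auto()
    FINISHED = auto()

# Hypothetical legal transitions for a training round; real VFL protocols differ.
ALLOWED = {
    VFLState.INIT: {VFLState.KEY_EXCHANGE},
    VFLState.KEY_EXCHANGE: {VFLState.ENCRYPTED_AGGREGATION},
    VFLState.ENCRYPTED_AGGREGATION: {VFLState.GRADIENT_UPDATE},
    VFLState.GRADIENT_UPDATE: {VFLState.ENCRYPTED_AGGREGATION, VFLState.FINISHED},
}

def verify_trace(trace):
    """Flag every observed transition the state machine does not allow."""
    violations = []
    for prev, nxt in zip(trace, trace[1:]):
        if nxt not in ALLOWED.get(prev, set()):
            violations.append((prev, nxt))
    return violations

trace = [VFLState.INIT, VFLState.KEY_EXCHANGE, VFLState.GRADIENT_UPDATE]
print(verify_trace(trace))  # the skipped aggregation step is reported as suspicious
```

Defining the job this way lets a verifier accommodate new algorithms by swapping in a new transition table rather than writing algorithm-specific checks, which is the adaptability the abstract emphasizes.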


Statistically Near-Optimal Hypothesis Selection

arXiv.org Artificial Intelligence

Hypothesis Selection is a fundamental distribution learning problem where, given a comparator class $Q=\{q_1,\ldots, q_n\}$ of distributions and sampling access to an unknown target distribution $p$, the goal is to output a distribution $q$ such that $\mathsf{TV}(p,q)$ is close to $opt$, where $opt = \min_i\{\mathsf{TV}(p,q_i)\}$ and $\mathsf{TV}(\cdot, \cdot)$ denotes the total-variation distance. Despite the fact that this problem has been studied since the 19th century, its complexity in terms of basic resources, such as the number of samples and approximation guarantees, remains unsettled (this is discussed, e.g., in the charming book by Devroye and Lugosi '00). This is in stark contrast with other (younger) learning settings, such as PAC learning, for which these complexities are well understood. We derive an optimal $2$-approximation learning strategy for the Hypothesis Selection problem, outputting $q$ such that $\mathsf{TV}(p,q) \le 2 \cdot opt + \epsilon$, with a (nearly) optimal sample complexity of $\tilde O(\log n/\epsilon^2)$. This is the first algorithm that simultaneously achieves the best approximation factor and sample complexity: previously, Bousquet, Kane, and Moran (COLT '19) gave a learner achieving the optimal $2$-approximation, but with an exponentially worse sample complexity of $\tilde O(\sqrt{n}/\epsilon^{2.5})$, and Yatracos (Annals of Statistics '85) gave a learner with optimal sample complexity of $O(\log n /\epsilon^2)$ but with a sub-optimal approximation factor of $3$.
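For context, the 3-approximation baseline attributed to Yatracos above is the classical minimum-distance (Scheffé-set) selection rule; it is stated below as background, not as the paper's new 2-approximation algorithm.

```latex
% Yatracos / minimum-distance selection (the classical 3-approximation baseline).
% Scheffe sets between every pair of candidates:
\[
  \mathcal{A} \;=\; \bigl\{\, A_{jk} = \{ x : q_j(x) > q_k(x) \} \;:\; j \neq k \,\bigr\}.
\]
% Given the empirical distribution \hat{p}_m built from m i.i.d. samples of p, select
\[
  \hat{q} \;=\; \operatorname*{arg\,min}_{q_i \in Q} \;
  \sup_{A \in \mathcal{A}} \bigl|\, q_i(A) - \hat{p}_m(A) \,\bigr|,
\]
% which satisfies \mathsf{TV}(p,\hat{q}) \le 3\, opt + O(\epsilon) once
% m = O(\log n / \epsilon^2), by a union bound over the |\mathcal{A}| \le n^2 Scheffe sets.
```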


Making machine learning trustworthy

Science

Machine learning (ML) has advanced dramatically during the past decade and continues to achieve impressive human-level performance on nontrivial tasks in image, speech, and text recognition. It is increasingly powering many high-stake application domains such as autonomous vehicles, self-mission-fulfilling drones, intrusion detection, medical image classification, and financial predictions (1). However, ML must make several advances before it can be deployed with confidence in domains where it directly affects humans at training and operation time, in which cases security, privacy, safety, and fairness are all essential considerations (1, 2).

The development of a trustworthy ML model must build in protections against several types of adversarial attacks (see the figure). An ML model requires training datasets, which can be "poisoned" through the insertion, modification, or removal of training samples with the purpose of influencing the decision boundary of a model to serve the adversary's intent (3). Poisoning happens when models learn from crowdsourced data or from inputs they receive while in operation, both of which are susceptible to tampering. Adversaries can also evade ML models through purposely crafted inputs called adversarial examples (4). For example, in an autonomous vehicle, a control model may rely on road-sign recognition for its navigation. By placing a tiny sticker on a stop sign, an adversary can cause the model to mistakenly recognize the stop sign as a yield sign or a "speed limit 45" sign, whereas a human driver would simply ignore the visually nonconsequential sticker and apply the brakes at the stop sign (see the figure).

Attacks can also abuse the input-output interaction of a model's prediction interface to steal the ML model itself (5, 6). By supplying a batch of inputs (for example, publicly available images of traffic signs) and obtaining predictions for each, the model serves as a labeling oracle that enables an adversary to train a surrogate model that is functionally equivalent to it. Such attacks pose greater risks for ML models that learn from high-stake data such as intellectual property and military or national security intelligence.

[Figure: Adversarial threats to machine learning. Machine learning models are vulnerable to attacks that degrade model confidentiality and model integrity or that reveal private information. Graphic: Kellie Holoski/Science]

When models are trained for predictive analytics on privacy-sensitive data, such as patient clinical data and bank customer transactions, privacy is of paramount importance. Privacy-motivated attacks can reveal sensitive information contained in training data through mere interaction with deployed models (7). The root cause for such attacks is that ML models tend to "memorize" ancillary parts of their training data and, at prediction time, inadvertently divulge identifying details about individuals who contributed to the training data. One common strategy, called membership inference, enables an adversary to exploit the differences in a model's response to members and nonmembers of a training dataset (7).

In response to these threats to ML models, the quest for countermeasures is promising. Research has made progress on detecting poisoning and adversarial inputs, and on limiting what an adversary can learn by merely interacting with a model, which bounds the extent of model stealing or membership inference attacks (1, 8).
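To illustrate the "labeling oracle" extraction path described above, here is a minimal sketch: an attacker queries a black-box victim model for labels on public inputs and trains a surrogate on the resulting (input, prediction) pairs. The victim model, its training data, and the query distribution are synthetic stand-ins, not the attacks cited in the article (5, 6).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in "victim" model that the attacker can only query for predictions.
X_private = rng.normal(size=(2000, 16))
y_private = (X_private[:, :4].sum(axis=1) > 0).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_private, y_private)

# Attacker: submit public, unlabeled inputs and harvest the victim's labels.
X_queries = rng.normal(size=(2000, 16))
y_stolen = victim.predict(X_queries)

# Train a surrogate on the (input, prediction) pairs alone.
surrogate = LogisticRegression(max_iter=1000).fit(X_queries, y_stolen)

# Measure functional agreement between surrogate and victim on fresh inputs.
X_test = rng.normal(size=(1000, 16))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate agrees with victim on {agreement:.1%} of fresh queries")
```

High agreement on fresh queries is what "functionally equivalent" means in practice: the attacker never sees the private training data, only the prediction interface.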
One promising example is the formally rigorous formulation of privacy. The notion of differential privacy promises an individual who participates in a dataset that, whether or not their record belongs to a model's training dataset, what an adversary learns about them by interacting with the model is essentially the same (9).

Beyond technical remedies, the lessons learned from the ML attack-defense arms race provide opportunities to motivate broader efforts to make ML truly trustworthy in terms of societal needs. Issues include how a model "thinks" when it makes decisions (transparency) and the fairness of an ML model when it is trained to solve high-stake inference tasks for which human decisions would themselves be prone to bias. Making meaningful progress toward trustworthy ML requires an understanding of the connections, and at times tensions, between the traditional security and privacy requirements and the broader issues of transparency, fairness, and ethics when ML is used to address human needs. Several worrisome instances of bias in consequential ML applications have been documented (10, 11), such as race and gender misidentification, wrongfully scoring darker-skinned faces as having a higher likelihood of being a criminal, disproportionately favoring male applicants in resume screenings, and disfavoring black patients in medical trials. These harmful consequences require that the developers of ML models look beyond technical solutions to win trust among the human subjects affected by them.

On the research front, especially for the security and privacy of ML, the aforementioned defensive countermeasures have solidified the understanding of the blind spots of ML models in adversarial settings (8, 9, 12, 13). On the fairness and ethics front, there is more than enough evidence of the pitfalls of ML, especially for underrepresented subjects in training datasets. Thus, there is still more to be done by way of human-centered and inclusive formulations of what it means for ML to be fair and ethical. One misconception about the root cause of bias in ML is attributing bias to data and data alone. Data collection, sampling, and annotation play a critical role in causing historical bias, but there are multiple junctures in the data-processing pipeline where bias can manifest: from data sampling to feature extraction, and from aggregation during training to evaluation methodologies and metrics during testing.

At present, there is a lack of broadly accepted definitions and formulations of adversarial robustness (13) and privacy-preserving ML (except for differential privacy, which is formally appealing yet not widely deployed). Lack of transferability of notions of attacks, defenses, and metrics from one domain to another is also a pressing issue that impedes progress toward trustworthy ML. For example, most of the ML evasion and membership inference attacks illustrated earlier predominantly target applications such as image classification (road-sign detection by an autonomous vehicle), object detection (identifying a flower in a living-room photo with multiple objects), speech processing (voice assistants), and natural language processing (machine translation).
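The differential-privacy promise described at the start of this passage (9) is usually formalized as follows; this is the standard textbook definition, added here only for reference.

```latex
% (\epsilon)-differential privacy: for any two datasets D and D' that differ in a
% single individual's record, and for every set S of possible outputs of the
% randomized training mechanism M,
\[
  \Pr\bigl[\, M(D) \in S \,\bigr] \;\le\; e^{\epsilon} \cdot \Pr\bigl[\, M(D') \in S \,\bigr].
\]
% A small \epsilon means that what an adversary can infer about any one individual
% is nearly the same whether or not that individual's record was used in training.
```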
The threats and countermeasures proposed in the vision, speech, and text domains hardly translate to one another, let alone to other, often naturally adversarial, domains such as network intrusion detection and financial-fraud detection. Another important consideration is the inherent tension between some trustworthiness properties. For example, transparency and privacy are often conflicting: if a model is trained on privacy-sensitive data, aiming for the highest level of transparency in production would inevitably lead to leakage of privacy-sensitive details of data subjects (14). Thus, choices need to be made about the extent to which transparency is penalized to gain privacy, and vice versa, and such choices need to be made clear to system purchasers and users. Generally, privacy concerns prevail because of the legal implications if they are not enforced (for example, patient privacy with respect to the Health Insurance Portability and Accountability Act in the United States). Privacy and fairness may also be at odds. For example, although privacy-preserving ML (such as differential privacy) provides a bounded guarantee on the indistinguishability of individual training examples, research shows that, in terms of utility, minority groups in the training data (for example, based on race, gender, or sexuality) tend to be negatively affected by the model outputs (15).

Broadly speaking, the scientific community needs to step back and align the robustness, privacy, transparency, fairness, and ethical norms in ML with human norms. To do this, clearer norms for robustness and fairness need to be developed and accepted. In research efforts, limited formulations of adversarial robustness, fairness, and transparency must be replaced with broadly applicable formulations like what differential privacy offers. In policy formulation, there need to be concrete steps toward regulatory frameworks that spell out actionable accountability measures on bias and ethical norms for datasets (including diversity guidelines), training methodologies (such as bias-aware training), and decisions on inputs (such as augmenting model decisions with explanations). The hope is that these regulatory frameworks will eventually evolve into ML governance modalities backed by legislation, leading to accountable ML systems in the future.

Most critically, there is a dire need for insights from diverse scientific communities to consider the societal norms of what makes a user confident about using ML for high-stake decisions, such as a passenger in a self-driving car, a bank customer accepting investment recommendations from a bot, or a patient trusting an online diagnostic interface. Policies need to be developed that govern the safe and fair adoption of ML in such high-stake applications. Equally important, the fundamental tensions between adversarial robustness and model accuracy, privacy and transparency, and fairness and privacy invite more rigorous and socially grounded reasoning about trustworthy ML. Fortunately, at this juncture in the adoption of ML, a consequential window of opportunity remains open to tackle its blind spots before ML is pervasively deployed and becomes unmanageable.

References

1. I. Goodfellow, P. McDaniel, N. Papernot, Commun. ACM 61, 56 (2018).
2. S. G. Finlayson et al., Science 363, 1287 (2019).
3. B. Biggio, B. Nelson, P. Laskov, in Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, J. Langford, J. Pineau, Eds. (Omnipress, 2012), pp. 1807-1814.
4. K. Eykholt et al., in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 1625-1634.
5. F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, T. Ristenpart, in Proceedings of the 25th USENIX Security Symposium, Austin, TX (USENIX Association, 2016), pp. 601-618.
6. A. Ali, B. Eshete, in Proceedings of the 16th EAI International Conference on Security and Privacy in Communication Networks, Washington, DC (EAI, 2020), pp. 318-338.
7. R. Shokri, M. Stronati, C. Song, V. Shmatikov, in Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, CA (IEEE, 2017), pp. 3-18.
8. N. Papernot, M. Abadi, U. Erlingsson, I. Goodfellow, K. Talwar, arXiv:1610.05755 [stat.ML] (2017).
9. I. Jarin, B. Eshete, in Proceedings of the 7th ACM International Workshop on Security and Privacy Analytics (ACM, 2021), pp. 25-35.
10. J. Buolamwini, T. Gebru, in Proceedings of the Conference on Fairness, Accountability and Transparency, New York, NY (ACM, 2018), pp. 77-91.
11. A. Birhane, V. U. Prabhu, in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (IEEE, 2021), pp. 1537-1547.
12. N. Carlini et al., arXiv:1902.06705 [cs.LG] (2019).
13. N. Papernot, P. McDaniel, A. Sinha, M. P. Wellman, in Proceedings of the 3rd IEEE European Symposium on Security and Privacy, London (IEEE, 2018), pp. 399-414.
14. R. Shokri, M. Strobel, Y. Zick, in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, New York, NY (ACM, 2021).
15. V. M. Suriyakumar, N. Papernot, A. Goldenberg, M. Ghassemi, in FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM, 2021), pp. 723-734.


Integrating Planning, Execution and Monitoring in the presence of Open World Novelties: Case Study of an Open World Monopoly Solver

arXiv.org Artificial Intelligence

The game of Monopoly is an adversarial multi-agent domain where there is no fixed goal other than to be the last player solvent. There are useful subgoals, like monopolizing sets of properties and developing them. There is also a lot of randomness from dice rolls, card draws, and adversaries' strategies. This unpredictability is made worse when unknown novelties are added during gameplay. Given these challenges, Monopoly was one of the test beds chosen for the DARPA SAIL-ON program, which aims to create agents that can detect and accommodate novelties. To handle the game's complexities, we developed an agent that eschews complete plans and adapts its policy online as the game evolves. In the most recent independent evaluation in the SAIL-ON program, our agent was the best-performing agent on most measures. We herein present our approach and results.
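The abstract only outlines the approach, so the toy loop below is a hedged sketch of the "monitor, detect novelty, adapt the policy online" idea rather than the authors' Monopoly agent; the ToyDiceGame environment, the novelty signal (an out-of-range die roll), and the update rule are all invented for illustration.

```python
import random

class ToyDiceGame:
    """Stand-in environment (not Monopoly): roll a die, occasionally inject a novelty."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.turn = 0

    def reset(self):
        self.turn = 0
        return {"phase": "roll", "legal_actions": ["buy", "pass"], "done": False}

    def step(self, action):
        self.turn += 1
        roll = self.rng.randint(1, 8)  # rolls of 7 or 8 never appear in the agent's model
        reward = 1.0 if (action == "buy" and roll >= 4) else -0.2
        state = {"phase": "roll", "legal_actions": ["buy", "pass"], "done": self.turn >= 20}
        return state, reward, roll

def plan_next_action(state, policy):
    # Greedy one-step choice under the current policy; no full-game plan is built.
    return max(state["legal_actions"],
               key=lambda a: policy.get((state["phase"], a), 0.0))

def play_episode(env, policy, expected_rolls, lr=0.1):
    state = env.reset()
    while not state["done"]:
        action = plan_next_action(state, policy)
        state, reward, roll = env.step(action)
        if roll not in expected_rolls:       # monitor execution: detect a novelty
            expected_rolls.add(roll)         # accommodate it: extend the world model
        key = (state["phase"], action)
        policy[key] = policy.get(key, 0.0) + lr * reward   # adapt the policy online
    return policy

policy = play_episode(ToyDiceGame(), {}, set(range(1, 7)))
print(policy)
```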