Goto

Collaborating Authors

 private communication




Neural Information Processing Systems

We thank all reviewers for the time they invested in reviewing this paper and sharing their insights. We have conducted experiments on real-world data but could not include them within the page limits. The algorithm will be published as implemented code (e.g., in Java, as stated in Line 304). The pseudocode is given below.


Differentiable Programming for Computational Plasma Physics

McGreivy, Nick

arXiv.org Artificial Intelligence

Differentiable programming allows for derivatives of functions implemented via computer code to be calculated automatically. These derivatives are calculated using automatic differentiation (AD). This thesis explores two applications of differentiable programming to computational plasma physics. First, we consider how differentiable programming can be used to simplify and improve stellarator optimization. We introduce a stellarator coil design code (FOCUSADD) that uses gradient-based optimization to produce stellarator coils with finite build. Because we use reverse mode AD, which can compute gradients of scalar functions with the same computational complexity as the function, FOCUSADD is simple, flexible, and efficient. We then discuss two additional applications of AD in stellarator optimization. Second, we explore how machine learning (ML) can be used to improve or replace the numerical methods used to solve partial differential equations (PDEs), focusing on time-dependent PDEs in fluid mechanics relevant to plasma physics. Differentiable programming allows neural networks and other techniques from ML to be embedded within numerical methods. This is a promising, but relatively new, research area. We focus on two basic questions. First, can we design ML-based PDE solvers that have the same guarantees of conservation, stability, and positivity that standard numerical methods do? The answer is yes; we introduce error-correcting algorithms that preserve invariants of time-dependent PDEs. Second, which types of ML-based solvers work best at solving PDEs? We perform a systematic review of the scientific literature on solving PDEs with ML. Unfortunately we discover two issues, weak baselines and reporting biases, that affect the interpretation and reproducibility of a significant majority of published research. We conclude that using ML to solve PDEs is not as promising as we initially believed.
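The abstract's central claim about reverse-mode AD — that the gradient of a scalar function costs roughly the same as one evaluation of the function, regardless of how many inputs it has — can be illustrated with a minimal pure-Python "tape". This is a sketch of the general technique, not FOCUSADD's implementation (which builds on an existing AD framework); all names below are invented for illustration.

```python
import math

class Var:
    """A scalar node in a computation graph, recording how to backpropagate."""
    def __init__(self, value, parents=()):
        self.value = value      # forward value
        self.parents = parents  # pairs of (parent_node, local_gradient)
        self.grad = 0.0         # accumulated adjoint, filled in by backward()

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def sin(x):
    return Var(math.sin(x.value), ((x, math.cos(x.value)),))

def backward(output):
    """One reverse sweep over the graph: cost proportional to the forward
    pass, independent of the number of inputs.  Nodes are processed in
    reverse topological order so each adjoint is complete before it is
    propagated to its parents."""
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad

# f(x, y) = x*y + sin(x), so df/dx = y + cos(x) and df/dy = x.
x, y = Var(2.0), Var(3.0)
f = x * y + sin(x)
backward(f)
```

A single call to `backward` fills in both `x.grad` and `y.grad`; forward-mode AD would instead need one pass per input, which is why reverse mode is the natural fit for optimizing a scalar objective over many coil parameters.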


Differential Privacy in Cooperative Multiagent Planning

Chen, Bo, Hawkins, Calvin, Karabag, Mustafa O., Neary, Cyrus, Hale, Matthew, Topcu, Ufuk

arXiv.org Artificial Intelligence

Privacy-aware multiagent systems must protect agents' sensitive data while simultaneously ensuring that agents accomplish their shared objectives. Towards this goal, we propose a framework to privatize inter-agent communications in cooperative multiagent decision-making problems. We study sequential decision-making problems formulated as cooperative Markov games with reach-avoid objectives. We apply a differential privacy mechanism to privatize agents' communicated symbolic state trajectories, and then we analyze tradeoffs between the strength of privacy and the team's performance. For a given level of privacy, this tradeoff is shown to depend critically upon the total correlation among agents' state-action processes. We synthesize policies that are robust to privacy by reducing the value of the total correlation. Numerical experiments demonstrate that the team's performance under these policies decreases by only 3 percent when comparing private versus non-private implementations of communication. By contrast, the team's performance decreases by roughly 86 percent when using baseline policies that ignore total correlation and only optimize team performance.
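The abstract does not specify which differential privacy mechanism is applied to the communicated symbolic state trajectories; a standard choice for integer-valued data is the two-sided geometric mechanism, the discrete analogue of Laplace noise. The sketch below privatizes an integer-encoded trajectory under that assumption — the function names, the per-step epsilon, and the truncation to the valid symbol range are all illustrative, not taken from the paper.

```python
import math
import random

def two_sided_geometric(epsilon, sensitivity=1):
    """Sample noise with P(k) proportional to alpha^|k|, where
    alpha = exp(-epsilon / sensitivity): the discrete analogue of the
    Laplace mechanism."""
    alpha = math.exp(-epsilon / sensitivity)
    def geometric():
        # Geometric variable on {0, 1, ...} with P(G >= k) = alpha^k,
        # sampled by inversion; 1 - random() lies in (0, 1], avoiding log(0).
        return int(math.log(1.0 - random.random()) / math.log(alpha))
    # The difference of two i.i.d. geometrics is two-sided geometric.
    return geometric() - geometric()

def privatize_trajectory(states, epsilon, num_symbols):
    """Add independent noise to each integer-encoded symbolic state, then
    truncate back into the valid symbol range [0, num_symbols).  The privacy
    loss composes additively over the length of the trajectory."""
    return [min(max(s + two_sided_geometric(epsilon), 0), num_symbols - 1)
            for s in states]

random.seed(0)
trajectory = [1, 2, 3, 3, 4]
noisy = privatize_trajectory(trajectory, epsilon=0.5, num_symbols=6)
```

Smaller epsilon means noisier reported states, which is the privacy/performance tradeoff the paper quantifies: policies that reduce total correlation remain useful even when teammates receive heavily noised trajectories.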


On Alan Turing and the Origins of Digital Computers

Randell, B.

AI Classics

This paper documents an investigation into the role that the late Alan Turing played in the development of electronic computers. Evidence is presented that during the war he was associated with a group that designed and built a series of special purpose electronic computers, which were in at least a limited sense 'program controlled', and that the origins of several post-war general purpose computer projects in Britain can be traced back to these wartime computers. INTRODUCTION During my amateur investigations into computer history, I grew intrigued by the lack of information concerning the role played by the late Alan Turing.


An Empirical Evaluation of Four Algorithms for Multi-Class Classification: Mart, ABC-Mart, Robust LogitBoost, and ABC-LogitBoost

Li, Ping

arXiv.org Artificial Intelligence

This empirical study is mainly devoted to comparing four tree-based boosting algorithms: mart, abc-mart, robust logitboost, and abc-logitboost, for multi-class classification on a variety of publicly available datasets. Some of those datasets have been thoroughly tested in prior studies using a broad range of classification algorithms including SVM, neural nets, and deep learning. In terms of the empirical classification errors, our experiment results demonstrate: 1. Abc-mart considerably improves mart. 2. Abc-logitboost considerably improves (robust) logitboost. 3. (Robust) logitboost considerably improves mart on most datasets. 4. Abc-logitboost considerably improves abc-mart on most datasets. 5. These four boosting algorithms (especially abc-logitboost) outperform SVM on many datasets. 6. Compared to the best deep learning methods, these four boosting algorithms (especially abc-logitboost) are competitive.
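All four boosters compared here are gradient-boosting methods that, at each iteration, fit one regression tree per class to derivatives of the multi-class logistic loss. A minimal sketch of those per-class pseudo-residuals is below; the scores and label are invented for illustration, and this is the shared loss machinery, not any one author's implementation (abc variants additionally single out a "base class" per iteration, which is omitted here).

```python
import math

def softmax(scores):
    """Class probabilities p_k from the K additive scores F_k(x)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # shift by max for stability
    total = sum(exps)
    return [e / total for e in exps]

def pseudo_residuals(scores, label, num_classes):
    """Negative gradient of the multi-class logistic loss with respect to
    each class score: r_k = 1{label == k} - p_k.  Mart- and logitboost-style
    methods fit a regression tree to these residuals for each class at
    every boosting iteration."""
    p = softmax(scores)
    return [(1.0 if k == label else 0.0) - p[k] for k in range(num_classes)]

# Example: 3 classes, current scores favor class 0, but the true label is 2,
# so the residual pushes class 2's score up and class 0's score down.
r = pseudo_residuals([2.0, 0.5, 0.5], label=2, num_classes=3)
```

The residuals always sum to zero across classes, reflecting the sum-to-zero constraint on the class scores that the abc ("adaptive base class") variants exploit explicitly.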


Principles of Risk Minimization for Learning Theory

Vapnik, V.

Neural Information Processing Systems

Learning is posed as a problem of function estimation, for which two principles of solution are considered: empirical risk minimization and structural risk minimization. These two principles are applied to two different statements of the function estimation problem: global and local. Systematic improvements in prediction power are illustrated in application to zip-code recognition.
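The two principles in the abstract can be sketched concretely: empirical risk minimization (ERM) picks the hypothesis with the lowest training error within a fixed class, while structural risk minimization (SRM) searches a nested sequence of classes and adds a capacity penalty. The hypothesis classes, data, and penalty below are invented for illustration — the actual SRM bound is stated in terms of VC dimension, for which the class size here is only a crude stand-in.

```python
import math

def empirical_risk(h, data):
    """Average 0/1 loss of hypothesis h on the sample."""
    return sum(h(x) != y for x, y in data) / len(data)

def make_class(granularity):
    """Element S_g of a nested structure S_1 ⊂ S_2 ⊂ ...: threshold
    classifiers on [0, 1] with progressively finer thresholds."""
    thresholds = [i / granularity for i in range(granularity + 1)]
    return [lambda x, t=t: int(x >= t) for t in thresholds]

def erm(hypotheses, data):
    """Empirical risk minimization within one class: training error only."""
    return min(hypotheses, key=lambda h: empirical_risk(h, data))

def srm(data, max_granularity=8):
    """Structural risk minimization: over the nested classes, minimize
    empirical risk plus a penalty growing with class capacity."""
    best, best_bound = None, float("inf")
    for g in range(1, max_granularity + 1):
        hypotheses = make_class(g)
        h = erm(hypotheses, data)
        bound = (empirical_risk(h, data)
                 + math.sqrt(math.log(len(hypotheses)) / len(data)))
        if bound < best_bound:
            best, best_bound = h, bound
    return best

# Labels flip from 0 to 1 around x = 0.5; SRM settles on a coarse class
# that already separates the sample, rather than the richest one.
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.7, 1), (0.9, 1)]
h = srm(data)
```

With a separable sample like this one, every class of granularity at least 2 achieves zero empirical risk, so the penalty term alone decides: SRM returns the hypothesis from the smallest such class, which is the structural analogue of preferring the simplest consistent explanation.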

