Homomorphic Matrix Completion

Neural Information Processing Systems

In recommendation systems, global positioning, system identification, and mobile social networks, it is a fundamental routine that a server completes a low-rank matrix from an observed subset of its entries. However, sending data to a cloud server raises data privacy concerns due to eavesdropping attacks and the single-point-failure problem; e.g., the Netflix Prize contest was canceled after a privacy lawsuit. In this paper, we propose a homomorphic matrix completion algorithm for privacy preservation. First, we formulate a homomorphic matrix completion problem in which a server performs matrix completion on ciphertexts, and propose an encryption scheme that is fast and easy to implement. Second, we prove that the proposed scheme satisfies the homomorphism property: decrypting the matrix recovered from ciphertexts yields the target matrix (on plaintexts). Third, we prove that the proposed scheme satisfies an (ε, δ)-differential privacy property.
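The abstract does not spell out the encryption scheme, so the following is only a toy illustration of the homomorphism property, assuming a hypothetical permutation-based encryption (random row/column permutations P, Q) and a standard soft-impute-style completion solver; permutations preserve rank and the observed-entry pattern, so completing on ciphertexts and then decrypting matches completing on plaintexts:

```python
import numpy as np

def svt(X, tau):
    """Singular-value soft-thresholding, the core step of soft-impute."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M_obs, mask, tau=0.2, iters=300):
    """Low-rank completion: iterate SVT while keeping observed entries fixed."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        X = svt(mask * M_obs + (1.0 - mask) * X, tau)
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 8))   # rank-3 target
mask = (rng.random(M.shape) < 0.7).astype(float)                # observed subset

# Hypothetical "encryption": random row/column permutations P, Q (not the
# paper's actual scheme).
P = np.eye(8)[rng.permutation(8)]
Q = np.eye(8)[rng.permutation(8)]
cipher, cipher_mask = P @ (mask * M) @ Q, P @ mask @ Q

# Server completes on ciphertexts; client decrypts with the inverse permutations.
decrypted = P.T @ complete(cipher, cipher_mask) @ Q.T
plain = complete(mask * M, mask)
```

Because the SVT proximal operator is equivariant under orthogonal (here, permutation) transforms, `decrypted` coincides with `plain` up to floating-point error.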


Diversity Is Not All You Need: Training A Robust Cooperative Agent Needs Specialist Partners

Neural Information Processing Systems

Partner diversity is known to be crucial for training a robust generalist cooperative agent. In this paper, we show that partner specialization, in addition to diversity, is crucial for the robustness of a downstream generalist agent. We propose a principled method for quantifying both the diversity and specialization of a partner population based on the concept of mutual information.
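The abstract does not give the estimator, but as a rough illustration of a mutual-information-based measure, a plug-in estimate of I(Partner; Action) from an empirical joint count table can separate fully specialized partners (identity determines behavior) from undifferentiated ones; the table and its interpretation here are assumptions for illustration, not the paper's method:

```python
import numpy as np

def mutual_information(joint_counts):
    """Plug-in estimate of I(Partner; Action) in nats from a joint count
    table (rows = partner identity, columns = action)."""
    joint = joint_counts / joint_counts.sum()
    p_partner = joint.sum(axis=1, keepdims=True)
    p_action = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (p_partner * p_action)[nz])).sum())

# Fully specialized population: each partner deterministically picks its own action.
specialized = np.eye(4)
# Undifferentiated population: every partner acts uniformly at random.
undifferentiated = np.ones((4, 4))
```

The specialized table attains the maximum log(4) nats, while the undifferentiated one yields zero.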




Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features

Neural Information Processing Systems

Deep generative networks trained via maximum likelihood on a natural image dataset like CIFAR10 often assign high likelihoods to images from datasets with different objects (e.g., SVHN). We refine previous investigations of this failure of anomaly detection for invertible generative networks and provide a clear explanation of it as a combination of model bias and domain prior: convolutional networks learn similar low-level feature distributions when trained on any natural image dataset, and these low-level features dominate the likelihood. Hence, when the discriminative features between inliers and outliers are at a high level, e.g., object shapes, anomaly detection becomes particularly challenging. To remove the negative impact of model bias and domain prior on detecting high-level differences, we propose two methods. First, we use the log-likelihood ratios of two identical models, one trained on the in-distribution data (e.g., CIFAR10) and the other on a more general distribution of images (e.g., 80 Million Tiny Images). We also derive a novel outlier loss for the in-distribution network on samples from the more general distribution to further improve performance. Second, using a multi-scale model like Glow, we show that low-level features are mainly captured at early scales. Therefore, using only the likelihood contribution of the final scale performs remarkably well for detecting high-level feature differences between out-of-distribution and in-distribution data. This method is especially useful if one does not have access to a suitable general distribution. Overall, our methods achieve strong anomaly detection performance in the unsupervised setting, and only slightly underperform state-of-the-art classifier-based methods in the supervised setting.
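The first method can be sketched numerically. The log-likelihood values below are made up for illustration, decomposed into a shared low-level term plus a model-specific high-level term, to show how the log-likelihood ratio cancels the dominating low-level contribution that makes raw likelihoods fail:

```python
import numpy as np

# Toy log-likelihoods (nats), for illustration only: a low-level term shared
# by both models plus each model's high-level fit.
low_level    = np.array([2.0, 2.0, 5.0, 5.0])    # dominates the raw likelihood
high_in      = np.array([1.0, 0.8, -1.0, -1.2])  # in-distribution model, high level
high_general = np.array([0.0, 0.0, 0.5, 0.6])    # general model, high level
is_outlier   = np.array([False, False, True, True])

ll_in = low_level + high_in                  # raw likelihood: outliers score higher
ratio = ll_in - (low_level + high_general)   # log-ratio: shared low-level term cancels
```

With these numbers, the raw likelihood ranks every outlier above every inlier (the failure mode described above), while the log-ratio restores the correct ordering.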


f106b7f99d2cb30c3db1c3cc0fde9ccb-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their informative feedback, indicating improved results (All) and that our hypotheses are "intuitive" (see Section 6). Do partially joint models help? It remains interesting future work to try a joint network (see Discussion, p. 8). Correlation computation (R2): We compute the correlations of different models' likelihoods across all datasets; this shows that local low-level features, beyond being correlated with the likelihood, dominate it. Overclaiming w.r.t. MSP-OE (R5): We agree and will modify the wording, e.g., to "slightly underperform". More extensive related work (R1, R4, R5): Thank you; we will cite and compare with this work (such as BIVA) in the revision.



Appendix

Neural Information Processing Systems

Readers who are interested in SA-MDP can find an example of SA-MDP in Section A and complete proofs in Section B. Readers who are interested in adversarial attacks can find more details about our new attacks and existing attacks in Section D. In particular, we discuss how a robust critic can help in attacking RL and show experiments on the improvements gained by the robustness objective during attack. Readers who want more details on the optimization techniques used to solve our state-adversarial robust regularizers can refer to Section C, including more background on convex relaxations of neural networks in Section C.1. We provide the detailed algorithm and hyperparameters for SA-PPO in Section F, details for SA-DDPG in Section G, details for SA-DQN in Section H, and more empirical results in Section I. To demonstrate the convergence of our algorithm, we repeat each experiment at least 15 times and plot the convergence of rewards during multiple runs. We found that for some environments (like Humanoid) we can consistently improve baseline performance. We also evaluate some settings under multiple perturbation strengths ε. We first show a simple environment and solve it under different settings of MDP and SA-MDP.
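The state-adversarial robust regularizer mentioned above penalizes how much the policy can be changed by a bounded state perturbation. As a rough sketch only: the appendix solves this with convex relaxations, but a crude sampling-based estimate of max over the ε-ball of KL(π(·|s) ‖ π(·|s′)) conveys the quantity being regularized; the linear-softmax policy and all parameters here are hypothetical:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    """KL divergence between two discrete action distributions."""
    return float((p * np.log(p / q)).sum())

def sa_regularizer(W, s, eps, n_samples=64, rng=None):
    """Estimate max_{||s'-s||_inf <= eps} KL(pi(.|s) || pi(.|s')) by sampling
    perturbations -- a crude stand-in for the convex-relaxation bound."""
    rng = rng or np.random.default_rng(0)
    p = softmax(W @ s)               # policy at the clean state
    worst = 0.0
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=s.shape)
        worst = max(worst, kl(p, softmax(W @ (s + delta))))
    return worst

# Hypothetical linear-softmax policy over 3 actions in a 2-D state space.
W = np.array([[1.0, -0.5], [0.2, 0.8], [-1.0, 0.3]])
s = np.array([0.5, -0.2])
```

At ε = 0 the perturbed policy equals the clean one, so the regularizer vanishes; it is non-negative for any ε.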



f0eb6568ea114ba6e293f903c34d7488-AuthorFeedback.pdf

Neural Information Processing Systems

We will rephrase any claims that seem too strong, add additional references, and discuss more connections to previous work. Paper too long: We will reorganize our paper (see the general response). The red lines in the bars represent median rewards. We improve reward under attacks consistently across runs. We vary ε bounds in Figure 1.