Author Feedback

Neural Information Processing Systems

We thank the reviewers for their helpful reviews; please see the details below. The figure below shows the results of Reg. We also provide preliminary results for experiments on the DARTS search space in the figure below (top right).


Multiclass versus Binary Differentially Private PAC Learning. Marco Gaboardi, Department of Computer Science, Boston University

Neural Information Processing Systems

We show a generic reduction from multiclass differentially private PAC learning to binary private PAC learning. We apply this transformation to a recently proposed binary private PAC learner to obtain a private multiclass learner with sample complexity that has a polynomial dependence on the multiclass Littlestone dimension and a poly-logarithmic dependence on the number of classes. This yields a doubly exponential improvement in the dependence on both parameters over learners from previous work. Our proof extends the notion of Ψ-dimension defined in work of Ben-David et al. [5] to the online setting and explores its general properties.
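The abstract does not spell out the reduction itself. As a minimal sketch only, the snippet below illustrates one standard way to get poly-logarithmic dependence on the number of classes: encode each class index with ceil(log2 k) bits and train one binary learner per bit. The names multiclass_from_binary and binary_learn are invented for illustration, and this is not claimed to be the paper's construction (privacy accounting across the bit learners is also omitted).

```python
import math
from typing import Callable, List, Sequence, Tuple

# A binary learner takes labeled data with {0, 1} labels and returns a predictor.
BinaryLearner = Callable[[Sequence[Tuple[object, int]]], Callable[[object], int]]

def multiclass_from_binary(binary_learn: BinaryLearner,
                           data: Sequence[Tuple[object, int]],
                           num_classes: int) -> Callable[[object], int]:
    """Reduce k-class learning to ceil(log2 k) binary problems (illustrative only)."""
    num_bits = max(1, math.ceil(math.log2(num_classes)))
    bit_predictors: List[Callable[[object], int]] = []
    for b in range(num_bits):
        # Relabel every example by the b-th bit of its class index.
        bit_data = [(x, (y >> b) & 1) for x, y in data]
        bit_predictors.append(binary_learn(bit_data))

    def predict(x: object) -> int:
        # Reassemble the predicted class index from the predicted bits.
        y = sum(h(x) << b for b, h in enumerate(bit_predictors))
        return min(y, num_classes - 1)  # clamp bit patterns that encode no class
    return predict
```

If each bit learner is differentially private, composing the roughly log2 k of them keeps the privacy and sample cost growing only poly-logarithmically in the number of classes, which is consistent with (though not identical to) the dependence the abstract describes.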


EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views

Neural Information Processing Systems

Understanding egocentric human-object interaction (HOI) is a fundamental aspect of human-centric perception, facilitating applications like AR/VR and embodied AI. For egocentric HOI, in addition to perceiving semantics, e.g., "what" interaction is occurring, capturing "where" the interaction specifically manifests in 3D space is also crucial, as it links perception and operation. Existing methods primarily leverage observations of HOI to capture interaction regions from an exocentric view. However, incomplete observations of the interacting parties in the egocentric view introduce ambiguity between visual observations and interaction contents, impairing their efficacy. From the egocentric view, humans integrate the visual cortex, cerebellum, and brain to internalize their intentions and interaction concepts of objects, allowing them to pre-formulate interactions and act even when interaction regions are out of sight.


Hilbert Distillation for Cross-Dimensionality Networks. Dian Qin, Haishuai Wang

Neural Information Processing Systems

Leveraging 3D networks yields competitive performance, but at huge computational costs far beyond those of 2D networks. In this paper, we propose a novel Hilbert curve-based cross-dimensionality distillation approach that leverages the knowledge of 3D networks to improve the performance of 2D networks. The proposed Hilbert Distillation (HD) method preserves structural information via the Hilbert curve, which maps high-dimensional (≥2) representations to one-dimensional continuous space-filling curves. Since the distilled 2D networks are supervised by curves converted from dimensionally heterogeneous 3D features, the 2D networks gain an informative view for learning the structural information embedded in well-trained high-dimensional representations. We further propose a Variable-length Hilbert Distillation (VHD) method that dynamically shortens the walking stride of the Hilbert curve in activation feature areas and lengthens it in context feature areas, forcing the 2D networks to pay more attention to learning from activation features. The proposed algorithm outperforms current state-of-the-art distillation techniques adapted to cross-dimensionality distillation on two classification tasks. Moreover, the 2D networks distilled by the proposed method achieve performance competitive with the original 3D networks, indicating that lightweight distilled 2D networks could potentially substitute for cumbersome 3D networks in real-world scenarios.
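To make the core idea concrete, the sketch below is a minimal illustration, not the paper's implementation: it flattens 2D teacher and student feature maps along a Hilbert curve and matches the resulting sequences with a mean-squared error. The paper also handles 3D features and adds VHD's variable stride, both omitted here; the names hilbert_d2xy, hilbert_flatten, and hilbert_distillation_loss are invented for illustration.

```python
import numpy as np

def hilbert_d2xy(n: int, d: int):
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_flatten(feat: np.ndarray) -> np.ndarray:
    """Flatten an (H, W) feature map into a 1D sequence ordered along the Hilbert curve."""
    n = feat.shape[0]                    # assumes H == W and a power of two
    order = [hilbert_d2xy(n, d) for d in range(n * n)]
    return np.array([feat[y, x] for x, y in order])

def hilbert_distillation_loss(teacher_feat: np.ndarray, student_feat: np.ndarray) -> float:
    """MSE between teacher and student features after Hilbert-curve flattening."""
    return float(np.mean((hilbert_flatten(teacher_feat) - hilbert_flatten(student_feat)) ** 2))

# Example: a (depth-pooled) stand-in for a 3D teacher feature versus a 2D student feature.
teacher = np.random.rand(4, 8, 8).mean(axis=0)
student = np.random.rand(8, 8)
print(hilbert_distillation_loss(teacher, student))
```

The point of the curve ordering is that spatially adjacent activations stay adjacent in the 1D sequence, so the student is supervised on local structure rather than on an arbitrary raster flattening.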




Supplementary Material: Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses

Neural Information Processing Systems

In this section, we present details on the improved local properties achieved by the proposed single-step defense, GAT (Guided Adversarial Training), and examine the local properties of networks trained with the proposed methodology. Given that we want to obtain the strongest adversary achievable within a single backward pass of the loss, we find x as given in Alg. 1, L6 to L9. Imposing the proposed regularizer therefore encourages the optimization procedure to produce a network that is locally Lipschitz continuous with a smaller local Lipschitz constant. The value of λ can be chosen so as to achieve the desired trade-off between clean accuracy and robustness [16]. We run extensive evaluations on the MNIST [10], CIFAR-10 [9], and ImageNet [5] datasets to validate our claims on the proposed attack and defense. MNIST [10] is a handwritten digit recognition dataset consisting of 60,000 training images and 10,000 test images. The images are grayscale and of dimension 28 × 28. We split the training set into a random subset of 50,000 training images and 10,000 validation images.
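For context, a generic single-step l-infinity attack takes one signed-gradient step of size eps from the input; GAT's guided step and exact objective are specified in Alg. 1 of the main paper and are not reproduced here. The PyTorch sketch below only illustrates a single-step attack plus a λ-weighted consistency term between clean and perturbed softmax outputs; the names single_step_attack and gat_style_loss are hypothetical and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def single_step_attack(model, x, y, eps):
    """One signed-gradient step within an l_inf ball of radius eps (generic FGSM-style step)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv.detach() + eps * grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0)      # keep pixels in a valid [0, 1] range

def gat_style_loss(model, x, y, eps, lam):
    """Adversarial cross-entropy plus a lam-weighted clean/adversarial consistency regularizer."""
    x_adv = single_step_attack(model, x, y, eps)
    logits_clean, logits_adv = model(x), model(x_adv)
    ce = F.cross_entropy(logits_adv, y)
    consistency = ((F.softmax(logits_adv, dim=1)
                    - F.softmax(logits_clean, dim=1)) ** 2).sum(dim=1).mean()
    return ce + lam * consistency
```

In this illustrative form, larger lam pulls the clean and perturbed output distributions together (a smoother, more locally Lipschitz network), at some cost in clean accuracy, which is the λ trade-off discussed above.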