Algorithm A. The co-design process

Neural Information Processing Systems

We thank all the reviewers for their constructive comments. We provide a simple pseudo-code sketch of the co-design process (Algorithm A); we will provide details in the final draft. The search evaluates a candidate's accuracy only when the architecture and schedule fit the device's memory (if can_fit_memory(arch, schedule): # eval acc.). On devices with limited SRAM and 2MB Flash, MCUNet consistently outperforms the baseline (Table A). Table A: MCUNet shows consistent improvement across different devices (F746, H743) and tasks (classification, detection). Figure A: MCUNet's co-design scheme outperforms single-design.
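The fragment above suggests a search loop that skips any candidate whose architecture and schedule do not fit in on-device memory. Below is a minimal, hypothetical Python sketch of such a co-design loop; the stubs can_fit_memory and evaluate_accuracy and the search-space representation are illustrative assumptions, not MCUNet's actual implementation.

    # Hypothetical sketch of a joint architecture/schedule search in the
    # spirit of Algorithm A; the stubs below are illustrative assumptions.
    def can_fit_memory(arch, schedule, sram_kb, flash_mb=2):
        # Stub: check peak activation memory (SRAM) and model size (Flash).
        return arch["peak_sram_kb"] <= sram_kb and arch["flash_mb"] <= flash_mb

    def evaluate_accuracy(arch, schedule):
        # Stub: in practice this trains and evaluates the candidate network.
        return arch["proxy_acc"]

    def co_design(arch_space, schedule_space, sram_kb):
        best_acc, best_pair = float("-inf"), None
        for arch in arch_space:
            for schedule in schedule_space:
                if can_fit_memory(arch, schedule, sram_kb):  # eval acc. only if it fits
                    acc = evaluate_accuracy(arch, schedule)
                    if acc > best_acc:
                        best_acc, best_pair = acc, (arch, schedule)
        return best_pair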


4afd521d77158e02aed37e2274b90c9c-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their insightful feedback! This is an important step towards a framework which targets computational resources to reduce uncertainty. The approach is Bayesian since it relies on Bayes' theorem; this can be seen by recognizing the form of the posterior (see Section 2.1), which results in the proposed prior class in the referenced equation. We will clarify this in the final version. Similar reasoning applies for the inverse.
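For reference, the posterior update the feedback appeals to is the standard form of Bayes' theorem; the paper's specific prior class and equation numbers are not recoverable from this excerpt, so only the generic identity is shown (in LaTeX):

    p(\theta \mid \mathcal{D})
      = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{p(\mathcal{D})}
      \propto p(\mathcal{D} \mid \theta)\, p(\theta)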


Automatic Outlier Rectification via Optimal Transport

Neural Information Processing Systems

In this paper, we propose a novel conceptual framework to detect outliers using optimal transport with a concave cost function. Conventional outlier-detection approaches typically use a two-stage procedure: first, outliers are detected and removed, and then estimation is performed on the cleaned data. However, this approach does not let the estimation task inform outlier removal, leaving room for improvement. To address this limitation, we propose an automatic outlier-rectification mechanism that integrates rectification and estimation within a joint optimization framework. We take the first step of using the optimal transport distance with a concave cost function to construct a rectification set in the space of probability distributions, and we then select the best distribution within the rectification set to perform the estimation task. Notably, the concave cost function introduced in this paper is key to enabling our estimator to identify outliers effectively during the optimization process. We demonstrate the effectiveness of our approach over conventional approaches in simulations and empirical analyses for mean estimation, least absolute regression, and the fitting of option-implied volatility surfaces.
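As a sketch of how such a joint objective can be written (the notation here is an assumption based on the abstract, not the paper's exact formulation): let \hat{P}_n denote the empirical distribution, \ell the estimation loss, \delta the rectification budget, and c a concave ground cost such as c(z, z') = \|z - z'\|^q with 0 < q < 1. The estimator then optimizes jointly over the parameter and the rectified distribution:

    \hat{\theta} \in \arg\min_{\theta}\;
      \min_{Q : \, \mathrm{OT}_c(\hat{P}_n, Q) \le \delta}\;
      \mathbb{E}_{Z \sim Q}\!\left[\ell(\theta; Z)\right]

Intuitively, a concave cost makes it cheap to transport a few isolated points a long way while leaving the bulk of the data essentially untouched, which is why the inner minimization tends to rectify outliers rather than inliers.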


A Neuralink Rival Just Tested a Brain Implant in a Person

WIRED

Brain-computer interface startup Paradromics today announced that surgeons successfully inserted the company's brain implant into a patient and safely removed it after about 10 minutes. It's a step toward longer trials of the device, dubbed Connexus. It's also the latest commercial development in a growing field of companies, including Elon Musk's Neuralink, aiming to connect people's brains directly to computers. With the Connexus, Austin-based Paradromics is looking to restore speech and communication in people with spinal cord injury, stroke, or amyotrophic lateral sclerosis, also known as ALS. The device is designed to translate neural signals into synthesized speech, text, and cursor control.


Center Smoothing: Certified Robustness for Networks with Structured Outputs

Neural Information Processing Systems

Let y be a point in that intersection. Given z = f̂(x), define a random variable Q = d(z, f(X)), where X ∼ x + P. From m i.i.d. samples of X we obtain m i.i.d. samples of Q. For functions with high-dimensional outputs, like high-resolution images, it might be difficult to compute the minimum enclosing ball (MEB) for a large number of points. It also does not allow us to sample the n points in batches, as is possible for the certification step. The rest of the procedure remains the same as Algorithm 1.
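As a minimal sketch of the sampled quantity above, assuming P is isotropic Gaussian noise and d is the Euclidean distance on outputs (f, x, and z are placeholders for the paper's smoothed function, input, and center):

    import numpy as np

    # Sketch: estimate an empirical high-probability bound on
    # Q = d(z, f(X)) with X ~ x + P, assuming P = N(0, sigma^2 I).
    def empirical_q_bound(f, z, x, sigma, m=10000, q=0.95, batch=500):
        d = lambda a, b: np.linalg.norm(a - b)  # assumed output metric
        dists = []
        for start in range(0, m, batch):  # batching, as in the certification step
            b = min(batch, m - start)
            noise = sigma * np.random.randn(b, *x.shape)
            dists.extend(d(z, f(x + n)) for n in noise)
        return float(np.quantile(dists, q))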





Nearly Tight Black-Box Auditing of Differentially Private Machine Learning

Neural Information Processing Systems

This paper presents an auditing procedure for the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box threat model that is substantially tighter than prior work. The main intuition is to craft worst-case initial model parameters, as DP-SGD's privacy analysis is agnostic to the choice of the initial model parameters. For models trained on MNIST and CIFAR-10 at theoretical ε = 10.0, our auditing procedure yields empirical estimates of ε
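As background on how a black-box audit typically turns a distinguishing attack's error rates into an empirical privacy estimate (this is the standard hypothesis-testing bound, not necessarily the paper's exact estimator, which must also account for sampling uncertainty):

    import math

    # For an (eps, delta)-DP mechanism, any membership test must satisfy
    # FPR + e^eps * FNR >= 1 - delta and FNR + e^eps * FPR >= 1 - delta,
    # so observed error rates imply a lower bound on eps.
    def empirical_epsilon(fpr, fnr, delta=1e-5):
        eps1 = math.log((1 - delta - fpr) / fnr) if fnr > 0 else math.inf
        eps2 = math.log((1 - delta - fnr) / fpr) if fpr > 0 else math.inf
        return max(eps1, eps2, 0.0)

    # e.g., 1% FPR and 50% FNR certify eps >= log(0.5/0.01) ~= 3.9 (delta ~ 0)
    print(empirical_epsilon(fpr=0.01, fnr=0.50))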


Stochastic Variational Deep Kernel Learning

Neural Information Processing Systems

Deep kernel learning combines the non-parametric flexibility of kernel methods with the inductive biases of deep learning architectures. We propose a novel deep kernel learning model and stochastic variational inference procedure which generalizes deep kernel learning approaches to enable classification, multi-task learning, additive covariance structures, and stochastic gradient training. Specifically, we apply additive base kernels to subsets of output features from deep neural architectures, and jointly learn the parameters of the base kernels and deep network through a Gaussian process marginal likelihood objective. Within this framework, we derive an efficient form of stochastic variational inference which leverages local kernel interpolation, inducing points, and structure-exploiting algebra. We show improved performance over stand-alone deep networks, SVMs, and state-of-the-art scalable Gaussian processes on several classification benchmarks, including an airline delay dataset containing 6 million training points, CIFAR, and ImageNet.
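A minimal NumPy sketch of the core composition described above: a base RBF kernel applied to the outputs of a toy feature network, scored with the exact GP marginal likelihood. The actual model additionally uses additive kernels over output subsets, inducing points, local kernel interpolation, and stochastic variational inference, none of which is reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(5, 16)), rng.normal(size=(16, 2))

    def features(X):
        # Toy two-layer network standing in for the deep architecture g(x).
        return np.tanh(X @ W1) @ W2

    def rbf(Z, lengthscale=1.0):
        # Base kernel applied to network outputs: k(g(x), g(x')).
        sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * sq / lengthscale**2)

    def log_marginal_likelihood(X, y, noise=0.1):
        # Exact GP evidence; the paper optimizes a variational analogue
        # jointly over kernel and network parameters.
        K = rbf(features(X)) + noise * np.eye(len(X))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return -0.5 * y @ alpha - np.log(np.diag(L)).sum() \
               - 0.5 * len(X) * np.log(2 * np.pi)

    X, y = rng.normal(size=(20, 5)), rng.normal(size=20)
    print(log_marginal_likelihood(X, y))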