

Neural Architecture Optimization

Neural Information Processing Systems

Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, whether based on reinforcement learning (RL) or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method for automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture.
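
The encode/optimize/decode loop described above can be illustrated with a toy sketch. This is not the authors' code: the "encoder" below is a one-hot embedding of hypothetical operation tokens, the performance predictor is plain ridge regression, the "accuracy" function is a synthetic stand-in for actually training a network, and the "decoder" rounds back to the nearest discrete tokens.

```python
# Minimal sketch of the NAO-style loop: search in a continuous embedding
# space instead of the discrete architecture space.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LENGTH = 4, 6          # 4 candidate ops per slot, 6 slots per architecture

def encode(arch):
    """Map a discrete architecture (token sequence) to a continuous vector."""
    return np.eye(VOCAB)[arch].ravel()          # (LENGTH * VOCAB,)

def decode(vec):
    """Map a continuous vector back to the nearest discrete architecture."""
    return vec.reshape(LENGTH, VOCAB).argmax(axis=1)

def true_accuracy(arch):
    """Synthetic stand-in for training and evaluating the architecture."""
    return float(0.8 + 0.1 * np.sin(arch.sum()) - 0.01 * (arch == 3).sum())

# 1) Sample architectures and evaluate them.
archs = [rng.integers(0, VOCAB, LENGTH) for _ in range(50)]
X = np.stack([encode(a) for a in archs])
y = np.array([true_accuracy(a) for a in archs])

# 2) Fit the performance predictor f(embedding) ~ accuracy (ridge regression).
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# 3) Gradient ascent in the continuous space from the best known embedding,
#    then decode the result back into a discrete architecture.
e = encode(archs[int(y.argmax())]).astype(float)
for _ in range(20):
    e += 0.1 * w          # for a linear predictor, d(pred acc)/d(e) = w
new_arch = decode(e)
print("candidate:", new_arch, "predicted acc:", float(encode(new_arch) @ w))
```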


DeepPINK: reproducible feature selection in deep neural networks

Neural Information Processing Systems

Deep learning has become increasingly popular in both supervised and unsupervised machine learning thanks to its outstanding empirical performance. However, because of their intrinsic complexity, most deep learning methods are largely treated as black box tools with little interpretability. Even though recent attempts have been made to facilitate the interpretability of deep neural networks (DNNs), existing methods are susceptible to noise and lack robustness. Therefore, scientists are justifiably cautious about the reproducibility of the discoveries, which is often related to the interpretability of the underlying statistical models. In this paper, we describe a method to increase the interpretability and reproducibility of DNNs by incorporating the idea of feature selection with controlled error rate. By designing a new DNN architecture and integrating it with the recently proposed knockoffs framework, we perform feature selection with a controlled error rate, while maintaining high power. This new method, DeepPINK (Deep feature selection using Paired-Input Nonlinear Knockoffs), is applied to both simulated and real data sets to demonstrate its empirical utility.
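
The knockoff-filter machinery behind this controlled error rate can be sketched compactly. The sketch below is a simplification, not DeepPINK itself: it uses Gaussian model-X knockoffs for independent features (where a fresh Gaussian copy is a valid knockoff) and least-squares importances, whereas DeepPINK pairs each feature with its knockoff inside a DNN. The false discovery rate (FDR) target `q` and all dimensions are illustrative.

```python
# Hedged sketch of knockoff-based feature selection with FDR control.
import numpy as np

rng = np.random.default_rng(1)
n, p, k, q = 500, 50, 10, 0.2        # samples, features, true signals, target FDR

X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:k] = 1.5
y = X @ beta + rng.normal(size=n)

# Independent standard-Gaussian features: an independent copy is a valid knockoff.
X_knock = rng.normal(size=(n, p))

# Importance of each (feature, knockoff) pair via least squares on [X, X~].
A = np.hstack([X, X_knock])
coef = np.linalg.lstsq(A, y, rcond=None)[0]
W = np.abs(coef[:p]) - np.abs(coef[p:])      # W_j > 0 favors the real feature

# Knockoff+ threshold: smallest t whose estimated false discovery proportion <= q.
tau = np.inf
for t in np.sort(np.abs(W[W != 0])):
    fdp = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
    if fdp <= q:
        tau = t
        break
selected = np.where(W >= tau)[0]
print("selected features:", selected)
```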


Learning sparse neural networks via sensitivity-driven regularization

Neural Information Processing Systems

The ever-increasing number of parameters in deep neural networks poses challenges for memory-limited applications. Regularize-and-prune methods aim at meeting these challenges by sparsifying the network weights. In this context we quantify the output sensitivity to the parameters (i.e., their relevance to the network output) and introduce a regularization term that gradually lowers the absolute value of parameters with low sensitivity.
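
The mechanism can be illustrated on a model small enough that the sensitivity is analytic. This is an illustration of the idea, not the paper's training procedure: for a linear model y = w·x the output sensitivity to w_i is |x_i| averaged over the data, so feature scales stand in for the differing sensitivities a real network would exhibit; low-sensitivity weights are decayed faster each step and then pruned by thresholding. In a real DNN the sensitivity depends on the current weights and would be recomputed during training.

```python
# Minimal sketch of sensitivity-driven shrinkage followed by pruning.
import numpy as np

rng = np.random.default_rng(2)
n, d = 400, 20
scales = rng.uniform(0.2, 2.0, size=d)      # feature scales induce differing sensitivity
X = rng.normal(size=(n, d)) * scales
w_true = np.zeros(d); w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.normal(size=n)

w = 0.1 * rng.normal(size=d)
lr, decay = 0.05, 0.02
for _ in range(500):
    grad = X.T @ (X @ w - y) / n            # gradient of the MSE loss
    sensitivity = np.abs(X).mean(axis=0)    # |d y_hat / d w_i| = |x_i|, data-averaged
    s = sensitivity / sensitivity.max()     # normalize to [0, 1]
    w -= lr * grad                          # usual descent step
    w *= 1.0 - decay * (1.0 - s)            # low-sensitivity weights decay faster
w[np.abs(w) < 1e-3] = 0.0                   # threshold: prune near-zero weights
print("surviving weights:", np.flatnonzero(w))
```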


The Iran War Is Throwing Global Shipping Into Chaos

WIRED

Flexport CEO Ryan Petersen says the conflict is stranding cargo and threatening inflation. After years of chaos in the global supply chain, Ryan Petersen, CEO of the logistics company Flexport, felt 2026 might offer a modicum of order. The pandemic was firmly in the rearview mirror. Red Sea shipping channels, which had been closed due to the Gaza crisis, were finally opening. The Supreme Court struck down many of Donald Trump's tariffs, and some Flexport customers were hoping for refunds.



Strong and Precise Modulation of Human Percepts via Robustified ANNs (Supplementary Material: Pixel Budget Regimes)

Neural Information Processing Systems

Subject screening: to gain entry into the study, subjects were required to first perform a "demo" task. We refer to measures of human choice probability that are lapse-rate corrected in this manner as "Normalized". The typically observed lapse rates were quite low (median over subjects: 0%; mean: 4.9%). Figure 3 (caption): human disruption rates are largely stable across stimulus presentation times; at shorter viewing times, we observed modest or no increases in disruption rate. Source images were captured with a smartphone camera, with ImageNet classes as previously defined in the robustness library [2].
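
The "Normalized" (lapse-rate corrected) choice probability mentioned above can be sketched as follows. The correction formula is a standard psychophysics convention for a two-alternative task and is an assumption here, not quoted from the supplement: it models a subject who guesses uniformly on a fraction `lapse_rate` of trials and inverts that mixture.

```python
# Hedged sketch of a lapse-rate correction for two-alternative choice data;
# the formula is a common convention (assumption), not the supplement's code.
import numpy as np

def normalize_choice_prob(p_observed, lapse_rate):
    """Correct an observed choice probability for lapses.

    Assumes p_obs = (1 - lapse) * p_true + lapse * 0.5, i.e. the subject
    guesses uniformly between the two alternatives on lapse trials, and
    inverts this mixture to recover p_true.
    """
    p = (np.asarray(p_observed) - 0.5 * lapse_rate) / (1.0 - lapse_rate)
    return np.clip(p, 0.0, 1.0)

# Example with the study's mean lapse rate of 4.9%:
print(normalize_choice_prob(0.90, 0.049))   # ~0.92
```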