Google Killed its Popular 'View Image' Feature, and the Internet Isn't Having It
People online are upset over a new decision from Google that makes it a little harder to download photos. The search giant removed its popular "View Image" feature Thursday as part of a legal settlement. The feature previously allowed users to download and save photos without having to navigate to the pictures' web pages. As Google put it in its announcement: "Today we're launching some changes on Google Images to help connect users and useful websites. This will include removing the View Image button."
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Information Management > Search (1.00)
- Information Technology > Communications (0.89)
- Information Technology > Artificial Intelligence > Machine Learning > Pattern Recognition > Image Matching (0.47)
Nonparametric Testing under Random Projection
Liu, Meimei, Shang, Zuofeng, Cheng, Guang
A common challenge in nonparametric inference is its high computational complexity when data volume is large. In this paper, we develop computationally efficient nonparametric testing by employing a random projection strategy. In the specific kernel ridge regression setup, a simple distance-based test statistic is proposed. Notably, we derive the minimum number of random projections that is sufficient for achieving testing optimality in terms of the minimax rate. An adaptive testing procedure is further established without prior knowledge of regularity. One technical contribution is to establish upper bounds for a range of tail sums of empirical kernel eigenvalues. Simulations and real data analysis are conducted to support our theory.
- North America > United States > Indiana > Tippecanoe County > West Lafayette (0.04)
- North America > United States > Indiana > Tippecanoe County > Lafayette (0.04)
- Asia > China > Beijing > Beijing (0.04)
- (3 more...)
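The sketching idea in the abstract above can be illustrated with a small numpy example: compress the n x n kernel matrix with an n x s Gaussian random projection (s << n), fit a sketched kernel ridge regression, and use the squared norm of the fit as a distance-based statistic for H0: f = 0. The RBF kernel, the exact sketched-KRR formula, and all names here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then a Gaussian (RBF) kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def projected_test_statistic(X, y, s=20, lam=1e-2, seed=0):
    """Distance-based statistic for H0: f = 0 in kernel ridge regression,
    after compressing the kernel matrix with a random projection (sketch)."""
    n = len(y)
    rng = np.random.default_rng(seed)
    K = rbf_kernel(X, X)
    P = rng.standard_normal((n, s)) / np.sqrt(s)   # Gaussian random projection
    # Sketched KRR: restrict the coefficient vector to the column space of P,
    # so only an s x s system is solved instead of an n x n one
    beta = np.linalg.solve(P.T @ K @ P + n * lam * np.eye(s), P.T @ y)
    f_hat = K @ P @ beta
    return float(np.mean(f_hat ** 2))              # squared distance to the null
```

A larger statistic indicates stronger evidence against f = 0; the paper's contribution is how small s can be while the resulting test remains minimax optimal.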
Dimensionality reduction for acoustic vehicle classification with spectral embedding
Sunu, Justin, Percus, Allon G.
Classification and identification of moving vehicles from audio signals is of interest in many applications, ranging from traffic flow management to military target recognition. Classification may involve differentiating vehicles by type, such as jeep, sedan, etc. Identification can involve distinguishing specific vehicles, even within a given vehicle type. Since audio data is small compared to, say, video data, multiple audio sensors can be placed easily and inexpensively. However, there are certain obstacles having to do with both hardware and physics. Certain microphones and recording devices have built-in features, for example damping or normalizing, which may be applied when the recording exceeds a threshold.
- Transportation > Ground > Road (0.87)
- Transportation > Passenger (0.69)
- Automobiles & Trucks > Manufacturer (0.68)
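As a hedged illustration of the spectral embedding step in the title, the following numpy sketch builds a Gaussian affinity graph over feature vectors (e.g. spectrogram frames of an audio recording) and embeds them using eigenvectors of the normalized graph Laplacian. The affinity choice and parameters are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

def spectral_embedding(features, dim=2, sigma=1.0):
    """Map high-dimensional features (rows) to a dim-dimensional embedding
    via the bottom nontrivial eigenvectors of the normalized Laplacian."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian affinity
    d = W.sum(1)                                       # node degrees
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)                     # ascending eigenvalues
    return vecs[:, 1:dim + 1]                          # skip trivial eigenvector
```

Vehicles with similar acoustic signatures end up nearby in the embedding, so a simple classifier can then operate in `dim` dimensions instead of the raw spectral dimension.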
Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications
Liu, Sijia, Chen, Jie, Chen, Pin-Yu, Hero, Alfred O.
In this paper, we design and analyze a new zeroth-order online algorithm, namely, the zeroth-order online alternating direction method of multipliers (ZOO-ADMM), which enjoys the dual advantages of gradient-free operation and the ability of ADMM to accommodate complex structured regularizers. Compared to first-order gradient-based online algorithms, we show that ZOO-ADMM requires $\sqrt{m}$ times more iterations, leading to a convergence rate of $O(\sqrt{m}/\sqrt{T})$, where $m$ is the number of optimization variables and $T$ is the number of iterations. To accelerate ZOO-ADMM, we propose two minibatch strategies: gradient sample averaging and observation averaging, resulting in an improved convergence rate of $O(\sqrt{1+q^{-1}m}/\sqrt{T})$, where $q$ is the minibatch size. In addition to the convergence analysis, we also demonstrate applications of ZOO-ADMM in signal processing, statistics, and machine learning.
- North America > United States > Michigan (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China (0.04)
- (3 more...)
- Retail > Online (0.80)
- Health & Medicine (0.67)
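The gradient-free ingredient behind ZOO-ADMM is a two-point function-difference gradient estimate, and the "gradient sample averaging" minibatch strategy averages it over q random directions. The sketch below shows the standard form of this estimator (the estimator is textbook zeroth-order optimization, not code from the paper; `zo_gradient` is a name chosen here):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, q=10, rng=None):
    """Two-point zeroth-order gradient estimate of f at x, averaged over
    a minibatch of q random Gaussian directions (gradient sample averaging).
    Only function evaluations of f are used, never its gradient."""
    rng = rng or np.random.default_rng(0)
    m = x.size
    g = np.zeros(m)
    fx = f(x)
    for _ in range(q):
        u = rng.standard_normal(m)              # random probe direction
        g += (f(x + mu * u) - fx) / mu * u      # directional difference
    return g / q
```

Inside ZOO-ADMM this estimate replaces the true gradient in the ADMM primal update; its variance grows with the dimension m, which is where the extra $\sqrt{m}$ factor in the convergence rate comes from.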
Post Selection Inference with Incomplete Maximum Mean Discrepancy Estimator
Yamada, Makoto, Wu, Denny, Tsai, Yao-Hung Hubert, Takeuchi, Ichiro, Salakhutdinov, Ruslan, Fukumizu, Kenji
Measuring divergence between two distributions is essential in machine learning and statistics and has various applications including binary classification, change point detection, and two-sample testing. Furthermore, in the era of big data, designing a divergence measure that is interpretable and can handle high-dimensional and complex data becomes extremely important. In this paper, we propose a post selection inference (PSI) framework for divergence measures, which can select a set of statistically significant features that discriminate two distributions. Specifically, we employ an additive variant of maximum mean discrepancy (MMD) over features and introduce a general hypothesis test for PSI. A novel MMD estimator using incomplete U-statistics, which has an asymptotically normal distribution (under mild assumptions) and gives high detection power in PSI, is also proposed and analyzed theoretically. Through synthetic and real-world feature selection experiments, we show that the proposed framework can successfully detect statistically significant features. Finally, we propose a sample selection framework for analyzing different members of the Generative Adversarial Networks (GANs) family.
- Asia > Japan (0.05)
- Oceania > Australia (0.04)
- North America > United States > New York (0.04)
- North America > United States > California (0.04)
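An incomplete U-statistic estimate of MMD averages the usual MMD kernel term over a random subset of index pairs instead of all O(n^2) of them, trading a little variance for near-linear cost. A minimal numpy sketch, assuming an RBF kernel and uniform pair sampling (the sampling design and names here are illustrative, not the paper's estimator):

```python
import numpy as np

def incomplete_mmd(X, Y, num_pairs=200, gamma=1.0, seed=0):
    """Incomplete-U-statistic MMD^2 estimate between samples X and Y:
    average h((x_i, y_i), (x_j, y_j)) over randomly drawn index pairs."""
    rng = np.random.default_rng(seed)
    n = min(len(X), len(Y))
    k = lambda a, b: float(np.exp(-gamma * np.sum((a - b) ** 2)))
    est = 0.0
    for _ in range(num_pairs):
        i, j = rng.choice(n, size=2, replace=False)
        est += (k(X[i], X[j]) + k(Y[i], Y[j])
                - k(X[i], Y[j]) - k(X[j], Y[i]))   # MMD kernel h
    return est / num_pairs
```

The estimate concentrates near zero when X and Y come from the same distribution and is positive when they differ, which is what drives detection power in the PSI test.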
MIDA: Multiple Imputation using Denoising Autoencoders
Missing data is a significant problem impacting all domains. The state-of-the-art framework for minimizing missing-data bias is multiple imputation, for which the choice of an imputation model remains nontrivial. We propose a multiple imputation model based on overcomplete deep denoising autoencoders. Our proposed model is capable of handling different data types, missingness patterns, missingness proportions, and distributions. Evaluations on several real-life datasets show that our proposed model significantly outperforms current state-of-the-art methods under varying conditions while simultaneously improving end-of-the-line analytics.
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.46)
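To make the idea concrete, here is a deliberately tiny numpy sketch of denoising-autoencoder imputation: train a one-hidden-layer autoencoder with input dropout on the observed entries only, then generate m imputed copies from stochastic forward passes. The paper uses overcomplete deep architectures; the shallow network, hyperparameters, and function name below are assumptions for illustration.

```python
import numpy as np

def mida_impute(X, mask, m=5, hidden=16, epochs=200, lr=0.01, seed=0):
    """Multiple imputation with a tiny denoising autoencoder (sketch).
    X: data with missing entries set to 0; mask: 1 where observed.
    Returns m imputed copies of X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.standard_normal((d, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, d)) * 0.1
    for _ in range(epochs):
        noise = rng.random(X.shape) > 0.2          # input dropout (denoising)
        A = X * mask * noise
        H = np.tanh(A @ W1)
        out = H @ W2
        err = (out - X) * mask                     # loss on observed cells only
        gW2 = H.T @ err / n                        # backprop, squared loss
        gW1 = A.T @ ((err @ W2.T) * (1 - H ** 2)) / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    imputations = []
    for _ in range(m):                             # m stochastic passes
        noise = rng.random(X.shape) > 0.2
        out = np.tanh((X * mask * noise) @ W1) @ W2
        imputations.append(np.where(mask == 1, X, out))
    return imputations
```

Because each pass uses fresh dropout noise, the m copies disagree on the missing cells, which is exactly the between-imputation variability that multiple imputation needs.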
Black-Box Reductions for Parameter-free Online Learning in Banach Spaces
Cutkosky, Ashok, Orabona, Francesco
We introduce several new black-box reductions that significantly improve the design of adaptive and parameter-free online learning algorithms by simplifying analysis, improving regret guarantees, and sometimes even improving runtime. We reduce parameter-free online learning to online exp-concave optimization, we reduce optimization in a Banach space to one-dimensional optimization, and we reduce optimization over a constrained domain to unconstrained optimization. All of our reductions run as fast as online gradient descent. We use our new techniques to improve upon the previously best regret bounds for parameter-free learning, and do so for arbitrary norms.
- North America > United States > New York > Suffolk County > Stony Brook (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Education > Educational Setting > Online (0.84)
- Transportation > Air (0.61)
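The dimension reduction mentioned above can be sketched in a few lines: learn a direction on the unit ball with projected online gradient descent, learn a scalar magnitude with a separate one-dimensional learner fed the scalar loss <g_t, u_t>, and predict their product. The paper pairs this with a parameter-free 1-d algorithm; plain 1-d OGD is used below purely as a placeholder, and the function name is invented for illustration.

```python
import numpy as np

def reduce_to_1d(grads, lr_dir=0.1, lr_mag=0.1):
    """Illustrative reduction of online learning in R^d to a 1-d problem
    plus direction learning on the unit ball. Predict x_t = z_t * u_t."""
    d = len(grads[0])
    u = np.zeros(d)       # direction iterate, kept in the unit ball
    z = 0.0               # magnitude iterate from the 1-d learner
    preds = []
    for g in grads:
        preds.append(z * u)
        s = float(g @ u)          # scalar loss passed to the 1-d learner
        z -= lr_mag * s           # 1-d OGD step (placeholder learner)
        u -= lr_dir * g           # direction OGD step
        nrm = np.linalg.norm(u)
        if nrm > 1.0:
            u /= nrm              # project back onto the unit ball
    return preds
```

The point of the reduction is that any regret guarantee for the 1-d learner lifts to the full space, so improving the scalar algorithm improves the whole method for free.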
Generating Neural Networks with Neural Networks
Hypernetworks are neural networks that transform a random input vector into the weights of a specified target neural network. We formulate the hypernetwork training objective as a compromise between accuracy and diversity, where the diversity takes into account trivial symmetry transformations of the target network. We show that this formulation naturally arises as a relaxation of an optimistic probability distribution objective for the generated networks, and we explain how it is related to variational inference. We use multi-layer perceptrons to form the mapping from the low-dimensional input random vector to the high-dimensional weight space, and demonstrate how to reduce the number of parameters in this mapping by weight sharing. We perform experiments on a four-layer convolutional target network that classifies MNIST images, and show that the generated weights are diverse and have interesting distributions.
- North America > Canada > Ontario > Toronto (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- North America > United States > New York (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Perceptrons (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.46)
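The core mechanism is easy to show in miniature: a small MLP maps a low-dimensional random code z to a flat weight vector for a target network, so sampling different codes yields different target networks. The architecture sizes and names below are toy assumptions; the paper's version is trained end-to-end against the accuracy/diversity objective.

```python
import numpy as np

def make_hypernetwork(z_dim, n_target_weights, hidden=32, seed=0):
    """Tiny MLP hypernetwork: maps a random code z (length z_dim) to a
    flat vector of n_target_weights weights for some target network."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((z_dim, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, n_target_weights)) * 0.1
    def generate(z):
        return np.tanh(z @ W1) @ W2   # one hidden layer, linear output
    return generate

# Usage: sample two weight vectors for a hypothetical 10-parameter target net
gen = make_hypernetwork(z_dim=4, n_target_weights=10)
rng = np.random.default_rng(1)
w_a = gen(rng.standard_normal(4))
w_b = gen(rng.standard_normal(4))
```

In the real setting n_target_weights is the full parameter count of the convolutional target network, which is why the paper needs weight sharing to keep the output layer of the hypernetwork manageable.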
Learning Adversarially Fair and Transferable Representations
Madras, David, Creager, Elliot, Pitassi, Toniann, Zemel, Richard
In this work, we advocate for representation learning as the key to mitigating unfair prediction outcomes downstream. We envision a scenario where learned representations may be handed off to other entities with unknown objectives. We propose and explore adversarial representation learning as a natural method of ensuring those entities will act fairly, and connect group fairness (demographic parity, equalized odds, and equal opportunity) to different adversarial objectives. Through worst-case theoretical guarantees and experimental validation, we show that the choice of this objective is crucial to fair prediction. Furthermore, we present the first in-depth experimental demonstration of fair transfer learning, by showing that our learned representations admit fair predictions on new tasks while maintaining utility, an essential goal of fair representation learning.
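The group-fairness criteria named in the abstract have simple empirical forms; the sketch below computes two of them (demographic parity and equal opportunity) from binary predictions, labels, and a binary protected attribute. These are the standard textbook definitions, shown here only to ground the terminology, not code from the paper.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    """|P(yhat=1 | A=0) - P(yhat=1 | A=1)|: positive-prediction rate gap.
    preds: 0/1 predictions; groups: 0/1 protected-attribute values."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def equal_opportunity_gap(preds, labels, groups):
    """True-positive-rate gap between groups, computed among examples
    whose true label is 1."""
    tpr0 = preds[(groups == 0) & (labels == 1)].mean()
    tpr1 = preds[(groups == 1) & (labels == 1)].mean()
    return abs(tpr0 - tpr1)
```

In the adversarial setup, the adversary's objective is chosen so that driving its advantage to zero drives the corresponding gap to zero on the learned representation.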