Bounded-Loss Private Prediction Markets
Prior work has investigated variations of prediction markets that preserve participants' (differential) privacy, which formed the basis of useful mechanisms for purchasing data for machine learning objectives. Such markets required potentially unlimited financial subsidy, however, making them impractical. In this work, we design an adaptively-growing prediction market with a bounded financial subsidy, while achieving privacy, incentives to produce accurate predictions, and precision in the sense that market prices are not heavily impacted by the added privacy-preserving noise. We briefly discuss how our mechanism can extend to the data-purchasing setting, and its relationship to traditional learning algorithms.
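The bounded-subsidy idea can be made concrete with a cost-function market maker. As a rough sketch (not the paper's mechanism), the classic LMSR market maker has worst-case loss at most b·ln(n) for n outcomes, and a toy privacy layer can perturb the published prices with Laplace noise before re-normalizing; the function names and the placement of the noise here are illustrative assumptions only.

```python
import numpy as np

def lmsr_cost(q, b):
    # LMSR cost function: b * log(sum_i exp(q_i / b));
    # the market maker's worst-case subsidy is bounded by b * ln(n).
    return b * np.logaddexp.reduce(q / b)

def lmsr_prices(q, b):
    # Instantaneous prices are the softmax of the scaled share vector.
    z = q / b
    e = np.exp(z - z.max())
    return e / e.sum()

def noisy_prices(q, b, scale, rng):
    # Publish Laplace-perturbed, re-normalized prices: a toy stand-in
    # for a privacy-preserving price release, NOT the paper's mechanism.
    p = lmsr_prices(q, b) + rng.laplace(0.0, scale, size=q.shape)
    p = np.clip(p, 1e-6, None)
    return p / p.sum()

rng = np.random.default_rng(0)
q = np.array([10.0, 4.0, 1.0])  # outstanding shares per outcome
b = 5.0                         # liquidity parameter
# A trader buying one share of outcome 0 pays the cost difference:
trader_cost = lmsr_cost(q + np.array([1.0, 0.0, 0.0]), b) - lmsr_cost(q, b)
```

Here the subsidy bound b·ln(3) is independent of trading volume, which is the kind of bounded-loss guarantee the abstract refers to.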
Relating Leverage Scores and Density using Regularized Christoffel Functions
Statistical leverage scores have emerged as a fundamental tool for matrix sketching and column sampling, with applications to low-rank approximation, regression, random feature learning and quadrature. Yet the very nature of this quantity is barely understood. Borrowing ideas from the orthogonal polynomial literature, we introduce the regularized Christoffel function associated with a positive definite kernel. This uncovers a variational formulation of leverage scores for kernel methods and allows us to elucidate their relationship with the chosen kernel as well as with the population density. Our main result quantitatively describes a decreasing relation between leverage score and population density for a broad class of kernels on Euclidean spaces. Numerical simulations support our findings.
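For intuition, the ridge leverage scores of a kernel matrix K are the diagonal of K(K + nλI)^{-1}. The sketch below (an assumed RBF kernel and toy one-dimensional data, not the paper's Christoffel-function estimator) illustrates the decreasing score/density relation: points in a dense cluster receive low scores, isolated points high ones.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gaussian (RBF) kernel matrix between two point sets.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def leverage_scores(X, lam, gamma):
    # Ridge leverage scores: diagonal of K (K + n*lam*I)^{-1}.
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    return np.diagonal(np.linalg.solve(K + n * lam * np.eye(n), K)).copy()

# A dense cluster of 50 points plus 5 sparse outliers.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 0.1, (50, 1)),
                    rng.normal(4.0, 1.0, (5, 1))])
tau = leverage_scores(X, lam=1e-3, gamma=1.0)
# Low-density points tend to carry the higher leverage scores.
```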
Constant Regret, Generalized Mixability, and Mirror Descent
We consider the setting of prediction with expert advice; a learner makes predictions by aggregating those of a group of experts. Under this setting, and for the right choice of loss function and ``mixing'' algorithm, it is possible for the learner to achieve a constant regret regardless of the number of prediction rounds. For example, a constant regret can be achieved for \emph{mixable} losses using the \emph{aggregating algorithm}. The \emph{Generalized Aggregating Algorithm} (GAA) is a family of algorithms parameterized by convex functions on simplices (entropies), which reduces to the aggregating algorithm when the \emph{Shannon entropy} $\operatorname{S}$ is used. For a given entropy $\Phi$, losses for which a constant regret is possible using the \textsc{GAA} are called $\Phi$-mixable. Which losses are $\Phi$-mixable was previously left as an open question. We fully characterize $\Phi$-mixability and answer other open questions posed by \cite{Reid2015}. We show that the Shannon entropy $\operatorname{S}$ is fundamental when it comes to mixability: any $\Phi$-mixable loss is necessarily $\operatorname{S}$-mixable, and the lowest worst-case regret of the \textsc{GAA} is achieved using the Shannon entropy. Finally, by leveraging the connection between the \emph{mirror descent algorithm} and the update step of the GAA, we suggest a new \emph{adaptive} generalized aggregating algorithm and analyze its regret.
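For the classical special case, the aggregating algorithm with the log loss (which is 1-mixable) reduces to an exponential-weights / Bayesian-mixture update. The sketch below shows that special case on made-up data, not the entropy-parameterized generalized algorithm; its regret against the best expert is at most (ln N)/η, independent of the number of rounds.

```python
import numpy as np

def aggregating_step(weights, expert_preds, outcome, eta=1.0):
    # Exponential-weights update: w_i <- w_i * exp(-eta * loss_i),
    # where loss_i is the log loss of expert i on the observed outcome.
    losses = -np.log(np.where(outcome == 1, expert_preds, 1.0 - expert_preds))
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

def mix_prediction(weights, expert_preds):
    # For the 1-mixable log loss, the weighted average of expert
    # probabilities is a valid substitution prediction.
    return float(weights @ expert_preds)

# Three experts predicting P(outcome = 1); the first is consistently good
# on a stream of twenty 1-outcomes, so its weight comes to dominate.
preds = np.array([0.9, 0.5, 0.1])
w = np.ones(3) / 3
for _ in range(20):
    w = aggregating_step(w, preds, outcome=1)
```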
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.67)
- South America (0.42)
- North America > United States (0.42)
- Transportation > Infrastructure & Services > Airport (0.58)
- Transportation > Air (0.58)
- Law Enforcement & Public Safety > Fire & Emergency Services (0.45)
- Information Technology > Security & Privacy (0.40)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (0.45)
Memory Replay GANs: Learning to Generate New Categories without Forgetting
In this paper we consider the case of generative models. In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion. We first show that sequential fine-tuning renders the network unable to properly generate images from previous categories (i.e., catastrophic forgetting).
GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking
Model compression is essential for serving large deep neural nets on devices with limited resources or in applications that require real-time responses. For advanced NLP problems, a neural language model usually consists of recurrent layers (e.g., using LSTM cells), an embedding matrix for representing input tokens, and a softmax layer for generating output tokens. For problems with a very large vocabulary size, the embedding and the softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves state-of-the-art performance on the One-Billion-Word (OBW) dataset with an around 800k-word vocabulary; its word embedding and softmax matrices use more than 6 GB of space and are responsible for over 90\% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models, based on vocabulary-partition (block) based low-rank matrix approximation and the inherent frequency distribution of tokens (the power-law distribution of words). We start by grouping words into $c$ blocks based on their frequency, and then refine the clustering iteratively by constructing a weighted low-rank approximation for each block, where the weights are based on the frequencies of the words in the block. The experimental results show our method can significantly outperform traditional compression methods such as low-rank approximation and pruning. On the OBW dataset, our method achieves a 6.6x compression rate for the embedding and softmax matrices, and when combined with quantization, it achieves a 26x compression rate without losing prediction accuracy.
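The block-wise scheme can be sketched in a few lines: partition rows by frequency, then compute a frequency-weighted low-rank factorization per block. The sketch below uses the standard sqrt-weight row-scaling trick for the weighted approximation; the fixed block count and per-block rank, and the omission of the paper's iterative refinement, are simplifying assumptions (it also assumes strictly positive frequencies).

```python
import numpy as np

def frequency_blocks(freqs, c):
    # Partition row indices into c blocks by descending frequency.
    order = np.argsort(-freqs)
    return np.array_split(order, c)

def weighted_low_rank(A, w, rank):
    # Row-weighted low-rank approximation: scale rows by sqrt(w),
    # take a truncated SVD, then undo the scaling.
    s = np.sqrt(w)[:, None]
    U, S, Vt = np.linalg.svd(s * A, full_matrices=False)
    approx = (U[:, :rank] * S[:rank]) @ Vt[:rank]
    return approx / s

def group_reduce(E, freqs, c=4, rank=8):
    # Compress an embedding matrix block by block; in the paper,
    # frequent blocks get higher rank -- fixed rank here for brevity.
    out = np.zeros_like(E)
    for block in frequency_blocks(freqs, c):
        out[block] = weighted_low_rank(E[block], freqs[block], rank)
    return out
```

Storing each block as its two factors instead of the dense matrix is what yields the compression.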
Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance
Large amounts of labeled data are typically required to train deep learning models. For many real-world problems, however, acquiring additional data can be expensive or even impossible. We present semi-supervised deep kernel learning (SSDKL), a semi-supervised regression model based on minimizing predictive variance in the posterior regularization framework. SSDKL combines the hierarchical representation learning of neural networks with the probabilistic modeling capabilities of Gaussian processes. By leveraging unlabeled data, we show improvements on a diverse set of real-world regression tasks over supervised deep kernel learning and semi-supervised methods such as VAT and mean teacher adapted for regression.
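The variance-minimization idea can be illustrated with a plain Gaussian process (SSDKL learns the kernel through a neural network; the fixed RBF kernel and function names below are assumptions): the GP predictive variance at unlabeled inputs is the quantity whose average SSDKL-style training would add to the supervised loss.

```python
import numpy as np

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def posterior_variance(X_train, X_unlab, noise=0.1):
    # GP predictive variance at unlabeled inputs given training inputs;
    # minimizing its mean over X_unlab is the semi-supervised regularizer.
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    k_star = rbf(X_unlab, X_train)
    v = np.linalg.solve(K, k_star.T)
    return rbf(X_unlab, X_unlab).diagonal() - (k_star * v.T).sum(axis=1)

# Variance collapses near labeled points and stays high far from them.
var = posterior_variance(np.array([[0.0], [1.0]]), np.array([[0.0], [5.0]]))
```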
Race on to establish globally recognised 'AI-free' logo
Organisations worldwide are racing to develop a universally recognised label for human-made products and services as part of the growing backlash against AI use. Declarations like 'Proudly Human', 'Human-made', 'No A.I.' and 'AI-free' are appearing across films, marketing, books and websites. It is a response to fears that jobs or entire professions are being swept away in a wave of AI-powered automation. BBC News has counted at least eight different initiatives trying to come up with a label that could achieve the kind of global recognition that the Fair Trade logo has for ethically made products. But with so many competing labels - as well as confusion over the definition of AI-free - experts say consumers are in danger of being left confused unless a single standard can be agreed on.
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.07)
What Iranians are being told about the war
The first reports appeared on foreign screens, beyond the reach of most Iranians. On 28 February, Prime Minister Benjamin Netanyahu said there were signs that "the tyrant is no more", suggesting Supreme Leader Ayatollah Ali Khamenei had been killed in a joint US-Israeli strike. Iranians watching state television, however, encountered silence. Government officials would neither confirm nor deny Khamenei's death. On one of the state broadcaster's channels, IRTV3, a news presenter urged viewers to trust him and the latest information the government had.
- North America > United States (1.00)
- Asia > Middle East > Iran (0.98)
- Asia > Middle East > Israel (0.35)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
- Media > News (0.99)
- Government > Regional Government > Asia Government > Middle East Government > Iran Government (0.49)
Unsupervised Video Object Segmentation for Deep Reinforcement Learning
We present a new technique for deep reinforcement learning that automatically detects moving objects and uses the relevant information for action selection. The detection of moving objects is done in an unsupervised way by exploiting structure from motion. Instead of directly learning a policy from raw images, the agent first learns to detect and segment moving objects by exploiting flow information in video sequences. The learned representation is then used to focus the policy of the agent on the moving objects. Over time, the agent identifies which objects are critical for decision making and gradually builds a policy based on relevant moving objects.
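As a minimal illustration of the motion-based focusing step, the sketch below uses simple frame differencing as a crude stand-in for the flow-based segmentation the method actually learns; the function names and threshold are illustrative assumptions.

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=0.1):
    # Crude unsupervised moving-object detector: flag pixels whose change
    # between consecutive frames exceeds a fraction of the largest change.
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    if diff.max() == 0:
        return np.zeros(diff.shape, dtype=bool)
    return diff > thresh * diff.max()

def masked_observation(frame, mask):
    # Focus the policy's input on moving regions by zeroing the rest.
    return np.where(mask, frame, 0)
```

A learned segmentation network would replace `motion_mask`, but the policy-side use of the mask is the same.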