Trump accuses Iran of using AI to spread disinformation
U.S. President Donald Trump speaks to reporters aboard Air Force One on a flight to Washington on Sunday. SAN FRANCISCO - U.S. President Donald Trump on Sunday accused Iran of using artificial intelligence as a "disinformation weapon" to misrepresent its wartime successes and support. "AI can be very dangerous, we have to be very careful with it," Trump told reporters on Air Force One shortly after posting on his Truth Social platform, where he accused Western media outlets, without evidence, of "close coordination" with Iran to spread AI-generated fake news. The comments come amid renewed tensions between the Federal Communications Commission and broadcasters after Trump took aim at media coverage of the U.S. and Israel's war with Iran. FCC Chairman Brendan Carr on Saturday threatened to pull the licenses of broadcasters who did not "correct course" in their coverage.
- Asia > Middle East > Iran (1.00)
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Asia > Middle East > Israel (0.25)
- (7 more...)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Bounded-Loss Private Prediction Markets
Prior work has investigated variations of prediction markets that preserve participants' (differential) privacy, which formed the basis of useful mechanisms for purchasing data for machine learning objectives. Such markets required potentially unlimited financial subsidy, however, making them impractical. In this work, we design an adaptively-growing prediction market with a bounded financial subsidy, while achieving privacy, incentives to produce accurate predictions, and precision in the sense that market prices are not heavily impacted by the added privacy-preserving noise. We briefly discuss how our mechanism can extend to the data-purchasing setting, and its relationship to traditional learning algorithms.
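The bounded-subsidy property targeted here can be illustrated with the classic (non-private) logarithmic market scoring rule (LMSR) market maker, whose worst-case loss is $b\ln n$; this is a standard construction for intuition, not the paper's private mechanism, and the liquidity parameter and trade sizes below are illustrative assumptions:

```python
import numpy as np

def lmsr_cost(q, b=10.0):
    # Hanson's LMSR cost function C(q) = b * log(sum_i exp(q_i / b));
    # a trade moving outstanding shares from q to q' pays C(q') - C(q).
    q = np.asarray(q, dtype=float) / b
    m = q.max()
    return b * (m + np.log(np.exp(q - m).sum()))

def lmsr_prices(q, b=10.0):
    # instantaneous prices are the gradient of C: a softmax over q / b
    q = np.asarray(q, dtype=float) / b
    e = np.exp(q - q.max())
    return e / e.sum()

rng = np.random.default_rng(0)
b, n = 10.0, 4
q = np.zeros(n)
revenue = 0.0
for _ in range(100):                       # arbitrary sequence of trades
    trade = rng.normal(scale=5.0, size=n)
    revenue += lmsr_cost(q + trade, b) - lmsr_cost(q, b)
    q = q + trade

# subsidy = payout to the winning outcome minus revenue collected;
# it is bounded by C(0) = b * ln(n) no matter how trading goes
worst_loss = q.max() - revenue
print(worst_loss <= b * np.log(n) + 1e-9)  # True
```

The bound follows because revenue telescopes to $C(q)-C(0)$ and $C(q)\ge\max_i q_i$, so the loss is at most $C(0)=b\ln n$.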
Relating Leverage Scores and Density using Regularized Christoffel Functions
Statistical leverage scores have emerged as a fundamental tool for matrix sketching and column sampling, with applications to low-rank approximation, regression, random feature learning and quadrature. Yet the very nature of this quantity is barely understood. Borrowing ideas from the orthogonal polynomial literature, we introduce the regularized Christoffel function associated with a positive definite kernel. This uncovers a variational formulation for leverage scores for kernel methods and allows us to elucidate their relationship with the chosen kernel as well as with the population density. Our main result quantitatively describes a decreasing relation between leverage score and population density for a broad class of kernels on Euclidean spaces. Numerical simulations support our findings.
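The claimed decreasing relation can be checked numerically with ridge leverage scores for a Gaussian kernel; the kernel, bandwidth and regularization below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=0.5):
    # Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def ridge_leverage_scores(X, lam=1e-2, sigma=0.5):
    # tau_i(lam) = [K (K + n*lam*I)^{-1}]_{ii}
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    return np.diag(np.linalg.solve(K + n * lam * np.eye(n), K))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1))            # samples from a standard Gaussian
tau = ridge_leverage_scores(X)

low_density = np.abs(X[:, 0]) > 1.5      # tail points: low population density
high_density = np.abs(X[:, 0]) < 0.5     # bulk points: high population density
print(tau[low_density].mean(), tau[high_density].mean())
```

Points in the low-density tails should receive markedly larger leverage scores than points in the dense bulk, matching the decreasing relation the abstract describes.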
Constant Regret, Generalized Mixability, and Mirror Descent
We consider the setting of prediction with expert advice; a learner makes predictions by aggregating those of a group of experts. Under this setting, and for the right choice of loss function and ``mixing'' algorithm, the learner can achieve a constant regret regardless of the number of prediction rounds. For example, a constant regret can be achieved for \emph{mixable} losses using the \emph{aggregating algorithm}. The \emph{Generalized Aggregating Algorithm} (GAA) is a family of algorithms parameterized by convex functions on simplices (entropies), which reduces to the aggregating algorithm when using the \emph{Shannon entropy} $\operatorname{S}$. For a given entropy $\Phi$, losses for which a constant regret is possible using the \textsc{GAA} are called $\Phi$-mixable. Which losses are $\Phi$-mixable was previously left as an open question. We fully characterize $\Phi$-mixability and answer other open questions posed by \cite{Reid2015}. We show that the Shannon entropy $\operatorname{S}$ is fundamental in nature when it comes to mixability; any $\Phi$-mixable loss is necessarily $\operatorname{S}$-mixable, and the lowest worst-case regret of the \textsc{GAA} is achieved using the Shannon entropy. Finally, by leveraging the connection between the \emph{mirror descent algorithm} and the update step of the GAA, we suggest a new \emph{adaptive} generalized aggregating algorithm and analyze its performance in terms of the regret bound.
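The constant-regret phenomenon can be sketched with Vovk's classical aggregating algorithm on the log loss, which is 1-mixable (i.e., mixable under the Shannon entropy); this illustrates the classical algorithm, not the adaptive GAA proposed here, and the simulated experts are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 500, 8
expert_preds = rng.uniform(0.05, 0.95, size=(T, N))  # P(outcome = 1) per expert
outcomes = rng.integers(0, 2, size=T)

log_w = np.zeros(N)                  # uniform prior over experts
learner_loss = 0.0
expert_loss = np.zeros(N)
for t in range(T):
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    p = float(w @ expert_preds[t])   # mixture prediction (substitution for log loss)
    y = outcomes[t]
    learner_loss += -np.log(p) if y == 1 else -np.log(1.0 - p)
    # per-expert log loss this round; weight update w_i *= exp(-eta * loss_i), eta = 1
    step = -np.log(np.where(y == 1, expert_preds[t], 1.0 - expert_preds[t]))
    expert_loss += step
    log_w -= step

regret = learner_loss - expert_loss.min()
print(regret <= np.log(N) + 1e-6)    # True: regret at most ln(N), independent of T
```

For log loss the mixture prediction makes the cumulative learner loss equal to the Bayesian mixture loss $-\ln\frac{1}{N}\sum_i e^{-L_i}$, so the regret to the best expert never exceeds $\ln N$, whatever the horizon $T$.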
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.67)
- South America (0.42)
- North America > United States (0.42)
- (9 more...)
- Transportation > Infrastructure & Services > Airport (0.58)
- Transportation > Air (0.58)
- Law Enforcement & Public Safety > Fire & Emergency Services (0.45)
- Information Technology > Security & Privacy (0.40)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (0.45)
Memory Replay GANs: Learning to Generate New Categories without Forgetting
In this paper we consider the case of generative models. In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion. We first show that sequential fine tuning renders the network unable to properly generate images from previous categories (i.e., catastrophic forgetting).
GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking
Model compression is essential for serving large deep neural nets on devices with limited resources or in applications that require real-time responses. For advanced NLP problems, a neural language model usually consists of recurrent layers (e.g., using LSTM cells), an embedding matrix for representing input tokens, and a softmax layer for generating output tokens. For problems with a very large vocabulary size, the embedding and the softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves state-of-the-art performance on the One-Billion-Word (OBW) dataset with around 800k vocabulary, and its word embedding and softmax matrices use more than 6 GB of space and are responsible for over 90\% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models, based on vocabulary-partition (block) based low-rank matrix approximation and the inherent frequency distribution of tokens (the power-law distribution of words). We start by grouping words into $c$ blocks based on their frequency, and then refine the clustering iteratively by constructing a weighted low-rank approximation for each block, where the weights are based on the frequencies of the words in the block. The experimental results show our method can significantly outperform traditional compression methods such as low-rank approximation and pruning. On the OBW dataset, our method achieved a 6.6x compression rate for the embedding and softmax matrices, and when combined with quantization, it can achieve a 26x compression rate without losing prediction accuracy.
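A toy sketch of the block-wise weighted low-rank idea: sort words by frequency into blocks, then give each block its own row-weighted rank-$r$ approximation so frequent words are reconstructed more faithfully. The block count, rank, Zipfian frequencies and random embeddings below are illustrative assumptions, not the paper's exact procedure (which also refines the clustering iteratively):

```python
import numpy as np

def group_reduce(E, freqs, n_blocks=4, rank=16):
    """Block-wise weighted low-rank compression of an embedding matrix.

    E:     (V, d) embedding matrix
    freqs: (V,) token frequencies (power-law distributed)
    Rows are scaled by sqrt(frequency) before the SVD, which solves the
    frequency-weighted least-squares approximation for each block.
    """
    order = np.argsort(-freqs)                   # most frequent words first
    blocks = np.array_split(order, n_blocks)
    E_hat = np.empty_like(E)
    for idx in blocks:
        w = np.sqrt(freqs[idx])[:, None]
        U, s, Vt = np.linalg.svd(w * E[idx], full_matrices=False)
        r = min(rank, len(s))
        E_hat[idx] = ((U[:, :r] * s[:r]) @ Vt[:r]) / w   # undo the row scaling
    return E_hat

rng = np.random.default_rng(0)
V, d = 1000, 64
E = rng.normal(size=(V, d))
freqs = 1.0 / np.arange(1, V + 1)                # Zipfian frequencies
E_hat = group_reduce(E, freqs)
print(E_hat.shape)                                # (1000, 64)
```

Each block now stores only a $|{\rm block}| \times r$ factor and an $r \times d$ factor instead of a full $|{\rm block}| \times d$ slice, which is where the compression comes from.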
Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance
Large amounts of labeled data are typically required to train deep learning models. For many real-world problems, however, acquiring additional data can be expensive or even impossible. We present semi-supervised deep kernel learning (SSDKL), a semi-supervised regression model based on minimizing predictive variance in the posterior regularization framework. SSDKL combines the hierarchical representation learning of neural networks with the probabilistic modeling capabilities of Gaussian processes. By leveraging unlabeled data, we show improvements on a diverse set of real-world regression tasks over supervised deep kernel learning and semi-supervised methods such as VAT and mean teacher adapted for regression.
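The predictive-variance objective can be sketched with a plain Gaussian-process posterior variance (the deep feature extractor is omitted, and the RBF kernel and hyperparameters are illustrative assumptions): unlabeled points far from the labeled data have high variance, and SSDKL's unsupervised term penalizes the mean of such variances.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def predictive_variance(X_train, X_query, sigma=1.0, noise=0.1):
    # GP posterior variance: k(x, x) - k(x, X)(K + noise*I)^{-1} k(X, x)
    K = rbf(X_train, X_train, sigma) + noise * np.eye(len(X_train))
    Ks = rbf(X_query, X_train, sigma)
    return 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(50, 2))
X_near = rng.uniform(-1, 1, size=(20, 2))    # inside the labeled region
X_far = rng.uniform(4, 5, size=(20, 2))      # far from any labeled point

var_near = predictive_variance(X_train, X_near).mean()
var_far = predictive_variance(X_train, X_far).mean()
# the unsupervised regularizer would be a mean of such variances over unlabeled data
print(var_near < var_far)  # True: variance shrinks near labeled data
```

Minimizing this term with respect to the learned representation pulls unlabeled points toward regions the model already explains, which is the intuition behind the posterior-regularization framing.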
Race on to establish globally recognised 'AI-free' logo
Organisations worldwide are racing to develop a universally recognised label for human-made products and services as part of the growing backlash against AI use. Declarations like "Proudly Human", "Human-made", "No A.I." and "AI-free" are appearing across films, marketing, books and websites. The trend is a response to fears that jobs or entire professions are being swept away in a wave of AI-powered automation. BBC News has counted at least eight different initiatives trying to come up with a label that could achieve the kind of global recognition that the Fair Trade logo has for ethically made products. But with so many competing labels - as well as confusion over the definition of AI-free - experts say consumers are in danger of being left confused unless a single standard can be agreed on.
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.07)
- (11 more...)
'We will go wherever they hide': Rooting out IS in Somalia
'We will go wherever they hide': Rooting out IS in Somalia A figure appears in the picture, moving through a valley. He has been to fetch water for his friends, says the drone operator. He is running and carrying something on his back, adds another soldier. The man on the screen is near a cave, which the army believes is a hideout for 50 to 60 IS fighters. The Puntland Defence Forces have about 500 soldiers stationed at this base in the north-east of Somalia. Ten years ago the barren and inhospitable landscape was home to only a few nomadic communities, but that changed when IS established a foothold here, shifting its focus to Africa as its fighters were driven out of their strongholds in Syria and Iraq.
- Asia > Middle East > Syria (0.26)
- Asia > Middle East > Iraq (0.24)
- North America > Central America (0.14)
- (23 more...)