
Solving Marginal MAP Problems with NP Oracles and Parity Constraints

Neural Information Processing Systems

Arising from many applications at the intersection of decision-making and machine learning, Marginal Maximum A Posteriori (Marginal MAP) problems unify the two main classes of inference, namely maximization (optimization) and marginal inference (counting), and are believed to have higher complexity than both of them. We propose XOR_MMAP, a novel approach that provides a constant-factor approximation to the Marginal MAP problem by encoding it as a single optimization problem whose size is polynomial in that of the original problem. We evaluate our approach in several machine learning and decision-making applications, and show that it outperforms several state-of-the-art Marginal MAP solvers.
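A marginal MAP query maximizes over one subset of variables after summing out the rest, which is what makes it harder than pure maximization or pure counting. The following is a minimal brute-force sketch on a toy two-factor model, just to make the query concrete; it is not the paper's XOR_MMAP algorithm (which replaces the inner counting with parity constraints and NP-oracle calls), and the factor tables are invented for illustration.

```python
import itertools

# Toy unnormalized joint over binary variables (A, B, C), given as a
# product of two hypothetical pairwise factors (values are illustrative).
def joint(a, b, c):
    f1 = [[3.0, 1.0], [1.0, 2.0]][a][b]
    f2 = [[2.0, 1.0], [1.0, 3.0]][b][c]
    return f1 * f2

def marginal_map():
    """Marginal MAP: maximize over A after summing out B and C."""
    best_a, best_score = None, -1.0
    for a in (0, 1):
        # Inner counting problem: marginalize (sum) over B, C.
        score = sum(joint(a, b, c)
                    for b, c in itertools.product((0, 1), repeat=2))
        if score > best_score:
            best_a, best_score = a, score
    return best_a, best_score

print(marginal_map())
```

Even on this toy model the structure is visible: the outer loop is an optimization, but each candidate assignment requires an exponential-size sum, which is exactly the part the paper approximates with parity constraints.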


ATM jackpotting attacks surge across the US

FOX News




Robustness of classifiers: from adversarial to random noise

Neural Information Processing Systems

Several recent works have shown that state-of-the-art classifiers are vulnerable to worst-case (i.e., adversarial) perturbations of the datapoints. On the other hand, it has been empirically observed that these same classifiers are relatively robust to random noise. In this paper, we propose to study a semi-random noise regime that generalizes both the random and worst-case noise regimes. We propose the first quantitative analysis of the robustness of nonlinear classifiers in this general noise regime. We establish precise theoretical bounds on the robustness of classifiers in this general regime, which depend on the curvature of the classifier's decision boundary. Our bounds confirm and quantify the empirical observations that classifiers satisfying curvature constraints are robust to random noise. Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes. We perform experiments and show that the derived bounds provide very accurate estimates when applied to various state-of-the-art deep neural networks and datasets. This result suggests bounds on the curvature of the classifiers' decision boundaries that we support experimentally, and more generally offers important insights into the geometry of high-dimensional classification problems.
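For intuition on the gap between the two regimes, here is a minimal NumPy sketch for a linear classifier, the special zero-curvature case (the paper's contribution is the nonlinear analysis, which this does not reproduce). The weights `w` and point `x` are synthetic; the point is only that the adversarial perturbation needs norm equal to the distance to the boundary, while a random direction must be scaled by a factor that grows with the dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)                  # hypothetical linear classifier weights
x = rng.normal(size=d)
# Place x at distance exactly 2 from the decision boundary {z : w.z = 0}.
x = x - (w @ x) / (w @ w) * w + 2.0 * w / np.linalg.norm(w)

# Worst-case (adversarial) robustness: the distance to the boundary,
# achieved by perturbing along +/- w.
r_adv = abs(w @ x) / np.linalg.norm(w)

# Random-noise robustness: smallest magnitude t such that x + t*v crosses
# the boundary for a random unit direction v.  For a linear classifier only
# the component of v along w matters, so the ratio r_rand / r_adv equals
# ||w|| / |w.v|, which is typically on the order of sqrt(d).
v = rng.normal(size=d)
v /= np.linalg.norm(v)
r_rand = abs(w @ x) / abs(w @ v)

print(r_adv, r_rand)
```

The curvature bounds in the paper quantify how far this linear picture carries over to nonlinear decision boundaries, and how the required noise magnitude interpolates as the noise is constrained to a subspace of growing dimension.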


Discriminative Gaifman Models

Mathias Niepert

Neural Information Processing Systems

Considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable, mitigates the problem of overfitting, and facilitates weight sharing.
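The neighborhoods in question are taken in the Gaifman graph of the knowledge base, where entities are nodes and two entities are adjacent iff they co-occur in some fact. A minimal sketch of that construction, with invented facts and without the paper's additional step of sampling bounded-size subsets from each neighborhood:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical relational facts: (relation, entity, entity).
facts = [
    ("worksAt", "ann", "acme"),
    ("locatedIn", "acme", "berlin"),
    ("livesIn", "bob", "berlin"),
    ("knows", "ann", "bob"),
]

# Gaifman graph: entities are adjacent iff they co-occur in a fact.
adj = defaultdict(set)
for _, *ents in facts:
    for u, v in combinations(ents, 2):
        adj[u].add(v)
        adj[v].add(u)

def neighborhood(node, r):
    """All entities within graph distance r of `node` (BFS)."""
    frontier, seen = {node}, {node}
    for _ in range(r):
        frontier = {v for u in frontier for v in adj[u]} - seen
        seen |= frontier
    return seen

print(sorted(neighborhood("ann", 1)))  # → ['acme', 'ann', 'bob']
```

Because each such neighborhood has bounded radius (and, after sampling, bounded size), inference and learning restricted to it stay tractable regardless of how large the full knowledge base is.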


World's broadcasters urge EU to tighten rules for big tech in smart TV battle

The Guardian

Services such as Google TV and Amazon's Fire TV have recommendation systems, as well as search functions, that may prioritise some content over others. The world's largest broadcasters have pushed for the EU to enforce its toughest regulations against virtual TVs and smart assistants built by Google, Amazon, Apple and Samsung. The call came in a letter from the Association of Commercial Television and Video on Demand Services in Europe (ACT), whose members include Canal+, RTL, Mediaset, ITV, Paramount+, NBCUniversal, Walt Disney, Warner Bros Discovery, Sky and TF1 Groupe. The letter argues that big tech companies have growing control over the operating systems of smart TVs and voice assistants, allowing them to act as "gatekeepers" funnelling users towards some content and away from others.




Learning Sparse Gaussian Graphical Models with Overlapping Blocks

Seyed Mohammad Javad Hosseini, Su-In Lee

Neural Information Processing Systems

The first two terms, log det(Θ) − trace(SΘ), in Eq. (3) correspond to log P(X | Θ), the log-likelihood of the GGM given a particular parameter Θ (i.e., an estimate of Σ⁻¹), as described in Section 2.1.
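These two likelihood terms can be evaluated directly from a precision estimate and the empirical covariance. A minimal NumPy sketch, with synthetic data; the ridge-style inverse used to form the precision estimate here is only a placeholder, not GRAB's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                # n samples, p variables
S = np.cov(X, rowvar=False, bias=True)       # empirical covariance
Theta = np.linalg.inv(S + 0.1 * np.eye(4))   # crude precision (inverse-cov) estimate

# GGM log-likelihood terms (up to additive constants):
#   log det(Theta) - trace(S @ Theta)
sign, logdet = np.linalg.slogdet(Theta)      # slogdet avoids overflow in det
loglik = logdet - np.trace(S @ Theta)
print(sign, loglik)
```

Sparse GGM estimators such as GRAB maximize this quantity plus a structured sparsity penalty on the off-diagonal entries of Θ.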