
Fox News Politics Newsletter: Bondi Backs the Blue

FOX News

Welcome to the Fox News Politics newsletter, with the latest updates on the Trump administration, Capitol Hill and more Fox News politics content. The Justice Department (DOJ) is moving funds formerly granted to groups supporting transgender ideology and diversity, equity and inclusion (DEI) initiatives to law enforcement, Fox News Digital has confirmed. A Justice Department official told Fox News Digital that the DOJ, under Attorney General Pam Bondi's watch, will "not waste" funds on DEI. "The Department of Justice under Pam Bondi will not waste discretionary funds on DEI passion projects that do not make Americans safer," the official told Fox News Digital. "We will use our money to get criminals off the streets, seize drugs, and in some cases, fund programs that deliver a tangible impact for victims of crime."


Local Latent Space Bayesian Optimization over Structured Inputs

Neural Information Processing Systems

Bayesian optimization over the latent spaces of deep autoencoder models (DAEs) has recently emerged as a promising new approach for optimizing challenging black-box functions over structured, discrete, hard-to-enumerate search spaces (e.g., molecules). Here the DAE dramatically simplifies the search space by mapping inputs into a continuous latent space where familiar Bayesian optimization tools can be more readily applied. Despite this simplification, the latent space typically remains high-dimensional. Thus, even with a well-suited latent space, these approaches do not necessarily provide a complete solution, but may rather shift the structured optimization problem to a high-dimensional one. In this paper, we propose LOL-BO, which adapts the notion of trust regions explored in recent work on high-dimensional Bayesian optimization to the structured setting.
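The trust-region mechanic the abstract adapts can be sketched in a few lines, in the spirit of TuRBO-style high-dimensional BO. This is a toy version under stated assumptions: candidates are sampled uniformly rather than ranked by a GP surrogate, the "latent space" is just R^d, and all names and hyperparameters (`grow`, `shrink`, `max_length`) are illustrative, not the paper's.

```python
import random

def trust_region_step(center, length, objective, best_val,
                      n_candidates=32, grow=2.0, shrink=0.5,
                      max_length=2.0, rng=random):
    """One trust-region iteration: sample candidates inside a hypercube
    around the incumbent, then grow the region on success and shrink it
    on failure (a GP surrogate would normally rank the candidates)."""
    cands = [[c + length * (rng.random() - 0.5) for c in center]
             for _ in range(n_candidates)]
    vals = [objective(z) for z in cands]
    i = min(range(n_candidates), key=vals.__getitem__)
    if vals[i] < best_val:  # success: move the center, grow the region
        return cands[i], vals[i], min(length * grow, max_length)
    return center, best_val, length * shrink  # failure: shrink the region
```

In LOL-BO the incumbent lives in the DAE's latent space and candidates are decoded back to structures (e.g., molecules) before evaluation; the shrinking region is what keeps the search local despite the latent space's high dimension.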


WeightedSHAP: analyzing and improving Shapley based feature attributions

Neural Information Processing Systems

The Shapley value is a popular approach for measuring the influence of individual features. While Shapley feature attribution is built upon desiderata from game theory, some of its constraints may be less natural in certain machine learning settings, leading to unintuitive model interpretation. In particular, the Shapley value uses the same weight for all marginal contributions---i.e., it assigns a feature's contribution the same importance whether many or few other features are already present. This property can be problematic if larger feature sets are more or less informative than smaller feature sets. Our work performs a rigorous analysis of the potential limitations of Shapley feature attribution.
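The uniform-weighting issue can be made concrete with a small exact-enumeration sketch: classical Shapley averages a feature's marginal contributions with one weight per coalition size, while a weighted variant lets coalition sizes count differently. This is a toy illustration, not the paper's method (WeightedSHAP learns the weights rather than fixing them), and it enumerates all coalitions, which is only feasible for a handful of features.

```python
from itertools import combinations

def weighted_attribution(value, features, i, size_weights):
    """Weighted average of feature i's marginal contributions, where
    size_weights[s] is the weight on coalitions of size s (summing to 1).
    Uniform size_weights recover the classical Shapley value."""
    others = [f for f in features if f != i]
    total = 0.0
    for s in range(len(features)):
        subs = list(combinations(others, s))
        # mean marginal contribution of i over all coalitions of size s
        mc = sum(value(frozenset(S) | {i}) - value(frozenset(S))
                 for S in subs) / len(subs)
        total += size_weights[s] * mc
    return total
```

For an additive game the marginal contribution is constant, so every normalized weighting returns the same attribution; the weighting only matters when, as the abstract notes, large and small feature sets are differently informative.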


I Thought ChatGPT Was Killing My Students' Skills. It's Killing Something More Important Than That.

Slate

This essay was adapted from Phil Christman's newsletter, the Tourist. Before 2023, my teaching year used to follow a predictable emotional arc. In September, I was always excited, not only about meeting a new crop of first-year writing students but even about the prep work. My lesson-planning sessions would take longer than intended and yet leave me feeling energized. I'd look forward to conference week--the one-on-one meetings I try to hold with every student, every term, at least once--and even to the first stack of papers.


Google offers AI certification for business leaders now - and the training is free

ZDNet

As AI becomes an increasingly common tool for organizations across all industries, studies show that employees are increasingly expected to be knowledgeable about AI. Now, Google is presenting business leaders with a new AI literacy opportunity. On Wednesday, Google Cloud announced a "first-of-its-kind" generative AI certification geared toward non-technical learners, such as managers and business leaders, who want to learn about AI's impacts beyond coding. According to Google, the course focuses on how to strategically adopt, discuss, and lead generative AI efforts. The Google Cloud Generative AI Leader certification exam, which costs $99 and lasts 90 minutes, is available starting May 14.


Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective

Neural Information Processing Systems

Graph neural networks (GNNs) have shown superiority in many prediction tasks over graphs due to their impressive capability of capturing nonlinear relations in graph-structured data. However, for node classification tasks, GNNs in practice often show only marginal improvement over their linear counterparts. Prior work offers little understanding of this phenomenon. In this work, we resort to Bayesian learning to give an in-depth investigation of the functions of non-linearity in GNNs for node classification tasks. Given a graph generated from the statistical model CSBM, we observe that the maximum-a-posteriori estimation of a node label given its own and neighbors' attributes involves two types of non-linearity: a transformation of the node's own attributes and a ReLU-activated feature aggregation from neighbors.
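The two non-linearities named above can be caricatured in a few lines. This is only a structural illustration, not the paper's actual MAP derivation for CSBM: the weight vector `w` and neighbor scale `b` are illustrative placeholders standing in for quantities the statistical model would determine.

```python
def map_node_score(x_self, neighbor_xs, w, b):
    """Score a node label from (1) a transform of its own attributes and
    (2) a ReLU-activated aggregation of its neighbors' features."""
    relu = lambda t: max(t, 0.0)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    own = dot(w, x_self)                              # own-attribute term
    agg = sum(relu(dot(w, x)) for x in neighbor_xs)   # ReLU aggregation
    return own + b * agg
```

The ReLU in the aggregation is the point of interest: dissimilar neighbors are clipped to zero rather than subtracted, which is the kind of non-linear effect a purely linear GNN cannot express.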


Score the Narwal Freo Z10 at its lowest-ever price -- get $200 off at Amazon

Mashable

SAVE 18%: As of May 14, you can get the Narwal Freo Z10 Robot Vacuum and Mop (one of Narwal's newest releases) for $899.99 with an on-screen coupon, down from $1,099.99, at Amazon. It's the lowest price we've seen for this model yet. Paying over a grand for a robot vacuum is a little ridiculous when you can get one with all the bells and whistles for less. If you haven't heard of it, Narwal is known for its AI-powered cleaning robots.


Structural Analysis of Branch-and-Cut and the Learnability of Gomory Mixed Integer Cuts

Neural Information Processing Systems

The incorporation of cutting planes within the branch-and-bound algorithm, known as branch-and-cut, forms the backbone of modern integer programming solvers. These solvers are the foremost method for solving discrete optimization problems and thus have a vast array of applications in machine learning, operations research, and many other fields. Choosing cutting planes effectively is a major research topic in the theory and practice of integer programming. We conduct a novel structural analysis of branch-and-cut that pins down how every step of the algorithm is affected by changes in the parameters defining the cutting planes added to the input integer program. Our main application of this analysis is to derive sample complexity guarantees for using machine learning to determine which cutting planes to apply during branch-and-cut.
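For intuition about the parameterized cuts being analyzed, here is the simplest member of the Gomory family, the fractional cut, derived from a single simplex-tableau row. The Gomory mixed integer (GMI) cuts the paper studies are a strengthened variant with a more involved coefficient formula; this sketch only shows the basic rounding idea.

```python
import math

def gomory_fractional_cut(row, rhs):
    """Given a tableau row  x_B + sum_j row[j]*x_j = rhs  with all
    variables integer and rhs fractional, return the valid cut
    sum_j frac(row[j]) * x_j >= frac(rhs)."""
    frac = lambda a: a - math.floor(a)
    return [frac(a) for a in row], frac(rhs)
```

Every integer-feasible point satisfies the returned inequality, while the current fractional LP optimum (where all non-basic x_j are zero) violates it, which is what makes the cut useful inside branch-and-cut.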


Near-Optimal Correlation Clustering with Privacy

Neural Information Processing Systems

Correlation clustering is a central problem in unsupervised learning, with applications spanning community detection, duplicate detection, automated labeling and many more. In the correlation clustering problem one receives as input a set of nodes and for each node a list of co-clustering preferences, and the goal is to output a clustering that minimizes the disagreement with the specified nodes' preferences. In this paper, we introduce a simple and computationally efficient algorithm for the correlation clustering problem with provable privacy guarantees. Our additive error is stronger than those obtained in prior work and is optimal up to polylogarithmic factors for fixed privacy parameters.
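The classical (non-private) pivot algorithm gives a feel for the problem setup. This sketch is not the paper's algorithm: the paper's contribution is achieving near-optimal additive error while also guaranteeing privacy, which this plain version does not attempt.

```python
import random

def pivot_correlation_clustering(nodes, similar, rng=random):
    """Pivot algorithm: pick a random remaining node, cluster it with
    every remaining node that prefers to be co-clustered with it."""
    remaining = list(nodes)
    clusters = []
    while remaining:
        p = rng.choice(remaining)
        cluster = [p] + [u for u in remaining if u != p and similar(p, u)]
        clusters.append(cluster)
        remaining = [u for u in remaining if u not in cluster]
    return clusters
```

Each node's co-clustering preferences enter only through the `similar` predicate; a private variant must additionally ensure the output reveals little about any single node's preference list.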


VICE: Variational Interpretable Concept Embeddings

Neural Information Processing Systems

A central goal in the cognitive sciences is the development of numerical models for mental representations of object concepts. This paper introduces Variational Interpretable Concept Embeddings (VICE), an approximate Bayesian method for embedding object concepts in a vector space using data collected from humans in a triplet odd-one-out task. VICE uses variational inference to obtain sparse, non-negative representations of object concepts with uncertainty estimates for the embedding values. These estimates are used to automatically select the dimensions that best explain the data. We derive a PAC learning bound for VICE that can be used to estimate generalization performance or determine a sufficient sample size for experimental design.
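The triplet odd-one-out task itself is easy to sketch: given embeddings, the most similar pair is kept together and the remaining item is predicted as the odd one out. This is a simplified deterministic rule for illustration only; VICE models the choice probabilistically, with variational uncertainty over the embedding values.

```python
def odd_one_out(emb, triplet):
    """Predict the odd item in a triplet: the most-similar pair (by dot
    product) stays together; the third item is the odd one out."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    i, j, k = triplet
    pairs = [((i, j), k), ((i, k), j), ((j, k), i)]
    return max(pairs, key=lambda p: dot(emb[p[0][0]], emb[p[0][1]]))[1]
```

Fitting the embeddings so that this kind of rule reproduces human triplet judgments, with sparse non-negative dimensions and uncertainty estimates, is what the variational method provides.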