
Lifting Weak Supervision To Structured Prediction

Neural Information Processing Systems

For labels taking values in a finite metric space, we introduce techniques new to weak supervision based on pseudo-Euclidean embeddings and tensor decompositions, providing a nearly-consistent noise rate estimator.
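The pseudo-Euclidean embedding idea can be sketched with classical multidimensional scaling extended to signed eigenvalues: a finite metric that cannot be embedded in Euclidean space still embeds exactly once negative-signature coordinates are allowed. A minimal numpy sketch, using a made-up 4-point cycle metric (this is an illustration of the embedding concept, not the paper's estimator):

```python
import numpy as np

# Toy distance matrix on a 4-point finite metric space (hypothetical values,
# a 4-cycle metric that is not Euclidean-embeddable).
D = np.array([
    [0.0, 1.0, 2.0, 1.0],
    [1.0, 0.0, 1.0, 2.0],
    [2.0, 1.0, 0.0, 1.0],
    [1.0, 2.0, 1.0, 0.0],
])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J              # Gram-like matrix; may be indefinite

eigvals, eigvecs = np.linalg.eigh(B)
# Keep eigenvalues of either sign: positive ones give "Euclidean" coordinates,
# negative ones give coordinates in the pseudo-Euclidean (negative-signature) part.
keep = np.abs(eigvals) > 1e-9
X = eigvecs[:, keep] * np.sqrt(np.abs(eigvals[keep]))
signs = np.sign(eigvals[keep])

def pseudo_sqdist(i, j):
    # Signed squared distance: sum of signed squared coordinate differences.
    diff = X[i] - X[j]
    return float(np.sum(signs * diff ** 2))

# The signed inner product reproduces the original squared distances exactly.
assert np.allclose(pseudo_sqdist(0, 2), D[0, 2] ** 2)
```

The double-centering identity guarantees exact recovery of every pairwise distance, even though here one eigenvalue of B is negative, so no purely Euclidean embedding exists.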






Neural Information Processing Systems

Additional notation. For a matrix A ∈ ℝ^{d1×d2}, ‖A‖_op is the operator norm (with respect to Euclidean norms), and ‖A‖_F is the Frobenius norm of A. The main intuition behind the HMM considered in this paper comes from the correlation decay phenomenon in graphical models. Informally, we expect that there is one sign flip (i.e., S_i ≠ S_{i+1}) per 1/δ samples. To begin the analysis of the estimator in Figure 2, the following lemma is a simple yet key tool for the proof. It establishes the variance of the random gain S.
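The notation and the flip-rate intuition can be made concrete with a small numpy sketch (δ = 0.05 is an arbitrary illustrative value, and the chain below is a plain two-state sign chain, not the paper's full HMM):

```python
import numpy as np

# Notation: operator norm (largest singular value) vs. Frobenius norm.
A = np.array([[3.0, 0.0], [4.0, 5.0]])
op_norm = np.linalg.norm(A, ord=2)       # ||A||_op
fro_norm = np.linalg.norm(A, ord='fro')  # ||A||_F
assert op_norm <= fro_norm               # always holds

# Correlation-decay intuition: a +/-1 sign chain that flips with probability
# delta per step shows, on average, one sign flip per 1/delta samples.
rng = np.random.default_rng(0)
delta, T = 0.05, 200_000
flips = rng.random(T) < delta
S = np.where(np.cumsum(flips) % 2 == 0, 1, -1)
avg_samples_per_flip = T / flips.sum()   # should be close to 1/delta = 20
```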



Convexity Certificates from Hessians (Supplementary Material)

Neural Information Processing Systems

The formal language for mathematical expressions to which our certification algorithm is applied is specified by the grammar depicted in Figure 1. The language is rich enough to cover all the examples in the main paper and this supplement. In this grammar, number is a placeholder for an arbitrary floating-point number, variable is a placeholder for variable names starting with a Latin character, and function is a placeholder for the supported elementary differentiable functions like exp, log, and sum. Here, ' is used for transposition and a preceding . marks elementwise operations. Some examples from the language (the first example uses a transposition and the fifth and seventh examples use elementwise operations):

  2-norm ‖Xw − y‖²:  (X*w-y)'*(X*w-y)
  logistic log(1+exp(x)):  log(1+exp(x))
  quadratic x²:  x^2
  relative entropy x·log(x/y):  x*log(x/y), x>0, y>0
  logistic regression

Our implementation of the Hessian approach works on vectorized and normalized expression DAGs (directed acyclic graphs) for Hessians that contain every subexpression exactly once.
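As a numeric sanity check on the Hessian criterion (sampled spot checks, not the paper's symbolic certification algorithm), one can verify that the Hessian of the relative-entropy example is positive semidefinite on its domain x > 0, y > 0. The Hessian entries below are hand-derived from f(x, y) = x·log(x/y):

```python
import numpy as np

# Hand-derived Hessian of f(x, y) = x*log(x/y):
# f_xx = 1/x, f_xy = -1/y, f_yy = x/y**2.
def hessian(xv, yv):
    return np.array([[1.0 / xv, -1.0 / yv],
                     [-1.0 / yv, xv / yv ** 2]])

# Numerically check PSD-ness at sampled domain points (a sanity check only:
# a formal convexity certificate must hold symbolically on the whole domain).
rng = np.random.default_rng(0)
for _ in range(1000):
    xv, yv = rng.uniform(0.1, 10.0, size=2)
    eig = np.linalg.eigvalsh(hessian(xv, yv))
    assert eig.min() > -1e-9
```

Note that this Hessian is singular (its determinant is identically zero), so the smallest eigenvalue sits exactly at the PSD boundary.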


An analytic theory of shallow networks dynamics for hinge loss classification -- Supplementary Material

Neural Information Processing Systems

In physical systems a particle instead interacts only with a finite number of other particles, hence the density field remains highly fluctuating. The effect of the θ(w·x) term is to select one particular half-space over which the integral is done. To estimate the fluctuations due to a finite number of nodes, we will have to estimate the width of the output distribution for a given set of parameters. To estimate the error in Figure 1d of the main text, we ask what are the values of x_k = x cos θ such that the average output plus or minus a standard deviation, divided by M, would be equal to the threshold. Since the standard deviation involves |x|², we estimate its average value for points with a given x_k, i.e.
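The "width of the output distribution" can be previewed with a Monte-Carlo sketch. All specifics here (Gaussian hidden weights, ±1 output weights, M = 100) are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Output of a shallow net with step activation theta(w . x) at a fixed input x,
# over random draws of the parameters; we measure the spread (width) of f(x).
rng = np.random.default_rng(0)
M, d, trials = 100, 5, 2000
x = rng.normal(size=d)

outputs = np.empty(trials)
for t in range(trials):
    w = rng.normal(size=(M, d))              # hidden weights
    a = rng.choice([-1.0, 1.0], size=M)      # +/-1 output weights (assumption)
    outputs[t] = a @ (w @ x > 0).astype(float)

# Each unit contributes a_i * theta(w_i . x), which is 0 with probability 1/2
# and +/-1 with probability 1/4 each, so its variance is 1/2; summing M
# independent units gives a width (std) near sqrt(M / 2).
width = outputs.std()
```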



Decomposable Neuro Symbolic Regression

Morales, Giorgio, Sheppard, John W.

arXiv.org Artificial Intelligence

Symbolic regression (SR) models complex systems by discovering mathematical expressions that capture underlying relationships in observed data. However, most SR methods prioritize minimizing prediction error over identifying the governing equations, often producing overly complex or inaccurate expressions. To address this, we present a decomposable SR method that generates interpretable multivariate expressions leveraging transformer models, genetic algorithms (GAs), and genetic programming (GP). In particular, our explainable SR method distills a trained "opaque" regression model into mathematical expressions that serve as explanations of its computed function. Our method employs a Multi-Set Transformer to generate multiple univariate symbolic skeletons that characterize how each variable influences the opaque model's response. We then evaluate the generated skeletons' performance using a GA-based approach to select a subset of high-quality candidates before incrementally merging them via a GP-based cascade procedure that preserves their original skeleton structure. The final multivariate skeletons undergo coefficient optimization via a GA. We evaluated our method on problems with controlled and varying degrees of noise, demonstrating lower or comparable interpolation and extrapolation errors compared to two GP-based methods, three neural SR methods, and a hybrid approach. Unlike them, our approach consistently learned expressions that matched the original mathematical structure.
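The final coefficient-optimization step can be illustrated in miniature. This is a toy stand-in: the paper uses a GA, whereas the sketch below exploits that a skeleton like c0*sin(c1*x) + c2 is linear in two of its constants and simply grid-searches the third; the target expression and all constants are made up:

```python
import numpy as np

# Fit the constants of a fixed symbolic skeleton c0*sin(c1*x) + c2 to data.
# Given c1, the model is linear in (c0, c2), so solve those by least squares
# and scan c1 over a grid (a deterministic stand-in for the paper's GA).
x = np.linspace(0, 2 * np.pi, 200)
y = 1.5 * np.sin(2.0 * x) + 0.3          # ground-truth expression (hypothetical)

best = (np.inf, None)
for c1 in np.linspace(0.1, 3.0, 291):    # grid step 0.01
    A = np.column_stack([np.sin(c1 * x), np.ones_like(x)])
    (c0, c2), *_ = np.linalg.lstsq(A, y, rcond=None)
    err = np.mean((A @ np.array([c0, c2]) - y) ** 2)
    if err < best[0]:
        best = (err, (c0, c1, c2))
```

Because the skeleton structure is fixed up front, the search space collapses to a handful of constants, which is what makes the skeleton-first pipeline cheap compared to searching over whole expressions.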