Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces
Chuan Guo, Ali Mousavi, Xiang Wu, Daniel N. Holtmann-Rice, Satyen Kale, Sashank Reddi, Sanjiv Kumar
In extreme classification settings, embedding-based neural network models are currently not competitive with sparse linear and tree-based methods in terms of accuracy. Most prior works attribute this poor performance to the low-dimensional bottleneck in embedding-based methods. In this paper, we demonstrate that theoretically there is no limitation to using low-dimensional embedding-based methods, and provide experimental evidence that overfitting is the root cause of their poor performance. These findings motivate us to investigate novel data augmentation and regularization techniques to mitigate overfitting. To this end, we propose GLaS, a new regularizer for embedding-based neural network approaches. It is a natural generalization of the graph Laplacian and spread-out regularizers, and empirically it addresses the drawbacks that each regularizer exhibits on its own in the extreme classification setup. With the proposed techniques, we attain or improve upon the state-of-the-art on most widely tested public extreme classification datasets with hundreds of thousands of labels.
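As a rough illustration of the idea, the sketch below combines a spread-out-style target (the identity, pushing distinct label embeddings apart) with a graph-style target built from label co-occurrence (pulling labels that co-occur closer together) into a single Frobenius-norm penalty. The function name, the interpolation weight alpha, and the normalization are illustrative assumptions; the precise form of the GLaS regularizer is given in the paper.

```python
import numpy as np

def glas_style_regularizer(V, A, alpha=0.5):
    """Illustrative GLaS-style penalty on a label-embedding matrix.

    V     : (L, d) array of label embeddings.
    A     : (L, L) normalized label co-occurrence matrix.
    alpha : interpolation weight; alpha=1 gives a spread-out-like target
            (identity), alpha=0 a pure co-occurrence (graph) target.
    This is a sketch of the combined penalty, not the paper's exact form.
    """
    L = V.shape[0]
    target = alpha * np.eye(L) + (1.0 - alpha) * A
    gram = V @ V.T  # pairwise similarities between label embeddings
    return np.sum((gram - target) ** 2) / (L * L)

# Toy usage: 5 labels with 3-dimensional embeddings and a random,
# symmetrized, row-normalized co-occurrence matrix.
rng = np.random.default_rng(0)
V = rng.normal(size=(5, 3))
A = rng.random((5, 5))
A = (A + A.T) / 2.0
A = A / A.sum(axis=1, keepdims=True)
print(glas_style_regularizer(V, A))
```

In training, a penalty of this kind would be added to the classification loss with a tunable weight.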
DATASHEET: Recognition-by-Components Degraded Polygons
For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled?
This dataset was created in order to study modern deep learning vision systems from a principled, cognitive science perspective. In particular, we aim to study human and machine vision misalignment by analyzing their behavior on the image recovery task. We revisit and modernize the theory of Recognition-by-Components by generating this dataset.
Who created this dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?
Effective Dimension in Bandit Problems under Censorship
In this paper, we study both multi-armed and contextual bandit problems in censored environments. Our goal is to estimate the performance loss due to censorship in the context of classical algorithms designed for uncensored environments. Our main contributions include the introduction of a broad class of censorship models and their analysis in terms of the effective dimension of the problem, a natural measure of its underlying statistical complexity and the main driver of the regret bound. In particular, the effective dimension allows us to maintain the structure of the original problem at first order, while embedding it in a bigger space, and thus naturally leads to results analogous to uncensored settings. Our analysis involves a continuous generalization of the Elliptical Potential Inequality, which we believe is of independent interest. We also discover an interesting property of decision-making under censorship: a transient phase during which initial misspecification of censorship is self-corrected at an extra cost, followed by a stationary phase that reflects the inherent slowdown of learning governed by the effective dimension. Our results are useful for applications of sequential decision-making models where the feedback received depends on strategic uncertainty (e.g., agents' willingness to follow a recommendation) and/or random uncertainty (e.g., loss or delay in arrival of information).
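For context, the standard (discrete) Elliptical Potential Inequality that the paper generalizes can be stated as follows; the continuous generalization itself is not reproduced here, and the statement below follows the usual linear-bandit formulation rather than the paper's notation. For $x_1, \dots, x_T \in \mathbb{R}^d$ with $\|x_t\| \le 1$, $\lambda > 0$, and $V_t = \lambda I + \sum_{s \le t} x_s x_s^\top$,
$$\sum_{t=1}^{T} \min\!\left(1, \|x_t\|_{V_{t-1}^{-1}}^2\right) \;\le\; 2 \log \frac{\det V_T}{\det(\lambda I)} \;\le\; 2 d \log\!\left(1 + \frac{T}{d\lambda}\right),$$
which is the quantity whose growth rate, through the (effective) dimension $d$, drives the regret bound.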
Multi-label Contrastive Predictive Coding
Variational mutual information (MI) estimators are widely used in unsupervised representation learning methods such as contrastive predictive coding (CPC). A lower bound on MI can be obtained from a multi-class classification problem, where a critic attempts to distinguish a positive sample drawn from the underlying joint distribution from (m - 1) negative samples drawn from a suitable proposal distribution. Using this approach, MI estimates are bounded above by log m, and could thus severely underestimate the true MI unless m is very large. To overcome this limitation, we introduce a novel estimator based on a multi-label classification problem, where the critic needs to jointly identify multiple positive samples at the same time. We show that using the same number of negative samples, multi-label CPC is able to exceed the log m bound, while still being a valid lower bound of mutual information. We demonstrate that the proposed approach leads to better mutual information estimation, gains empirical improvements in unsupervised representation learning, and beats a current state-of-the-art knowledge distillation method on 10 out of 13 tasks.
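For reference, the log m ceiling mentioned above comes from the standard multi-class (CPC/InfoNCE-style) bound; the notation below is one common way of writing it and is not taken from the paper. With one positive sample $y_1 \sim p(y \mid x)$, $m - 1$ negatives $y_2, \dots, y_m$ drawn from the proposal distribution, and a critic $f$,
$$I(X;Y) \;\ge\; \mathbb{E}\!\left[\log \frac{e^{f(x, y_1)}}{\frac{1}{m}\sum_{i=1}^{m} e^{f(x, y_i)}}\right],$$
and since the ratio inside the logarithm is at most $m$, the resulting estimate can never exceed $\log m$.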
The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the xAUC Metric
Where machine-learned predictive risk scores inform high-stakes decisions, such as bail and sentencing in criminal justice, fairness has been a serious concern. Recent work has characterized the disparate impact that such risk scores can have when used for a binary classification task. This may not account, however, for the more diverse downstream uses of risk scores and their non-binary nature. To better account for this, in this paper, we investigate the fairness of predictive risk scores from the point of view of a bipartite ranking task, where one seeks to rank positive examples higher than negative ones. We introduce the xAUC disparity as a metric to assess the disparate impact of risk scores and define it as the difference between the probability of ranking a random positive example from one protected group above a random negative example from the other group and the probability of the reverse. We provide a decomposition of the bipartite ranking loss into components that involve this cross-group discrepancy and components that involve pure predictive ability within each group. We use xAUC analysis to audit predictive risk scores for recidivism prediction, income prediction, and cardiac arrest prediction, where it reveals disparities that are not evident from simply comparing within-group predictive performance.
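In one plausible notation (not taken verbatim from the paper), for a risk score $R$, outcome $Y \in \{0, 1\}$, and protected groups $a$ and $b$, the quantities described above can be written as
$$\mathrm{xAUC}(a, b) \;=\; \Pr\big(R(X) > R(X') \;\big|\; Y = 1, A = a,\; Y' = 0, A' = b\big),$$
$$\Delta\mathrm{xAUC}(a, b) \;=\; \mathrm{xAUC}(a, b) - \mathrm{xAUC}(b, a),$$
so the disparity compares how reliably positives from one group are ranked above negatives from the other group, and vice versa.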
LinkedIn hit with lawsuit alleging private messages were used to train AI models
LinkedIn is facing a class-action lawsuit over allegations that it used private messages to train its AI models. The lawsuit, filed in the U.S. District Court for the Northern District of California, accuses the Microsoft-owned professional networking site of "unlawfully disclosing its Premium customers' private messages to third parties" and "concealing" its practices by "stealthily altering its privacy policies and statements." A key part of the lawsuit accuses LinkedIn of disclosing private InMail messages to third parties to train its model. A spokesperson for LinkedIn said, "we are not using member messages to train models as alleged in the complaint." The issue of obtaining training data for AI models is a contentious one, and LinkedIn is not the first company to be accused of misconduct.
Grounded Mathematical Proof Generation with Language Models
Theorem proving in natural mathematical language - the mixture of symbolic and natural language used by humans - plays a central role in mathematical advances and education, and tests aspects of reasoning that are core to intelligence. Yet it has remained underexplored with modern generative models. We study large-scale language models on two new generation tasks: suggesting the next step in a mathematical proof, and full proof generation.
Learning Certified Individually Fair Representations
Fair representation learning provides an effective way of enforcing fairness constraints without compromising utility for downstream users. A desirable family of such fairness constraints, each requiring similar treatment for similar individuals, is known as individual fairness. In this work, we introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points. The key idea is to map similar individuals to close latent representations and leverage this latent proximity to certify individual fairness.
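A schematic version of such a certificate, under the assumption that similarity is captured by a metric $d$ and the downstream classifier is robust in the latent space (the paper's exact formulation may differ), is as follows. If the encoder $f$ satisfies $d(x, x') \le \delta \Rightarrow \|f(x) - f(x')\| \le \epsilon$ for all pairs of similar individuals, and the downstream classifier $h$ is certified to be constant on the ball $B_\epsilon(f(x))$, then
$$d(x, x') \le \delta \;\Longrightarrow\; h(f(x)) = h(f(x')),$$
i.e., every individual similar to $x$ is guaranteed to receive the same decision.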
SAMRS: Scaling-up Remote Sensing Segmentation Dataset with Segment Anything Model: Supplementary Material
A.1 Category Abbreviations. For the SOTA, SIOR, and FAST datasets, we list all category abbreviations.
B.1 Experiment Settings. We present the experiment settings of pre-training and fine-tuning in Tables S1-S2 (Table S1: basic settings in experiments).
B.2 SAMRS Training and Validation Sets. For the experiments based on the SAMRS dataset (see Table 2 in the main text), 95% of the samples in each subset are used for pre-training.
This work was partially done during Di Wang's internship at iFlytek.