Expert-Agnostic Learning to Defer
Strong, Joshua, Saha, Pramit, Ibrahim, Yasin, Ouyang, Cheng, Noble, Alison
Recent advancements in this field have introduced features enabling flexibility to unseen experts at test-time, but we find these approaches have significant limitations. To address these, we introduce EA-L2D: Expert-Agnostic Learning to Defer, a novel L2D framework that leverages a Bayesian approach to model expert behaviour in an expert-agnostic manner, facilitating optimal deferral decisions. EA-L2D offers several critical improvements over prior methods, including the ability to incorporate prior knowledge about experts, a reduced reliance on expert-annotated data, and robust performance when deferring to experts with expertise not seen during training.
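The abstract describes modelling expert behaviour in a Bayesian, expert-agnostic way to drive deferral decisions. As a hedged illustration of that idea (not the EA-L2D algorithm itself), the sketch below keeps a Beta-Bernoulli posterior over an expert's per-class accuracy, updated from a handful of context predictions, and defers when the expert's expected accuracy on the model's predicted class exceeds the model's confidence; all names, priors, and the deferral rule are assumptions.

```python
import numpy as np

# Illustrative sketch only (not the EA-L2D method): a Beta-Bernoulli model of
# per-class expert accuracy, updated from a few "context predictions", used to
# decide whether to defer the current input to the expert.

def expert_posterior(context_labels, context_preds, n_classes, a0=1.0, b0=1.0):
    """Beta(a, b) posterior over the expert's accuracy for each class."""
    a = np.full(n_classes, a0)
    b = np.full(n_classes, b0)
    for y, p in zip(context_labels, context_preds):
        if p == y:
            a[y] += 1.0   # expert was correct on an example of class y
        else:
            b[y] += 1.0   # expert was wrong on an example of class y
    return a, b

def defer(model_probs, a, b):
    """Defer when the expert's expected accuracy on the model's predicted
    class exceeds the model's own confidence."""
    k = int(np.argmax(model_probs))
    expert_acc = a[k] / (a[k] + b[k])   # posterior mean of Beta(a_k, b_k)
    return expert_acc > model_probs[k]

# Toy usage: the expert looks reliable on class 2, the model is unsure about it.
a, b = expert_posterior(context_labels=[2, 2, 0], context_preds=[2, 2, 1], n_classes=3)
print(defer(model_probs=np.array([0.2, 0.3, 0.5]), a=a, b=b))   # True -> defer
```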
Efficient Conditionally Invariant Representation Learning
Pogodin, Roman, Deka, Namrata, Li, Yazhe, Sutherland, Danica J., Veitch, Victor, Gretton, Arthur
We introduce the Conditional Independence Regression CovariancE (CIRCE), a measure of conditional independence for multivariate continuous-valued variables. CIRCE applies as a regularizer in settings where we wish to learn neural features $\varphi(X)$ of data $X$ to estimate a target $Y$, while being conditionally independent of a distractor $Z$ given $Y$. Both $Z$ and $Y$ are assumed to be continuous-valued but relatively low dimensional, whereas $X$ and its features may be complex and high dimensional. Relevant settings include domain-invariant learning, fairness, and causal learning. The procedure requires just a single ridge regression from $Y$ to kernelized features of $Z$, which can be done in advance. It is then only necessary to enforce independence of $\varphi(X)$ from residuals of this regression, which is possible with attractive estimation properties and consistency guarantees. By contrast, earlier measures of conditional feature dependence require multiple regressions for each step of feature learning, resulting in more severe bias and variance, and greater computational cost. When sufficiently rich features are used, we establish that CIRCE is zero if and only if $\varphi(X) \perp \!\!\! \perp Z \mid Y$. In experiments, we show superior performance to previous methods on challenging benchmarks, including learning conditionally invariant image features.
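The abstract's recipe (a single ridge regression from $Y$ to kernelized features of $Z$ done in advance, followed by penalising dependence between $\varphi(X)$ and the regression residuals) can be sketched in a simplified, finite-dimensional form. The snippet below uses random Fourier features as a stand-in for the kernel on $Z$ and a plain cross-covariance penalty; function names, feature dimensions, and the exact regulariser form are illustrative assumptions, not the authors' implementation.

```python
import torch

# Simplified CIRCE-style regulariser sketch: regress random features of Z on Y
# once, then penalise the cross-covariance between learned features phi(X) and
# the regression residuals. All details here are assumptions for illustration.

def rff(z, n_feat=128, bandwidth=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel on z."""
    g = torch.Generator().manual_seed(seed)
    w = torch.randn(z.shape[1], n_feat, generator=g) / bandwidth
    b = 2 * torch.pi * torch.rand(n_feat, generator=g)
    return torch.cos(z @ w + b) * (2.0 / n_feat) ** 0.5

def fit_ridge(y, psi_z, lam=1e-2):
    """Precompute ridge weights mapping y (with a bias term) to features of Z."""
    y1 = torch.cat([y, torch.ones(len(y), 1)], dim=1)
    eye = torch.eye(y1.shape[1])
    return torch.linalg.solve(y1.T @ y1 + lam * eye, y1.T @ psi_z)

def circe_penalty(phi_x, y, psi_z, W):
    """Squared Frobenius norm of the cross-covariance between phi(X) and the
    residuals of the (precomputed) Z-from-Y ridge regression."""
    y1 = torch.cat([y, torch.ones(len(y), 1)], dim=1)
    resid = psi_z - y1 @ W                       # residual features of Z given Y
    phi_c = phi_x - phi_x.mean(0, keepdim=True)  # centre the learned features
    cov = phi_c.T @ resid / len(phi_x)
    return (cov ** 2).sum()

# Toy usage with random data and an identity "feature extractor" for X.
x, y, z = torch.randn(256, 10), torch.randn(256, 1), torch.randn(256, 1)
psi_z = rff(z)
W = fit_ridge(y, psi_z)            # done once, before feature learning starts
print(circe_penalty(x, y, psi_z, W))
```

In training, the penalty would be added to the task loss so that gradients shape $\varphi(X)$ toward conditional independence from $Z$ given $Y$, while the ridge step itself never needs to be repeated.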
Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection
Oldenhof, Martijn, Arany, Adam, Moreau, Yves, De Brouwer, Edward
Training object detection models usually requires instance-level annotations, such as the positions and labels of all objects present in each image. Such supervision is unfortunately not always available and, more often, only image-level information is provided, also known as weak supervision. Recent works have addressed this limitation by leveraging knowledge from a richly annotated domain. However, the scope of weak supervision supported by these approaches has been very restrictive, preventing them from using all available information. In this work, we propose ProbKT, a framework based on probabilistic logical reasoning that allows object detection models to be trained with arbitrary types of weak supervision. We empirically show on different datasets that using all available information is beneficial, as ProbKT leads to significant improvement on the target domain and better generalization compared to existing baselines. We also showcase the ability of our approach to handle complex logic statements as the supervision signal.
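To make the idea of image-level logical supervision concrete, the toy sketch below scores how likely a detector's per-box class probabilities make a simple statement such as "the image contains class 0 and no class 2", assuming independent boxes. It is an illustrative stand-in, not ProbKT's probabilistic logical reasoning engine, and all names are hypothetical.

```python
import torch

# Toy sketch: turn an image-level logical statement into a differentiable loss
# over a detector's per-box class probabilities (independence of boxes assumed).

def p_at_least_one(box_probs, cls):
    """P("image contains at least one object of class cls") = 1 - prod_b (1 - p_b,cls)."""
    return 1.0 - torch.prod(1.0 - box_probs[:, cls])

def weak_label_loss(box_probs, present, absent):
    """Negative log-probability that the image-level statement holds:
    every class in `present` appears at least once and no class in `absent` does."""
    logp = 0.0
    for c in present:
        logp = logp + torch.log(p_at_least_one(box_probs, c) + 1e-8)
    for c in absent:
        logp = logp + torch.log(1.0 - p_at_least_one(box_probs, c) + 1e-8)
    return -logp

# Toy usage: 4 candidate boxes, 3 classes; the image is labelled
# "contains class 0, contains no class 2".
box_probs = torch.softmax(torch.randn(4, 3), dim=1)
print(weak_label_loss(box_probs, present=[0], absent=[2]))
```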