Appendix Outline
Hence, we rely on subgradients defined in Equation 7. Since many subgradient directions exist at the margin points, for consistency we stick with ∂_x ℓ_γ(w; (x, y)) = {0} when y⟨w, x⟩ = γ. Note that the set of points in X satisfying this equality is a zero-measure set. For simplicity, we shall treat the projection operation as simply renormalizing w^(t+1) to have unit norm, i.e., ‖w^(t+1)‖_2 = 1 for all t ≥ 0. This is not necessarily restrictive.
A.1 Technical Lemmas
In this section we state some technical lemmas without proof, with references to works that contain the full proofs. We use these in the following sections when proving our lemmas from Section 5.
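The projected subgradient step described above can be sketched in a few lines. This is a minimal illustration, assuming a hinge-style margin loss max(0, γ − y⟨w, x⟩); the function names `margin_subgrad` and `projected_step` are hypothetical, and the renormalization stands in for the simplified projection onto the unit sphere:

```python
import numpy as np

def margin_subgrad(w, x, y, gamma):
    # Subgradient of a hinge-style margin loss max(0, gamma - y<w, x>).
    # At margin points (y<w, x> == gamma) many subgradients exist; following
    # the convention above, we pick 0 (covered by the >= condition below).
    if y * np.dot(w, x) >= gamma:
        return np.zeros_like(w)
    return -y * x

def projected_step(w, x, y, gamma, eta):
    # One subgradient descent step, then the simplified "projection":
    # renormalize so that ||w^(t+1)||_2 = 1.
    w_new = w - eta * margin_subgrad(w, x, y, gamma)
    return w_new / np.linalg.norm(w_new)
```

After every step the iterate lies on the unit sphere, matching the constraint ‖w^(t+1)‖_2 = 1 stated above.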
- North America > United States > Virginia (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Middle East > Jordan (0.04)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.68)
Adversarial Unlearning: Reducing Confidence Along Adversarial Directions
Supervised learning methods trained with maximum likelihood objectives often overfit on training data. Most regularizers that prevent overfitting look to increase confidence on additional examples (e.g., data augmentation, adversarial training), or reduce it on training data (e.g., label smoothing). In this work we propose a complementary regularization strategy that reduces confidence on self-generated examples. The method, which we call RCAD (Reducing Confidence along Adversarial Directions), aims to reduce confidence on out-of-distribution examples lying along directions adversarially chosen to increase training loss. In contrast to adversarial training, RCAD does not try to robustify the model to output the original label, but rather regularizes it to have reduced confidence on points generated using much larger perturbations than in conventional adversarial training. RCAD can be easily integrated into training pipelines with a few lines of code. Despite its simplicity, we find on many classification benchmarks that RCAD can be added to existing techniques (e.g., label smoothing, MixUp training) to increase test accuracy by 1-3% in absolute value, with more significant gains in the low data regime. We also provide a theoretical analysis that helps to explain these benefits in simplified settings, showing that RCAD can provably help the model unlearn spurious features in the training data.
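Since the abstract notes that RCAD "can be easily integrated into training pipelines with a few lines of code," here is a minimal sketch of the idea for binary logistic regression. The step size `alpha`, the weight `lam`, and the use of predictive entropy as the confidence penalty are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_entropy(p):
    eps = 1e-12
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

def rcad_loss(w, x, y, alpha=1.0, lam=0.1):
    # Standard cross-entropy on the clean training point (y in {0, 1}).
    eps = 1e-12
    p = sigmoid(np.dot(w, x))
    ce = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    # Adversarial direction: gradient of the loss w.r.t. the input.
    # For logistic regression this gradient is (p - y) * w.
    g = (p - y) * w
    x_adv = x + alpha * g / (np.linalg.norm(g) + eps)  # large step, unlike
                                                       # standard adv. training

    # RCAD term: reduce confidence at x_adv by maximizing predictive entropy.
    p_adv = sigmoid(np.dot(w, x_adv))
    return ce - lam * binary_entropy(p_adv)
```

Setting `lam=0` recovers the plain maximum-likelihood objective; the second term only pushes the model toward low confidence on the self-generated off-distribution point.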
Variance reduction for Random Coordinate Descent-Langevin Monte Carlo
Sampling from a log-concave distribution is a core problem with wide applications in Bayesian statistics and machine learning. While most gradient-free methods have slow convergence rates, Langevin Monte Carlo (LMC) provides fast convergence but requires the computation of gradients. In practice one uses finite-difference approximations as surrogates, and the method is expensive in high dimensions. A natural strategy to reduce the computational cost of each iteration is to use random gradient approximations, such as random coordinate descent (RCD) or simultaneous perturbation stochastic approximation (SPSA). We show by a counterexample that blindly applying RCD does not achieve this goal in the most general setting: the high variance induced by the randomness means a larger number of iterations is needed, which balances out the savings in each iteration.
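A single LMC iteration with an RCD-style gradient surrogate, as described above, might look like the following sketch. The function name `rcd_lmc_step`, the central-difference scheme, and the factor `d` (which makes the one-coordinate estimate an unbiased estimate of the full gradient in expectation) are illustrative assumptions:

```python
import numpy as np

def rcd_lmc_step(x, log_p, h, delta, rng):
    # One Langevin Monte Carlo step where the full gradient of log p is
    # replaced by a random-coordinate finite-difference estimate (RCD):
    # pick one coordinate, finite-difference along it, rescale by d.
    d = x.size
    i = rng.integers(d)                 # random coordinate
    e = np.zeros(d)
    e[i] = 1.0
    gi = (log_p(x + delta * e) - log_p(x - delta * e)) / (2 * delta)
    grad_est = d * gi * e               # unbiased for grad log p in expectation
    noise = rng.standard_normal(d)
    return x + h * grad_est + np.sqrt(2 * h) * noise
```

Each step costs two function evaluations instead of the 2d a full finite-difference gradient would need; the abstract's point is that the extra variance of `grad_est` cancels exactly this per-iteration saving.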
Robotising Psychometrics: Validating Wellbeing Assessment Tools in Child-Robot Interactions
Abbasi, Nida Itrat, Laban, Guy, Ford, Tamsin, Jones, Peter B, Gunes, Hatice
The interdisciplinary nature of Child-Robot Interaction (CRI) fosters incorporating measures and methodologies from many established domains. However, when employing CRI approaches in sensitive areas of health and wellbeing, caution is critical in adapting metrics so that they retain their safety standards and are used accurately. In this work, we conducted a secondary analysis of previous empirical work, investigating the reliability and construct validity of established psychological questionnaires, namely the Short Moods and Feelings Questionnaire (SMFQ) and three subscales (generalised anxiety, panic and low mood) of the Revised Child Anxiety and Depression Scale (RCADS), within a CRI setting for the assessment of mental wellbeing. Through confirmatory principal component analysis, we observed that these measures are reliable and valid in the context of CRI. Furthermore, our analysis revealed that scales communicated by a robot demonstrated a better fit than when self-reported, underscoring the efficiency and effectiveness of robot-mediated psychological assessments in these settings. Nevertheless, we also observed variations in item contributions to the main factor, suggesting potential areas for examination and revision (e.g., relating to physiological changes, inactivity and cognitive demands) when these scales are used in CRI. Findings from this work highlight the importance of verifying the reliability and validity of standardised metrics and assessment tools when they are employed in CRI settings, so as to avoid misinterpretation and misrepresentation.
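As a rough illustration of the factor-analytic step (an exploratory single-component sketch, not the confirmatory procedure the authors used), item loadings on one principal component and the share of variance that component explains can be computed as follows; `first_component_loadings` is a hypothetical helper:

```python
import numpy as np

def first_component_loadings(responses):
    # responses: (n_respondents, n_items) matrix of questionnaire scores.
    # Centre the items, take the leading principal component of their
    # covariance, and report each item's loading plus the share of
    # variance that single component explains.
    X = responses - responses.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    loadings = vecs[:, -1]             # leading (unit-norm) eigenvector
    explained = vals[-1] / vals.sum()
    return loadings, explained
```

Items with unusually small loadings on the main factor would be the candidates for "examination and revision" mentioned above.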
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.29)
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.93)