'It's Undignified': Hundreds of Workers Training Meta's AI Could Be Laid Off

WIRED

More than 700 people working for a Meta contractor in Ireland are at risk of losing their jobs, documents show. Hundreds of workers in Ireland tasked with refining Meta's AI models have been told that their jobs are at risk as the company embarks on a sweeping new round of layoffs, according to documents obtained by WIRED. The affected workers are employed by the Dublin-based firm Covalen, which handles various content moderation and labeling services for Meta. The workers were informed of the layoffs over a brief video meeting on Monday afternoon and were not allowed to ask questions, according to Nick Bennett, one of the employees on the call. "We had a pretty bad feeling [before the meeting]," he says.


Federated fairness-aware classification under differential privacy

Xue, Gengyu, Yu, Yi

arXiv.org Machine Learning

Privacy and algorithmic fairness have become two central issues in modern machine learning. Although each has separately emerged as a rapidly growing research area, their joint effect remains comparatively under-explored. In this paper, we systematically study the joint impact of differential privacy and fairness on classification in a federated setting, where data are distributed across multiple servers. Targeting demographic-disparity-constrained classification under federated differential privacy, we propose a two-step algorithm, namely FDP-Fair. In the special case where there is only one server, we further propose a simple yet powerful algorithm, namely CDP-Fair, serving as a computationally lightweight alternative. Under mild structural assumptions, theoretical guarantees on privacy, fairness and excess risk control are established. In particular, we disentangle the private fairness-aware excess risk into a) the intrinsic cost of classification, b) the cost of private classification, c) the non-private cost of fairness and d) the private cost of fairness. Our theoretical findings are complemented by extensive numerical experiments on both synthetic and real datasets, highlighting the practicality of the proposed algorithms.
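To make the two ingredients concrete, the sketch below estimates the demographic disparity that the paper constrains, privatized with the standard Laplace mechanism. This is an illustrative toy, not the paper's FDP-Fair algorithm; the function names and the per-group sensitivity accounting (each group rate is a mean of binary values, so sensitivity 1/n_g) are assumptions of this sketch.

```python
import random

def laplace_noise(scale, rng=random):
    # Laplace(0, scale) sampled as the difference of two i.i.d. exponentials
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def private_disparity(preds, groups, epsilon):
    """Noisy demographic disparity |P(h=1 | A=0) - P(h=1 | A=1)|.

    Each group's positive-prediction rate is released under epsilon-DP via
    the Laplace mechanism; the rate has sensitivity 1/n_g for group size n_g.
    """
    noisy_rates = []
    for g in (0, 1):
        group_preds = [p for p, a in zip(preds, groups) if a == g]
        rate = sum(group_preds) / len(group_preds)
        noisy_rates.append(rate + laplace_noise(1.0 / (len(group_preds) * epsilon)))
    return abs(noisy_rates[0] - noisy_rates[1])
```

With a very large epsilon the noise vanishes and the estimate approaches the true disparity; smaller epsilon trades accuracy for privacy, which is exactly the tension the paper's excess-risk decomposition quantifies.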


L.A. teachers union widely expected to announce strike date at massive Wednesday rally

Los Angeles Times

Members of the largest unions representing teachers and nonteachers participate in a joint rally at Grand Park in March 2023. The scene will be repeated on Wednesday, with union members once again on the verge of a strike.


Union of Intersections (UoI) for Interpretable Data Driven Discovery and Prediction

Neural Information Processing Systems

The increasing size and complexity of scientific data could dramatically enhance discovery and prediction for basic scientific applications, e.g., neuroscience, genetics, systems biology, etc. Realizing this potential, however, requires novel statistical analysis methods that are both interpretable and predictive. We introduce the Union of Intersections (UoI) method, a flexible, modular, and scalable framework for enhanced model selection and estimation. The method performs model selection and model estimation through intersection and union operations, respectively. We show that UoI can satisfy the bi-criteria of low-variance and nearly unbiased estimation of a small number of interpretable features, while maintaining high-quality prediction accuracy. We perform extensive numerical investigation to evaluate a UoI algorithm ($UoI_{Lasso}$) on synthetic and real data. In doing so, we demonstrate the extraction of interpretable functional networks from human electrophysiology recordings as well as the accurate prediction of phenotypes from genotype-phenotype data with reduced features. We also show (with the $UoI_{L1Logistic}$ and $UoI_{CUR}$ variants of the basic framework) improved prediction parsimony for classification and matrix factorization on several benchmark biomedical data sets. These results suggest that methods based on the UoI framework could improve interpretation and prediction in data-driven discovery across scientific fields.
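The selection step described above (intersection within each regularization level, union across levels) can be sketched in a few lines. This is a hypothetical simplification that takes pre-computed bootstrap support sets as input; the paper's $UoI_{Lasso}$ additionally runs bootstrapped Lasso fits to produce those supports and a bagged estimation step afterwards.

```python
def uoi_select(supports_per_lambda):
    """Union-of-Intersections selection step (simplified sketch).

    supports_per_lambda: for each regularization strength, a list of
    feature-support sets, one per bootstrap fit.
    """
    stable_sets = []
    for supports in supports_per_lambda:
        # Intersection: keep only features selected in every bootstrap,
        # which suppresses false positives at this regularization level.
        stable_sets.append(set.intersection(*supports))
    # Union: combine the stable features across regularization levels,
    # which recovers true features that need different amounts of shrinkage.
    return set.union(*stable_sets)

# Toy bootstrap supports for two regularization strengths:
lam_strong = [{0, 2}, {0, 2, 5}, {0, 2}]        # feature 5 is unstable
lam_weak = [{0, 1, 2}, {0, 1, 2}, {0, 2, 1}]    # feature 1 appears reliably
print(sorted(uoi_select([lam_strong, lam_weak])))  # -> [0, 1, 2]
```

The intersection controls variance and false discoveries; the union restores the features a single overly aggressive penalty would miss, which is how the framework balances the bi-criteria described in the abstract.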


Posterior Sampling with Delayed Feedback for Reinforcement Learning with Linear Function Approximation

Neural Information Processing Systems

Recent studies in reinforcement learning (RL) have made significant progress by leveraging function approximation to alleviate the sample-complexity hurdle and improve performance. Despite this success, existing provably efficient algorithms typically rely on immediate feedback being available upon taking an action. Failing to account for delayed observations can significantly degrade the performance of real-world systems due to regret blow-up. In this work, we tackle the challenge of delayed feedback in RL with linear function approximation by employing posterior sampling, which has been shown to empirically outperform the popular UCB algorithms in a wide range of regimes. We first introduce Delayed-PSVI, an optimistic value-based algorithm that effectively explores the value function space via noise perturbation with posterior sampling.
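The posterior-sampling principle underlying the approach, together with delayed feedback, can be illustrated with a toy one-armed Gaussian model: sample a value estimate from the posterior, act greedily on the sample, and apply conjugate updates only when delayed rewards finally arrive. This sketch is an illustrative assumption of this write-up, not the paper's Delayed-PSVI algorithm (which works with linear value functions over an MDP).

```python
import random

class DelayedThompsonArm:
    """Gaussian Thompson sampling with explicitly delayed feedback (toy)."""

    def __init__(self, prior_var=1.0, noise_var=1.0):
        self.mean, self.var = 0.0, prior_var   # Gaussian posterior over value
        self.noise_var = noise_var             # known observation noise
        self.pending = []                      # rewards not yet revealed

    def sample(self, rng=random):
        # Posterior sampling: draw a plausible value, then act greedily on it.
        return rng.gauss(self.mean, self.var ** 0.5)

    def defer(self, reward):
        # Delay: the reward exists but the learner cannot see it yet.
        self.pending.append(reward)

    def flush(self):
        # Feedback arrives: apply standard conjugate Gaussian updates.
        for r in self.pending:
            precision = 1.0 / self.var + 1.0 / self.noise_var
            self.mean = (self.mean / self.var + r / self.noise_var) / precision
            self.var = 1.0 / precision
        self.pending.clear()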


Core-sets for Fair and Diverse Data Summarization

Neural Information Processing Systems

Second, we show the first core-set w.r.t. the sum-of-nearest-neighbor distances. Finally, we run several experiments showing the effectiveness of our core-set approach. In particular, we apply constrained diversity maximization to summarize a set of timed messages, taking the messages' recency into account.
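As a point of reference for the diversity-maximization objective mentioned above, the standard Gonzalez farthest-point heuristic greedily picks the point farthest from everything chosen so far. This sketch only illustrates the unconstrained diversity objective; it is not the paper's core-set construction, and starting from the first point rather than a random one is an assumption for determinism.

```python
def greedy_diverse_subset(points, k):
    """Gonzalez-style farthest-point selection for max-min diversity.

    Repeatedly adds the point whose squared distance to the nearest
    already-chosen point is largest, a classic 2-approximation heuristic.
    """
    chosen = [points[0]]  # deterministic seed for this sketch
    while len(chosen) < k:
        def dist_to_chosen(p):
            return min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in chosen)
        chosen.append(max(points, key=dist_to_chosen))
    return chosen

# Toy 2-D example: near-duplicate (0.1, 0.0) is skipped in favor of spread.
print(greedy_diverse_subset([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (0.0, 5.0)], 3))
```

A core-set, by contrast, is a small weighted subset on which running such a maximization approximates the answer on the full data, which is what makes the summarization scalable.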