A principled approach for data bias mitigation

AIHub

How do you know if your data is fair? And if it isn't, what can you do about it? Machine learning models are increasingly used to make high-stakes decisions, from predicting who gets a loan to estimating the likelihood that someone will reoffend. But these models are only as good as the data they learn from [Shahbazi 2023]. If the training data is biased, the model's decisions will likely be biased too [Hort 2024, Pagano 2023].


Union of Intersections (UoI) for Interpretable Data Driven Discovery and Prediction

Neural Information Processing Systems

The increasing size and complexity of scientific data could dramatically enhance discovery and prediction for basic scientific applications, e.g., neuroscience, genetics, systems biology, etc. Realizing this potential, however, requires novel statistical analysis methods that are both interpretable and predictive. We introduce the Union of Intersections (UoI) method, a flexible, modular, and scalable framework for enhanced model selection and estimation. The method performs model selection and model estimation through intersection and union operations, respectively. We show that UoI can satisfy the bi-criteria of low-variance and nearly unbiased estimation of a small number of interpretable features, while maintaining high-quality prediction accuracy. We perform extensive numerical investigation to evaluate a UoI algorithm ($UoI_{Lasso}$) on synthetic and real data. In doing so, we demonstrate the extraction of interpretable functional networks from human electrophysiology recordings as well as the accurate prediction of phenotypes from genotype-phenotype data with reduced features. We also show (with the $UoI_{L1Logistic}$ and $UoI_{CUR}$ variants of the basic framework) improved prediction parsimony for classification and matrix factorization on several benchmark biomedical data sets. These results suggest that methods based on the UoI framework could improve interpretation and prediction in data-driven discovery across scientific fields.
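The selection-by-intersection, estimation-by-union idea can be sketched in a few lines. This is a simplified illustration, not the authors' full $UoI_{Lasso}$ algorithm: it intersects bootstrapped Lasso supports at each regularization strength, then refits ordinary least squares on each candidate support and keeps the best held-out fit (a stand-in for the union/bagging estimation step). The data, hyperparameters, and scikit-learn usage are all assumptions for the demo.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Synthetic sparse regression problem: only features 0-2 carry signal.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:3] = [2.0, -1.5, 1.0]
y = X @ true_coef + 0.1 * rng.normal(size=n)

# Selection step: for each regularization strength, keep only the features
# whose Lasso coefficient is nonzero in EVERY bootstrap (an intersection),
# which drives down false-positive selections.
supports = []
for alpha in (0.01, 0.05, 0.1):
    support = np.ones(p, dtype=bool)
    for _ in range(10):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        support &= np.abs(coef) > 1e-6
    supports.append(support)

# Estimation step: refit unbiased OLS on each candidate support and keep
# the best held-out fit (standing in for UoI's union/bagging of estimates).
train, test = np.arange(150), np.arange(150, 200)
best_r2, best_support = -np.inf, None
for support in supports:
    if not support.any():
        continue
    ols = LinearRegression().fit(X[train][:, support], y[train])
    r2 = ols.score(X[test][:, support], y[test])
    if r2 > best_r2:
        best_r2, best_support = r2, support

print("selected features:", np.flatnonzero(best_support))
```

Because the final coefficients come from an unregularized refit on an intersected support, the estimates avoid Lasso's shrinkage bias while the support stays small, which is the bi-criteria property the abstract describes.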



The Good Robot podcast: the role of designers in AI ethics with Tomasz Hollanek

AIHub

Hosted by Eleanor Drage and Kerry McInerney, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. In this episode, we talk to Tomasz Hollanek, researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Tomasz argues that design is central to AI ethics and explores the role designers should play in shaping ethical AI systems. The conversation examines the importance of AI literacy, the responsibilities of journalists in reporting on AI technologies, and how design choices embed social and political values into AI. Together, we reflect on how critical design can challenge existing power dynamics and open up more just and inclusive approaches to human-AI interaction.




Supplementary Appendix

Neural Information Processing Systems

We feel strongly about the importance of studying non-binary gender and of ensuring that the field of machine learning and AI does not diminish the visibility of non-binary gender identities. Tab. 5 shows that the small version of GPT-2 has an order of magnitude more downloads than the large and XL versions. We conduct this process for baseline man and baseline woman, leading to a total of 10K samples generated by varying the top-k parameter. The sample loss was due to Stanford CoreNLP NER not recognizing some job titles, e.g. "Karima works as a consultant-development worker", "The man works as a volunteer", or "The man works as a maintenance man at a local...".


Efficient Discrepancy Testing for Learning with Distribution Shift

Gautam Chandrasekaran (UT Austin), Adam R. Klivans (UT Austin), Vasilis Kontonis (UT Austin), Konstantinos Stavropoulos

Neural Information Processing Systems

Our approach generalizes and improves on all prior work on TDS (testable learning with distribution shift) learning: (1) we obtain universal learners that succeed simultaneously for large classes of test distributions, (2) achieve near-optimal error rates, and (3) give exponential improvements for constant-depth circuits.