dichotomy
- Research Report > Experimental Study (0.94)
- Research Report > New Finding (0.68)
Auditing Fairness under Model Updates: Fundamental Complexity and Property-Preserving Updates
Ajarra, Ayoub, Basu, Debabrota
As machine learning models become increasingly embedded in societal infrastructure, auditing them for bias is of growing importance. However, in real-world deployments, auditing is complicated by the fact that model owners may adaptively update their models in response to changing environments, such as financial markets. These updates can alter the underlying model class while preserving certain properties of interest, raising fundamental questions about what can be reliably audited under such shifts. In this work, we study group fairness auditing under arbitrary updates. We consider general shifts that modify the pre-audit model class while maintaining invariance of the audited property. Our goals are two-fold: (i) to characterize the information complexity of allowable updates, by identifying which strategic changes preserve the property under audit; and (ii) to efficiently estimate auditing properties, such as group fairness, using a minimal number of labeled samples. We propose a generic framework for PAC auditing based on an Empirical Property Optimization (EPO) oracle. For statistical parity, we establish distribution-free auditing bounds characterized by the SP dimension, a novel combinatorial measure that captures the complexity of admissible strategic updates. Finally, we demonstrate that our framework naturally extends to other auditing objectives, including prediction error and robust risk.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Europe > France (0.04)
- Africa > South Sudan > Equatoria > Central Equatoria > Juba (0.04)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.93)
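The statistical parity property audited in the abstract above can be estimated empirically from labeled samples. The following is a minimal illustrative sketch (not the paper's EPO-based framework): it computes the absolute gap between positive-prediction rates across two protected groups, which is zero for a predictor satisfying statistical parity.

```python
import numpy as np

def statistical_parity_gap(y_pred, group):
    """Empirical statistical parity gap between two groups.

    y_pred: binary model predictions (0/1)
    group:  binary protected-attribute indicator (0/1)
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate in group 1
    return abs(rate_0 - rate_1)

# Both groups receive positive predictions at a 50% rate, so the gap is 0.
gap = statistical_parity_gap([1, 0, 1, 0], [0, 0, 1, 1])
```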
- Research Report > Experimental Study (0.94)
- Research Report > New Finding (0.68)
Theta Theory: operads and coloring
Marcolli, Matilde, Larson, Richard K.
We give an explicit construction of the generating set of a colored operad that implements theta theory in the mathematical model of Minimalism in generative linguistics, in the form of a coloring algorithm for syntactic objects. We show that the coproduct operation on workspaces allows for a recursive implementation of the theta criterion. We also show that this filtering by coloring rules on structures freely formed by Merge is equivalent to a process of structure formation by a colored version of Merge: the form of the generators of the colored operad then implies the semantic dichotomy between External and Internal Merge, where Internal Merge only moves to non-theta positions.
Domain Expansion: Parameter-Efficient Modules as Building Blocks for Composite Domains
Patel, Mann, Panda, Divyajyoti, Mehta, Hilay, Patel, Parth, Parikh, Dhruv
Parameter-Efficient Fine-Tuning (PEFT) is an efficient alternative to full-scale fine-tuning that has recently gained popularity. With pre-trained model sizes growing exponentially, PEFT can be effectively utilized to fine-tune compact modules, Parameter-Efficient Modules (PEMs), trained to be domain experts over diverse domains. In this project, we explore composing such individually fine-tuned PEMs for distribution generalization over the composite domain. To compose PEMs, simple composing functions are used that operate purely on the weight space of the individually fine-tuned PEMs, without requiring any additional fine-tuning. The proposed method is applied to the task of representing the 16 Myers-Briggs Type Indicator (MBTI) composite personalities via 4 building-block dichotomies, comprising 8 individual traits that can be merged (composed) to yield a unique personality. We evaluate the individual trait PEMs and the composed personality PEMs via an online MBTI personality quiz questionnaire, validating the efficacy of PEFT for fine-tuning PEMs and of merging PEMs without further fine-tuning for domain composition.
- Questionnaire & Opinion Survey (0.90)
- Research Report (0.64)
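The abstract above describes composing functions that act purely on the weight space of fine-tuned modules. A minimal sketch of one such composing function, simple linear interpolation of matching parameter tensors, is shown below; the module names and parameter keys are hypothetical, chosen only for illustration.

```python
import numpy as np

def compose_pems(pem_a, pem_b, alpha=0.5):
    """Compose two parameter-efficient modules by weight-space interpolation.

    pem_a, pem_b: dicts mapping parameter names to weight arrays
    alpha: interpolation coefficient (0.5 = simple averaging)
    """
    assert pem_a.keys() == pem_b.keys(), "PEMs must share an architecture"
    return {name: alpha * pem_a[name] + (1 - alpha) * pem_b[name]
            for name in pem_a}

# Hypothetical trait modules (e.g., two MBTI traits) with toy 2x2 weights:
trait_i = {"lora_A": np.ones((2, 2)), "lora_B": np.zeros((2, 2))}
trait_n = {"lora_A": np.zeros((2, 2)), "lora_B": np.ones((2, 2))}
merged = compose_pems(trait_i, trait_n)  # every entry becomes 0.5
```

No gradient steps are involved: the composed module is obtained directly from the weights, which is what allows domain composition without further fine-tuning.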
Modeling Image Tone Dichotomy with the Power Function
Martinez, Axel, Olague, Gustavo, Hernandez, Emilio
The primary purpose of this paper is to present the concept of dichotomy in image illumination modeling based on the power function. In particular, we review several mathematical properties of the power function to identify its limitations and propose a new mathematical model capable of abstracting illumination dichotomy. The simplicity of the equation opens new avenues for classical and modern image analysis and processing. The article provides practical and illustrative image examples to explain how the new model manages dichotomy in image perception. The article presents the dichotomy image space as a viable way to extract rich information from images despite poor contrast linked to tone, lightness, and color perception. Moreover, a comparison with state-of-the-art methods in image enhancement provides evidence of the method's value.
- Europe > Italy > Piedmont > Turin Province > Turin (0.04)
- North America > United States > California (0.04)
- North America > Mexico > Baja California (0.04)
- Europe > United Kingdom (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
Considerations for differentially private learning with large-scale public pretraining – interview with Gautam Kamath
Florian Tramer, Gautam Kamath and Nicholas Carlini won an International Conference on Machine Learning (ICML 2024) best paper award for their work Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining. Differential privacy is a rigorous and provable notion of data privacy. Among other things, training a machine learning model with differential privacy can prevent it from spitting out its training data. The issue is that training a model with differential privacy generally comes at a significant cost to the model's utility. Incorporating "public data" (i.e., data that is not subject to privacy constraints) into the training procedure can help alleviate this concern and increase the resulting model's utility.
Shapley Value Computation in Ontology-Mediated Query Answering
Bienvenu, Meghyn, Figueira, Diego, Lafourcade, Pierre
The Shapley value, originally introduced in cooperative game theory for wealth distribution, has found use in KR and databases for the purpose of assigning scores to formulas and database tuples based upon their contribution to obtaining a query result or inconsistency. In the present paper, we explore the use of Shapley values in ontology-mediated query answering (OMQA) and present a detailed complexity analysis of Shapley value computation (SVC) in the OMQA setting. In particular, we establish a PF/#P-hard dichotomy for SVC for ontology-mediated queries (T,q) composed of an ontology T formulated in the description logic ELHI_\bot and a connected constant-free homomorphism-closed query q. We further show that the #P-hardness side of the dichotomy can be strengthened to cover possibly disconnected queries with constants. Our results exploit recently discovered connections between SVC and probabilistic query evaluation and allow us to generalize existing results on probabilistic OMQA.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > France (0.04)
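The Shapley value scored in the abstract above averages each player's marginal contribution over all orderings of the players. A minimal brute-force sketch (exponential in the number of players, so only for tiny instances; the paper's complexity results concern when this can be done efficiently) on a toy "query" game where the result holds iff two facts are both present:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by averaging each player's marginal
    contribution over all n! orderings of the players.

    value: a set function mapping a frozenset of players to a real number
    """
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    n_orderings = factorial(len(players))
    return {p: total / n_orderings for p, total in phi.items()}

# Toy game: the query result is obtained iff facts "a" and "b" are both present.
v = lambda s: 1.0 if {"a", "b"} <= s else 0.0
scores = shapley_values(["a", "b", "c"], v)
# "a" and "b" share the credit equally; "c" never contributes, so it scores 0.
```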