dsld: A Socially Relevant Tool for Teaching Statistics
Abdullah, Taha, Ashok, Arjun, Estrada, Brandon, Matloff, Norman, Mittal, Aditya
The growing power of data science can play a crucial role in addressing social discrimination, which requires a nuanced understanding of potential biases and effective strategies for mitigating them. Data Science Looks At Discrimination (dsld) is an R and Python package designed to provide users with a comprehensive toolkit of statistical and graphical methods for assessing possible discrimination related to protected groups, such as race, gender, and age. Our software offers techniques for discrimination analysis, including the identification and adjustment of confounding variables, along with methods for reducing bias in predictive models. In educational settings, dsld offers instructors powerful tools for teaching important statistical principles through motivating real-world examples of discrimination analysis. The inclusion of an 80-page Quarto book further supports users, from statistics educators to legal professionals, in effectively applying these analytical tools to real-world scenarios.
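The confounder-adjustment idea the dsld abstract describes can be illustrated with a minimal sketch. This is not the dsld API; it uses synthetic data and plain least squares to show how an apparent group wage gap can shrink once a confounding variable (here, a hypothetical occupation-level variable) is included in the model.

```python
# Minimal sketch (not the dsld API) of confounder adjustment:
# an apparent group gap can vanish once a confounder is modeled.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # protected attribute (0/1)
# hypothetical confounder correlated with group (e.g., occupation level)
occ = rng.normal(loc=group * 1.0, scale=1.0, size=n)
wage = 30 + 5 * occ + rng.normal(0, 2, n)     # wage depends on occ, not group

# naive model: wage ~ group  -> picks up the confounded gap
X1 = np.column_stack([np.ones(n), group])
b1 = np.linalg.lstsq(X1, wage, rcond=None)[0]

# adjusted model: wage ~ group + occ  -> group coefficient shrinks toward 0
X2 = np.column_stack([np.ones(n), group, occ])
b2 = np.linalg.lstsq(X2, wage, rcond=None)[0]

print(f"naive group gap:    {b1[1]:.2f}")
print(f"adjusted group gap: {b2[1]:.2f}")
```

Here the naive gap is close to 5 purely because the confounder differs across groups; the adjusted coefficient is near zero, which is the kind of comparison dsld's confounder tools are meant to support.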
Deontological Ethics By Monotonicity Shape Constraints
We demonstrate how easy it is for modern machine-learned systems to violate common deontological ethical principles and social norms such as "favor the less fortunate" and "do not penalize good attributes." We propose that in some cases such ethical principles can be incorporated into a machine-learned model by adding shape constraints that constrain the model to respond only positively to relevant inputs. We analyze the relationship between these deontological constraints, which act on individuals, and the consequentialist group-based fairness goals of one-sided statistical parity and equal opportunity. This strategy works with sensitive attributes that are Boolean or real-valued, such as income and age, and can help produce more responsible and trustworthy AI.
Avoiding Resentment Via Monotonic Fairness
Cole, Guy W., Williamson, Sinead A.
Classifiers that achieve demographic balance by explicitly using protected attributes such as race or gender are often politically or culturally controversial due to their lack of individual fairness, i.e., individuals with similar qualifications may receive different outcomes. Both individually fair and group-fair decision criteria can produce counter-intuitive results, e.g., the optimal constrained boundary may reject intuitively better candidates due to demographic imbalance among similar candidates. Both approaches can be seen as introducing individual resentment, where some individuals would have received a better outcome if they either belonged to a different demographic class with the same qualifications, or remained in the same class but had objectively worse qualifications (e.g., lower test scores). We show that both forms of resentment can be avoided by using monotonically constrained machine learning models to create individually fair, demographically balanced classifiers.