Audits as Evidence: Experiments, Ensembles, and Enforcement Machine Learning

We develop tools for utilizing correspondence experiments to detect illegal discrimination by individual employers. Employers violate US employment law if their propensity to contact applicants depends on protected characteristics such as race or sex. We establish identification of higher moments of the causal effects of protected characteristics on callback rates as a function of the number of fictitious applications sent to each job ad. These moments are used to bound the fraction of jobs that illegally discriminate. Applying our results to three experimental datasets, we find evidence of significant employer heterogeneity in discriminatory behavior, with the standard deviation of gaps in job-specific callback probabilities across protected groups averaging roughly twice the mean gap. In a recent experiment manipulating racially distinctive names, we estimate that at least 85% of jobs that contact both of two white applications and neither of two black applications are engaged in illegal discrimination. To assess the tradeoff between type I and II errors presented by these patterns, we consider the performance of a series of decision rules for investigating suspicious callback behavior under a simple two-type model that rationalizes the experimental data. Though, in our preferred specification, only 17% of employers are estimated to discriminate on the basis of race, we find that an experiment sending 10 applications to each job would enable accurate detection of 7-10% of discriminators while falsely accusing fewer than 0.2% of non-discriminators. A minimax decision rule acknowledging partial identification of the joint distribution of callback rates yields higher error rates but more investigations than our baseline two-type model. Our results suggest illegal labor market discrimination can be reliably monitored with relatively small modifications to existing audit designs.
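The detection exercise described above can be mimicked with a short Monte Carlo simulation. This is a minimal sketch, not the paper's method: the two-type model, the callback rates (everything except the 17% discriminator share quoted in the abstract), and the threshold rule are all illustrative assumptions.

```python
import random

random.seed(0)

# Stylized two-type model in the spirit of the paper's exercise; all
# parameters except the 17% discriminator share are illustrative guesses.
P_DISC = 0.17                    # share of discriminating employers (abstract)
CB_FAIR = 0.10                   # non-discriminators' callback rate (assumed)
CB_WHITE, CB_BLACK = 0.35, 0.03  # discriminators' group-specific rates (assumed)

def audit_one_job():
    """Send 5 white and 5 black fictitious applications to one job ad."""
    disc = random.random() < P_DISC
    pw, pb = (CB_WHITE, CB_BLACK) if disc else (CB_FAIR, CB_FAIR)
    white = sum(random.random() < pw for _ in range(5))
    black = sum(random.random() < pb for _ in range(5))
    return disc, white, black

def suspicious(white, black):
    """Toy decision rule: investigate jobs that contact at least three
    white applications and no black applications."""
    return white >= 3 and black == 0

n_jobs = 100_000
hits = false_accusations = n_disc = 0
for _ in range(n_jobs):
    disc, w, b = audit_one_job()
    n_disc += disc
    if suspicious(w, b):
        hits += disc
        false_accusations += not disc

print(f"detected {hits / n_disc:.1%} of discriminators")
print(f"falsely flagged {false_accusations / (n_jobs - n_disc):.2%} of others")
```

The exact error rates depend heavily on the assumed callback probabilities; the point of the sketch is only that a stricter threshold trades detection power for fewer false accusations, which is the type I/type II tradeoff the abstract analyzes.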

US appeals court says Tinder Plus pricing is discriminatory


They say all's fair in love and war, but those who have used Tinder will probably disagree. That includes Allan Candelore, a man suing the dating app over the pricing of its premium service, Tinder Plus. Candelore and his lawyers argue that charging $9.99 a month to users under 30, and $19.99 a month to those over 30, constitutes age discrimination and violates two California laws: the Unruh Civil Rights Act and the Unfair Competition Law.

Machine Learning, OSS & Ethical Conduct


Machine learning is the subfield of computer science that "gives computers the ability to learn without being explicitly programmed" (Arthur Samuel, 1959).[1] Evolved from the study of pattern recognition and computational learning theory in artificial intelligence,[2] machine learning explores the study and construction of algorithms that can learn from, and make predictions on, input data.[3] Such algorithms are not bound by static program instructions, instead making their predictions or decisions according to models they themselves build from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is unfeasible; example applications include spam filtering, optical character recognition (OCR),[5] search engines, and computer vision. It is also employed experimentally to detect patterns and linkages in seemingly random or unrelated data. And, as Asimov's Second Law of Robotics has it: a robot must obey orders given it by human beings except where such orders would conflict with the First Law.
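The "models built from sample inputs" idea can be made concrete in a few lines. A minimal illustration (one-variable least squares on made-up data): the prediction rule is not written by hand but derived from the samples.

```python
# Minimal illustration of "learning without being explicitly programmed":
# instead of hard-coding a rule for y, fit a model's parameters from data.
def fit_line(xs, ys):
    """Learn the slope and intercept minimising squared error on the samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]        # samples drawn from y = 2x + 1
slope, intercept = fit_line(xs, ys)
predict = lambda x: slope * x + intercept  # the learned "program"
print(predict(10))                         # → 21.0
```

Spam filters, OCR, and the other applications mentioned above follow the same pattern at far larger scale: parameters estimated from examples stand in for explicit instructions.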

Does mitigating ML's impact disparity require treatment disparity? Machine Learning

Following related work in law and policy, two notions of disparity have come to shape the study of fairness in algorithmic decision-making. Algorithms exhibit treatment disparity if they formally treat members of protected subgroups differently; algorithms exhibit impact disparity when outcomes differ across subgroups, even if the disparity arises unintentionally. Naturally, we can achieve impact parity through purposeful treatment disparity. In one thread of technical work, papers aim to reconcile the two forms of parity by proposing disparate learning processes (DLPs). Here, the learning algorithm may observe group membership during training but must produce a classifier that is group-blind at test time. In this paper, we show theoretically that: (i) when other features correlate with group membership, DLPs will (indirectly) implement treatment disparity, undermining the policy desiderata they are designed to address; (ii) when group membership is partly revealed by other features, DLPs induce within-class discrimination; and (iii) in general, DLPs provide a suboptimal trade-off between accuracy and impact parity. Based on our technical analysis, we argue that transparent treatment disparity is preferable to occluded methods for achieving impact parity. Experimental results on several real-world datasets highlight the practical consequences of applying DLPs vs. per-group thresholds.
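The per-group-threshold alternative that the abstract favours can be sketched in a few lines. This is a toy illustration on made-up scores, not the paper's procedure: each group gets its own cutoff on a risk score, chosen openly so that positive rates match a target (impact parity via transparent treatment disparity).

```python
# Toy sketch of transparent per-group thresholds: pick, for each group,
# the score cutoff whose positive rate is closest to a common target.
def per_group_thresholds(scores, groups, target_rate):
    """Return {group: cutoff} so each group's positive rate ~= target_rate."""
    thresholds = {}
    for g in set(groups):
        g_scores = sorted(s for s, grp in zip(scores, groups) if grp == g)
        n = len(g_scores)
        k = round(target_rate * n)      # number of positives we want in group g
        # cutoff at the k-th highest score in this group
        thresholds[g] = g_scores[n - k] if k > 0 else float("inf")
    return thresholds

scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]   # made-up risk scores
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
t = per_group_thresholds(scores, groups, target_rate=0.5)
decide = lambda s, g: s >= t[g]        # group-aware decision rule
rate = lambda g: sum(decide(s, grp) for s, grp in zip(scores, groups)
                     if grp == g) / 4
print(t, rate("a"), rate("b"))         # equal positive rates across groups
```

Both groups end up with the same selection rate even though their score distributions differ; a DLP would aim at the same outcome while hiding the group-dependence inside a formally group-blind classifier, which is exactly the occlusion the paper criticizes.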

Virtual reality goes behind bars to rehabilitate inmates


Virtual reality (VR), which may include the use of headsets, PC software, or mobile applications, is slowly being investigated in education and training. Google Glass did not take off within the consumer space, but its augmented reality (AR) technology has found merit in industrial work and employee training, and recent research conducted by the University of Maryland suggests that VR environments may be more effective for revision and memory retention than computer screens. When it comes to health and wellbeing, VR is also being piloted to improve end-of-life care. Learning and balanced wellbeing, together, can be important factors in rehabilitation. In 2012, the US Supreme Court ruled that mandatory life sentences without parole issued to young offenders were unconstitutional.