Audits as Evidence: Experiments, Ensembles, and Enforcement

arXiv.org Machine Learning

We develop tools for utilizing correspondence experiments to detect illegal discrimination by individual employers. Employers violate US employment law if their propensity to contact applicants depends on protected characteristics such as race or sex. We establish identification of higher moments of the causal effects of protected characteristics on callback rates as a function of the number of fictitious applications sent to each job ad. These moments are used to bound the fraction of jobs that illegally discriminate. Applying our results to three experimental datasets, we find evidence of significant employer heterogeneity in discriminatory behavior, with the standard deviation of gaps in job-specific callback probabilities across protected groups averaging roughly twice the mean gap. In a recent experiment manipulating racially distinctive names, we estimate that at least 85% of jobs that contact both of two white applications and neither of two black applications are engaged in illegal discrimination. To assess the tradeoff between type I and II errors presented by these patterns, we consider the performance of a series of decision rules for investigating suspicious callback behavior under a simple two-type model that rationalizes the experimental data. Though, in our preferred specification, only 17% of employers are estimated to discriminate on the basis of race, we find that an experiment sending 10 applications to each job would enable accurate detection of 7-10% of discriminators while falsely accusing fewer than 0.2% of non-discriminators. A minimax decision rule acknowledging partial identification of the joint distribution of callback rates yields higher error rates but more investigations than our baseline two-type model. Our results suggest illegal labor market discrimination can be reliably monitored with relatively small modifications to existing audit designs.
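As a back-of-the-envelope illustration of the type I/type II tradeoff the abstract describes, here is a minimal Python sketch of a stylized two-type model. Only the 17% discriminator share and the 10-applications-per-job design are taken from the abstract; the Beta callback-rate distribution, the 50% callback penalty at discriminating jobs, and the flagging thresholds are hypothetical choices for the example, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Only the 17% discriminator share and the 10-application design come from
# the abstract; every other parameter here is a hypothetical illustration.
n_jobs = 100_000
share_disc = 0.17            # estimated fraction of discriminating employers
n_white, n_black = 5, 5      # 10 fictitious applications per job ad

base = rng.beta(2, 8, n_jobs)                   # assumed job-specific callback rate
is_disc = rng.random(n_jobs) < share_disc
p_white = base
p_black = np.where(is_disc, 0.5 * base, base)   # assumed penalty at discriminating jobs

white_calls = rng.binomial(n_white, p_white)
black_calls = rng.binomial(n_black, p_black)

# Decision rule: investigate any job whose white-minus-black callback gap
# reaches the threshold. Larger thresholds mean fewer false accusations
# (type I errors) but also fewer detected discriminators (type II errors).
for threshold in (3, 4, 5):
    flagged = (white_calls - black_calls) >= threshold
    detected = flagged[is_disc].mean()
    falsely_accused = flagged[~is_disc].mean()
    print(f"threshold {threshold}: detects {detected:.1%} of discriminators, "
          f"falsely accuses {falsely_accused:.3%} of non-discriminators")
```

Sweeping the threshold makes the tradeoff explicit: stricter rules accuse almost no innocent employers but catch only a sliver of discriminators, which is the regime the abstract's detection and false-accusation figures describe.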


Object-oriented Bayesian networks for a decision support system for antitrust enforcement

arXiv.org Artificial Intelligence

We study an economic decision problem where the actors are two firms and the Antitrust Authority, whose main task is to monitor and prevent firms' potential anti-competitive behaviour and its effect on the market. The Antitrust Authority's decision process is modelled using a Bayesian network whose relational structure and parameters are both estimated from a data set provided by the Authority itself. A number of economic variables that influence this decision process are also included in the model. We analyse how monitoring by the Antitrust Authority affects firms' strategies regarding cooperation. Firms' strategies are modelled as a repeated prisoner's dilemma using object-oriented Bayesian networks. We show how the integration of firms' decision process and external market information can be modelled in this way. Various decision scenarios and strategies are illustrated.
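To make the interaction concrete, here is a plain-Python sketch of the repeated prisoner's dilemma the abstract refers to, with the Authority's monitoring folded in as a detection probability and a fine. The payoff numbers, fine, and discount factor are all hypothetical, and this toy grim-trigger game deliberately simplifies away the paper's actual machinery (object-oriented Bayesian networks estimated from the Authority's data).

```python
# Hypothetical one-period payoffs: "collude" means cooperating with the
# other firm; "compete" means undercutting it. Values are illustrative only.
PAYOFF = {
    ("collude", "collude"): (10, 10),
    ("collude", "compete"): (2, 14),
    ("compete", "collude"): (14, 2),
    ("compete", "compete"): (5, 5),
}
FINE = 12  # assumed penalty if the Authority detects joint collusion

def expected_payoff(a1, a2, p_detect):
    """Expected one-period payoffs when the Authority audits with prob p_detect."""
    u1, u2 = PAYOFF[(a1, a2)]
    if a1 == a2 == "collude":  # only joint collusion is punishable here
        u1 -= p_detect * FINE
        u2 -= p_detect * FINE
    return u1, u2

def collusion_sustainable(p_detect, delta=0.9):
    """Standard grim-trigger condition for a repeated prisoner's dilemma."""
    coop, _ = expected_payoff("collude", "collude", p_detect)
    cheat = PAYOFF[("compete", "collude")][0]   # one-shot gain from undercutting
    punish = PAYOFF[("compete", "compete")][0]  # competitive payoff thereafter
    return coop / (1 - delta) >= cheat + delta * punish / (1 - delta)

for p in (0.0, 0.2, 0.4, 0.6):
    print(f"detection prob {p:.1f}: collusion sustainable? {collusion_sustainable(p)}")
```

Even this crude version reproduces the qualitative point: once the detection probability is high enough, the discounted value of colluding falls below the one-shot gain from undercutting, and cooperation between the firms breaks down.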


Artificial intelligence could 'evolve faster than the human race'

#artificialintelligence

A sinister threat is brewing deep inside the technology laboratories of Silicon Valley, according to Professor Stephen Hawking. Artificial intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold, and it could one day spell the end for mankind. The world-renowned professor has warned that robots could evolve faster than humans and that their goals will be unpredictable. Hawking claimed AI would be difficult to stop if appropriate safeguards are not put in place. During a talk in Cannes, Google's chairman Eric Schmidt said AI will be developed for the benefit of humanity and that there will be systems in place in case anything goes awry.


UN opens formal discussions on AI-powered autonomous weapons, could ban 'killer robots' - TechRepublic

#artificialintelligence

Many current fears around AI and automation center on the idea that a superintelligence could somehow "take over," turning streets around the globe into scenes from The Terminator. While there is much to be gained from discussing the safe development of AI, there is another, more imminent danger: autonomous weapons. On Friday, after three years of negotiations, the UN unanimously agreed to take action. At the Fifth Review Conference of the UN Convention on Certain Conventional Weapons, countries around the world agreed to begin formal discussions, to take place over two weeks at the 2017 UN convention in Geneva, on a possible ban of lethal autonomous weapons. Talks will begin in April or August, and 88 countries have agreed to attend.