civil rights & constitutional law


AI expert warns against 'racist and misogynist algorithms'

Daily Mail - Science & tech

A leading expert in artificial intelligence has issued a stark warning against the use of race- and gender-biased algorithms for making critical decisions. Across the globe, algorithms are beginning to oversee various processes, from job applications and immigration requests to bail terms and welfare applications. Military researchers are even exploring whether facial recognition technology could enable autonomous drones to identify their own targets. However, University of Sheffield computer expert Noel Sharkey told the Guardian that such algorithms are 'infected with biases' and cannot be trusted. Calling for a halt to all AI with the potential to change people's lives, Professor Sharkey advocates vigorous testing of such systems before they are used in public.


Facial recognition at Indian cafe chain sparks calls for data protection law - Reuters

#artificialintelligence

BANGKOK (Thomson Reuters Foundation) - The use of facial recognition technology at a popular Indian cafe chain has triggered a backlash among customers and led to calls from human rights advocates on Monday for the government to speed up the introduction of laws to protect privacy. Customers at Chaayos took to social media over the past week to complain about the camera technology, which they said captured images of them without their consent, with no information on what the data would be used for and no option to opt out. While the technology is marketed as a convenience, the lack of legislative safeguards against the misuse of data can lead to "breaches of privacy, misidentification and even profiling of individuals", said Joanne D'Cunha, associate counsel at Internet Freedom Foundation, a digital rights group. "Until India introduces a comprehensive data protection law that provides such guarantees, there needs to be a moratorium on any technology that would infringe upon an individual's right to privacy and other rights that stem from it," she told the Thomson Reuters Foundation from New Delhi. A statement from Chaayos said the technology was being tested in select cafes and was aimed at reducing purchase times for customers.


A kernel log-rank test of independence for right-censored data

#artificialintelligence

With the incorporation of new data-gathering methods in clinical research, it has become essential for survival analysis techniques to handle high-dimensional and/or non-standard covariates. In this paper we introduce a general non-parametric independence test between right-censored survival times and covariates taking values in a general (not necessarily Euclidean) space X. We show that our test statistic has a dual interpretation: first, as the supremum of a potentially infinite collection of weight-indexed log-rank tests, with weight functions belonging to a reproducing kernel Hilbert space (RKHS); and second, as the norm of the difference of embeddings of certain finite measures into the RKHS, similar to the Hilbert-Schmidt Independence Criterion (HSIC) test statistic. We study the asymptotic properties of the test and find sufficient conditions to ensure that it is omnibus. The test statistic can be computed straightforwardly, and the rejection threshold is obtained via an asymptotically consistent wild bootstrap procedure.
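The censored log-rank statistic itself is not reproduced in the summary, but the RKHS-embedding idea it is compared to can be illustrated with the classical (uncensored) HSIC statistic. A minimal sketch, assuming Gaussian RBF kernels and synthetic data; the function names and bandwidth are illustrative, not from the paper:

```python
import numpy as np

def rbf_gram(x, bandwidth=1.0):
    """Gram matrix of a Gaussian RBF kernel over a sample of points."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def hsic_statistic(x, t, bandwidth=1.0):
    """Biased empirical HSIC estimate: trace(K H L H) / (n - 1)^2.

    This is the standard uncensored statistic the paper relates its test to;
    the right-censored, log-rank-weighted version is not reproduced here.
    """
    n = len(x)
    K = rbf_gram(x, bandwidth)            # kernel on covariates
    L = rbf_gram(t, bandwidth)            # kernel on (uncensored) survival times
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Illustrative use: covariates independent of exponential "survival" times.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
t = rng.exponential(scale=1.0, size=200)
print(hsic_statistic(x, t))
```

In the paper's setting, the rejection threshold for such a statistic is obtained via a wild bootstrap rather than from a closed-form asymptotic null distribution.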



Final report from the 14th Internet Governance Forum Digital Watch

#artificialintelligence

At IGF 2019, several policy initiatives, reports, and publications were launched or used as background material for discussions.


As Amazon Ring Partners With Law Enforcement on Surveillance Video, Privacy Concerns Mount

#artificialintelligence

While Amazon takes special care to position its Ring video doorbell as a friendly, high-tech version of the traditional "neighborhood watch," U.S. lawmakers and privacy advocates are growing increasingly skeptical. As they see it, Amazon Ring has put in place few, if any, safeguards to protect personal privacy and civil rights. Now that Amazon Ring is partnering with hundreds of police and other law enforcement agencies around the nation to share surveillance video, these privacy concerns are only mounting. In November, Amazon Ring released new details about its surprisingly extensive partnership agreements with law enforcement agencies, a follow-up to a Washington Post article outlining those partnerships.


Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces

Neural Information Processing Systems

In extreme classification settings, embedding-based neural network models are currently not competitive with sparse linear and tree-based methods in terms of accuracy. Most prior works attribute this poor performance to the low-dimensional bottleneck in embedding-based methods. In this paper, we demonstrate that theoretically there is no limitation to using low-dimensional embedding-based methods, and provide experimental evidence that overfitting is the root cause of the poor performance of embedding-based methods. These findings motivate us to investigate novel data augmentation and regularization techniques to mitigate overfitting. To this end, we propose GLaS, a new regularizer for embedding-based neural network approaches.
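The summary does not give the form of the GLaS regularizer, but the class of technique it refers to, a penalty applied to the label-embedding matrix to curb overfitting, can be sketched with a generic spread-out-style term that pushes distinct label embeddings toward orthogonality. This is an illustration of that technique class, not the paper's formula; the function name and dimensions are assumptions:

```python
import numpy as np

def spread_out_penalty(label_embeddings):
    """Spread-out-style penalty on a label-embedding matrix (L labels x d dims).

    Penalizes squared cosine similarity between distinct label embeddings,
    encouraging them to spread out over the embedding space. Illustrative only;
    this is not the GLaS regularizer from the paper.
    """
    V = label_embeddings / np.linalg.norm(label_embeddings, axis=1, keepdims=True)
    G = V @ V.T                            # pairwise cosine similarities
    off_diag = G - np.diag(np.diag(G))     # zero out the diagonal
    L = V.shape[0]
    return np.sum(off_diag ** 2) / (L * (L - 1))

# Illustrative use: 1,000 labels embedded in 64 dimensions; during training this
# term would be added (with a weight) to the classification loss.
rng = np.random.default_rng(0)
V = rng.normal(size=(1000, 64))
print(spread_out_penalty(V))
```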


AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing by Karen Yeung, Andrew Howes, Ganna Pogrebna :: SSRN

#artificialintelligence

In this paper, we (1) argue that the international human rights framework provides the most promising set of standards for ensuring that AI systems are ethical in their design, development and deployment, and (2) sketch the basic contours of a comprehensive governance framework, which we call 'human rights-centred design, deliberation and oversight', for ensuring that AI can be relied upon to operate in ways that will not violate human rights.


Human Memory Search as Initial-Visit Emitting Random Walk

Neural Information Processing Systems

Imagine a random walk that outputs a state only when visiting it for the first time. The observed output is therefore a repeat-censored version of the underlying walk, and consists of a permutation of the states or a prefix of one. We call this model the initial-visit emitting random walk (INVITE). Prior work has shown that random walks with such a repeat-censoring mechanism explain human behavior in memory search tasks well, which is of great interest both in the study of human cognition and in various clinical applications. However, parameter estimation in INVITE is challenging, because naive likelihood computation by marginalizing over infinitely many hidden random walk trajectories is intractable.
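The censoring mechanism described above is easy to make concrete: the walk moves according to a Markov transition matrix but emits a state only the first time it is visited, so the observation is a permutation of the states (or a prefix of one). A minimal simulation sketch, assuming a made-up 4-state transition matrix; the function name and parameters are illustrative, not from the paper:

```python
import numpy as np

def invite_output(P, start, n_steps=1000, rng=None):
    """Simulate an initial-visit emitting (INVITE) random walk.

    P is a row-stochastic transition matrix over states 0..n-1. The underlying
    walk is hidden; the function returns only the repeat-censored output, i.e.
    each state in the order of its first visit.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = P.shape[0]
    visited, output = set(), []
    state = start
    for _ in range(n_steps):
        if state not in visited:           # emit a state only on its first visit
            visited.add(state)
            output.append(int(state))
        state = rng.choice(n, p=P[state])  # hidden transition, never re-emitted
    return output

# Illustrative use with an assumed 4-state chain.
P = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.3, 0.1, 0.3, 0.3],
              [0.2, 0.2, 0.1, 0.5],
              [0.4, 0.1, 0.4, 0.1]])
print(invite_output(P, start=0, n_steps=50, rng=np.random.default_rng(0)))
```

Marginalizing over the infinitely many hidden trajectories consistent with such an output is what makes naive likelihood computation intractable, which is the estimation challenge the paper addresses.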