Politics of Adversarial Machine Learning

arXiv.org Machine Learning

In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options for both the subjects of the machine learning systems and for those who deploy them, creating risks for civil liberties and human rights. In this paper, we draw on insights from science and technology studies, anthropology, and human rights literature to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems. To make this concrete, we use real-world examples of how attacks such as perturbation, model inversion, or membership inference can be used for socially desirable ends. Although the predictions of this analysis may seem dire, there is hope. Efforts to address human rights concerns in the commercial spyware industry provide guidance for similar measures to ensure ML systems serve democratic, not authoritarian ends.
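The abstract names perturbation, model inversion, and membership inference as attack classes without describing how any of them work. As a rough, illustrative sketch only, the Python snippet below shows one common form of perturbation attack, the fast gradient sign method (FGSM), against an assumed PyTorch classifier; the function name, the epsilon budget, and the [0, 1] input range are assumptions made for illustration, not details drawn from the paper.

import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, epsilon=0.03):
    # Illustrative sketch: craft an adversarial example with the fast
    # gradient sign method (FGSM). Assumes `model` is a differentiable
    # PyTorch classifier, `x` is an input batch scaled to [0, 1], and
    # `y` holds the ground-truth labels; `epsilon` bounds the per-element
    # perturbation magnitude.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to
    # the valid input range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

A small perturbation of this kind is one way a subject of a deployed classifier could degrade its predictions on their own data, which is one reading of the socially desirable uses the abstract alludes to.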


AI ethics is all about power

#artificialintelligence

At the Common Good in the Digital Age tech conference recently held in Vatican City, Pope Francis urged Facebook executives, venture capitalists, and government regulators to be wary of the impact of AI and other technologies. "If mankind's so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest," he said. In a related but contextually different conversation, this summer Joy Buolamwini testified before Congress with Rep. Alexandria Ocasio-Cortez (D-NY) that multiple audits found facial recognition technology generally works best on white men and worst on women of color. What these two events have in common is their relationship to power dynamics in the AI ethics debate. Arguments about AI ethics can be waged without mention of the word "power," but it's often there just under the surface. In fact, it's rarely the direct focus, but it needs to be. Power in AI is like gravity, an invisible force that influences every consideration of ethics in artificial intelligence. Power provides the means to influence which use cases are relevant; which problems are priorities; and who the tools, products, and services are made to serve. It underlies debates about how corporations and countries create policy governing use of the technology.


AI Weekly: Surveillance, structural racism, and the Biden 2020 presidential campaign

#artificialintelligence

In the United Kingdom there's been some landmark AI news recently involving government use of the technology. First, use of facial recognition by South Wales Police was ruled unlawful by a Court of Appeal judge, in part because it violated privacy and human rights and because police failed to verify the tech did not exhibit race or gender bias. How the U.K. treats facial recognition is important since London has more CCTV cameras than any major city outside of China. Then, U.K. government officials used an algorithm that ended up benefiting kids who go to private schools and downgrading students from disadvantaged backgrounds. Prime Minister Boris Johnson defended the algorithm's grading results as "robust" and "dependable for employers."


From whistleblower laws to unions: How Google's AI ethics meltdown could shape policy

#artificialintelligence

It's been two weeks since Google fired Timnit Gebru, a decision that still seems incomprehensible. Gebru is one of the most highly regarded AI ethics researchers in the world, a pioneer whose work has highlighted the ways tech fails marginalized communities when it comes to facial recognition and, more recently, large language models. Of course, this incident didn't happen in a vacuum. Case in point: Gebru was fired the same day the National Labor Relations Board (NLRB) filed a complaint against Google for illegally spying on employees and for the retaliatory firing of employees interested in unionizing. Gebru's dismissal also calls into question issues of corporate influence in research, demonstrates the shortcomings of self-regulation, and highlights the poor treatment of Black people and women in tech in a year when Black Lives Matter sparked the largest protest movement in U.S. history. In an interview with VentureBeat last week, Gebru called the way she was fired disrespectful and described a companywide memo sent by CEO Sundar Pichai as "dehumanizing." To delve further into possible outcomes following Google's AI ethics meltdown, VentureBeat spoke with five experts in the field about Gebru's dismissal and the issues it raises.