civil rights & constitutional law


Airlines take no chances with our safety. And neither should artificial intelligence

#artificialintelligence

You'd think flying in a plane would be more dangerous than driving a car. In reality it's much safer, partly because the aviation industry is heavily regulated. Airlines must adhere to strict standards for safety, testing, training, policies and procedures, auditing and oversight. And when things do go wrong, investigators examine what happened and rectify the issue to improve safety in the future. Other industries where things can go very badly wrong, such as pharmaceuticals and medical devices, are also heavily regulated.


The race problem with AI: 'Machines are learning to be racist'

#artificialintelligence

Artificial intelligence (AI) is already deeply embedded in so many areas of our lives. Society's reliance on AI is set to increase at a pace that is hard to comprehend. AI isn't the kind of technology that is confined to futuristic science fiction movies – the robots you've seen on the big screen that learn how to think, feel, fall in love, and subsequently take over humanity. No, AI right now is much less dramatic and often much harder to identify. Artificial intelligence is simply machine learning.


Alexa, Alex, or Al?

#artificialintelligence

Our tech world is fraught with troubling trends when it comes to gender inequality. A recent UN report, "I'd blush if I could", warns that embodied AIs such as the primarily female-voiced assistants can actually reinforce harmful gender stereotypes. Dag Kittlaus, who co-founded Siri before its acquisition by Apple, spoke out on Twitter against the accusations of sexism levelled at Siri. It is important to acknowledge that the gender of Siri, unlike that of other voice assistants, was configurable early on. But the product's position becomes harder to define when you notice that Siri's response to the highly inappropriate comment "You're a slut" is in fact the title of the UN report: "I'd blush if I could." In this article, therefore, I'd like to discuss the social and cultural aspects of voice assistants: specifically, why they are designed with gender, what ethical concerns this raises, and how we can fix the issue.


Checks and balances in AI ethics

#artificialintelligence

Ethics of AI: While artificial intelligence promises significant benefits, there are concerns it could make unethical decisions. Artificial intelligence (AI) is fast becoming important for accountants and businesses, and how it is used raises several ethical issues and questions. Because autonomous AI algorithms teach themselves, concerns have been raised that some machine learning techniques are essentially "black boxes" that make it technically impossible to fully understand how the machine arrived at a result.


Speech recognition technology is racist, study finds

#artificialintelligence

New evidence of voice recognition's racial bias problem has emerged. Speech recognition technologies developed by Amazon, Google, Apple, Microsoft, and IBM make almost twice as many errors when transcribing African American voices as they do with white American voices, according to a new Stanford study. All five systems produced these error rates even when the speakers were of the same gender and age and were saying the exact same words. We can't know for sure whether these technologies are used in virtual assistants such as Siri and Alexa, as none of the companies disclose this information. If they are, the products will be offering a vastly inferior service to a huge chunk of their users, which can have a major impact on their daily lives.


New Models of Governance Must Address the Human Rights Challenges Raised by Artificial Intelligence - The Geneva Academy of International Humanitarian Law and Human Rights

#artificialintelligence

Artificial intelligence (AI) is bound to enable innovation in the decades to come. On the one hand, AI technologies may be used to improve societal well-being and help fight human rights abuses. On the other hand, AI presents a variety of challenges that can profoundly affect the respect for and protection of human rights. It is therefore important to place international human rights law (IHRL) at the centre of discussions about AI governance. Our new Research Brief, Human Rights and the Governance of Artificial Intelligence, discusses the opportunities and risks that AI represents for human rights, recalls that IHRL should occupy a central place in the governance of AI, and outlines two additional avenues to regulation: public procurement and standardization.


Catherine D'Ignazio: 'Data is never a raw, truthful input – and it is never neutral'

The Guardian

Our ability to collect and record information in digital form has exploded, as has our adoption of AI systems, which use data to make decisions. But data isn't neutral, and sexism, racism and other forms of discrimination are showing up in our data products. Catherine D'Ignazio, an assistant professor of urban science and planning at the Massachusetts Institute of Technology (MIT), argues we need to do better. Along with Lauren Klein, who directs the Digital Humanities Lab at Emory University, she is the co-author of the new book Data Feminism, which charts a course for a more equitable data science. D'Ignazio also directs MIT's new Data and Feminism lab, which seeks to use data and computation to counter oppression.


Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces

Neural Information Processing Systems

In extreme classification settings, embedding-based neural network models are currently not competitive with sparse linear and tree-based methods in terms of accuracy. Most prior works attribute this poor performance to the low-dimensional bottleneck in embedding-based methods. In this paper, we demonstrate that theoretically there is no limitation to using low-dimensional embedding-based methods, and provide experimental evidence that overfitting is the root cause of the poor performance of embedding-based methods. These findings motivate us to investigate novel data augmentation and regularization techniques to mitigate overfitting. To this end, we propose GLaS, a new regularizer for embedding-based neural network approaches.
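
The abstract names GLaS only at a high level, so purely as an illustration of what a regularizer on label embeddings can look like, here is a minimal PyTorch-style sketch. It penalizes the gap between the label-embedding Gram matrix and a target built from label co-occurrence, so frequently co-occurring labels are pulled together and unrelated labels pushed toward orthogonality. The function name, the co-occurrence normalization, and the tensor shapes are assumptions for illustration, not the paper's exact formulation of GLaS.

```python
import torch

def glas_style_regularizer(label_emb: torch.Tensor, cooc: torch.Tensor) -> torch.Tensor:
    """Illustrative regularizer in the spirit of GLaS (not the paper's exact form).

    label_emb: (L, d) matrix of label embeddings V.
    cooc:      (L, L) symmetric label co-occurrence counts from the training set.

    Penalizes the distance between the embedding Gram matrix V V^T and a
    normalized co-occurrence target, encouraging similar embeddings for labels
    that co-occur and near-orthogonal embeddings for unrelated labels
    (a "spreadout"-like effect).
    """
    # Normalize counts so the diagonal (each label with itself) becomes 1.
    freq = torch.clamp(torch.diagonal(cooc), min=1.0)
    target = cooc / torch.sqrt(freq[:, None] * freq[None, :])
    gram = label_emb @ label_emb.T          # (L, L) pairwise label similarities
    num_labels = label_emb.shape[0]
    return ((gram - target) ** 2).sum() / (num_labels ** 2)

# During training, such a penalty would be added to the classification loss,
# e.g.: loss = task_loss + reg_weight * glas_style_regularizer(V, cooc)
```

In this kind of setup, the regularization weight trades off raw classification accuracy against the geometric structure imposed on the label embeddings, which is the sort of overfitting control the abstract points to.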


AI Predicted to Take Over Privacy Tech

#artificialintelligence

More than 40% of privacy tech solutions aimed at ensuring legal compliance are predicted to rely on artificial intelligence (AI) over the course of the next three years, according to analysts from the business research and advisory firm Gartner Inc. The company, which is set to present these findings among others at the Gartner IT Symposium/Xpo 2020 in Toronto, Canada in May, has found that reliance on privacy tech to ensure compliance with various privacy laws is expected to increase by at least 700% between 2020 and 2023. This marks an increase from the 5% of privacy tech solutions that are AI-driven today to the more than 40% predicted to become available within the next 36 months. This development comes as companies are increasingly exposed to the combined pressures of privacy legislation and data breach risks. An October 2019 study by Bitdefender, for example, found that nearly 60% of companies had experienced a data breach since the beginning of 2017, and that nearly a quarter of the companies surveyed had suffered such a breach within the first six months of 2019 alone.


Facial recognition is in London. So how should we regulate it?

#artificialintelligence

As the first step on the road to a powerful, high-tech surveillance apparatus, it was a little underwhelming: a blue van topped by almost comically intrusive cameras, a few police officers staring intently but ineffectually at their smartphones, and a lot of bemused shoppers. As unimpressive as the moment may have been, however, the decision by London's Metropolitan Police to expand its use of live facial recognition (LFR) marks a significant shift in the debate over privacy, security and surveillance in public spaces. Despite dismal accuracy results in earlier trials, the Metropolitan Police Service (MPS) has announced that it is pushing ahead with the roll-out of LFR at locations across London. The MPS says that cameras will be focused on a small targeted area "where intelligence suggests [they] are most likely to locate serious offenders," and will match faces against a database of individuals wanted by police. The cameras will be accompanied by clear signposting and officers handing out leaflets (it is unclear why the MPS thinks that serious offenders would choose to walk through an area full of police officers handing out leaflets to passersby).