Civil Rights & Constitutional Law


Can A.I. Be Taught to Explain Itself?

@machinelearnbot

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: "Advances in A.I. Are Used to Spot Signs of Sexuality." But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski's work "dangerous" and "junk science." Within a week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: "The Invention of A.I. 'Gaydar' Could Be the Start of Something Much Worse."


International Conference on Artificial Intelligence and Information

#artificialintelligence

Submissions: We invite submissions for a 30-minute presentation (followed by a 10-minute discussion). An extended abstract of approximately 250-500 words should be prepared for blind review and include a cover page with full name, institution, contact information and a short bio. Files should be submitted in Word format (.doc or .docx). Please use the following structure in the subject line of your message: "First Name Last Name - Track - Title of Abstract". We intend to produce a collected volume based upon contributions to the conference.


Using Sound and Artificial Intelligence to Detect Human Rights Violations

#artificialintelligence

But video footage poses a "Big Data" challenge to human rights organizations. To take on this Big Data challenge, Jay and team have developed a new machine learning-based audio processing system that "enables both synchronization of multiple audio-rich videos of the same event, and discovery of specific sounds (such as wind, screaming, gunshots, airplane noise, music, and explosions) at the frame level within a video." I've been following Jay's applied research for many years now and remain a fan of his approach, given its overlap with my own work using machine learning to make sense of the Big Data generated during major natural disasters. Effective cross-disciplinary collaboration between computer scientists and human rights (or humanitarian) practitioners is really hard but absolutely essential.
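
One building block of such a system can be sketched in a few lines. The code below is not Jay's pipeline; it only illustrates, under toy assumptions, a common way to synchronize two recordings of the same event: cross-correlate their audio tracks and read off the lag at the correlation peak. The function name and the synthetic signals are invented for illustration.

    import numpy as np
    from scipy.signal import correlate, correlation_lags

    def estimate_offset_seconds(ref, other, sample_rate):
        """Seconds by which `other` starts later than `ref` (negative if earlier)."""
        # Remove DC offset and scale, so differing recording levels don't dominate.
        ref = (ref - ref.mean()) / (ref.std() + 1e-9)
        other = (other - other.mean()) / (other.std() + 1e-9)
        # The peak of the cross-correlation marks the best alignment of the two tracks.
        corr = correlate(other, ref, mode="full")
        lags = correlation_lags(len(other), len(ref), mode="full")
        return lags[np.argmax(corr)] / sample_rate

    # Toy check: `delayed` is the same signal starting half a second later.
    sr = 8000
    rng = np.random.default_rng(0)
    clean = rng.standard_normal(3 * sr)
    delayed = np.concatenate([np.zeros(sr // 2), clean])[: 3 * sr]
    print(round(estimate_offset_seconds(clean, delayed, sr), 3))  # ~0.5

Once recordings are aligned this way, frame-level discovery of specific sounds (gunshots, screaming, and so on) is typically a separate classification step run over short audio windows; that part is beyond this sketch.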


Princeton researchers discover why AI become racist and sexist

#artificialintelligence

Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. As an example, Caliskan made a video in which she shows how the Google Translate AI actually mistranslates words into the English language based on stereotypes it has learned about gender. Though Caliskan and her colleagues found language was full of biases based on prejudice and stereotypes, it was also full of latent truths. "Language reflects facts about the world," Caliskan told Ars.
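
For readers curious how such an association test works mechanically, here is a rough Python sketch in the spirit of WEAT (it is not the authors' code). Each target word is compared by cosine similarity to two sets of attribute words, and a standardized effect size summarizes which attribute set the targets lean toward. The `embed` dictionary below is a random placeholder; real use would plug in pretrained word vectors.

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def association(w, A, B, embed):
        """Mean similarity of word w to attribute set A minus attribute set B."""
        return (np.mean([cosine(embed[w], embed[a]) for a in A])
                - np.mean([cosine(embed[w], embed[b]) for b in B]))

    def weat_effect_size(X, Y, A, B, embed):
        """Standardized difference in association between target sets X and Y."""
        x_assoc = [association(x, A, B, embed) for x in X]
        y_assoc = [association(y, A, B, embed) for y in Y]
        return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc)

    # Random placeholder vectors: the score below is meaningless, but with real
    # embeddings it would quantify the kind of bias described in the article.
    rng = np.random.default_rng(0)
    words = ["flower", "rose", "insect", "ant", "pleasant", "lovely", "unpleasant", "nasty"]
    embed = {w: rng.standard_normal(50) for w in words}
    print(weat_effect_size(["flower", "rose"], ["insect", "ant"],
                           ["pleasant", "lovely"], ["unpleasant", "nasty"], embed))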


Emerging Ethical Concerns In the Age of Artificial Intelligence

#artificialintelligence

Science fiction novels have long delighted readers by grappling with futuristic challenges like the possibility of artificial intelligence so difficult to distinguish from human beings that people naturally ask, "Should these sophisticated computer programs be considered human?" Tech industry luminaries such as Tesla CEO Elon Musk have recently endorsed concepts like a guaranteed minimum income or universal basic income. Bill Gates recently made headlines with a proposal to impose a "robot tax" -- essentially, a tax on automated solutions to account for the social costs of job displacement. Technology challenges our conception of human rights in other ways as well.


Police Using Technology To Fight Crime Threatens Black Neighborhoods

International Business Times

But the city's new effort seems to ignore evidence, including recent research from members of our policing study team at the Human Rights Data Analysis Group, that predictive policing tools reinforce, rather than reimagine, existing police practices. Machine-learning algorithms learn to make predictions by analyzing patterns in an initial training data set and then looking for similar patterns in new data as they come in. Our recent study, by the Human Rights Data Analysis Group's Kristian Lum and William Isaac, found that predictive policing vendor PredPol's purportedly race-neutral algorithm targeted black neighborhoods at roughly twice the rate of white neighborhoods when trained on historical drug crime data from Oakland, California. Any genuine reimagining of policing should start with community members and police departments discussing policing priorities and measures of police performance.
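
The feedback dynamic the study points to can be seen even in a toy simulation. The sketch below is emphatically not PredPol's algorithm or the study's analysis; it only shows, under invented numbers, how a "patrol where the most incidents were recorded" rule amplifies an initial disparity in records even when the underlying rates are identical.

    import numpy as np

    rng = np.random.default_rng(42)
    true_rate = np.array([0.3, 0.3])     # identical underlying rates in both neighborhoods
    recorded = np.array([30.0, 10.0])    # biased starting records: neighborhood 0 was patrolled more

    for day in range(200):
        # "Model": send the patrol to the neighborhood with the most recorded incidents so far.
        patrolled = int(np.argmax(recorded))
        # Incidents are only recorded where police are present to observe them.
        recorded[patrolled] += rng.random() < true_rate[patrolled]

    print(recorded)  # neighborhood 0 piles up far more records despite equal underlying rates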


How not to create a racist, sexist robot

#artificialintelligence

Robots are picking up sexist and racist biases because the information used to program them comes predominantly from one homogeneous group of people, suggests a new study from Princeton University and the U.K.'s University of Bath. "But robots based on artificial intelligence (AI) and machine learning learn from historic human data, and this data usually contain biases," Caliskan tells The Current's Anna Maria Tremonti. With the federal government recently announcing a $125 million investment in Canada's AI industry, Duhaime says now is the time to make sure funding goes towards pushing women forward in this field. "There is an understanding in the research community that we have to be careful and we have to have a plan with respect to ethical correctness of AI systems," she tells Tremonti.


Princeton researchers discover why AI become racist and sexist

#artificialintelligence

Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus--created by millions of people typing away online--might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes. People taking the IAT are asked to put words into two categories.
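
As a concrete illustration of "learning language from a corpus," the sketch below trains a small word-embedding model on a toy corpus with gensim's Word2Vec. The study itself examined embeddings learned from the Common Crawl rather than training new ones with code like this; the toy sentences and similarity checks here are invented, and simply show how co-occurrence patterns in text, including any biases, end up encoded as geometric proximity between word vectors.

    from gensim.models import Word2Vec

    # Tiny invented corpus standing in for web-scale text such as the Common Crawl.
    toy_corpus = [
        ["nurse", "cares", "for", "patients", "in", "the", "hospital"],
        ["nurse", "works", "long", "shifts", "at", "the", "hospital"],
        ["engineer", "designs", "machines", "in", "the", "lab"],
        ["engineer", "builds", "systems", "in", "the", "lab"],
    ]

    model = Word2Vec(sentences=toy_corpus, vector_size=32, window=3,
                     min_count=1, epochs=200, seed=1)

    # Similarity scores reflect whatever co-occurrence patterns the training text contains.
    print(model.wv.similarity("nurse", "hospital"))
    print(model.wv.similarity("engineer", "lab"))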


Amazon: Virtual assistants and AI robots have free speech rights, too

#artificialintelligence

In George Orwell's classic dystopian novel, "1984," every house is equipped with a Telescreen, a monitoring device enabling government surveillance. Amazon is trying to prevent its Echo/Alexa devices from turning into just that: a tool of government listening, which could also inhibit people from buying them. Accordingly, the Seattle-based company has filed a motion to prevent recorded audio from an Echo being used as evidence in a criminal trial. Last year, police in Arkansas sought to obtain recordings captured by an Echo as evidence in a 2015 murder case.