Civil Rights & Constitutional Law


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."
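The article doesn't describe how the name-classification app works, but a common approach to this kind of task is a character n-gram model. The sketch below is purely hypothetical: a tiny character-bigram naive Bayes classifier trained on invented names and labels, not the actual system's method or data.

```python
from collections import defaultdict
import math

# Hypothetical illustration only: all names and labels below are invented,
# and the real app's method and training data are not described in the article.
TRAIN = [
    ("giuseppe", "italian"), ("alessandro", "italian"), ("francesca", "italian"),
    ("hiroshi", "japanese"), ("yuki", "japanese"), ("takeshi", "japanese"),
]

def bigrams(name):
    # Pad with ^ and $ so prefixes/suffixes become informative features.
    padded = f"^{name.lower()}$"
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

counts = defaultdict(lambda: defaultdict(int))   # label -> bigram -> count
label_totals = defaultdict(int)                  # label -> total bigram count
for name, label in TRAIN:
    for bg in bigrams(name):
        counts[label][bg] += 1
        label_totals[label] += 1

def predict(name, alpha=1.0):
    # Naive Bayes with Laplace smoothing over character bigrams.
    vocab = {bg for c in counts.values() for bg in c}
    best, best_lp = None, -math.inf
    for label in counts:
        lp = math.log(sum(1 for _, l in TRAIN if l == label) / len(TRAIN))
        for bg in bigrams(name):
            lp += math.log((counts[label][bg] + alpha) /
                           (label_totals[label] + alpha * len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(predict("takumi"))
```

A real system would need far more training data and careful evaluation; the point of the sketch is only that surface features of a name carry enough signal for a statistical guess, which is exactly why such tools raise the ethics questions Metcalf describes.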


WIRED


International Conference on Artificial Intelligence and Information

#artificialintelligence

Submission: We invite submissions for a 30-minute presentation (followed by a 10-minute discussion). An extended abstract of approximately 250-500 words should be prepared for blind review and include a cover page with full name, institution, contact information and a short bio. Files should be submitted in Word (.doc or .docx) format. Please structure the subject line of your message as follows: "First Name Last Name - Track - Title of Abstract". We intend to produce a collected volume based upon contributions to the conference.


The White House Wants To End Racism In Artificial Intelligence

#artificialintelligence

In a section on fairness, the report notes what numerous AI researchers have already pointed out: biased data results in a biased machine. If a dataset--say, a bunch of faces--contains mostly white people, or if the workers who assembled a more diverse dataset rated white faces as more attractive than non-white faces (even unintentionally), then any computer program trained on that data would likely "believe" that white people are more attractive than non-white people. "Ideally, every student learning AI, computer science, or data science would be exposed to curriculum and discussion on related ethics and security topics," the report states. Students should also be given the technical skills to apply this ethics education in their machine learning programs, the report notes.
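The "biased data in, biased model out" point can be made concrete with a toy example. Below, a trivial "model" that simply learns the average rating per group faithfully reproduces whatever annotator skew is present in its training labels. All numbers are fabricated for illustration; this is not a model or dataset from the report.

```python
# Toy illustration: if annotators systematically rated one group's faces
# higher, even the simplest learned model inherits that skew.
# All ratings below are invented for the sake of the example.
train = [
    ("white", 0.9), ("white", 0.8), ("white", 0.85), ("white", 0.9),
    ("nonwhite", 0.6), ("nonwhite", 0.55),
]

def fit_group_means(data):
    # "Training" here is just averaging the labels per group.
    sums, ns = {}, {}
    for group, rating in data:
        sums[group] = sums.get(group, 0.0) + rating
        ns[group] = ns.get(group, 0) + 1
    return {g: sums[g] / ns[g] for g in sums}

model = fit_group_means(train)
# The model now "believes" the annotators' skew:
print(model["white"] > model["nonwhite"])  # True
```

Real models are far more complex, but the failure mode is the same: nothing in the training procedure distinguishes genuine signal from bias baked into the labels.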


Ethical AI predicts outcome of human rights trials

#artificialintelligence

Artificial intelligence researchers have developed software capable of making complex decisions to accurately predict the outcome of human rights trials. The AI "judge" was developed by computer scientists at University College London (UCL), the University of Sheffield and the University of Pennsylvania using an algorithm that analyzed the text of cases at the European Court of Human Rights. Despite the accuracy of the latest algorithm's predictions, the researchers do not expect it to replace human judges any time soon. "We don't see AI replacing judges or lawyers, but we think they'd find it useful for rapidly identifying patterns in cases that lead to certain outcomes," said Nikolaos Aletras, who led the study at UCL Computer Science.
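Aletras's point about "rapidly identifying patterns in cases that lead to certain outcomes" can be sketched in miniature. The snippet below is not the UCL system: it uses four invented two-clause "cases" and simply surfaces the words most associated with one outcome over the other, a crude stand-in for the n-gram features a real text classifier would learn.

```python
from collections import Counter

# Hypothetical sketch: the case texts and outcome labels below are invented;
# the actual study trained on real European Court of Human Rights judgments.
cases = [
    ("the applicant was detained without review", "violation"),
    ("detention continued without judicial review", "violation"),
    ("the domestic courts reviewed the complaint promptly", "no_violation"),
    ("the complaint was examined promptly by the courts", "no_violation"),
]

def word_counts(label):
    # Aggregate word frequencies across all cases with the given outcome.
    c = Counter()
    for text, l in cases:
        if l == label:
            c.update(text.split())
    return c

def discriminative(label, other, top=3):
    # Rank words by how much more often they occur under `label` than `other`.
    a, b = word_counts(label), word_counts(other)
    scores = {w: a[w] - b[w] for w in a}
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:top]]

print(discriminative("violation", "no_violation"))
```

On this toy data, words like "without" and "review" surface as markers of the violation outcome, which is the kind of pattern-spotting assistance (not judgment-replacement) the researchers describe.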


We'll Keep AI Safe, Say Microsoft, Google, IBM, Facebook and Amazon on New Partnership

#artificialintelligence

Some of the world's largest tech companies are coming together to form a partnership aimed at educating the public about advances in artificial intelligence and ensuring that the technology meets ethical standards. "We believe that artificial intelligence technologies hold great promise for raising the quality of people's lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education," the group stated in a series of "tenets." Another nexus of interest will be ethics, with the group inviting academic experts to work with companies on applying AI for the good of humanity. But it's not clear whether this means opposing work with government surveillance authorities, or opposing forms of online censorship.


Racism, AI and Ethics - DATAVERSITY

#artificialintelligence

Andrew Heikkila recently wrote in TechCrunch, "Indeed, AI is here -- although Microsoft's blunder with Tay, the 'teenaged girl AI' embodied by a Twitter account who 'turned racist,' shows that we obviously still have a long way to go." The pace of advancement, mixed with our general lack of knowledge in the realm of artificial intelligence, has spurred many to chime in on the emerging topic of AI and ethics. Sydell calls upon Latanya Sweeney's 2013 study of Google AdWords buys made by companies providing criminal-background-check services. Sweeney's findings showed that when somebody Googled a traditionally "black-sounding" name, such as DeShawn, Darnell or Jermaine, the ad results returned were indicative of arrests at a significantly higher rate than if the name queried was a traditionally "white-sounding" name, such as Geoffrey, Jill or Emma.