Civil Rights & Constitutional Law

Big Data will be biased, if we let it


And since we're on the car insurance subject, minorities pay more for car insurance than white people in similarly risky neighborhoods. If we don't put in place reliable, actionable, and accessible solutions to address bias in data science, this type of usually unintentional discrimination will become more and more normal, at odds with a society and institutions that, on the human side, are trying their best to evolve past bias and move forward in history as a global community. Last but definitely not least, there's a specific bias and discrimination section, preventing organizations from using data that might promote bias, such as race, gender, religious or political beliefs, health status, and more, to make automated decisions (with some verified exceptions). It's time to make that training broader: teach everyone involved how the decisions they make while building tools may affect minorities, and pair that with the relevant technical knowledge to prevent it from happening.

Racist artificial intelligence? Maybe not, if computers explain their 'thinking'


Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their "thinking." "In fact, it can get much worse where if the AI agents are part of a loop where they're making decisions, even the future data, the biases get reinforced," he added. Researchers hope that, by seeing the thought process of the computers, they can make sure AI doesn't pick up any gender or racial biases that humans have. But Singh says understanding the decision process is critical for future use, particularly in cases where AI is making decisions, like approving loan applications, for example.

If you weren't raised in the Internet age, you may need to worry about workplace age discrimination

Los Angeles Times

Although people of both genders struggle with age discrimination, research has shown women begin to experience age discrimination in hiring practices before they reach 50, whereas men don't experience it until several years later. Just as technology is causing barriers inside the workplace for older employees, online applications and search engines could be hurting older workers looking for jobs. Many applications have required fields asking for date of birth and high school graduation, something many older employees choose to leave off their resumes. Furthermore, McCann said, some search engines allow people to filter their search based on high school graduation date, thereby allowing employers and employees to screen people and positions out of the running.

The March on Austin: Washington Casts a Shadow on SXSW


For the creators, marketers and entrepreneurs descending this weekend on Austin, Texas, politics in the wake of President Trump will surely be top of mind, perhaps even overshadowing some of the innovation in virtual reality and artificial intelligence. This year's dialog will focus on how "social media can drive organized protests and provide support for causes our current administration has reprioritized," like the environment, gender equality and women's rights, said Neil Carty, senior VP-innovation strategy at consultancy MediaLink. "There is a shift away from interruptive TV ads to content people want to watch in its own right," said Jody Raida, director-branded entertainment at McGarryBowen. Artificial intelligence and virtual reality will also be hot, with dozens of sessions dedicated to the technologies, along with the application of chatbots and live video.

This chatbot helps refugees claim asylum, for free


Refugees struggling with asylum applications can now use a chatbot to get free legal aid in the US, Canada and the UK. "For example, the best answer for your situation will include a description of when the mistreatment started in your home country," Browder said. In order to give free legal aid, DoNotPay relies on Facebook Messenger, which is not automatically end-to-end encrypted, as it is "the most accessible platform and the most appropriate to launch with". "All data is deleted from my server after ten minutes and it is possible to wipe your data from Facebook Messenger," he said, acknowledging that privacy is a "very important issue and it's important to be upfront with users".

Chatbot that overturned 160,000 parking fines now helping refugees claim asylum

The Guardian

The creator of a chatbot which overturned more than 160,000 parking fines and helped vulnerable people apply for emergency housing is now turning the bot to helping refugees claim asylum. The original DoNotPay, created by Stanford student Joshua Browder, describes itself as "the world's first robot lawyer", giving free legal aid to users through a simple-to-use chat interface. The chatbot, using Facebook Messenger, can now help refugees fill in an immigration application in the US and Canada. Those in the UK are told they need to apply in person, and the bot helps fill out an ASF1 form for asylum support.

Nowhere to hide

BBC News

And Russian app FindFace lets you match a photograph you've taken of someone to their social media profile on the country's popular social media platform Vkontakte. Carl Gohringer, founder and director at Allevate, a facial recognition firm that works with law enforcement, intelligence and government agencies, says: "The amount of media - such as videos and photos - available to us as individuals, organisations and businesses, and to intelligence and law enforcement agencies, is staggering. But Ruth Boardman, data privacy specialist at international law firm Bird & Bird, says individual rights still vary from one EU state to another. And the automation of security vetting decisions based on facial recognition tech raises serious privacy issues.

This Startup Is Teaching Machines To Think, Reason, And Communicate Like Us


Maluuba's current artificial intelligence is able to process words from a Wikipedia page or a George R.R. Martin novel and answer questions about what it has read. "Questions that have definite answers are what we've tackled to date," says Maluuba research scientist Adam Trischler, who leads the machine comprehension team. "If you get to the point where you can teach a system to solve a problem in a language with a generalized approach, in this case reading," says Musbah, "you've gotten to the point where it can scale in terms of how it applies in an AI fashion across different industries." Language comprehension then isn't just an artificial intelligence problem, but a human problem.

This AI predicts the outcome of human rights trials


"The court has a huge queue of cases that have not been processed and it's quite easy to say if some of them have a high probability of violation, and others have a low probability of violation," said Vasileios Lampos, also a UCL scientist and co-author of the study. To do this, the scientists fed a database of court decisions into a natural language processing neural network.

Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective


In this paper, our particular focus is on the automatic analysis of cases of the European Court of Human Rights (ECtHR or Court). Our task is to predict whether a particular Article of the Convention has been violated, given textual evidence extracted from a case, which comprises specific parts pertaining to the facts, the relevant applicable law and the arguments presented by the parties involved. Accordingly, in the discussion we highlight ways in which automatically predicting the outcomes of ECtHR cases could potentially provide insights on whether judges follow a so-called legal model (Grey, 1983) of decision making or their behavior conforms to the legal realists' theorization (Leiter, 2007), according to which judges primarily decide cases by responding to the stimulus of the facts of the case. It can also be used to develop prior indicators for diagnosing potential violations of specific Articles in lodged applications and eventually prioritise the decision process on cases where violation seems very likely.
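The task described above — binary prediction of violation vs. no violation from case text — can be framed as standard text classification. As a minimal sketch only (the case texts, feature settings, and model choice here are illustrative assumptions, not the paper's actual data or configuration), n-gram features feeding a linear classifier might look like:

```python
# Hedged sketch: predict case outcome (1 = violation, 0 = no violation)
# from case text using TF-IDF n-gram features and a linear SVM.
# The "cases" below are synthetic placeholders, not real ECtHR texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

cases = [
    "applicant detained without judicial review for months",
    "authorities provided a prompt hearing and legal counsel",
    "prolonged pre-trial detention with no access to a lawyer",
    "complaint examined promptly and effective remedies were available",
]
labels = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation

# Unigram + bigram features, linear decision boundary.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(cases, labels)

prediction = model.predict(["applicant held in detention without a hearing"])
print(prediction)
```

In practice, such a classifier would be trained on the full text of many decided cases and evaluated with cross-validation; its per-case scores could then serve as the "prior indicators" for triaging lodged applications mentioned above.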