Civil Rights & Constitutional Law


Big Data will be biased, if we let it

@machinelearnbot

And since we're on the car insurance subject, minorities pay more for car insurance than white people in similarly risky neighborhoods. If we don't put in place reliable, actionable, and accessible solutions for addressing bias in data science, these types of usually unintentional discrimination will become more and more normal, at odds with a society and institutions that, on the human side, are trying their best to evolve past bias and move forward in history as a global community. Last but definitely not least, there's a specific bias and discrimination section preventing organizations from using data that might promote bias, such as race, gender, religious or political beliefs, or health status, to make automated decisions (with some verified exceptions). It's time to make that training broader: teach everyone involved about the ways their decisions while building tools may affect minorities, and accompany that with the relevant technical knowledge to prevent it from happening.


Racist artificial intelligence? Maybe not, if computers explain their 'thinking'

#artificialintelligence

Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their "thinking." "In fact, it can get much worse where if the AI agents are part of a loop where they're making decisions, even the future data, the biases get reinforced," he added. Researchers hope that, by seeing the thought process of the computers, they can make sure AI doesn't pick up any gender or racial biases that humans have. But Singh says understanding the decision process is critical for future use, particularly in cases where AI is making decisions, like approving loan applications, for example.


If you weren't raised in the Internet age, you may need to worry about workplace age discrimination

Los Angeles Times

Although people of both genders struggle with age discrimination, research has shown women begin to experience age discrimination in hiring practices before they reach 50, whereas men don't experience it until several years later. Just as technology is causing barriers inside the workplace for older employees, online applications and search engines could be hurting older workers looking for jobs. Many applications have required fields asking for date of birth and high school graduation, something many older employees choose to leave off their resumes. Furthermore, McCann said, some search engines allow people to filter their search based on high school graduation date, thereby allowing employers and employees to screen people and positions out of the running.


Just like humans, artificial intelligence can be sexist and racist

#artificialintelligence

Machine learning is one of the biggest drivers of artificial intelligence technology at present. Algorithms within machine learning applications have been able to write code and play poker, and are being used in attempts to cure cancer. Yet there is a bias problem. Using the popular GloVe algorithm, trained on around 840 billion words from the internet, three Princeton University academics have shown that AI applications replicate the stereotypes present in the human-generated data. These prejudices related to both race and gender.
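The Princeton finding rests on a simple measurement: in a word-embedding space like GloVe's, a target word's bias can be quantified by comparing its average cosine similarity to one set of attribute words against another. The sketch below illustrates the idea with made-up toy vectors (the words, dimensions, and values are invented for illustration; real GloVe embeddings have hundreds of dimensions and are trained from data):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Toy 3-d "embeddings", deliberately constructed so that "engineer"
# sits closer to the male attribute words than the female ones.
vectors = {
    "engineer": (0.9, 0.1, 0.2),
    "he":       (1.0, 0.0, 0.1),
    "man":      (0.8, 0.1, 0.0),
    "she":      (0.1, 1.0, 0.1),
    "woman":    (0.0, 0.9, 0.2),
}

def association(target, set_a, set_b):
    """Mean similarity of target to set_a minus mean similarity to set_b.

    A positive score means the target word associates more strongly
    with set_a; zero would mean no measured bias between the sets.
    """
    mean_sim = lambda words: sum(
        cosine(vectors[target], vectors[w]) for w in words
    ) / len(words)
    return mean_sim(set_a) - mean_sim(set_b)

score = association("engineer", ["he", "man"], ["she", "woman"])
print(score > 0)  # the toy vectors encode a male-leaning association
```

With embeddings trained on web text, this kind of association score is what reveals that occupation words, names, and other targets inherit human stereotypes from the training corpus.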


The 'robot lawyer' giving free legal advice to refugees

BBC News

A technology initially used to fight traffic fines is now helping refugees with legal claims. When Joshua Browder developed DoNotPay he called it "the world's first robot lawyer". It's a chatbot - a computer program that carries out conversations through texts or vocal commands - and it uses Facebook Messenger to gather information about a case before spitting out advice and legal documents. It was originally designed to help people wiggle out of parking or speeding tickets. But now Browder - a 20-year-old British man currently studying at Stanford University - has adapted his bot to help asylum seekers.


The March on Austin: Washington Casts a Shadow on SXSW

#artificialintelligence

For the creators, marketers and entrepreneurs descending this weekend on Austin, Texas, politics in the wake of President Trump will surely be top of mind, perhaps even overshadowing some of the innovation in virtual reality and artificial intelligence. This year's dialog will focus on how "social media can drive organized protests and provide support for causes our current administration has reprioritized," like the environment, gender equality and women's rights, said Neil Carty, senior VP-innovation strategy at consultancy MediaLink. "There is a shift away from interruptive TV ads to content people want to watch in its own right," said Jody Raida, director-branded entertainment at McGarryBowen. Artificial intelligence and virtual reality will also be hot, with dozens of sessions dedicated to the technologies, along with the application of chatbots and live video.


This chatbot helps refugees claim asylum, for free

Mashable

Refugees struggling with asylum applications can now use a chatbot to get free legal aid in the US, Canada and the UK. "For example, the best answer for your situation will include a description of when the mistreatment started in your home country," Browder said. To give free legal aid, DoNotPay relies on Facebook Messenger, which is not end-to-end encrypted by default, because it is "the most accessible platform and the most appropriate to launch with". "All data is deleted from my server after ten minutes and it is possible to wipe your data from Facebook Messenger," he said, acknowledging that privacy is a "very important issue and it's important to be upfront with users".


Chatbot that overturned 160,000 parking fines now helping refugees claim asylum

The Guardian

The creator of a chatbot which overturned more than 160,000 parking fines and helped vulnerable people apply for emergency housing is now turning the bot toward helping refugees claim asylum. The original DoNotPay, created by Stanford student Joshua Browder, describes itself as "the world's first robot lawyer", giving free legal aid to users through a simple-to-use chat interface. The chatbot, using Facebook Messenger, can now help refugees fill in an immigration application in the US and Canada. Those in the UK are told they need to apply in person, and the bot helps fill out an ASF1 form for asylum support.


Nowhere to hide

BBC News

And Russian app FindFace lets you match a photograph you've taken of someone to their social media profile on the country's popular social media platform Vkontakte. Carl Gohringer, founder and director at Allevate, a facial recognition firm that works with law enforcement, intelligence and government agencies, says: "The amount of media - such as videos and photos - available to us as individuals, organisations and businesses, and to intelligence and law enforcement agencies, is staggering." But Ruth Boardman, data privacy specialist at international law firm Bird & Bird, says individual rights still vary from one EU state to another. And the automation of security vetting decisions based on facial recognition tech raises serious privacy issues.


This Startup Is Teaching Machines To Think, Reason, And Communicate Like Us

#artificialintelligence

Maluuba's current artificial intelligence is able to process words from a Wikipedia page or a George R.R. Martin novel. "Questions that have definite answers are what we've tackled to date," says Maluuba research scientist Adam Trischler, who leads the machine comprehension team. "If you get to the point where you can teach a system to solve a problem in a language with a generalized approach, in this case reading," says Musbah, "you've gotten to the point where it can scale in terms of how it applies in an AI fashion across different industries." Language comprehension, then, isn't just an artificial intelligence problem, but a human problem.