Civil Rights & Constitutional Law


AI robots are sexist and racist, experts warn

#artificialintelligence

Professor Sharkey said the deep learning algorithms that drive AI software are "not transparent", making it difficult to redress the problem. Currently, approximately 9 per cent of the UK's engineering workforce is female, and women make up only 20 per cent of those taking A-level physics. "We have a problem," Professor Sharkey told Today. He said researchers at Boston University had demonstrated the inherent bias in AI algorithms by training a machine to analyse text collected from Google News.
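The Boston University work described here involved word embeddings learned from Google News text. As a minimal sketch of that kind of probe (assuming the `gensim` library and its downloadable `word2vec-google-news-300` vectors, which are trained on the same corpus), an analogy query can surface the gendered associations Sharkey warns about:

```python
# Minimal sketch: probing gendered associations in word embeddings
# trained on Google News text. Assumes gensim and its downloadable
# 'word2vec-google-news-300' vectors (a large download on first use).
import gensim.downloader as api

model = api.load("word2vec-google-news-300")

# Classic analogy probe: "man is to programmer as woman is to ...?"
# Embeddings learned from biased text tend to return stereotyped
# completions for queries like this one.
for word, score in model.most_similar(
    positive=["programmer", "woman"], negative=["man"], topn=5
):
    print(f"{word}: {score:.3f}")
```

This is an illustration of the type of experiment the researchers ran, not their exact code.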


How not to create a racist, sexist robot

#artificialintelligence

Robots are picking up sexist and racist biases because the information used to train them comes predominantly from one homogeneous group of people, suggests a new study from Princeton University and the U.K.'s University of Bath. "But robots based on artificial intelligence (AI) and machine learning learn from historic human data and this data usually contain biases," Caliskan tells The Current's Anna Maria Tremonti. With the federal government recently announcing a $125 million investment in Canada's AI industry, Duhaime says now is the time to make sure funding goes towards pushing women forward in this field. "There is an understanding in the research community that we have to be careful and we have to have a plan with respect to ethical correctness of AI systems," she tells Tremonti.
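The Princeton/Bath study measured bias with a word-embedding association test: it compares how strongly two sets of target words (e.g., male and female names) associate with two sets of attribute words (e.g., career and family terms). A minimal sketch of that statistic follows; the function takes pre-computed word vectors as input, and the structure mirrors the published test, though the details here are a simplification rather than the authors' own code:

```python
# Minimal sketch of a WEAT-style association test, the technique behind
# the Princeton/Bath study. Inputs are word vectors (e.g., from the
# embedding probed above); word lists themselves are supplied by the caller.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attr_a, attr_b):
    # s(w, A, B): mean similarity to attribute set A minus mean to B.
    return (np.mean([cosine(word_vec, a) for a in attr_a])
            - np.mean([cosine(word_vec, b) for b in attr_b]))

def weat_effect_size(targets_x, targets_y, attr_a, attr_b):
    # Effect size: difference of the two target sets' mean associations,
    # normalised by the pooled standard deviation. Values near zero mean
    # little differential association; larger magnitudes mean more bias.
    s_x = [association(w, attr_a, attr_b) for w in targets_x]
    s_y = [association(w, attr_a, attr_b) for w in targets_y]
    pooled = np.std(s_x + s_y, ddof=1)
    return (np.mean(s_x) - np.mean(s_y)) / pooled
```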


What privacy pros can take away from Uber's Greyball

#artificialintelligence

Uber combined data collected from its app with "other techniques" to locate, identify, and circumvent legal authorities. Through several means, the company surveilled government officials to avoid regulatory scrutiny and other law enforcement activity. Once a user was identified as law enforcement, Uber Greyballed him or her, tagging the user with a small piece of code that read "Greyball" followed by a string of numbers. Regulatory officials and law-enforcement officers are people with privacy rights, too.
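Mechanically, the tactic described in the reporting reduces to flagging an account and branching on the flag when serving the app. The sketch below is purely illustrative of that pattern; every name and data structure in it is an assumption for explanation, not Uber's actual implementation:

```python
# Purely illustrative sketch of the tagging mechanism described above:
# a flagged account gets a "Greyball" marker plus a numeric string, and
# downstream code branches on that marker. Names and structure are
# assumptions for illustration only.
import secrets

def greyball_tag(account: dict) -> dict:
    # Attach the marker the reporting describes: "Greyball" + digits.
    account["tag"] = f"Greyball{secrets.randbelow(10**8):08d}"
    return account

def app_view_for(account: dict) -> str:
    # Tagged users are shown a decoy view instead of the real app.
    if account.get("tag", "").startswith("Greyball"):
        return "ghost_view"
    return "real_view"
```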


It's Too Late--We've Already Taught AI to Be Racist and Sexist

#artificialintelligence

Miltenburg hasn't tested whether software trained on these image descriptions actually generates new, and biased, descriptions. Annotating images to teach machines should, Miltenburg wrote, be treated more like a psychological experiment and less like a rote data-collection task. By tightening the guidelines for crowdworkers, researchers would be able to better control what information deep learning software vacuums up in the first place. "One could certainly create annotation guidelines that explicitly instruct workers about gender or racial stereotypes," wrote Hockenmaier.
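Tighter annotation guidelines can also be backed by automated screening before captions reach a model. As a minimal sketch (the marker list and the `flag_caption` helper are hypothetical, not from Miltenburg's study), a pipeline could flag crowd-sourced captions containing stereotype-laden descriptors for human review:

```python
# Minimal sketch: screening crowd-sourced image captions before they
# enter a training set. The marker list is an illustrative assumption,
# not taken from the study discussed above.
UNNECESSARY_MARKERS = {"exotic", "normal", "oriental", "ghetto"}

def flag_caption(caption: str) -> list[str]:
    """Return any guideline-violating words found in one caption."""
    words = {w.strip(".,!?").lower() for w in caption.split()}
    return sorted(words & UNNECESSARY_MARKERS)

captions = [
    "A woman in exotic dress stands by a market stall.",
    "Two people wait at a bus stop.",
]
for c in captions:
    hits = flag_caption(c)
    if hits:
        print(f"Review before use: {c!r} -> flagged {hits}")
```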