Civil Rights & Constitutional Law

Give Saudi women a license not just to drive, but to run their own lives

Los Angeles Times

It'll be close, but it looks like women will be allowed to drive in Saudi Arabia with some time to spare before the automobile industry converts entirely to self-driving cars. A royal decree announced Tuesday that women would finally be allowed behind the wheel, heralding a preposterously overdue end to the most high-profile and infamous of the repressive kingdom's restrictions on women. Yet the male guardianship system remains in force: even a woman in prison requires a male guardian to agree to her release, according to the monitoring group Human Rights Watch, which described the guardianship system as the most significant impediment to women's rights in Saudi Arabia -- and even a barrier to the government's own plans to improve the economy. The abolition of the male guardianship system should be the next announcement we hear from the Saudi government.

Robots are really good at learning things like racism and bigotry


The real danger is in something called confirmation bias: coming up with an answer first and then looking only for information that supports that conclusion. Take the following example: if fewer women than men seek truck-driving jobs on a job-seeking website, a pattern emerges. That pattern can be interpreted in many ways, but in truth it means only one specific factual thing: fewer women than men on that website are looking for truck-driver jobs. If you tell an AI to find evidence that triangles are good at being circles, it probably will; that doesn't make it science.
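The distinction above -- a count in a dataset versus a conclusion about the world -- can be made concrete with a toy sketch. The records and the "relevance" score here are entirely hypothetical; the point is only that a naive model trained on an imbalanced dataset reproduces the imbalance as if it were a fact about suitability, when it is only a fact about who happens to be in the dataset.

```python
from collections import Counter

# Hypothetical job-seeker records: (gender, sought_role)
records = [("m", "truck_driver")] * 90 + [("f", "truck_driver")] * 10

counts = Counter(g for g, role in records if role == "truck_driver")

# The only factual conclusion the data supports: who is on this site.
print(counts)  # Counter({'m': 90, 'f': 10})

def naive_relevance(gender):
    # A naive score that mistakes frequency in the data for fitness
    # for the job -- the confirmation-bias trap, automated.
    return counts[gender] / sum(counts.values())

print(naive_relevance("f"))  # 0.1 -- the imbalance, dressed up as a score
```

Nothing in the computation is wrong; what is wrong is reading the 0.1 as evidence about women rather than evidence about the website's user base.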

Stanford professor says face-reading AI will detect IQ

Daily Mail

Stanford researcher Dr Michal Kosinski went viral last week after publishing research suggesting AI can tell whether someone is straight or gay based on photos. Dr Kosinski claims he is now working on AI software that can identify political beliefs, with preliminary results proving positive.

AI robots are sexist and racist, experts warn


He said the deep learning algorithms which drive AI software are "not transparent", making it difficult to redress the problem. Currently approximately 9 per cent of the engineering workforce in the UK is female, with women making up only 20 per cent of those taking A Level physics. "We have a problem," Professor Sharkey told Today. Professor Sharkey said researchers at Boston University had demonstrated the inherent bias in AI algorithms by training a machine to analyse text collected from Google News.

If you weren't raised in the Internet age, you may need to worry about workplace age discrimination

Los Angeles Times

Although people of both genders struggle with age discrimination, research has shown women begin to experience age discrimination in hiring practices before they reach 50, whereas men don't experience it until several years later. Just as technology is causing barriers inside the workplace for older employees, online applications and search engines could be hurting older workers looking for jobs. Many applications have required fields asking for date of birth and high school graduation, something many older employees choose to leave off their resumes. Furthermore, McCann said, some search engines allow people to filter their search based on high school graduation date, thereby allowing employers and employees to screen people and positions out of the running.

How not to create a racist, sexist robot


Robots are picking up sexist and racist biases because the information used to program them comes predominantly from one homogeneous group of people, suggests a new study from Princeton University and the U.K.'s University of Bath. "But robots based on artificial intelligence (AI) and machine learning learn from historic human data and this data usually contain biases," Caliskan tells The Current's Anna Maria Tremonti. With the federal government recently announcing a $125 million investment in Canada's AI industry, Duhaime says now is the time to make sure funding goes towards pushing women forward in this field. "There is an understanding in the research community that we have to be careful and we have to have a plan with respect to ethical correctness of AI systems," she tells Tremonti.
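The Princeton/Bath study measured bias in word embeddings by comparing how strongly a target word associates with one attribute set versus another. A minimal sketch of that association idea, using cosine similarity on toy two-dimensional vectors (the values are invented for illustration, not real embeddings):

```python
import math

def cosine(a, b):
    # Cosine similarity: how closely two word vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 2-d "embeddings" with a deliberate skew baked in.
emb = {
    "programmer": (0.9, 0.1),
    "he":         (1.0, 0.0),
    "she":        (0.0, 1.0),
}

# Bias score: does "programmer" sit closer to "he" than to "she"?
bias = cosine(emb["programmer"], emb["he"]) - cosine(emb["programmer"], emb["she"])
print(bias > 0)  # True -- "programmer" leans toward "he" in this toy space
```

In real embeddings trained on news text, the same arithmetic over many target and attribute words is what surfaces the learned stereotypes; the model simply reflects the associations present in its training corpus.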

Freshly Remember'd: Kirk Drift


I am trapped at a dull dinner following a dull talk: part of a series of dinners and talks that grad students organise, unpaid (though at considerable expense to themselves--experience!). The pop culture idea of Kirk, Captain of the Enterprise for the first Star Trek series (ST:TOS) and the original run of films, has become almost synonymous with Zapp Brannigan from Futurama. The article "Captain Kirk's 8 Most Impressive Love Conquests" gives us such bon mots as these: For three glorious seasons, Star Trek's Captain James T. Kirk boldly seduced and explored women no Earth-man had been with before. Kirk's storied history of womanising seemingly consists of his having seriously dated a fairly small number of clever women in Uni.

If Artificial Intelligence Is Taught To Think Like Humans, Then Are Machines Going To Be Sexist, Racist And Discriminatory?


Our devices are connected, personal digital assistants answer our queries, algorithms track our habits and make recommendations, AI is sparking advancements in medicine, cars will soon be driving themselves, and robots will soon be delivering our pizza. An AI-judged beauty contest went through thousands of selfies and chose 44 fair-skinned faces and only one dark-skinned face as the winners. Tools are usually designed for men, women's clothing often has no pockets, and seat belts were until recently tested only on male dummies, putting women at greater risk in a crash. Artificial intelligence gives us the incredible opportunity to wipe out human bias in decision making.

How to Keep Your AI From Turning Into a Racist Monster


Algorithmic bias--when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed--causes everything from warped Google searches to barring qualified women from medical school. Tay's embrace of humanity's worst attributes is one example. Recently, a Carnegie Mellon research team unearthed algorithmic bias in online ads. When they simulated people searching for jobs online, Google ads showed listings for high-income jobs to men nearly six times as often as to equivalent women.
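The Carnegie Mellon approach is essentially a controlled experiment: simulated profiles identical in every respect but one attribute, run against an opaque ad system, with the disparity then measured directly. A toy re-creation of that experimental design, where the ad policy and its rates are invented stand-ins for the real system (the roughly six-fold gap is chosen to echo the reported finding, not measured from it):

```python
import random

def hypothetical_ad_policy(gender, rng):
    # Stand-in for the opaque system under test: shows the high-income
    # ad far more often to one group. Rates are illustrative only.
    rate = 0.18 if gender == "m" else 0.03
    return rng.random() < rate

def measure_disparity(n=10_000, seed=0):
    # Run n identical simulated profiles per group and count ad impressions.
    rng = random.Random(seed)
    shown = {}
    for g in ("m", "f"):
        shown[g] = sum(hypothetical_ad_policy(g, rng) for _ in range(n))
    return shown["m"] / shown["f"]

print(f"high-income ad shown to men about {measure_disparity():.1f}x as often")
```

The value of the design is that the experimenters never need access to the system's internals: because the simulated profiles differ only in the tested attribute, any systematic gap in what they are shown is attributable to that attribute.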