Civil Rights & Constitutional Law


AI Research Is in Desperate Need of an Ethical Watchdog

WIRED

Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."


Banksy donates funds from anti-arms artwork sale

BBC News

The artwork, Civilian Drone Strike, was on display at the Stop the Arms Fair art exhibition in east London. The exhibition was held alongside the world's largest arms fair, the Defence and Security Equipment International - both exhibitions closed on Friday. The money raised from the Banksy sale will be split between one of the exhibition's organisers, Campaign Against Arms Trade (CAAT), and the human rights group Reprieve. During last week's Defence and Security Equipment International (DSEI) fair, Mr Smith said it was "shameful that the government is welcoming despots and dictatorships to the UK to buy weapons".


Robots are really good at learning things like racism and bigotry

#artificialintelligence

The real danger is in something called confirmation bias: when you come up with an answer first and then look only for information that supports that conclusion. Take the following example: if fewer women than men seek truck-driving jobs on a job-seeking website, a pattern emerges. That pattern can be interpreted in many ways, but in truth it only means one specific factual thing: fewer women than men on that website are looking for truck-driving jobs. If you tell an AI to find evidence that triangles are good at being circles, it probably will, but that doesn't make it science.


Chatbot helps asylum seekers prepare for their interviews Springwise

#artificialintelligence

MarHub is a new chatbot developed by students at the University of California-Berkeley's Haas School of Business to help asylum seekers through the complicated process of applying to become an official refugee – which can take up to 18 months – and to avoid using smugglers. Finding the right information for the asylum process isn't easy, and although most asylum seekers are in possession of a smartphone, a lot of the information is either missing or out of date. The development team quickly realized that there was a gap in the market which they felt they could fill. What other original ways could be implemented to help refugees and support the asylum process?


Stanford professor says face-reading AI will detect IQ

Daily Mail

Stanford researcher Dr Michal Kosinski went viral last week after publishing research suggesting AI can tell whether someone is straight or gay based on photos. Dr Kosinski claims he is now working on AI software that can identify political beliefs, with preliminary results proving positive.


Google's comment ranking system will be a hit with the alt-right

Engadget

To broadly determine what is and isn't toxic, Disqus uses the Perspective API - software from Alphabet's Jigsaw division that plugs into its system. The underlying API used to determine "toxicity" scores phrases like "I am a gay black woman" as 87 percent toxic, and phrases like "I am a man" as the least toxic. Pasting her "Dear white people" into Perspective's API got a score of 61 percent toxicity. It's possible that the tool is flagging comments with terms like black, gay, and woman as having high potential for being abusive or negative, but that would make Perspective an expensive, overkill wrapper for the equivalent of using Command-F to demonize words that some people might find upsetting.
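For readers curious what that scoring looks like in practice, here is a minimal sketch of how a client might request a toxicity score from the Perspective API. It assumes the publicly documented v1alpha1 commentanalyzer endpoint and uses a placeholder API key; exact field names may differ from Jigsaw's current documentation.

```python
# Minimal sketch: querying the Perspective API for a toxicity score.
# Assumes the v1alpha1 commentanalyzer endpoint and a placeholder API key;
# the request/response shape may differ from Jigsaw's current docs.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the TOXICITY summary score (0.0-1.0) for a piece of text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    for phrase in ["I am a gay black woman", "I am a man"]:
        print(phrase, "->", round(toxicity_score(phrase) * 100), "percent")
```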


AI robots are sexist and racist, experts warn

#artificialintelligence

He said the deep learning algorithms which drive AI software are "not transparent", making it difficult to redress the problem. Currently approximately 9 per cent of the engineering workforce in the UK is female, with women making up only 20 per cent of those taking A Level physics. "We have a problem," Professor Sharkey told Today. Professor Sharkey said researchers at Boston University had demonstrated the inherent bias in AI algorithms by training a machine to analyse text collected from Google News.
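Demonstrations of this kind are usually run against publicly released word vectors trained on Google News text. Below is a rough sketch, assuming the GoogleNews-vectors-negative300.bin file and the gensim library, of how gender associations can be probed; the specific analogies the Boston University team reported may differ.

```python
# Sketch of probing word embeddings trained on Google News text for
# gender associations, in the spirit of the research referenced above.
# Assumes the publicly released GoogleNews-vectors-negative300.bin file
# is available locally.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Analogy probe: "man is to programmer as woman is to ...?"
print(vectors.most_similar(positive=["woman", "programmer"],
                           negative=["man"], topn=3))

# Direct comparison: which occupations sit closer to "she" than to "he"?
for job in ["nurse", "engineer", "receptionist", "physicist"]:
    gap = vectors.similarity(job, "she") - vectors.similarity(job, "he")
    print(f"{job}: she-vs-he similarity gap = {gap:+.3f}")
```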


Big Data will be biased, if we let it

@machinelearnbot

And since we're on the car insurance subject, minorities pay more for car insurance than white people in similarly risky neighborhoods. If we don't put in place reliable, actionable, and accessible solutions to approach bias in data science, these types of usually unintentional discrimination will become more and more normal, standing in opposition to a society and institutions that, on the human side, are trying their best to evolve past bias and move forward in history as a global community. Last but definitely not least, there's a specific bias and discrimination section, preventing organizations from using data which might promote bias, such as race, gender, religious or political beliefs, or health status, to make automated decisions (with some verified exceptions). It's time to make that training broader, and teach all people involved about the ways their decisions while building tools may affect minorities, and accompany that with the relevant technical knowledge to prevent it from happening.
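One example of the kind of reliable, actionable check the passage calls for is a disparate-impact ratio (the "four-fifths rule") computed over a log of automated decisions. The sketch below uses pandas with hypothetical column names (group, approved) and toy data; it is an illustration of the idea, not a legal test.

```python
# Minimal sketch of an actionable bias check: the disparate-impact ratio
# ("four-fifths rule") over automated decisions. Column names `group` and
# `approved` are hypothetical; adapt them to the actual decision log.
import pandas as pd

def disparate_impact(df: pd.DataFrame,
                     group_col: str = "group",
                     outcome_col: str = "approved") -> pd.Series:
    """Ratio of each group's positive-outcome rate to the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Toy decision log; a real audit would use production data.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   0],
})
ratios = disparate_impact(decisions)
print(ratios)                                  # per-group ratios
print(ratios[ratios < 0.8].index.tolist())     # groups below 0.8 warrant review
```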


AIs that learn from photos become sexist

Daily Mail

In the fourth example, the person pictured is labeled 'woman' even though it is clearly a man, because of sexist biases in the set that associate kitchens with women. Researchers tested two of the largest collections of photos used to train image-recognition AIs and discovered that sexism was rampant. The AIs associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment. 'For example, the activity cooking is over 33 percent more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68 percent at test time,' reads the paper, titled 'Men Also Like Shopping,' which was published as part of the 2017 Conference on Empirical Methods in Natural Language Processing. A user shared a photo depicting another scenario in which technology failed to detect darker skin, writing 'reminds me of this failed beta test.' Princeton University conducted a word association task with the algorithm GloVe, an unsupervised AI that uses online text to understand human language.
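The 33-to-68 percent figure describes bias amplification: the trained model exaggerates a disparity already present in its training labels. Below is a simplified, hypothetical sketch of that measurement, with toy counts chosen only to roughly echo the reported numbers; the paper's own metric is defined over its full label set.

```python
# Simplified sketch of the bias-amplification measurement described in
# "Men Also Like Shopping": compare how often an activity co-occurs with a
# gender in the training labels versus in the model's predictions.
# The record format (dicts with "activity" and "gender" keys) is hypothetical.
from collections import Counter

def female_share(records, activity):
    """Fraction of images labeled with `activity` whose agent is female."""
    genders = Counter(r["gender"] for r in records if r["activity"] == activity)
    total = sum(genders.values())
    return genders["female"] / total if total else 0.0

# Toy data standing in for training annotations and model predictions.
training = [{"activity": "cooking", "gender": "female"}] * 66 + \
           [{"activity": "cooking", "gender": "male"}] * 34
predicted = [{"activity": "cooking", "gender": "female"}] * 84 + \
            [{"activity": "cooking", "gender": "male"}] * 16

train_share = female_share(training, "cooking")
pred_share = female_share(predicted, "cooking")
print(f"training: {train_share:.0%} female, predictions: {pred_share:.0%} female")
print(f"amplification: {pred_share - train_share:+.0%}")
```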