Results


Artificial Intelligence: ARTICLE 19 calls for protection of freedom… · Article 19

#artificialintelligence

The submission stresses the need to critically evaluate the impact of Artificial Intelligence (AI) and automated decision-making systems (AS) on human rights. Machine learning – the most successful subset of AI techniques – enables an algorithm to learn from a dataset using statistical methods. As such, AI has a direct impact on the ability of individuals to exercise their right to freedom of expression in the digital age. The development of AI is not new, but advances in the digital environment – greater volumes of data, more computational power, and better statistical methods – will make it significantly more capable and pervasive in the future.
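The "learning from a dataset using statistical methods" point can be made concrete with a minimal sketch. The example below is purely illustrative and not drawn from the ARTICLE 19 submission: invented data, a simple statistical fit, and recovery of the underlying relationship.

```python
import numpy as np

# Purely illustrative: "learn" a relationship from a dataset with a basic
# statistical method (ordinary least squares). The data are invented.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)                 # observed input feature
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=200)   # noisy observed outcomes

# Fit y ≈ slope * x + intercept by least squares.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"learned slope={slope:.2f}, intercept={intercept:.2f}")  # ≈ 3.0, 2.0
```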


Around the world with Ai Weiwei: Where to get your fix of the artist's work

Los Angeles Times

A multiple-exposure portrait of Chinese contemporary artist and human rights activist Ai Weiwei, made on film in Beverly Hills, on the occasion of his new documentary, "Human Flow." He spent the better part of 2016 traveling around the globe visiting refugee camps for his new documentary feature film, "Human Flow," debuting in theaters this month. In New York, the contemporary artist and social justice activist is installing some 300 works across the city's five boroughs for the Public Art Fund exhibition "Good Fences Make Good Neighbors," opening Oct. 12.


Egyptian Concertgoers Wave a Flag, and Land in Jail

NYT > Middle East

In an apparently separate case, a student who attended the Mashrou' Leila concert was arrested hours later after being "caught in the act," the police said. Homosexuality is not illegal in Egypt, but the authorities frequently prosecute gay men for homosexuality and women for prostitution under loosely worded laws that prohibit immorality and "habitual debauchery." The Arab Spring ushered in a brief period of respite, with a sharp rise in the use of dating apps as gay people socialized openly at parties and in bars. On Monday a court convicted Khaled Ali, a lawyer and opposition figure, for making an obscene finger gesture outside a Cairo courthouse last year after he and other lawyers won a case against the government.


Big Data will be biased, if we let it

@machinelearnbot

And since we're on the subject of car insurance, minorities pay more for car insurance than white people in similarly risky neighborhoods. If we don't put in place reliable, actionable, and accessible solutions to address bias in data science, this kind of usually unintentional discrimination will become more and more normal, running counter to a society and institutions that, on the human side, are trying their best to move past bias and advance as a global community. Last but definitely not least, there is a specific bias and discrimination section that prevents organizations from using data that might promote bias – such as race, gender, religious or political beliefs, or health status – to make automated decisions (with some verified exceptions). It's time to make that training broader: teach everyone involved how the decisions they make while building tools may affect minorities, and pair that with the technical knowledge needed to prevent it from happening.
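As one concrete illustration of the kind of "reliable, actionable, and accessible" check the piece calls for, here is a minimal, hypothetical sketch: the groups, outcomes, and threshold are invented, and the test shown is the common "four-fifths" disparate-impact heuristic, not any specific regulation's procedure.

```python
from collections import defaultdict

# Hypothetical decision log: (group, automated_decision) pairs.
# Groups and outcomes are invented purely for illustration.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    approvals[group] += outcome

# Positive-outcome (approval) rate per group.
rates = {g: approvals[g] / totals[g] for g in totals}

# "Four-fifths" heuristic: flag if the lowest rate is under 80% of the highest.
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: review the model and its training data.")
```

In practice such a check would run against real decision logs and be paired with a review of the training data, not just the outputs.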


FaceApp removes 'Ethnicity Filters' after racism storm

Daily Mail

When asked to make his picture 'hot', the app lightened his skin and changed the shape of his nose. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to transform their photos. Earlier this year people accused the popular photo editing app Meitu of being racist and of giving users 'yellow face'. Twitter user Vaughan posted a picture of Kanye West with a filter applied, along with the caption: 'So Meitu's pretty racist'.


'Racist' FaceApp photo filters encouraged users to black up

The Independent

FaceApp has removed a number of racially themed photo filters after being accused of racism. The app, which uses artificial intelligence to edit pictures, this week launched a number of "ethnicity change filters". FaceApp has attracted fierce criticism for launching the filters, with some users claiming they were racist and encouraged users to "black up" digitally. Responding to the backlash, FaceApp founder and CEO, Yaroslav Goncharov, said, "The ethnicity change filters have been designed to be equal in all aspects."


Rise of the racist robots – how AI is learning all our worst impulses

#artificialintelligence

Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches; and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages. Lum and her co-author took PredPol – the program that suggests the likely location of future crimes based on recent crime and arrest statistics – and fed it historical drug-crime data from the city of Oakland's police department. As if that wasn't bad enough, the researchers also simulated what would happen if police had acted directly on PredPol's hotspots every day and increased their arrests accordingly: the program entered a feedback loop, predicting more and more crime in the neighbourhoods that police visited most.
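The feedback loop Lum describes can be shown with a toy simulation. The sketch below is not PredPol's algorithm or the researchers' code; it simply assumes two neighbourhoods with identical true crime rates and shows how recording crime only where patrols are sent makes the initially "hotter" neighbourhood look ever more criminal.

```python
import random

random.seed(1)

# Toy simulation of the feedback loop (not PredPol's actual algorithm).
# Two neighbourhoods share the SAME true crime rate, but patrols always
# go where past *recorded* crime is highest, and crime is only recorded
# where police are present to observe it.
true_rate = 0.3
recorded = {"A": 5, "B": 4}  # slightly uneven historical records

for day in range(200):
    patrolled = max(recorded, key=recorded.get)  # today's "predicted hotspot"
    for hood in ("A", "B"):
        crime_happens = random.random() < true_rate
        if crime_happens and hood == patrolled:
            recorded[hood] += 1

# Records pile up in whichever neighbourhood happened to start ahead.
print(recorded)
```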


Inside Google's Internet Justice League and Its AI-Powered War on Trolls

#artificialintelligence

The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.
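Jigsaw has not published Conversation AI at this level of detail, so the sketch below is a generic stand-in rather than its actual model: a tiny text classifier (invented training comments, TF-IDF features, logistic regression) of the broad kind such moderation tools build on.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; real moderation systems use large labelled
# corpora of comments. This is NOT Jigsaw's Conversation AI model.
comments = [
    "thanks for sharing, this was really helpful",
    "interesting point, I had not considered that angle",
    "you are an idiot and nobody wants you here",
    "shut up, everyone hates reading your posts",
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = harassing

# Generic text classifier: bag-of-words (TF-IDF) features + logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Probability that a new comment is harassing.
print(model.predict_proba(["nobody wants your idiot opinions here"])[:, 1])
```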


The latest NSA leak is a reminder that your bosses can see your every move

Washington Post

The answer, according to some former NSA analysts, is that the agency routinely monitors many of its employees' computer activity. Employee-monitoring software is a $200 million-a-year industry, according to a study last year by 451 Research, a technology research firm, and is estimated to be worth $500 million by 2020. Employee monitoring recently came to light in a high-profile lawsuit involving Uber and Waymo, the self-driving car company owned by Google's parent firm, Alphabet. Privacy advocates have been pushing for years to have Congress review various communications privacy laws in light of updates to technology.


Pew Research Center: Internet, Science and Tech on the Future of Free Speech

#artificialintelligence

They believe technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI). They predict more online platforms will require clear identification of participants; some expect that online reputation systems will be widely used in the future. One of them said, "Until we have a mechanism users trust with their unique online identities, online communication will be increasingly shaped by negative activities, with users increasingly forced to engage in avoidance behaviors to dodge trolls and harassment. Public discourse forums will increasingly use artificial intelligence, machine learning, and wisdom-of-crowds reputation-management techniques to help keep dialog civil."
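One widely used "wisdom-of-crowds reputation-management" technique of the kind the respondents allude to is ranking by the lower bound of the Wilson score interval, so that a participant with a few glowing votes does not outrank one with a long, mostly positive track record. The sketch below is a generic formula, not any specific platform's system.

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true positive-vote rate.

    Ranking by this bound keeps an item with a handful of glowing votes from
    outranking one with a long, mostly positive track record.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - spread) / (1 + z * z / n)

# A participant with 90/100 positive ratings outranks one with a perfect 3/3.
print(wilson_lower_bound(90, 10))  # ≈ 0.83
print(wilson_lower_bound(3, 0))    # ≈ 0.44
```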